WorldWideScience

Sample records for sample size ratio

  1. Dental arch dimensions, form and tooth size ratio among a Saudi sample

    Directory of Open Access Journals (Sweden)

    Haidi Omar

    2018-01-01

    Objectives: To determine the dental arch dimensions and arch forms in a sample of Saudi orthodontic patients, to investigate the prevalence of Bolton anterior and overall tooth size discrepancies, and to compare the effect of gender on the measured parameters. Methods: This study is a biometric analysis of dental casts of 149 young adults recruited from different orthodontic centers in Jeddah, Saudi Arabia. The dental arch dimensions were measured; the measured parameters were arch length, arch width, Bolton’s ratio, and arch form. The data were analyzed using IBM SPSS software version 22.0 (IBM Corporation, New York, USA); this cross-sectional study was conducted between April 2015 and May 2016. Results: Dental arch measurements, including inter-canine and inter-molar distance, were found to be significantly greater in males than females (p<0.05). The most prevalent dental arch forms were narrow tapered (50.3%) and narrow ovoid (34.2%), respectively. The prevalence of tooth size discrepancy in all cases was 43.6% for the anterior ratio and 24.8% for the overall ratio. The mean Bolton’s anterior ratio in all malocclusion classes was 79.81%, whereas the mean Bolton’s overall ratio was 92.21%. There was no significant difference between males and females regarding Bolton’s ratio. Conclusion: The most prevalent arch form was narrow tapered, followed by narrow ovoid. Males generally had larger dental arch measurements than females, and the prevalence of tooth size discrepancy was greater for Bolton’s anterior teeth ratio than for the overall ratio.
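
    For reference, Bolton’s two ratios discussed above are computed from summed mesiodistal tooth widths (these are the standard definitions, not formulas quoted from this study):

        \text{anterior ratio} = \frac{\sum \text{mesiodistal widths of the 6 mandibular anterior teeth}}{\sum \text{mesiodistal widths of the 6 maxillary anterior teeth}} \times 100

        \text{overall ratio} = \frac{\sum \text{mesiodistal widths of the 12 mandibular teeth}}{\sum \text{mesiodistal widths of the 12 maxillary teeth}} \times 100

    The reported means (79.81% anterior, 92.21% overall) sit close to Bolton’s classic norms of roughly 77.2% and 91.3%, respectively.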

  2. Optimum sample size allocation to minimize cost or maximize power for the two-sample trimmed mean test.

    Science.gov (United States)

    Guo, Jiin-Huarng; Luh, Wei-Ming

    2009-05-01

    When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
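
    The classical normal-theory analogue of the allocation problem described above is a useful reference point (the article adapts the same logic to trimmed means and Yuen's test). Minimizing total cost c_1 n_1 + c_2 n_2 for a fixed variance of the mean difference, or maximizing power for a fixed total cost, yields the optimal allocation

        \frac{n_1}{n_2} = \frac{\sigma_1}{\sigma_2} \sqrt{\frac{c_2}{c_1}}

    where the sigma_i are the group standard deviations and the c_i the per-observation sampling costs; with equal costs this reduces to allocating in proportion to the standard deviation ratio.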

  3. Arecibo Radar Observation of Near-Earth Asteroids: Expanded Sample Size, Determination of Radar Albedos, and Measurements of Polarization Ratios

    Science.gov (United States)

    Lejoly, Cassandra; Howell, Ellen S.; Taylor, Patrick A.; Springmann, Alessondra; Virkki, Anne; Nolan, Michael C.; Rivera-Valentin, Edgard G.; Benner, Lance A. M.; Brozovic, Marina; Giorgini, Jon D.

    2017-10-01

    The Near-Earth Asteroid (NEA) population ranges in size from a few meters to more than 10 kilometers. NEAs have a wide variety of taxonomic classes, surface features, and shapes, including spheroids, binary objects, contact binaries, elongated, as well as irregular bodies. Using the Arecibo Observatory planetary radar system, we have measured apparent rotation rate, radar reflectivity, apparent diameter, and radar albedos for over 350 NEAs. The radar albedo is defined as the radar cross-section divided by the geometric cross-section. If a shape model is available, the actual cross-section is known at the time of the observation. Otherwise we derive a geometric cross-section from a measured diameter. When radar imaging is available, the diameter was measured from the apparent range depth. However, when radar imaging was not available, we used the continuous wave (CW) bandwidth radar measurements in conjunction with the period of the object. The CW bandwidth provides the apparent rotation rate, which, given an independent rotation measurement, such as from lightcurves, constrains the size of the object. We assumed an equatorial view unless we knew the pole orientation, which gives a lower limit on the diameter. The CW also provides the polarization ratio, which is the ratio of the SC and OC cross-sections. We confirm the trend found by Benner et al. (2008) that taxonomic types E and V have very high polarization ratios. We have obtained a larger sample and can analyze additional trends with spin, size, rotation rate, taxonomic class, polarization ratio, and radar albedo to interpret the origin of the NEAs and their dynamical processes. The distribution of radar albedo and polarization ratio at the smallest diameters (≤50 m) differs from the distribution of larger objects (>50 m), although the sample size is limited. Additionally, we find more moderate radar albedos for the smallest NEAs when compared to those with diameters 50-150 m. We will present additional trends we …
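
    The quantities mentioned above are tied together by a few standard planetary-radar relations (standard definitions, not taken from this abstract): the radar albedo normalizes the radar cross-section by the projected area, the circular polarization ratio compares the two received senses, and the CW bandwidth links apparent size and spin,

        \hat{\sigma}_{OC} = \frac{\sigma_{OC}}{\pi D^{2}/4}, \qquad \mu_{C} = \frac{\sigma_{SC}}{\sigma_{OC}}, \qquad B = \frac{4\pi D \cos\delta}{\lambda P}

    where D is the diameter, P the rotation period, δ the sub-radar latitude, and λ the radar wavelength. Assuming an equatorial view (cos δ = 1) therefore yields a lower limit on D, as stated above.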

  4. Estimating HIES Data through Ratio and Regression Methods for Different Sampling Designs

    Directory of Open Access Journals (Sweden)

    Faqir Muhammad

    2007-01-01

    In this study, a comparison has been made of different sampling designs, using the HIES data of North West Frontier Province (NWFP) for 2001-02 and 1998-99 collected from the Federal Bureau of Statistics, Statistical Division, Government of Pakistan, Islamabad. The performance of the estimators has also been considered using the bootstrap and jackknife. A two-stage stratified random sample design is adopted by HIES. In the first stage, enumeration blocks and villages are treated as the first-stage Primary Sampling Units (PSUs). The sample PSUs are selected with probability proportional to size. Secondary Sampling Units (SSUs), i.e., households, are selected by systematic sampling with a random start. HIES used a single study variable. We have compared the HIES technique with some other designs: stratified simple random sampling, stratified systematic sampling, stratified ranked set sampling, and stratified two-phase sampling. Ratio and regression methods were applied with two study variables: income (y) and household size (x). Jackknife and bootstrap are used for variance replication. Simple random sampling with sample sizes of 462 to 561 gave moderate variances by both jackknife and bootstrap. By applying systematic sampling, we obtained moderate variance with a sample size of 467. In jackknife with systematic sampling, we obtained a variance of the regression estimator greater than that of the ratio estimator for sample sizes of 467 to 631. At a sample size of 952, the variance of the ratio estimator becomes greater than that of the regression estimator. The most efficient design turns out to be ranked set sampling compared with the other designs. Ranked set sampling with jackknife and bootstrap gives minimum variance even with the smallest sample size (467). Two-phase sampling gave poor performance. The multi-stage sampling applied by HIES gave large variances, especially when used with a single study variable.
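
    For reference, the classical ratio and regression estimators of a population mean compared above take the form (with study variable y, auxiliary variable x, and known population mean \bar{X}):

        \bar{y}_{R} = \frac{\bar{y}}{\bar{x}} \, \bar{X}, \qquad \bar{y}_{lr} = \bar{y} + b\,(\bar{X} - \bar{x})

    where b is the sample regression coefficient of y on x. The ratio estimator gains when y is roughly proportional to x (a line through the origin), while the regression estimator is more efficient whenever the best-fitting line has a nonzero intercept, which is why their relative performance can differ across designs and sample sizes, as reported above.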

  5. The Statistics and Mathematics of High Dimension Low Sample Size Asymptotics.

    Science.gov (United States)

    Shen, Dan; Shen, Haipeng; Zhu, Hongtu; Marron, J S

    2016-10-01

    The aim of this paper is to establish several deep theoretical properties of principal component analysis for multiple-component spike covariance models. Our new results reveal an asymptotic conical structure in critical sample eigendirections under the spike models with distinguishable (or indistinguishable) eigenvalues, when the sample size and/or the number of variables (or dimension) tend to infinity. The consistency of the sample eigenvectors relative to their population counterparts is determined by the ratio between the dimension and the product of the sample size with the spike size. When this ratio converges to a nonzero constant, the sample eigenvector converges to a cone, with a certain angle to its corresponding population eigenvector. In the High Dimension, Low Sample Size case, the angle between the sample eigenvector and its population counterpart converges to a limiting distribution. Several generalizations of the multi-spike covariance models are also explored, and additional theoretical results are presented.

  6. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria…

  7. Pengaruh Current Ratio, Asset Size, dan Earnings Variability terhadap Beta Pasar [The Effect of Current Ratio, Asset Size, and Earnings Variability on Market Beta]

    Directory of Open Access Journals (Sweden)

    Ahim Abdurahim

    2016-02-01

    The research objective was to determine the effect of accounting variables, namely the current ratio, asset size, and earnings variability, on market beta. This study used 72 samples. The analytical tool used to test the hypotheses was regression. The method of Fowler and Rorke (1983) was first applied to adjust the market beta, and BLUE tests were used to check the classical assumptions for the independent variables: multicollinearity, heteroskedasticity with the Breusch-Pagan-Godfrey test, and autocorrelation with the BG (Breusch-Godfrey) test. The results found that hypotheses H1a, H1b, H1c, and H2a were supported, indicating an influence of the current ratio, asset size, and earnings variability on market beta, both individually and simultaneously.

  8. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over the nonprobability sampling techniques, because the results of the study can then be generalized to the target population.
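
    As a concrete illustration of the factors listed above, the standard formula for estimating a single proportion with absolute precision d at confidence level 1−α is

        n = \frac{z_{1-\alpha/2}^{2} \; p(1-p)}{d^{2}}

    so, for example, p = 0.5, 95% confidence (z = 1.96) and d = 0.05 give n ≈ 385, while halving d to 0.025 roughly quadruples n to ≈ 1537 — the precision/sample-size trade-off noted above. (A finite-population correction applies when the study population is small.)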

  9. Stable water isotopologue ratios in fog and cloud droplets of liquid clouds are not size-dependent

    Science.gov (United States)

    Spiegel, J.K.; Aemisegger, F.; Scholl, M.; Wienhold, F.G.; Collett, J.L.; Lee, T.; van Pinxteren, D.; Mertes, S.; Tilgner, A.; Herrmann, H.; Werner, Roland A.; Buchmann, N.; Eugster, W.

    2012-01-01

    In this work, we present the first observations of stable water isotopologue ratios in cloud droplets of different sizes collected simultaneously. We address the question whether the isotope ratio of droplets in a liquid cloud varies as a function of droplet size. Samples were collected from a ground intercepted cloud (= fog) during the Hill Cap Cloud Thuringia 2010 campaign (HCCT-2010) using a three-stage Caltech Active Strand Cloud water Collector (CASCC). An instrument test revealed that no artificial isotopic fractionation occurs during sample collection with the CASCC. Furthermore, we could experimentally confirm the hypothesis that the δ values of cloud droplets of the relevant droplet sizes (μm-range) were not significantly different and thus can be assumed to be in isotopic equilibrium immediately with the surrounding water vapor. However, during the dissolution period of the cloud, when the supersaturation inside the cloud decreased and the cloud began to clear, differences in isotope ratios of the different droplet sizes tended to be larger. This is likely to result from the cloud's heterogeneity, implying that larger and smaller cloud droplets have been collected at different moments in time, delivering isotope ratios from different collection times.

  10. Stable water isotopologue ratios in fog and cloud droplets of liquid clouds are not size-dependent

    Directory of Open Access Journals (Sweden)

    J. K. Spiegel

    2012-10-01

    In this work, we present the first observations of stable water isotopologue ratios in cloud droplets of different sizes collected simultaneously. We address the question whether the isotope ratio of droplets in a liquid cloud varies as a function of droplet size. Samples were collected from a ground intercepted cloud (= fog) during the Hill Cap Cloud Thuringia 2010 campaign (HCCT-2010) using a three-stage Caltech Active Strand Cloud water Collector (CASCC). An instrument test revealed that no artificial isotopic fractionation occurs during sample collection with the CASCC. Furthermore, we could experimentally confirm the hypothesis that the δ values of cloud droplets of the relevant droplet sizes (μm-range) were not significantly different and thus can be assumed to be in isotopic equilibrium immediately with the surrounding water vapor. However, during the dissolution period of the cloud, when the supersaturation inside the cloud decreased and the cloud began to clear, differences in isotope ratios of the different droplet sizes tended to be larger. This is likely to result from the cloud's heterogeneity, implying that larger and smaller cloud droplets have been collected at different moments in time, delivering isotope ratios from different collection times.

  11. Detecting spatial structures in throughfall data: The effect of extent, sample size, sampling design, and variogram estimation method

    Science.gov (United States)

    Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander

    2016-09-01

    In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100%. Given that most previous …
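
    The non-robust method-of-moments (Matheron) estimator referred to above is simple to sketch. The following is a minimal, illustrative Python implementation (function and variable names are our own, not from the study):

        import numpy as np

        def empirical_variogram(coords, values, bin_edges):
            # Matheron's method-of-moments estimator: for each lag bin h,
            # gamma(h) = (1 / (2 N(h))) * sum over pairs (z_i - z_j)^2
            # with the pair separation |s_i - s_j| falling in the bin.
            n = len(values)
            i, j = np.triu_indices(n, k=1)                      # all distinct pairs
            lags = np.linalg.norm(coords[i] - coords[j], axis=1)
            sqdiff = 0.5 * (values[i] - values[j]) ** 2
            gamma = np.full(len(bin_edges) - 1, np.nan)
            counts = np.zeros(len(bin_edges) - 1, dtype=int)
            for b, (lo, hi) in enumerate(zip(bin_edges[:-1], bin_edges[1:])):
                mask = (lags >= lo) & (lags < hi)
                counts[b] = mask.sum()
                if counts[b] > 0:
                    gamma[b] = sqdiff[mask].mean()
            return gamma, counts

    The sensitivity of this estimator to large outliers (each pair enters through a squared difference) is one reason the study finds robust and likelihood-based alternatives preferable for non-Gaussian throughfall data.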

  12. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
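
    A minimal sketch of the two defining steps of descriptive sampling named above — deterministic selection of sample values via the inverse CDF at equally spaced probabilities, followed by a random permutation (illustrative code, assuming a SciPy distribution object):

        import numpy as np
        from scipy import stats

        def descriptive_sample(dist, n, rng):
            # Descriptive sampling: deterministic values at equally spaced
            # probabilities via the inverse CDF, then a random permutation.
            p = (np.arange(n) + 0.5) / n
            return rng.permutation(dist.ppf(p))

        # Example: compare with crude Monte Carlo for a standard normal input
        rng = np.random.default_rng(42)
        ds = descriptive_sample(stats.norm(), 1000, rng)      # descriptive sampling
        cmcs = stats.norm().rvs(size=1000, random_state=rng)  # crude Monte Carlo

    Because the sample values are fixed by n and only their order is random, the marginal distribution is reproduced far more evenly than in CMCS, which is the property that makes the choice of n consequential.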

  13. Sample Size Calculations for Population Size Estimation Studies Using Multiplier Methods With Respondent-Driven Sampling Surveys.

    Science.gov (United States)

    Fearon, Elizabeth; Chabata, Sungai T; Thompson, Jennifer A; Cowan, Frances M; Hargreaves, James R

    2017-09-14

    While guidance exists for obtaining population size estimates using multiplier methods with respondent-driven sampling surveys, we lack specific guidance for making sample size decisions. Our objective was to guide the design of multiplier method population size estimation studies using respondent-driven sampling surveys so as to reduce the random error around the estimate obtained. The population size estimate is obtained by dividing the number of individuals receiving a service or the number of unique objects distributed (M) by the proportion of individuals in a representative survey who report receipt of the service or object (P). We have developed an approach to sample size calculation, interpreting methods to estimate the variance around estimates obtained using multiplier methods in conjunction with research into design effects and respondent-driven sampling. We describe an application to estimate the number of female sex workers in Harare, Zimbabwe. There is high variance in estimates. Random error around the size estimate reflects uncertainty from M and P, particularly when the estimate of P in the respondent-driven sampling survey is low. As expected, sample size requirements are higher when the design effect of the survey is assumed to be greater. We suggest a method for investigating the effects of sample size on the precision of a population size estimate obtained using multiplier methods and respondent-driven sampling. Uncertainty in the size estimate is high, particularly when P is small, so balancing against other potential sources of bias, we advise researchers to consider longer service attendance reference periods and to distribute more unique objects, which is likely to result in a higher estimate of P in the respondent-driven sampling survey. ©Elizabeth Fearon, Sungai T Chabata, Jennifer A Thompson, Frances M Cowan, James R Hargreaves. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 14.09.2017.
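
    The structure of the estimator described above makes the sample size logic easy to see. With the multiplier estimate N̂ = M/P̂ and M treated as fixed, a first-order (delta-method) approximation gives

        \widehat{N} = \frac{M}{\widehat{P}}, \qquad \operatorname{Var}(\widehat{N}) \approx \frac{M^{2}}{P^{4}}\operatorname{Var}(\widehat{P}), \qquad \operatorname{Var}(\widehat{P}) \approx DE \cdot \frac{P(1-P)}{n}

    so the relative standard error of N̂ is roughly \sqrt{DE\,(1-P)/(nP)}: it blows up as P → 0 and grows with the design effect DE, which is exactly why the abstract recommends design choices that raise P. (This sketch is our reading of the delta-method approach, not a formula quoted from the paper.)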

  14. The large sample size fallacy.

    Science.gov (United States)

    Lantz, Björn

    2013-06-01

    Significance in the statistical sense has little to do with significance in the common practical sense. Statistical significance is a necessary but not a sufficient condition for practical significance. Hence, results that are extremely statistically significant may be highly nonsignificant in practice. The degree of practical significance is generally determined by the size of the observed effect, not the p-value. The results of studies based on large samples are often characterized by extreme statistical significance despite small or even trivial effect sizes. Interpreting such results as significant in practice without further analysis is referred to as the large sample size fallacy in this article. The aim of this article is to explore the relevance of the large sample size fallacy in contemporary nursing research. Relatively few nursing articles display explicit measures of observed effect sizes or include a qualitative discussion of observed effect sizes. Statistical significance is often treated as an end in itself. Effect sizes should generally be calculated and presented along with p-values for statistically significant results, and observed effect sizes should be discussed qualitatively through direct and explicit comparisons with the effects in related literature. © 2012 Nordic College of Caring Science.

  15. Sample size in qualitative interview studies

    DEFF Research Database (Denmark)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane

    2016-01-01

    Sample sizes must be ascertained in qualitative studies like in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power … and during data collection of a qualitative study is discussed.

  16. Are fixed grain size ratios useful proxies for loess sedimentation dynamics? Experiences from Remizovka, Kazakhstan

    Science.gov (United States)

    Schulte, Philipp; Sprafke, Tobias; Rodrigues, Leonor; Fitzsimmons, Kathryn E.

    2018-04-01

    Loess-paleosol sequences (LPS) are sensitive terrestrial archives of past aeolian dynamics and paleoclimatic changes within the Quaternary. Grain size (GS) analysis is commonly used to interpret aeolian dynamics and climate influences on LPS, based on granulometric parameters such as specific GS classes, ratios of GS classes and statistical manipulation of GS data. However, the GS distribution of a loess sample is not solely a function of aeolian dynamics; rather, complex polygenetic depositional and post-depositional processes must be taken into account. This study assesses the reliability of fixed GS ratios as proxies for past sedimentation dynamics using the case study of Remizovka in southeast Kazakhstan. Continuous sampling of the upper 8 m of the profile, which shows extremely weak pedogenic alteration and is therefore dominated by primary aeolian activity, indicates that fixed GS ratios do not adequately serve as proxies for loess sedimentation dynamics. We find through the calculation of single value parameters that "true" variations within sensitive GS classes are masked by relative changes of the more frequent classes. Heatmap signatures provide the visualization of GS variability within LPS without significant data loss within the measured classes of a sample, or across all measured samples. We also examine the effect of two different commonly used laser diffraction devices on GS ratio calculation by duplicate measurements, the Beckman Coulter (LS13320) and a Malvern Mastersizer Hydro (MM2000), as well as the applicability and significance of the so-called "twin peak ratio" previously developed on samples from the same section. The LS13320 provides higher resolution results than the MM2000; nevertheless, the GS ratios related to variations in the silt-sized fraction were comparable. However, we could not detect a twin peak within the coarse silt as detected in the original study using the same device. Our GS measurements differ from previous works at …

  17. Concepts in sample size determination

    Directory of Open Access Journals (Sweden)

    Umadevi K Rao

    2012-01-01

    Investigators involved in clinical, epidemiological or translational research have the drive to publish their results so that they can extrapolate their findings to the population. This begins with the preliminary step of deciding the topic to be studied, the subjects and the type of study design. In this context, the researcher must determine how many subjects would be required for the proposed study. Thus, the number of individuals to be included in the study, i.e., the sample size, is an important consideration in the design of many clinical studies. The sample size determination should be based on the difference in the outcome between the two groups studied, as in an analytical study, as well as on the accepted p value for statistical significance and the required statistical power to test a hypothesis. The accepted risk of type I error, or alpha value, which by convention is set at the 0.05 level in biomedical research, defines the cutoff point at which the p value obtained in the study is judged as significant or not. The power in clinical research is the likelihood of finding a statistically significant result when it exists and is typically set to >80%. This is necessary since the most rigorously executed studies may fail to answer the research question if the sample size is too small. Alternatively, a study with too large a sample size will be difficult to conduct and will result in a waste of time and resources. Thus, the goal of sample size planning is to estimate an appropriate number of subjects for a given study design. This article describes the concepts in estimating the sample size.

  18. Finite mixture models for the computation of isotope ratios in mixed isotopic samples

    Science.gov (United States)

    Koffler, Daniel; Laaha, Gregor; Leisch, Friedrich; Kappel, Stefanie; Prohaska, Thomas

    2013-04-01

    Finite mixture models have been used for more than 100 years, but have seen a real boost in popularity over the last two decades due to the tremendous increase in available computing power. The areas of application of mixture models range from biology and medicine to physics, economics and marketing. These models can be applied to data where observations originate from various groups and where group affiliations are not known, as is the case for multiple isotope ratios present in mixed isotopic samples. Recently, the potential of finite mixture models for the computation of 235U/238U isotope ratios from transient signals measured in individual (sub-)µm-sized particles by laser ablation - multi-collector - inductively coupled plasma mass spectrometry (LA-MC-ICPMS) was demonstrated by Kappel et al. [1]. The particles, which were deposited on the same substrate, were certified with respect to their isotopic compositions. Here, we focus on the statistical model and its application to isotope data in ecogeochemistry. Commonly applied evaluation approaches for mixed isotopic samples are time-consuming and are dependent on the judgement of the analyst. Thus, isotopic compositions may be overlooked due to the presence of more dominant constituents. Evaluation using finite mixture models can be accomplished unsupervised and automatically. The models try to fit several linear models (regression lines) to subgroups of data, taking the respective slope as an estimate of the isotope ratio. The finite mixture models are parameterised by:
    • the number of different ratios,
    • the number of points belonging to each ratio-group,
    • the ratios (i.e. slopes) of each group.
    Fitting of the parameters is done by maximising the log-likelihood function using an iterative expectation-maximisation (EM) algorithm. In each iteration step, groups of size smaller than a control parameter are dropped; thereby the number of different ratios is determined. The analyst only influences some control …
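
    As an illustration of the EM scheme just described — a mixture of regression lines through the origin whose slopes estimate the isotope ratios — here is a minimal, hypothetical Python sketch (our own simplification; the published model additionally drops under-sized groups to choose the number of ratios):

        import numpy as np

        def em_mixture_ratios(x, y, k, n_iter=200):
            # Mixture of k regression lines through the origin,
            # y ~ Normal(beta_j * x, sigma_j^2); each fitted slope beta_j
            # estimates one isotope ratio present in the mixed sample.
            beta = np.quantile(y / x, np.linspace(0.1, 0.9, k))  # initial slopes
            sigma = np.full(k, np.std(y))
            pi = np.full(k, 1.0 / k)                             # mixing weights
            for _ in range(n_iter):
                # E-step: posterior probability that each point belongs to each line
                resid = y[:, None] - beta[None, :] * x[:, None]
                dens = pi * np.exp(-0.5 * (resid / sigma) ** 2) / sigma
                dens = np.clip(dens, 1e-300, None)               # numerical floor
                w = dens / dens.sum(axis=1, keepdims=True)
                # M-step: weighted least-squares slopes, then scales and weights
                beta = ((w * x[:, None] * y[:, None]).sum(axis=0)
                        / (w * x[:, None] ** 2).sum(axis=0))
                resid = y[:, None] - beta[None, :] * x[:, None]
                sigma = np.sqrt((w * resid ** 2).sum(axis=0) / w.sum(axis=0))
                pi = w.mean(axis=0)
            return beta, sigma, pi

    Each iteration increases the log-likelihood, and the returned slopes beta are the estimated isotope ratios of the subgroups.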

  19. Precision of quantization of the hall conductivity in a finite-size sample: Power law

    International Nuclear Information System (INIS)

    Greshnov, A. A.; Kolesnikova, E. N.; Zegrya, G. G.

    2006-01-01

    A microscopic calculation of the conductivity in the integer quantum Hall effect (IQHE) mode is carried out. The precision of quantization is analyzed for finite-size samples. The precision of quantization shows a power-law dependence on the sample size. A new scaling parameter describing this dependence is introduced. It is also demonstrated that the precision of quantization linearly depends on the ratio between the amplitude of the disorder potential and the cyclotron energy. The data obtained are compared with the results of magnetotransport measurements in mesoscopic samples

  20. Size, Book-to-Market Ratio and Relativity of Accounting Information Value: Empirical Research on the Chinese Listed Company

    Science.gov (United States)

    Yu, Jing; Cheng, Siwei; Xu, Bin

    Recently, many studies have examined the effect of factors such as size or book-to-market ratio on the fluctuation of accounting earnings, stock price or earnings respectively, but so far their effect on the relativity of accounting information value has scarcely been addressed. This paper presents detailed analyses of the effect of these two factors on the relativity of accounting information value, taking the Shanghai and Shenzhen stock markets as the sample. The analyses support the following two hypotheses: (1) the relativity of accounting information value of big-size corporations is greater than that of small-size corporations; (2) the relativity of accounting information value of low B/M ratio corporations is greater than that of high B/M ratio corporations.

  1. Improved sample size determination for attributes and variables sampling

    International Nuclear Information System (INIS)

    Stirpe, D.; Picard, R.R.

    1985-01-01

    Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, we have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability without using approximations. Using realistic assumptions for uncertainty parameters of measurement, the simulation results support the conclusions: (1) previously used conservative approximations can be expensive because they lead to larger sample sizes than needed; and (2) the optimal verification strategy, as well as the falsification strategy, are highly dependent on the underlying uncertainty parameters of the measurement instruments. 1 ref., 3 figs

  2. The study of the sample size on the transverse magnetoresistance of bismuth nanowires

    International Nuclear Information System (INIS)

    Zare, M.; Layeghnejad, R.; Sadeghi, E.

    2012-01-01

    The effects of sample size on the galvanomagnetic properties of semimetal nanowires are theoretically investigated. Transverse magnetoresistance (TMR) ratios have been calculated within a Boltzmann Transport Equation (BTE) approach using the specular reflection approximation. The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. The obtained values are in good agreement with the experimental results reported by Heremans et al. - Highlights: ► In this study, the effects of sample size on the galvanomagnetic properties of Bi nanowires were explained via the Parrott theorem by solving the Boltzmann Transport Equation. ► Transverse magnetoresistance (TMR) ratios have been calculated within the specular reflection approximation. ► The temperature and radius dependence of the transverse magnetoresistance of cylindrical bismuth nanowires is given. ► The obtained values are in good agreement with the experimental results reported by Heremans et al.

  3. Field Sample Preparation Method Development for Isotope Ratio Mass Spectrometry

    International Nuclear Information System (INIS)

    Leibman, C.; Weisbrod, K.; Yoshida, T.

    2015-01-01

    Non-proliferation and International Security (NA-241) established a working group of researchers from Los Alamos National Laboratory (LANL), Pacific Northwest National Laboratory (PNNL) and Savannah River National Laboratory (SRNL) to evaluate the utilization of in-field mass spectrometry for safeguards applications. The survey of commercial off-the-shelf (COTS) mass spectrometers (MS) revealed that no instrumentation existed capable of meeting all the potential safeguards requirements for performance, portability, and ease of use. Additionally, fieldable instruments are unlikely to meet the International Target Values (ITVs) for accuracy and precision for isotope ratio measurements achieved with laboratory methods. The major gaps identified for in-field actinide isotope ratio analysis were in the areas of: 1. sample preparation and/or sample introduction, 2. size reduction of mass analyzers and ionization sources, 3. system automation, and 4. decreased system cost. Development work in areas 2 through 4, enumerated above, continues in the private and public sectors. LANL is focusing on developing sample preparation/sample introduction methods for use with the different sample types anticipated for safeguards applications. Addressing sample handling and sample preparation methods for MS analysis will enable use of new MS instrumentation as it becomes commercially available. As one example, we have developed a rapid sample preparation method for dissolution of uranium and plutonium oxides using ammonium bifluoride (ABF). ABF is a significantly safer and faster alternative to digestion with boiling combinations of highly concentrated mineral acids. Actinides digested with ABF yield fluorides, which can then be analyzed directly or chemically converted and separated using established column chromatography techniques as needed prior to isotope analysis. The reagent volumes and the sample processing steps associated with ABF sample digestion lend themselves to automation and field …

  4. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, that is, sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Experimental determination of size distributions: analyzing proper sample sizes

    International Nuclear Information System (INIS)

    Buffo, A; Alopaeus, V

    2016-01-01

    The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used. (paper)

  6. [Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].

    Science.gov (United States)

    Suzukawa, Yumi; Toyoda, Hideki

    2012-04-01

    This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields like perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, even meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.

  7. Sample size calculations for case-control studies

    Science.gov (United States)

    This R package can be used to calculate the required sample size for unconditional multivariate analyses of unmatched case-control studies. The sample sizes are for a scalar exposure effect, such as binary, ordinal or continuous exposures. The sample sizes can also be computed for scalar interaction effects. The analyses account for the effects of potential confounder variables that are also included in the multivariate logistic model.

  8. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure …
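
    For context, a widely used approximation (due to Eldridge and colleagues; not the noncentrality-based measure defined in this paper) captures the efficiency loss from variable cluster sizes through the design effect

        DE \approx 1 + \big((1 + CV^{2})\,\bar{m} - 1\big)\rho

    where m̄ is the mean cluster size, CV its coefficient of variation across clusters, and ρ the intracluster correlation; with CV = 0 this reduces to the familiar equal-cluster-size design effect 1 + (m̄ − 1)ρ.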

  9. Differentiating gold nanorod samples using particle size and shape distributions from transmission electron microscope images

    Science.gov (United States)

    Grulke, Eric A.; Wu, Xiaochun; Ji, Yinglu; Buhr, Egbert; Yamamoto, Kazuhiro; Song, Nam Woong; Stefaniak, Aleksandr B.; Schwegler-Berry, Diane; Burchett, Woodrow W.; Lambert, Joshua; Stromberg, Arnold J.

    2018-04-01

    Size and shape distributions of gold nanorod samples are critical to their physico-chemical properties, especially their longitudinal surface plasmon resonance. This interlaboratory comparison study developed methods for measuring and evaluating size and shape distributions for gold nanorod samples using transmission electron microscopy (TEM) images. The objective was to determine whether two different samples, which had different performance attributes in their application, were different with respect to their size and/or shape descriptor distributions. Touching particles in the captured images were identified using a ruggedness shape descriptor. Nanorods could be distinguished from nanocubes using an elongational shape descriptor. A non-parametric statistical test showed that cumulative distributions of an elongational shape descriptor, that is, the aspect ratio, were statistically different between the two samples for all laboratories. While the scale parameters of size and shape distributions were similar for both samples, the width parameters of size and shape distributions were statistically different. This protocol fulfills an important need for a standardized approach to measure gold nanorod size and shape distributions for applications in which quantitative measurements and comparisons are important. Furthermore, the validated protocol workflow can be automated, thus providing consistent and rapid measurements of nanorod size and shape distributions for researchers, regulatory agencies, and industry.

  10. Neuromuscular dose-response studies: determining sample size.

    Science.gov (United States)

    Kopman, A F; Lien, C A; Naguib, M

    2011-02-01

    Investigators planning dose-response studies of neuromuscular blockers have rarely used a priori power analysis to determine the minimal sample size their protocols require. Institutional Review Boards and peer-reviewed journals now generally ask for this information. This study outlines a proposed method for meeting these requirements. The slopes of the dose-response relationships of eight neuromuscular blocking agents were determined using regression analysis. These values were substituted for γ in the Hill equation. When this is done, the coefficient of variation (COV) around the mean value of the ED₅₀ for each drug is easily calculated. Using these values, we performed an a priori one-sample two-tailed t-test of the means to determine the required sample size when the allowable error in the ED₅₀ was varied from ±10-20%. The COV averaged 22% (range 15-27%). We used a COV value of 25% in determining the sample size. If the allowable error in finding the mean ED₅₀ is ±15%, a sample size of 24 is needed to achieve a power of 80%. Increasing 'accuracy' beyond this point requires increasingly greater sample sizes (e.g. an 'n' of 37 for a ±12% error). On the basis of the results of this retrospective analysis, a total sample size of not less than 24 subjects should be adequate for determining a neuromuscular blocking drug's clinical potency with a reasonable degree of assurance.
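
    A quick normal-approximation check of the quoted numbers (our calculation, not taken from the paper): for a one-sample two-tailed test with α = 0.05, power 80%, COV = 25%, and allowable error E = 15% of the mean,

        n \approx \left(\frac{(z_{1-\alpha/2} + z_{1-\beta})\,COV}{E}\right)^{2} = \left(\frac{(1.96 + 0.84) \times 0.25}{0.15}\right)^{2} \approx 22

    which, after the small-sample t correction, is consistent with the reported requirement of about 24 subjects.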

  11. Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning

    Science.gov (United States)

    Li, Zhushan

    2014-01-01

    Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…

  12. Computing power and sample size for case-control association studies with copy number polymorphism: application of mixture-based likelihood ratio test.

    Directory of Open Access Journals (Sweden)

    Wonkuk Kim

    Recent studies suggest that copy number polymorphisms (CNPs) may play an important role in disease susceptibility and onset. Currently, the detection of CNPs mainly depends on microarray technology. For case-control studies, conventionally, subjects are assigned to a specific CNP category based on the continuous quantitative measure produced by microarray experiments, and cases and controls are then compared using a chi-square test of independence. The purpose of this work is to specify the likelihood ratio test statistic (LRTS) for case-control sampling design based on the underlying continuous quantitative measurement, and to assess its power and relative efficiency (as compared to the chi-square test of independence on CNP counts). The sample size and power formulas of both methods are given. For the latter, the CNPs are classified using the Bayesian classification rule. The LRTS is more powerful than this chi-square test for the alternatives considered, especially alternatives in which the at-risk CNP categories have low frequencies. An example of the application of the LRTS is given for a comparison of CNP distributions in individuals of Caucasian or Taiwanese ethnicity, where the LRTS appears to be more powerful than the chi-square test, possibly due to misclassification of the most common CNP category into a less common category.

  13. Estimating Sample Size for Usability Testing

    Directory of Open Access Journals (Sweden)

    Alex Cazañas

    2017-02-01

    One strategy used to assure that an interface meets user requirements is to conduct usability testing. When conducting such testing, one of the unknowns is sample size. Since extensive testing is costly, minimizing the number of participants can contribute greatly to successful resource management of a project. Even though a significant number of models have been proposed to estimate sample size in usability testing, there is still no consensus on the optimal size. Several studies claim that 3 to 5 users suffice to uncover 80% of problems in a software interface. However, many other studies challenge this assertion. This study analyzed data collected from the user testing of a web application to verify the rule of thumb, commonly known as the "magic number 5". The outcomes of the analysis showed that the 5-user rule significantly underestimates the required sample size to achieve reasonable levels of problem detection.
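
    The rule of thumb discussed above rests on a simple binomial model of problem discovery (a standard formulation, not derived in this study): if each user independently uncovers a given problem with probability p, the chance that n users uncover it is

        P(\text{detected}) = 1 - (1 - p)^{n}

    With the often-cited average of p ≈ 0.31, five users give 1 − 0.69⁵ ≈ 0.84, hence "5 users find about 80% of problems"; but when p is smaller, the same n detects far less, which is one way the 5-user rule can underestimate the required sample.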

  14. Sample Size Determination for One- and Two-Sample Trimmed Mean Tests

    Science.gov (United States)

    Luh, Wei-Ming; Olejnik, Stephen; Guo, Jiin-Huarng

    2008-01-01

    Formulas to determine the necessary sample sizes for parametric tests of group comparisons are available from several sources and appropriate when population distributions are normal. However, in the context of nonnormal population distributions, researchers recommend Yuen's trimmed mean test, but formulas to determine sample sizes have not been…

  15. On efficiency of some ratio estimators in double sampling design ...

    African Journals Online (AJOL)

    In this paper, three sampling ratio estimators in double sampling design were proposed with the intention of finding an alternative double sampling design estimator to the conventional ratio estimator in double sampling design discussed by Cochran (1997), Okafor (2002), Raj (1972) and Raj and Chandhok (1999).

  16. A novel approach for small sample size family-based association studies: sequential tests.

    Science.gov (United States)

    Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan

    2011-08-01

    In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with the ones obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Although TDT classifies single-nucleotide polymorphisms (SNPs) to only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in smaller ratios of false positives and negatives, as well as better accuracy and sensitivity values for classifying SNPs when compared with TDT. By using SPRT, data with small sample size become usable for an accurate association analysis.
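
    To make the three-decision logic above concrete, here is a minimal, generic sketch of Wald's SPRT (our illustration; the paper's test statistics are built from family-based transmission data rather than this generic likelihood ratio):

        import math

        def sprt(loglr_increments, alpha=0.05, beta=0.05):
            # Wald's sequential probability ratio test: accumulate the log
            # likelihood ratio and stop once it crosses a decision boundary;
            # otherwise the verdict is "keep sampling" (the third group
            # mentioned above).
            upper = math.log((1 - beta) / alpha)   # cross: accept H1 (associated)
            lower = math.log(beta / (1 - alpha))   # cross: accept H0 (not associated)
            total = 0.0
            for t, inc in enumerate(loglr_increments, start=1):
                total += inc
                if total >= upper:
                    return "associated", t
                if total <= lower:
                    return "not associated", t
            return "keep sampling", len(loglr_increments)

    Each SNP's stream of per-observation log likelihood ratio increments is fed in, and SNPs that never cross a boundary fall into the "keep sampling" group rather than being forced into a binary call, which is the flexibility the abstract highlights.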

  17. Sample size determination for mediation analysis of longitudinal data.

    Science.gov (United States)

    Pan, Haitao; Liu, Suyu; Miao, Danmin; Yuan, Ying

    2018-03-27

    Sample size planning for longitudinal data is crucial when designing mediation studies, because sufficient statistical power is not only required in grant applications and peer-reviewed publications, but is essential to reliable research results. However, sample size determination is not straightforward for mediation analysis of longitudinal design. To facilitate planning the sample size for longitudinal mediation studies with a multilevel mediation model, this article provides the sample size required to achieve 80% power by simulations under various sizes of the mediation effect, within-subject correlations and numbers of repeated measures. The sample size calculation is based on three commonly used mediation tests: Sobel's method, the distribution of the product method and the bootstrap method. Among the three methods of testing the mediation effects, Sobel's method required the largest sample size to achieve 80% power. Bootstrapping and the distribution of the product method performed similarly and were more powerful than Sobel's method, as reflected by the relatively smaller sample sizes. For all three methods, the sample size required to achieve 80% power depended on the value of the ICC (i.e., the within-subject correlation). A larger value of ICC typically required a larger sample size to achieve 80% power. Simulation results also illustrated the advantage of the longitudinal study design. Sample size tables for the scenarios most often encountered in practice have also been published for convenient use. The extensive simulation study showed that the distribution of the product method and the bootstrapping method have superior performance to Sobel's method, but the product method is recommended for use in practice because of its lower computational load compared to the bootstrapping method. An R package has been developed for the product method of sample size determination in longitudinal mediation study design.
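
    For reference, the first-order Sobel statistic referred to above tests the mediated (indirect) effect ab by

        z = \frac{\hat{a}\hat{b}}{\sqrt{\hat{b}^{2} s_{\hat{a}}^{2} + \hat{a}^{2} s_{\hat{b}}^{2}}}

    where â is the estimated path from predictor to mediator, b̂ the path from mediator to outcome given the predictor, and the s terms are their standard errors. Its normality assumption for the product âb̂ is what makes it conservative relative to the distribution-of-the-product and bootstrap approaches compared in the study.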

  18. The Stiles-Crawford Effect: spot-size ratio departure in retinitis pigmentosa

    Science.gov (United States)

    Sharma, Nachieketa K.; Lakshminarayanan, Vasudevan

    2016-04-01

    The Stiles-Crawford effect of the first kind is the retina's compensative response to loss of luminance efficiency for oblique stimulation, manifested as the spot-size ratio departure from perfect power coupling for a normal human eye. In a retinitis pigmentosa (RP) eye, the normal cone photoreceptor morphology is affected due to foveal cone loss and a disrupted cone mosaic spatial arrangement with reduction in directional sensitivity. We show that the flattened Stiles-Crawford function (SCF) in a RP eye is due to a different spot-size ratio departure profile; that is, for the same loss of luminance efficiency, a RP eye has a smaller departure from perfect power coupling compared to a normal eye. Again, the difference in spot-size ratio departure increases from the centre towards the periphery, having zero value for axial entry and maximum value for maximum peripheral entry, indicating dispersal of photoreceptor alignment. This dispersal prevents the retina from mounting a larger compensative response, as it lacks both the number of cones and the appropriate cone morphology to tackle the loss of luminance efficiency for oblique stimulation. The slope of the departure profile also testifies to the flattened SCF for a RP eye. Moreover, the discrepancy in spot-size ratio departure between a normal and a RP eye is shown to have a direct bearing on the Stiles-Crawford diminution of visibility.

  19. ANALISA PENGARUH BETA, SIZE PERUSAHAAN, DER DAN PBV RATIO TERHADAP RETURN SAHAM [Analysis of the Effect of Beta, Firm Size, DER and PBV Ratio on Stock Returns]

    Directory of Open Access Journals (Sweden)

    Agung Sugiarto

    2012-03-01

    The research has the purpose of showing several factors that predict stock returns: beta, company size, the DER ratio and the PBV ratio. Based on regression analysis, beta has a positive but insignificant effect on stock returns; company size and the PBV ratio have positive and significant effects; while the DER ratio has a negative and significant effect on stock returns. The effect of these variables on stock returns is higher for companies listed in the Main Board Index (MBX) than for companies listed in the Development Board Index (DBX).

  20. Flock sizes and sex ratios of canvasbacks in Chesapeake Bay and North Carolina

    Science.gov (United States)

    Haramis, G.M.; Derleth, E.L.; Link, W.A.

    1994-01-01

    Knowledge of the distribution, size, and sex ratios of flocks of wintering canvasbacks (Aythya valisineria) is fundamental to understanding the species' winter ecology and providing guidelines for management. Consequently, in winter 1986-87, we conducted 4 monthly aerial photographic surveys to investigate temporal changes in distribution, size, and sex ratios of canvasback flocks in traditional wintering areas of Chesapeake Bay and coastal North Carolina. Surveys yielded 35mm imagery of 194,664 canvasbacks in 842 flocks. Models revealed monthly patterns of flock size in North Carolina and Virginia, but no pattern of change in Maryland. A stepwise analysis of flock size and sex ratio fit a common positive slope (increasing proportion male) for all state-month datasets, except for North Carolina in February where the slope was larger (P < 0.001). State and month effects on intercepts were significant (P < 0.001) and confirmed a previously identified latitudinal gradient in sex ratio in the survey region. There was no relationship between flock purity (% canvasbacks vs. other species) and flock size except in North Carolina in January, February, and March when flock purity was related to flock size. Contrasting characteristics in North Carolina with regard to flock size (larger flocks) and flock purity suggested that proximate factors were reinforcing flocking behavior and possibly species fidelity there. Of possible factors, the need to locate foraging sites within this large, open-water environment was hypothesized to be of primary importance. Comparison of January 1981 and 1987 sex ratios indicated no change in Maryland, but lower (P < 0.05) canvasback sex ratios (proportion male) in Virginia and North Carolina.

  1. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  2. 40 CFR 80.127 - Sample size guidelines.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment, Vol. 16 (2010-07-01): ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), AIR PROGRAMS (CONTINUED), REGULATION OF FUELS AND FUEL ADDITIVES, Attest Engagements, § 80.127 Sample size guidelines. In performing the...

  3. Determination of the optimal sample size for a clinical trial accounting for the population size.

    Science.gov (United States)

    Stallard, Nigel; Miller, Frank; Day, Simon; Hee, Siew Wan; Madan, Jason; Zohar, Sarah; Posch, Martin

    2017-07-01

    The problem of choosing a sample size for a clinical trial is a very common one. In some settings, such as rare diseases or other small populations, the large sample sizes usually associated with the standard frequentist approach may be infeasible, suggesting that the sample size chosen should reflect the size of the population under consideration. Incorporation of the population size is possible in a decision-theoretic approach either explicitly by assuming that the population size is fixed and known, or implicitly through geometric discounting of the gain from future patients reflecting the expected population size. This paper develops such approaches. Building on previous work, an asymptotic expression is derived for the sample size for single and two-arm clinical trials in the general case of a clinical trial with a primary endpoint with a distribution of one parameter exponential family form that optimizes a utility function that quantifies the cost and gain per patient as a continuous function of this parameter. It is shown that as the size of the population, N, or expected size, N* in the case of geometric discounting, becomes large, the optimal trial size is O(N^(1/2)) or O(N*^(1/2)). The sample size obtained from the asymptotic expression is also compared with the exact optimal sample size in examples with responses with Bernoulli and Poisson distributions, showing that the asymptotic approximations can also be reasonable in relatively small sample sizes. © 2016 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
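
    As an editorial illustration (not from the abstract), the sketch below reproduces the O(N^(1/2)) behaviour in a toy decision-theoretic setting: the true effect carries a normal prior, a two-arm trial with n patients per arm picks the apparently better treatment, and the remaining N - 2n patients receive it. The utility function and all numbers are our own assumptions, not the authors' exact gain function.

    ```python
    # Toy sketch: optimal trial size under an assumed expected-utility model.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    SIGMA, TAU = 1.0, 0.2
    deltas = rng.normal(0.0, TAU, 20_000)            # prior draws of the effect

    def expected_utility(n, N):
        se = SIGMA * np.sqrt(2.0 / n)                # SE of estimated difference
        p_correct = norm.cdf(np.abs(deltas) / se)    # P(trial picks better arm)
        gain = np.abs(deltas) * (p_correct - 0.5)    # per-patient gain vs. chance
        return (N - 2 * n) * gain.mean()

    for N in (10_000, 100_000, 1_000_000):
        grid = np.unique(np.geomspace(2, N // 2, 400).astype(int))
        n_opt = grid[np.argmax([expected_utility(n, N) for n in grid])]
        print(f"N = {N:>9,}: optimal n per arm = {n_opt:,}, n/sqrt(N) = {n_opt / np.sqrt(N):.2f}")
    ```

    The printed ratio n/sqrt(N) stays roughly constant as N grows, which is the square-root scaling the abstract derives.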

  4. Publication Bias in Psychology: A Diagnosis Based on the Correlation between Effect Size and Sample Size

    Science.gov (United States)

    Kühberger, Anton; Fritz, Astrid; Scherndl, Thomas

    2014-01-01

    Background The p value obtained from a significance test provides no information about the magnitude or importance of the underlying phenomenon. Therefore, additional reporting of effect size is often recommended. Effect sizes are theoretically independent from sample size. Yet this may not hold true empirically: non-independence could indicate publication bias. Methods We investigate whether effect size is independent from sample size in psychological research. We randomly sampled 1,000 psychological articles from all areas of psychological research. We extracted p values, effect sizes, and sample sizes of all empirical papers, and calculated the correlation between effect size and sample size, and investigated the distribution of p values. Results We found a negative correlation of r = −.45 [95% CI: −.53; −.35] between effect size and sample size. In addition, we found an inordinately high number of p values just passing the boundary of significance. Additional data showed that neither implicit nor explicit power analysis could account for this pattern of findings. Conclusion The negative correlation between effect size and sample size, and the biased distribution of p values, indicate pervasive publication bias in the entire field of psychology. PMID:25192357
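
    To make the publication-bias mechanism concrete, here is a hedged simulation sketch (all parameters illustrative): studies with a common true effect are "published" only when significant, and across the published set the observed effect size then correlates negatively with sample size.

    ```python
    # Simulate selective publication and measure r(effect size, sample size).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d, published = 0.3, []

    for _ in range(5_000):
        n = int(rng.integers(10, 200))                 # per-group sample size
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(true_d, 1.0, n)
        t, p = stats.ttest_ind(b, a)
        d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        if p < 0.05 and d > 0:                         # publication filter
            published.append((d, n))

    d_vals, n_vals = zip(*published)
    r, _ = stats.pearsonr(d_vals, n_vals)
    print(f"published studies: {len(published)}, r(effect size, n) = {r:.2f}")
    ```

    Small studies only clear the significance filter when their observed effect is inflated, which is what produces the negative correlation.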

  5. [Practical aspects regarding sample size in clinical research].

    Science.gov (United States)

    Vega Ramos, B; Peraza Yanes, O; Herrera Correa, G; Saldívar Toraya, S

    1996-01-01

    Knowledge of the right sample size lets us judge whether the results published in medical papers rest on a suitable design and whether the conclusions drawn from the statistical analysis are sound. To estimate the sample size we must consider the type I error, the type II error, the variance, the size of the effect, and the significance level and power of the test. The appropriate formula depends on the type of study: a prevalence study, a comparison of means, or a comparative study. In this paper we explain some basic statistical concepts and describe four simple examples of sample size estimation.
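
    A minimal sketch of the formula behind these ingredients, for the common case of comparing two means (the effect size, standard deviation, significance level and power below are illustrative choices):

    ```python
    # n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sigma^2 / delta^2
    import math
    from scipy.stats import norm

    def n_per_group(delta, sigma, alpha=0.05, power=0.80):
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        return math.ceil(2 * (z * sigma / delta) ** 2)

    # Example: detect a difference of 5 units with SD 10 at 80% power.
    print(n_per_group(delta=5, sigma=10))   # -> 63 per group
    ```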

  6. The relationship between size, book-to-market equity ratio, earnings–price ratio, and return for the Tehran stock Exchange

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Sadeghi Lafmejani

    2016-01-01

    Full Text Available This paper presents an empirical investigation of whether there is any difference between the returns of value and growth portfolios, sorted by price-to-earnings (P/E) and price-to-book value (P/BV) ratios, in terms of market sensitivity to the index (beta), firm size and market liquidity, for firms listed on the Tehran Stock Exchange (TSE) over the period 2001-2008. The selected firms were those with two consecutive positive P/E and P/BV ratios, excluding financial and holding firms. The study used five independent variables: P/E, P/BV, market size, market sensitivity (beta) and market liquidity. In each year, firms were first sorted in non-decreasing order and grouped into four portfolios with equal numbers of firms; the portfolio with the lowest P/E ratio is called the value portfolio and the one with the highest P/E ratio the growth portfolio. This process was repeated based on the P/BV ratio to determine value and growth portfolios accordingly. The study investigated the characteristics of the two portfolios based on firm size, beta and liquidity. Student's t-tests and Levene's test were used to examine the hypotheses, and the results indicate mixed effects of market sensitivity, firm size and market liquidity on the returns of the firms in various periods.

  7. Sample size calculations for cluster randomised crossover trials in Australian and New Zealand intensive care research.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Pilcher, David; Bellomo, Rinaldo; Forbes, Andrew B

    2018-06-01

    The cluster randomised crossover (CRXO) design provides an opportunity to conduct randomised controlled trials to evaluate low risk interventions in the intensive care setting. Our aim is to provide a tutorial on how to perform a sample size calculation for a CRXO trial, focusing on the meaning of the elements required for the calculations, with application to intensive care trials. We use all-cause in-hospital mortality from the Australian and New Zealand Intensive Care Society Adult Patient Database clinical registry to illustrate the sample size calculations. We show sample size calculations for a two-intervention, two 12-month period, cross-sectional CRXO trial. We provide the formulae, and examples of their use, to determine the number of intensive care units required to detect a risk ratio (RR) with a designated level of power between two interventions for trials in which the elements required for sample size calculations remain constant across all ICUs (unstratified design); and in which there are distinct groups (strata) of ICUs that differ importantly in the elements required for sample size calculations (stratified design). The CRXO design markedly reduces the sample size requirement compared with the parallel-group, cluster randomised design for the example cases. The stratified design further reduces the sample size requirement compared with the unstratified design. The CRXO design enables the evaluation of routinely used interventions that can bring about small, but important, improvements in patient care in the intensive care setting.
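
    As an illustration, the sketch below assumes one common design-effect form for a two-period, cross-sectional CRXO trial, DE = 1 + (m - 1)ρ - mηρ relative to an individually randomised trial (ρ: within-cluster within-period correlation; η: fraction of that correlation retained between periods). This form and all numbers are assumptions for illustration, not necessarily the authors' exact formulae.

    ```python
    # Hedged sketch of a CRXO sample size via an assumed design effect.
    import math
    from scipy.stats import norm

    def n_individual(p0, p1, alpha=0.05, power=0.80):
        """Total N for a two-arm comparison of proportions (z-test)."""
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_arm = z ** 2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p1 - p0) ** 2
        return 2 * math.ceil(n_arm)

    def n_crxo(p0, p1, m, rho, eta, **kw):
        de = 1 + (m - 1) * rho - m * eta * rho      # assumed design effect
        return math.ceil(n_individual(p0, p1, **kw) * de)

    # Example: in-hospital mortality 10% vs 9%, m = 300 patients per
    # ICU per period, rho = 0.02, eta = 0.8 (illustrative values).
    total = n_crxo(0.10, 0.09, m=300, rho=0.02, eta=0.8)
    print(total, "patients ->", math.ceil(total / (2 * 300)), "ICUs (two periods)")
    ```

    The within-cluster crossover subtracts the mηρ term, which is why the CRXO design can need markedly fewer clusters than a parallel cluster design.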

  8. Sample size calculation in metabolic phenotyping studies.

    Science.gov (United States)

    Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J

    2015-09-01

    The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences on experimental designs, costs and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due particularly to the multiple hypothesis-testing framework and the top-down hypothesis-free approach, with no a priori known metabolic target. Until now, there was no standard procedure available to address this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by Kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (Mathworks, Natick, MA) for Kernel and log-normal estimates, and in GNU Octave for log-normal estimates (Kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
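
    A small sketch of the multiple-testing step named in the abstract, using the Benjamini-Yekutieli procedure as implemented in statsmodels (the p-values are made up for illustration):

    ```python
    # FDR control under arbitrary dependence between metabolic variables.
    import numpy as np
    from statsmodels.stats.multitest import multipletests

    pvals = np.array([0.0004, 0.003, 0.011, 0.04, 0.21, 0.62])
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
    for p, pa, r in zip(pvals, p_adj, reject):
        print(f"p = {p:.4f}  BY-adjusted = {pa:.4f}  significant: {r}")
    ```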

  9. Sample size determination and power

    CERN Document Server

    Ryan, Thomas P, Jr

    2013-01-01

    THOMAS P. RYAN, PhD, teaches online advanced statistics courses for Northwestern University and The Institute for Statistics Education in sample size determination, design of experiments, engineering statistics, and regression analysis.

  10. Sample size determination in clinical trials with multiple endpoints

    CERN Document Server

    Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R

    2015-01-01

    This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...

  11. Population structure and the evolution of sexual size dimorphism and sex ratios in an insular population of Florida box turtles (Terrapene carolina bauri)

    Science.gov (United States)

    Dodd, C.K.

    1997-01-01

    Hypotheses in the chelonian literature suggest that in species with sexual size dimorphism, the smaller sex will mature at a smaller size and a younger age than the larger sex, sex ratios should be biased in favor of the earlier maturing sex, and deviations from a 1:1 sex ratio result from maturation of the smaller sex at a younger age. I tested these hypotheses using data collected from 1991 to 1995 on an insular (Egmont Key) population of Florida box turtles, Terrapene carolina bauri. Contrary to predictions, the earlier maturing sex (males) grew to larger sizes than the late maturing sex. Males were significantly larger than females in mean carapace length but not mean body mass. Sex ratios were not balanced, favoring the earlier maturing sex (1.6 males:1 female), but the sex-ratio imbalance did not result from faster maturation of the smaller sex. The imbalance in the sex ratio in Egmont Key's box turtles is not the result of sampling biases; it may result from nest placement. Size-class structure and sex ratios can provide valuable insights into the status and trends of populations of long-lived turtles.

  12. Predicting sample size required for classification performance

    Directory of Open Access Journals (Sweden)

    Figueroa Rosa L

    2012-02-01

    Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method. Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
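
    A hedged sketch of the core fitting step, assuming the inverse power law form accuracy(n) = a - b·n^(-c) and simple precision weights (the data points, weights and starting values are illustrative, not from the paper):

    ```python
    # Weighted nonlinear least-squares fit of a learning curve, then
    # extrapolation to a larger annotation budget.
    import numpy as np
    from scipy.optimize import curve_fit

    def inv_power(n, a, b, c):
        return a - b * n ** (-c)

    n_train = np.array([50, 100, 200, 400, 800])
    acc = np.array([0.71, 0.76, 0.80, 0.83, 0.85])
    sigma = 1.0 / np.sqrt(n_train)      # smaller sigma = more weight on later points

    params, _ = curve_fit(inv_power, n_train, acc, p0=(0.9, 1.0, 0.5), sigma=sigma)
    a, b, c = params
    print(f"a={a:.3f} b={b:.3f} c={c:.3f}; "
          f"predicted accuracy at n=5000: {inv_power(5000, *params):.3f}")
    ```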

  13. Estimation of sample size and testing power (Part 4).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-01-01

    Sample size estimation is necessary for any experimental or survey research. An appropriate estimation of sample size based on known information and statistical knowledge is of great significance. This article introduces methods of sample size estimation of difference test for data with the design of one factor with two levels, including sample size estimation formulas and realization based on the formulas and the POWER procedure of SAS software for quantitative data and qualitative data with the design of one factor with two levels. In addition, this article presents examples for analysis, which will play a leading role for researchers to implement the repetition principle during the research design phase.

  14. Sample size determination for equivalence assessment with multiple endpoints.

    Science.gov (United States)

    Sun, Anna; Dong, Xiaoyu; Tsong, Yi

    2014-01-01

    Equivalence assessment between a reference and test treatment is often conducted by two one-sided tests (TOST). The corresponding power function and sample size determination can be derived from a joint distribution of the sample mean and sample variance. When an equivalence trial is designed with multiple endpoints, it often involves several sets of two one-sided tests. A naive approach for sample size determination in this case would select the largest sample size required for each endpoint. However, such a method ignores the correlation among endpoints. With the objective to reject all endpoints, and when the endpoints are uncorrelated, the power function is the product of all power functions for individual endpoints. With correlated endpoints, the sample size and power should be adjusted for such a correlation. In this article, we propose the exact power function for the equivalence test with multiple endpoints adjusted for correlation under both crossover and parallel designs. We further discuss the differences in sample size for the naive method without and with correlation adjusted methods and illustrate with an in vivo bioequivalence crossover study with area under the curve (AUC) and maximum concentration (Cmax) as the two endpoints.
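
    The product rule for uncorrelated endpoints can be sketched as follows (normal approximation, true difference of zero, illustrative margins and SDs; the paper's correlation-adjusted exact power is not reproduced here):

    ```python
    # TOST power per endpoint and the smallest n whose product of
    # per-endpoint powers reaches a target, for uncorrelated endpoints.
    from math import sqrt
    from scipy.stats import norm

    def tost_power(n, delta, sigma, alpha=0.05):
        """Power to conclude equivalence when the true difference is 0."""
        se = sigma * sqrt(2.0 / n)
        return max(0.0, 2 * norm.cdf(delta / se - norm.ppf(1 - alpha)) - 1)

    def n_for_endpoints(endpoints, target=0.80):
        n = 2
        while True:
            overall = 1.0
            for delta, sigma in endpoints:
                overall *= tost_power(n, delta, sigma)
            if overall >= target:
                return n
            n += 1

    # Two uncorrelated endpoints with log-scale style margins (illustrative):
    print(n_for_endpoints([(0.223, 0.30), (0.223, 0.35)]))
    ```

    With correlated endpoints, this product understates the joint power, which is why the correlation-adjusted sample size of the paper can be smaller than the naive one.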

  15. Preeminence and prerequisites of sample size calculations in clinical trials

    OpenAIRE

    Richa Singhal; Rakesh Rana

    2015-01-01

    The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary out...

  16. Dependence of ultrasound attenuation in rare earth metals on ratio of grain size and wavelength

    International Nuclear Information System (INIS)

    Kanevskij, I.N.; Nisnevich, M.M.; Spasskaya, A.A.; Kaz'mina, V.I.

    1978-01-01

    Results are presented from an investigation of the dependence of the ultrasound attenuation coefficient α on the ratio of the average grain size D to the wavelength λ. The investigations were carried out on rare earth metal samples produced by arc remelting in a vacuum furnace. It is shown that the shape of the α versus D/λ curve for each rare earth metal is determined only by D. This permits the use of ultrasound measurements to control the average grain diameter of rare earth metals.

  17. Optimal sample size for probability of detection curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2013-01-01

    Highlights: • We investigate sample size requirements to develop probability of detection curves. • We develop simulations to determine effective inspection target sizes, number and distribution. • We summarize these findings and provide guidelines for the NDE practitioner. -- Abstract: The use of probability of detection curves to quantify the reliability of non-destructive examination (NDE) systems is common in the aeronautical industry, but relatively less so in the nuclear industry, at least in European countries. Due to the nature of the components being inspected, sample sizes tend to be much lower. This makes the manufacturing of test pieces with representative flaws, in sufficient numbers to draw statistical conclusions on the reliability of the NDT system under investigation, quite costly. The European Network for Inspection and Qualification (ENIQ) has developed an inspection qualification methodology, referred to as the ENIQ Methodology. It has become widely used in many European countries and provides assurance on the reliability of NDE systems, but only qualitatively. The need to quantify the output of inspection qualification has become more important as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. A measure of NDE reliability is necessary to quantify risk reduction after inspection, and probability of detection (POD) curves provide such a metric. The Joint Research Centre, Petten, The Netherlands, supported ENIQ by investigating the question of the sample size required to determine a reliable POD curve. As mentioned earlier, manufacturing of test pieces with defects that are typically found in nuclear power plants (NPPs) is usually quite expensive. Thus there is a tendency to reduce sample sizes, which in turn increases the uncertainty associated with the resulting POD curve. The main question in conjunction with POD curves is the appropriate sample size.

  18. Sample size for morphological traits of pigeonpea

    Directory of Open Access Journals (Sweden)

    Giovani Facco

    2015-12-01

    Full Text Available The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (e.g., number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the run test, and descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (e.g., number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
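
    The classical calculation behind such tables can be sketched as n = (t·CV/E)², iterated because the t quantile depends on n (the CV and error values below are illustrative, not the paper's estimates):

    ```python
    # Sample size so the CI half-width equals a chosen fraction of the mean.
    import math
    from scipy.stats import t

    def sample_size(cv, error, conf=0.95):
        """Smallest n with CI half-width `error` (fraction of the mean)."""
        n = 30                                          # starting guess
        for _ in range(100):
            t_val = t.ppf(1 - (1 - conf) / 2, df=n - 1)
            n_new = math.ceil((t_val * cv / error) ** 2)
            if n_new == n:
                break
            n = n_new
        return n

    print(sample_size(cv=0.35, error=0.06))             # e.g. CV 35%, error 6%
    ```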

  19. Preeminence and prerequisites of sample size calculations in clinical trials

    Directory of Open Access Journals (Sweden)

    Richa Singhal

    2015-01-01

    Full Text Available The key components while planning a clinical study are the study design, study duration, and sample size. These features are an integral part of planning a clinical trial efficiently, ethically, and cost-effectively. This article describes some of the prerequisites for sample size calculation. It also explains that sample size calculation is different for different study designs. The article in detail describes the sample size calculation for a randomized controlled trial when the primary outcome is a continuous variable and when it is a proportion or a qualitative variable.

  20. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is not only conditional on randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, the probability that the trial will detect a difference when a real difference between treatments exists, depends strongly on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.

  1. Test of a sample container for shipment of small size plutonium samples with PAT-2

    International Nuclear Information System (INIS)

    Kuhn, E.; Aigner, H.; Deron, S.

    1981-11-01

    A light-weight container for the air transport of plutonium, to be designated PAT-2, has been developed in the USA and is presently undergoing licensing. The very limited effective space for bearing plutonium required the design of small sample canisters to meet the needs of international safeguards for the shipment of plutonium samples. The applicability of a small canister to the sampling of small powder and solution samples has been tested in an intralaboratory experiment. The results of the experiment, based on the concept of pre-weighed samples, show that the tested canister can successfully be used for the sampling of small PuO2 powder samples of homogeneous source material, as well as for dried aliquots of plutonium nitrate solutions. (author)

  2. Causality in Statistical Power: Isomorphic Properties of Measurement, Research Design, Effect Size, and Sample Size

    Directory of Open Access Journals (Sweden)

    R. Eric Heidel

    2016-01-01

    Full Text Available Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power.

  3. CT dose survey in adults: what sample size for what precision?

    International Nuclear Information System (INIS)

    Taylor, Stephen; Muylem, Alain van; Howarth, Nigel; Gevenois, Pierre Alain; Tack, Denis

    2017-01-01

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample sizes ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
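
    The precision question can be sketched by resampling from a skewed dose distribution and asking how widely sample means scatter around the population median at each sample size (the lognormal parameters are illustrative, not the surveyed values):

    ```python
    # Spread of sample means, as a percentage of the median, versus n.
    import numpy as np

    rng = np.random.default_rng(42)
    population = rng.lognormal(mean=2.0, sigma=0.5, size=200_000)  # DLP-like
    median = np.median(population)

    for n in (10, 20, 100, 500, 1000):
        means = [rng.choice(population, n, replace=False).mean()
                 for _ in range(2_000)]
        lo, hi = np.percentile(means, [2.5, 97.5])
        print(f"n = {n:>4}: 95% of sample means within "
              f"[{100 * lo / median:.0f}%, {100 * hi / median:.0f}%] of the median")
    ```

    At regulatory-sized samples (n = 10-20), the interval is wide; it tightens roughly with the square root of n, matching the survey's call for about tenfold larger samples.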

  4. CT dose survey in adults: what sample size for what precision?

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)

    2017-01-15

    To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95 % confidence interval in percentage of their median (CI95/med) value was calculated for increasing sample sizes. We deduced the sample size that set a 95 % CI lower than 10 % of the median (CI95/med ≤ 10 %). Sample sizes ensuring CI95/med ≤ 10 % ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., from 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)

  5. Sample-size dependence of diversity indices and the determination of sufficient sample size in a high-diversity deep-sea environment

    OpenAIRE

    Soetaert, K.; Heip, C.H.R.

    1990-01-01

    Diversity indices, although designed for comparative purposes, often cannot be used as such, due to their sample-size dependence. It is argued here that this dependence is more pronounced in high-diversity than in low-diversity assemblages, and that indices more sensitive to rarer species require larger sample sizes to estimate diversity with reasonable precision than indices which put more weight on commoner species. This was tested for Hill's diversity numbers N0 through N∞.
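
    For concreteness, a short sketch of Hill's diversity numbers, N_q = (Σ p_i^q)^(1/(1-q)), evaluated on subsamples of increasing size from an assumed community; indices weighted toward rare species (low q) show the stronger sample-size dependence the abstract describes:

    ```python
    # Hill numbers on multinomial subsamples of an illustrative community.
    import numpy as np

    rng = np.random.default_rng(7)

    def hill(counts, q):
        p = counts[counts > 0] / counts.sum()
        if q == 1:
            return np.exp(-np.sum(p * np.log(p)))   # limit as q -> 1
        if np.isinf(q):
            return 1.0 / p.max()
        return np.sum(p ** q) ** (1.0 / (1.0 - q))

    # Community with a few common and many rare species.
    abund = 0.5 ** np.arange(1, 41)
    community = abund / abund.sum()

    for n in (50, 200, 1000, 5000):
        sample = rng.multinomial(n, community)
        print(f"n = {n:>5}: N0 = {hill(sample, 0):>4.0f}  "
              f"N1 = {hill(sample, 1):.2f}  Ninf = {hill(sample, np.inf):.2f}")
    ```

    N0 (richness) keeps climbing as rare species are finally observed, while Ninf, dominated by the commonest species, stabilizes almost immediately.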

  6. THE EFFECT OF FINANCIAL RATIOS, FIRM SIZE AND CASH FLOWS FROM OPERATING ACTIVITIES ON EARNINGS PER SHARE (AN APPLIED STUDY ON THE JORDANIAN INDUSTRIAL SECTOR)

    Directory of Open Access Journals (Sweden)

    Khalaf Taani

    2011-01-01

    Full Text Available The objective of this study is to examine the effect of accounting information on earnings per share (EPS) using five categories of financial ratios. A sample of 40 companies listed on the Amman Stock Market was selected. To measure the impact of financial ratios on EPS, multiple regression and stepwise regression models were used, taking profitability, liquidity, debt to equity, market ratio, size (derived from the firm's total assets) and cash flow from operating activities as independent variables, and EPS (earnings per share) as the dependent variable. The results show that the profitability ratio (ROE), market ratio (PBV), cash flow from operations/sales, and leverage ratio (DER) have a significant impact on earnings per share.

  7. Sample size calculation for comparing two negative binomial rates.

    Science.gov (United States)

    Zhu, Haiyuan; Lakkis, Hassan

    2014-02-10

    Negative binomial model has been increasingly used to model the count data in recent clinical trials. It is frequently chosen over Poisson model in cases of overdispersed count data that are commonly seen in clinical trials. One of the challenges of applying negative binomial model in clinical trial design is the sample size estimation. In practice, simulation methods have been frequently used for sample size estimation. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on different approaches to estimate the variance under null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate dispersion parameter and exposure time. The performance of the formula with each variation is assessed using simulations. Copyright © 2013 John Wiley & Sons, Ltd.
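
    A hedged sketch in the spirit of such explicit formulae (not necessarily the authors' exact variant): treat the variance of the log rate estimate in each arm as (1/(t·λ) + k)/n for dispersion parameter k, and solve for n per arm:

    ```python
    # Sample size per arm for a negative binomial rate-ratio test,
    # using an assumed variance form that incorporates dispersion
    # and exposure time (all numbers illustrative).
    import math
    from scipy.stats import norm

    def n_per_arm(lam0, rr, t, k, alpha=0.05, power=0.80):
        lam1 = lam0 * rr
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        v = (1 / (t * lam0) + k) + (1 / (t * lam1) + k)
        return math.ceil(z ** 2 * v / math.log(rr) ** 2)

    # Example: baseline rate 0.8/yr, rate ratio 0.75, 1-yr exposure, k = 0.4.
    print(n_per_arm(lam0=0.8, rr=0.75, t=1.0, k=0.4))
    ```

    Setting k = 0 recovers the Poisson case, which shows directly how overdispersion inflates the required sample size.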

  8. Estimation of sample size and testing power (part 5).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-02-01

    Estimation of sample size and testing power is an important component of research design. This article introduced methods for sample size and testing power estimation of difference test for quantitative and qualitative data with the single-group design, the paired design or the crossover design. To be specific, this article introduced formulas for sample size and testing power estimation of difference test for quantitative and qualitative data with the above three designs, the realization based on the formulas and the POWER procedure of SAS software and elaborated it with examples, which will benefit researchers for implementing the repetition principle.

  9. Online Stable Isotope Analysis of Dissolved Organic Carbon Size Classes Using Size Exclusion Chromatography Coupled to an Isotope Ratio Mass Spectrometer

    Digital Repository Service at National Institute of Oceanography (India)

    Malik, A.; Scheibe, A.; LokaBharathi, P.A.; Gleixner, G.

    size classes by coupling high-performance liquid chromatography (HPLC) - size exclusion chromatography (SEC) to online isotope ratio mass spectrometry (IRMS). This represents a significant methodological contribution to DOC research. The interface...

  10. Frictional behaviour of sandstone: A sample-size dependent triaxial investigation

    Science.gov (United States)

    Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus

    2017-01-01

    Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.

  11. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    the sample size decreases – a result that could be interpreted as a size effect in the order–disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order–disorder transition induction in samples of different size.

  12. Constrained statistical inference: sample-size tables for ANOVA and regression

    Directory of Open Access Journals (Sweden)

    Leonard eVanbrabant

    2015-01-01

    Full Text Available Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained, and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).

  13. Sample Size in Qualitative Interview Studies: Guided by Information Power.

    Science.gov (United States)

    Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit

    2015-11-27

    Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is "saturation." Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose the concept "information power" to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning and during data collection of a qualitative study is discussed. © The Author(s) 2015.

  14. Dilution correction equation revisited: The impact of stream slope, relief ratio and area size of basin on geochemical anomalies

    Science.gov (United States)

    Shahrestani, Shahed; Mokhtari, Ahmad Reza

    2017-04-01

    Stream sediment sampling is a well-known technique used to discover geochemical anomalies in regional exploration. In the upstream catchment basin of a stream sediment sample, geochemical signals originating from probable mineralization can be diluted by mixing with weathering material coming from non-anomalous sources. Hawkes's equation (1976) was an attempt to overcome this problem, using the area size of the catchment basin to remove dilution from geochemical anomalies. However, the metal content of a stream sediment sample is linked to several geomorphological, sedimentological, climatic and geological factors; area size by itself is not a comprehensive representative of the dilution taking place in a catchment basin. The aim of the present study was to consider a number of geomorphological factors affecting sediment supply, transport processes, storage and, in general, the geochemistry of stream sediments, and to incorporate them in the dilution correction procedure. This was organized through the concepts of sediment yield and sediment delivery ratio, linking such characteristics to the dilution phenomenon in a catchment basin. Main stream slope (MSS), relief ratio (RR) and area size (Aa) of the catchment basin were selected as important proxies (PSDRa) for the sediment delivery ratio and then entered into Hawkes's equation. The Hawkes's and new equations were then applied to the stream sediment dataset collected from the Takhte-Soleyman district, west of Iran, for Au, As and Sb values. A number of large and small gold, antimony and arsenic mineral occurrences were used to evaluate the results. Anomaly maps based on the new equations showed improved anomaly delineation, taking the spatial distribution of mineral deposits into account, and could present new catchment basins containing known mineralization as the anomaly class, especially in the case of Au and As. Four catchment basins having Au and As
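
    For reference, a sketch of the dilution relation as commonly cited from Hawkes (1976), Mm·A = Ma·a + Mb·(A - a), plus a purely schematic proxy multiplier standing in for the paper's MSS/RR adjustment (the weighting form is our assumption, not the fitted equation):

    ```python
    # Hawkes-style dilution correction for a stream sediment catchment.
    def hawkes_productivity(mm, mb, area, anom_area=1.0):
        """Anomalous-source term Ma*a (concentration x area):
        mm = measured concentration, mb = background, area in km^2,
        with an assumed unit anomalous source area."""
        return mm * area - mb * (area - anom_area)

    def proxy_weighted(mm, mb, area, mss, rr, w=1.0):
        """Schematic only: scale the Hawkes term by main stream slope
        (mss) and relief ratio (rr) proxies -- an assumed form, not
        the paper's fitted equation."""
        return hawkes_productivity(mm, mb, area) * (1.0 + w * mss * rr)

    print(hawkes_productivity(mm=40.0, mb=12.0, area=18.0))      # ppm*km^2
    print(proxy_weighted(mm=40.0, mb=12.0, area=18.0, mss=0.05, rr=0.3))
    ```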

  15. On the Importance of Accounting for Competing Risks in Pediatric Brain Cancer: II. Regression Modeling and Sample Size

    International Nuclear Information System (INIS)

    Tai, Bee-Choo; Grundy, Richard; Machin, David

    2011-01-01

    Purpose: To accurately model the cumulative need for radiotherapy in trials designed to delay or avoid irradiation among children with malignant brain tumor, it is crucial to account for competing events and evaluate how each contributes to the timing of irradiation. An appropriate choice of statistical model is also important for adequate determination of sample size. Methods and Materials: We describe the statistical modeling of competing events (A, radiotherapy after progression; B, no radiotherapy after progression; and C, elective radiotherapy) using proportional cause-specific and subdistribution hazard functions. The procedures of sample size estimation based on each method are outlined. These are illustrated by use of data comparing children with ependymoma and other malignant brain tumors. The results from these two approaches are compared. Results: The cause-specific hazard analysis showed a reduction in hazards among infants with ependymoma for all event types, including Event A (adjusted cause-specific hazard ratio, 0.76; 95% confidence interval, 0.45-1.28). Conversely, the subdistribution hazard analysis suggested an increase in hazard for Event A (adjusted subdistribution hazard ratio, 1.35; 95% confidence interval, 0.80-2.30), but the reduction in hazards for Events B and C remained. Analysis based on subdistribution hazard requires a larger sample size than the cause-specific hazard approach. Conclusions: Notable differences in effect estimates and anticipated sample size were observed between methods when the main event showed a beneficial effect whereas the competing events showed an adverse effect on the cumulative incidence. The subdistribution hazard is the most appropriate for modeling treatment when its effects on both the main and competing events are of interest.

  16. Conservative Sample Size Determination for Repeated Measures Analysis of Covariance.

    Science.gov (United States)

    Morgan, Timothy M; Case, L Douglas

    2013-07-05

    In the design of a randomized clinical trial with one pre and multiple post randomized assessments of the outcome variable, one needs to account for the repeated measures in determining the appropriate sample size. Unfortunately, one seldom has a good estimate of the variance of the outcome measure, let alone the correlations among the measurements over time. We show how sample sizes can be calculated by making conservative assumptions regarding the correlations for a variety of covariance structures. The most conservative choice for the correlation depends on the covariance structure and the number of repeated measures. In the absence of good estimates of the correlations, the sample size is often based on a two-sample t-test, making the 'ultra' conservative and unrealistic assumption that there are zero correlations between the baseline and follow-up measures while at the same time assuming there are perfect correlations between the follow-up measures. Compared to the case of taking a single measurement, substantial savings in sample size can be realized by accounting for the repeated measures, even with very conservative assumptions regarding the parameters of the assumed correlation matrix. Assuming compound symmetry, the sample size from the two-sample t-test calculation can be reduced at least 44%, 56%, and 61% for repeated measures analysis of covariance by taking 2, 3, and 4 follow-up measures, respectively. The results offer a rational basis for determining a fairly conservative, yet efficient, sample size for clinical trials with repeated measures and a baseline value.
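
    The quoted savings can be checked numerically: under compound symmetry with correlation ρ, a variance factor of the form f = (1 + (k-1)ρ)/k - ρ² for the baseline-adjusted mean of k follow-ups (relative to a two-sample t-test on a single measure) reproduces the 44%, 56% and 61% reductions when maximized over ρ:

    ```python
    # Conservative (worst-case-rho) variance factor for repeated measures
    # ANCOVA under compound symmetry, for k = 2, 3, 4 follow-up measures.
    import numpy as np

    rhos = np.linspace(0, 1, 100_001)
    for k in (2, 3, 4):
        f = (1 + (k - 1) * rhos) / k - rhos ** 2
        i = np.argmax(f)                      # conservative choice of rho
        print(f"k = {k}: worst-case rho = {rhos[i]:.3f}, factor = {f[i]:.4f}, "
              f"sample-size reduction = {100 * (1 - f[i]):.0f}%")
    ```

    The maximizing ρ is the conservative assumption the abstract refers to: even at that worst case, the repeated-measures design needs far fewer subjects than the single-measurement t-test.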

  17. Direct uranium isotope ratio analysis of single micrometer-sized glass particles

    OpenAIRE

    Kappel, Stefanie; Boulyga, Sergei F.; Prohaska, Thomas

    2012-01-01

    We present the application of nanosecond laser ablation (LA) coupled to a ‘Nu Plasma HR’ multi collector inductively coupled plasma mass spectrometer (MC-ICP-MS) for the direct analysis of U isotope ratios in single, 10–20 μm-sized, U-doped glass particles. Method development included studies with respect to (1) external correction of the measured U isotope ratios in glass particles, (2) the applied laser ablation carrier gas (i.e. Ar versus He) and (3) the accurate determination of lower abu...

  18. The Power of Low Back Pain Trials: A Systematic Review of Power, Sample Size, and Reporting of Sample Size Calculations Over Time, in Trials Published Between 1980 and 2012.

    Science.gov (United States)

    Froud, Robert; Rajendran, Dévan; Patel, Shilpa; Bright, Philip; Bjørkli, Tom; Eldridge, Sandra; Buchbinder, Rachelle; Underwood, Martin

    2017-06-01

    A systematic review of nonspecific low back pain trials published between 1980 and 2012. To explore what proportion of trials have been powered to detect different bands of effect size; whether there is evidence that sample size in low back pain trials has been increasing; what proportion of trial reports include a sample size calculation; and whether the likelihood of reporting sample size calculations has increased. Clinical trials should have a sample size sufficient to detect a minimally important difference for a given power and type I error rate. An underpowered trial is one within which the probability of type II error is too high. Meta-analyses do not mitigate underpowered trials. Reviewers independently abstracted data on sample size at point of analysis, whether a sample size calculation was reported, and year of publication. Descriptive analyses were used to explore the ability to detect effect sizes, and regression analyses to explore the relationship between sample size, or the reporting of sample size calculations, and time. We included 383 trials. One-third were powered to detect a standardized mean difference of less than 0.5, and 5% were powered to detect less than 0.3. The average sample size was 153 people, which increased only slightly (∼4 people/yr) from 1980 to 2000, and declined slightly (∼4.5 people/yr) from 2005 to 2011. The power of low back pain trials and the reporting of sample size calculations may need to be increased. It may be justifiable to power a trial to detect only large effects in the case of novel interventions. Level of Evidence: 3.

  19. Carbon Isotopic Ratios of Amino Acids in Stardust-Returned Samples

    Science.gov (United States)

    Elsila, Jamie E.; Glavin, Daniel P.; Dworkin, Jason P.

    2009-01-01

    NASA's Stardust spacecraft returned to Earth samples from comet 81P/Wild 2 in January 2006. Preliminary examinations revealed the presence of a suite of organic compounds including several amines and amino acids, but the origin of these compounds could not be identified. Here, we present the carbon isotopic ratios of glycine and ε-aminocaproic acid (EACA), the two most abundant amino acids observed, in Stardust-returned foil samples, measured by gas chromatography-combustion-isotope ratio mass spectrometry coupled with quadrupole mass spectrometry (GC-QMS/IRMS).

  20. A test of the mean density approximation for Lennard-Jones mixtures with large size ratios

    International Nuclear Information System (INIS)

    Ely, J.F.

    1986-01-01

    The mean density approximation for mixture radial distribution functions plays a central role in modern corresponding-states theories. This approximation is reasonably accurate for systems that do not differ widely in size and energy ratios and which are nearly equimolar. As the size ratio increases, however, or if one approaches infinite dilution of one of the components, the approximation becomes progressively worse, especially for the small-molecule pair. In an attempt to better understand and improve this approximation, isothermal molecular dynamics simulations have been performed on a series of Lennard-Jones mixtures. Thermodynamic properties, including the mixture radial distribution functions, have been obtained at seven compositions ranging from 5 to 95 mol%. In all cases the size ratio was fixed at two, and three energy ratios were investigated: ε22/ε11 = 0.5, 1.0, and 1.5. The results of the simulations are compared with the mean density approximation, and a modification to integrals evaluated with the mean density approximation is proposed.

  1. Sex allocation and secondary sex ratio in Cuban boa ( Chilabothrus angulifer): mother's body size affects the ratio between sons and daughters

    Science.gov (United States)

    Frynta, Daniel; Vejvodová, Tereza; Šimková, Olga

    2016-06-01

    Secondary sex ratios of animals with genetically determined sex may deviate considerably from equality. These deviations may be attributed to several proximate and ultimate factors. Sex ratio theory explains some of them as strategic decisions of mothers improving their fitness by selective investment in sons or daughters; e.g., the local resource competition hypothesis (LRC) suggests that philopatric females tend to produce litters with male-biased sex ratios to avoid future competition with their daughters. Until now, little attention has been paid to examining the predictions of sex ratio theory in snakes possessing genetic sex determination and exhibiting large variance in the allocation of maternal investment. The Cuban boa is an endemic viviparous snake producing large-bodied newborns (~200 g). Extremely high maternal investment in each offspring increases the importance of sex allocation. In a captive colony, we collected breeding records of 42 mothers, 62 litters and 306 newborns and examined the secondary sex ratios (SR) and sexual size dimorphism (SSD) of newborns. None of the examined morphometric traits of neonates was sexually dimorphic. The sex ratio was slightly male biased (174 males versus 132 females), and litter sex ratio decreased significantly with female snout-vent length. We interpret this relationship as additional support for LRC, as competition between mothers and daughters increases with the similarity of body sizes between competing snakes.

  2. Sample size choices for XRCT scanning of highly unsaturated soil mixtures

    Directory of Open Access Journals (Sweden)

    Smith Jonathan C.

    2016-01-01

    Full Text Available Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for the scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.

  3. Decision Support on Small size Passive Samples

    Directory of Open Access Journals (Sweden)

    Vladimir Popukaylo

    2018-05-01

    Full Text Available A construction technique of adequate mathematical models for small size passive samples, in conditions when classical probabilistic-statis\\-tical methods do not allow obtaining valid conclusions was developed.

  4. Measurement of stable isotope ratio of organic carbon in water samples

    International Nuclear Information System (INIS)

    Fujii, Toshihiro; Otsuki, Akira

    1977-01-01

    A new method for the measurement of stable isotope ratios was investigated and applied to isotope ratio measurements of organic carbon in water samples. A few river water samples from Tsuchiura city were tested. After wet oxidation of the organic carbon to carbon dioxide in a sealed ampoule, the isotope ratios were determined with a gas chromatograph-quadrupole mass spectrometer combined with a total organic carbon analyser, under dynamic conditions. The GC-MS was equipped with a multiple ion detector-digital integrator system. The ion intensities at m/e 44 and 45 were measured simultaneously at a switching rate of 1 ms. Measurements with carbon dioxide obtained from sodium carbonate (53 μg) gave isotope ratios with a coefficient of variation of 0.62%. However, the coefficients of variation obtained from organic carbon in natural water samples were 2 to 3 times as high as that from sodium carbonate. This method is simple and rapid and may be applied to various fields, especially in biology and medicine. (auth.)

  5. Determination of uranium and its isotopic ratios in environmental samples

    International Nuclear Information System (INIS)

    Flues Szeles, M.S.M.

    1990-01-01

    A method for the determination of uranium and its isotopic ratios (235U/238U and 234U/238U) is established in the present work. The method can be applied in environmental monitoring programs of uranium enrichment facilities. The proposed method is based on the alpha spectrometry technique, applied after purification of the sample using an ion-exchange resin. The total yield achieved was (91 ± 5)%, with a precision of 5%, an accuracy of 8% and a lower limit of detection of 7.9 x 10^-4 Bq. The determination of uranium in samples containing high concentrations of iron, an interfering element present in environmental samples, particularly in soil and sediment, was also studied. The results obtained using artificial samples containing iron and uranium in the ratio 1000:1 were considered satisfactory. (author)

  6. Simple and multiple linear regression: sample size considerations.

    Science.gov (United States)

    Hanley, James A

    2016-11-01

    The suggested "two subjects per variable" (2SPV) rule of thumb in the Austin and Steyerberg article is a chance to bring out some long-established and quite intuitive sample size considerations for both simple and multiple linear regression. This article distinguishes two of the major uses of regression models that imply very different sample size considerations, neither served well by the 2SPV rule. The first is etiological research, which contrasts mean Y levels at differing "exposure" (X) values and thus tends to focus on a single regression coefficient, possibly adjusted for confounders. The second research genre guides clinical practice. It addresses Y levels for individuals with different covariate patterns or "profiles." It focuses on the profile-specific (mean) Y levels themselves, estimating them via linear compounds of regression coefficients and covariates. By drawing on long-established closed-form variance formulae that lie beneath the standard errors in multiple regression, and by rearranging them for heuristic purposes, one arrives at quite intuitive sample size considerations for both research genres. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Sex Ratio And Size At First Maturity Of Blue Swimming Crab (Portunus pelagicus Salemo Island Pangkep Regency

    Directory of Open Access Journals (Sweden)

    Muh. Saleh Nurdin

    2016-03-01

    Full Text Available Blue swimming crab (Portunus pelagicus) is an economically valuable fisheries commodity due to its high demand and the jobs it creates for fishermen. Because of this high demand, the blue swimming crab is heavily exploited around Salemo Island. This study aimed at comparing the sex ratio and the size at first maturity of blue swimming crabs caught in mangrove ecosystems, coral reefs, and seagrass. The sex ratio was analyzed using a chi-square test and the size at first maturity was analyzed using the Spearman-Karber formula. The results showed that the sex ratio of male and female crabs caught in every ecosystem is balanced. The size at first maturity of blue swimming crabs caught in mangrove, seagrass and coral reef ecosystems was 81.08 mm, 102.36 mm and 102.87 mm in width for males, and 94.54 mm, 83.35 mm and 98.31 mm in width for females. With reference to government regulations, male blue swimming crabs caught in the coral reef and seagrass ecosystems had not yet reached the size at first maturity at which capture is allowed. Keywords: blue swimming crab, sex ratio, size at first maturity, Salemo Island
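
    A minimal sketch of the chi-square test for a balanced 1:1 sex ratio used above; the male/female counts are hypothetical stand-ins, not the paper's data:

        from scipy.stats import chisquare

        # H0: a balanced 1:1 sex ratio in one ecosystem (counts invented)
        males, females = 58, 47
        stat, p = chisquare([males, females])  # expected counts default to equal
        print(f"chi-square = {stat:.2f}, p = {p:.3f}")
        print("balanced" if p >= 0.05 else "deviates from 1:1")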

  8. Isotope ratios of 240Pu/239Pu in soil samples from different areas

    International Nuclear Information System (INIS)

    Muramatsu, Yasuyuki; Yoshida, Satoshi; Yamazaki, Shinnosuke

    2003-01-01

    Plutonium concentrations and ²⁴⁰Pu/²³⁹Pu atom ratios in soil samples from Japan and other areas of the world (including IAEA standard reference materials) were determined by ICP-MS. The range of ²⁴⁰Pu/²³⁹Pu atom ratios observed in 21 Japanese soil samples was 0.155 - 0.194 and the average was 0.180 ± 0.011, which is comparable to the global fallout value. A low ratio of about 0.05, which derives from bomb plutonium, was found in samples from Nishiyama (Nagasaki) and Mururoa Atoll (IAEA-368), while a high ratio of about 0.31 was found in a sample from Bikini Atoll (Marshall Islands). The ratio for Irish Sea sediment (IAEA-135) was 0.21, higher than the global fallout value, suggesting the influence of contamination from the Sellafield facility. The ²⁴⁰Pu/²³⁹Pu atom ratios in soils from the Chernobyl area were determined, and the ratio was found to be very high (about 0.4), indicating the high burn-up grade of the reactor fuel. These results show that the ²⁴⁰Pu/²³⁹Pu ratio can be used as a fingerprint to identify the source of the contamination. (author)

  9. Determination of a representative volume element based on the variability of mechanical properties with sample size in bread.

    Science.gov (United States)

    Ramírez, Cristian; Young, Ashley; James, Bryony; Aguilera, José M

    2010-10-01

    Quantitative analysis of food structure is commonly obtained by image analysis of a small portion of the material, which may not be representative of the whole sample. In order to quantify structural parameters (air cells) of 2 types of bread (bread and bagel), the concept of representative volume element (RVE) was employed. The RVE for bread, bagel, and gelatin gel (used as control) was obtained from the relationship between sample size and the coefficient of variation, calculated from the apparent Young's modulus measured on 25 replicates. The RVE was reached when the coefficient of variation for different sample sizes converged to a constant value. In the 2 types of bread tested, the coefficient of variation tended to decrease as the sample size increased, while in the homogeneous gelatin gel it remained constant at around 2.3% to 2.4%. The RVE turned out to be cubes with sides of 45 mm for bread, 20 mm for bagels, and 10 mm for gelatin gel (the smallest sample tested). Quantitative image analysis as well as visual observation demonstrated that bread presented the largest dispersion of air-cell sizes. Moreover, both the ratio of maximum air-cell area to image area and of maximum air-cell height to image height were greater for bread (values of 0.05 and 0.30, respectively) than for bagels (0.03 and 0.20, respectively). Therefore, the size and the size variation of air cells present in the structure determined the size of the RVE. It was concluded that the RVE is highly dependent on the heterogeneity of the structure of the types of baked products.
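
    The RVE criterion above (the coefficient of variation of the apparent Young's modulus levelling off as sample size grows) can be sketched numerically; the data here are synthetic, with dispersion simply assumed to shrink with sample size:

        import numpy as np

        rng = np.random.default_rng(0)

        # 25 replicate modulus measurements per candidate cube size (mm);
        # the dispersion model below is an assumption for illustration.
        for side in (10, 20, 30, 45, 60):
            moduli = rng.normal(loc=1.0, scale=0.25 * 10 / side, size=25)
            cv = 100 * moduli.std(ddof=1) / moduli.mean()
            print(f"side {side:3d} mm -> CV = {cv:5.2f}%")
        # the RVE is the smallest size beyond which the CV stays ~constant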

  10. The attention-weighted sample-size model of visual short-term memory

    DEFF Research Database (Denmark)

    Smith, Philip L.; Lilburn, Simon D.; Corbett, Elaine A.

    2016-01-01

    …exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items...

  11. Breaking Free of Sample Size Dogma to Perform Innovative Translational Research

    Science.gov (United States)

    Bacchetti, Peter; Deeks, Steven G.; McCune, Joseph M.

    2011-01-01

    Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows that small sample sizes for such research can produce more projected scientific value per dollar spent than larger sample sizes. Renouncing false dogma about sample size would remove a serious barrier to innovation and translation. PMID:21677197
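
    The diminishing-returns point can be made concrete with a standard power curve; a sketch using statsmodels, assuming a two-sample t-test and a medium effect size (d = 0.5), neither assumption taken from the article:

        from statsmodels.stats.power import TTestIndPower

        analysis = TTestIndPower()
        for n in (10, 20, 40, 80, 160, 320):
            power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
            print(f"n per group = {n:3d}: power = {power:.2f}")
        # each doubling of n buys progressively less additional power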

  12. Effect of limestone particle size and calcium to non-phytate phosphorus ratio on true ileal calcium digestibility of limestone for broiler chickens.

    Science.gov (United States)

    Anwar, M N; Ravindran, V; Morel, P C H; Ravindran, G; Cowieson, A J

    2016-10-01

    The purpose of this study was to determine the effect of limestone particle size and calcium (Ca) to non-phytate phosphorus (P) ratio on the true ileal Ca digestibility of limestone for broiler chickens. A limestone sample was passed through a set of sieves and separated into fine and coarse fractions. The digestibility of Ca was calculated using the indicator method and corrected for basal endogenous losses to determine the true Ca digestibility. The basal ileal endogenous Ca losses were determined to be 127 mg/kg of dry matter intake. Increasing Ca:non-phytate P ratios reduced the true Ca digestibility of limestone. The true Ca digestibility coefficients of limestone with Ca:non-phytate P ratios of 1.5, 2.0 and 2.5 were 0.65, 0.57 and 0.49, respectively. Particle size of limestone had a marked effect on the Ca digestibility, with the digestibility being higher in coarse particles (0.71 vs. 0.43).
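
    The endogenous-loss correction described above amounts to simple arithmetic; a hedged sketch (the intake and ileal-flow figures are hypothetical, only the 127 mg/kg DMI endogenous loss comes from the abstract):

        def true_ca_digestibility(ca_intake, ca_ileal, ca_endogenous=127.0):
            """True ileal Ca digestibility coefficient: the apparent value
            corrected for basal endogenous losses. All inputs are in mg per
            kg of dry matter intake."""
            apparent = (ca_intake - ca_ileal) / ca_intake
            return apparent + ca_endogenous / ca_intake

        # hypothetical: 8000 mg Ca ingested, 3500 mg reaching the ileum
        print(f"true digestibility = {true_ca_digestibility(8000, 3500):.2f}")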

  13. Sample size re-assessment leading to a raised sample size does not inflate type I error rate under mild conditions.

    Science.gov (United States)

    Broberg, Per

    2013-07-19

    One major concern with adaptive designs, such as sample size adjustable designs, has been the fear of inflating the type I error rate. It is, however, proven in (Stat Med 23:1023-1038, 2004) that when observations follow a normal distribution and the interim result shows promise, meaning that the conditional power exceeds 50%, the type I error rate is protected. This bound and the distributional assumptions may seem to impose undesirable restrictions on the use of these designs. In (Stat Med 30:3267-3284, 2011) the possibility of going below 50% is explored, and a region that permits an increased sample size without inflation is defined in terms of the conditional power at the interim. A criterion which is implicit in (Stat Med 30:3267-3284, 2011) is derived by elementary methods and expressed in terms of the test statistic at the interim to simplify practical use. Mathematical and computational details concerning this criterion are exhibited. Under very general conditions the type I error rate is preserved under sample size adjustable schemes that permit a raise. The main result states that for normally distributed observations, raising the sample size when the result looks promising, where the definition of promising depends on the amount of knowledge gathered so far, guarantees the protection of the type I error rate. Also, in the many situations where the test statistic approximately follows a normal law, the deviation from the main result remains negligible. This article provides details regarding the Weibull and binomial distributions and indicates how one may approach these distributions within the current setting. There is thus reason to consider such designs more often, since they offer a means of adjusting an important design feature at little or no cost in terms of error rate.
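
    The promising-zone reasoning revolves around the conditional power at the interim; the sketch below computes it under the current-trend assumption for a one-sided z-test with known variance (a textbook simplification, not the paper's exact criterion):

        from scipy.stats import norm

        def conditional_power(z1, n1, n, alpha=0.025):
            """Conditional power given interim statistic z1 at n1 of n
            observations, extrapolating the interim effect estimate."""
            delta_hat = z1 / n1 ** 0.5          # effect implied by the interim
            drift = delta_hat * (n - n1)        # expected further accumulation
            zcrit = norm.ppf(1 - alpha)
            return 1 - norm.cdf((zcrit * n ** 0.5 - z1 * n1 ** 0.5 - drift)
                                / (n - n1) ** 0.5)

        # e.g. an interim z of 1.2 halfway through 200 observations
        print(f"CP = {conditional_power(z1=1.2, n1=100, n=200):.2f}")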

  14. Failure Characteristics of Granite Influenced by Sample Height-to-Width Ratios and Intermediate Principal Stress Under True-Triaxial Unloading Conditions

    Science.gov (United States)

    Li, Xibing; Feng, Fan; Li, Diyuan; Du, Kun; Ranjith, P. G.; Rostami, Jamal

    2018-05-01

    The failure modes and peak unloading strength of a typical hard rock, Miluo granite, with particular attention to the sample height-to-width ratio (between 2 and 0.5) and the intermediate principal stress, were investigated using a true-triaxial test system. The experimental results indicate that both the sample height-to-width ratio and the intermediate principal stress have an impact on the failure modes, peak strength and severity of rockburst in hard rock under true-triaxial unloading conditions. For longer rectangular specimens, the transition of failure mode from shear to slabbing requires higher intermediate principal stress. With decreasing sample height-to-width ratio, slabbing failure is more likely to occur under lower intermediate principal stress. For the same intermediate principal stress, the peak unloading strength increases monotonically as the sample height-to-width ratio decreases. However, the peak unloading strength as a function of intermediate principal stress for the different types of rock samples (with height-to-width ratios of 2, 1 and 0.5) all present a pattern of initial increase followed by a subsequent decrease. The curves fitted to octahedral shear stress as a function of mean effective stress also validate the applicability of the Mogi-Coulomb failure criterion for all considered rock sizes under true-triaxial unloading conditions, and the corresponding cohesion C and internal friction angle φ are calculated. The severity of strainburst of granite depends on the sample height-to-width ratio and the intermediate principal stress. Therefore, different supporting strategies are recommended in deep tunneling projects and mining activities. Moreover, the comparison of test results for different σ2/σ3 also reveals the limited influence of the minimum principal stress on the failure characteristics of granite during the true-triaxial unloading process.

  15. In vitro rumen feed degradability assessed with DaisyII and batch culture: effect of sample size

    Directory of Open Access Journals (Sweden)

    Stefano Schiavon

    2010-01-01

    Full Text Available In vitro degradability with the DaisyII (D) equipment is commonly performed with 0.5 g of feed sample in each filter bag. The literature reports that a reduction of the ratio of sample size to bag surface could facilitate the release of soluble or fine particulate matter. A reduction of sample size to 0.25 g could improve the correlation between the measurements provided by D and the conventional batch culture (BC). This hypothesis was screened by analysing the results of 2 trials. In trial 1, 7 feeds were incubated for 48 h with rumen fluid (3 runs × 4 replications) both with D (0.5 g/bag) and BC; the regressions between the mean values provided for the various feeds in each run by the 2 methods, for NDF degradability (NDFd) and in vitro true DM degradability (IVTDMD), had R² of 0.75 and 0.92 and RSD of 10.9 and 4.8%, respectively. In trial 2, 4 feeds were incubated (2 runs × 8 replications) with D (0.25 g/bag) and BC; the corresponding regressions for NDFd and IVTDMD showed R² of 0.94 and 0.98 and RSD of 3.0 and 1.3%, respectively. A sample size of 0.25 g improved the precision of the measurements obtained with D.

  16. Queen-worker caste ratio depends on colony size in the pharaoh ant (Monomorium pharaonis)

    DEFF Research Database (Denmark)

    Schmidt, Anna Mosegaard; Linksvayer, Timothy Arnold; Boomsma, Jacobus Jan

    2011-01-01

    The success of an ant colony depends on the simultaneous presence of reproducing queens and nonreproducing workers in a ratio that will maximize colony growth and reproduction. Despite its presumably crucial role, queen–worker caste ratios (the ratio of adult queens to workers) and the factors affecting this variable remain scarcely studied. Maintaining polygynous pharaoh ant (Monomorium pharaonis) colonies in the laboratory has provided us with the opportunity to experimentally manipulate colony size, one of the key factors that can be expected to affect colony-level queen–worker caste ratios... species with budding colonies may adaptively adjust caste ratios to ensure rapid growth...

  17. Sample Size and Saturation in PhD Studies Using Qualitative Interviews

    Directory of Open Access Journals (Sweden)

    Mark Mason

    2010-08-01

    Full Text Available A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, with qualitative interviews as the method of data collection, was taken from theses.com and content analysed for sample sizes. Five hundred and sixty studies were identified that fitted the inclusion criteria. Results showed that the mean sample size was 31; however, the distribution was non-random, with a statistically significant proportion of studies presenting sample sizes that were multiples of ten. These results are discussed in relation to saturation. They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research. URN: urn:nbn:de:0114-fqs100387
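
    The "multiples of ten" excess lends itself to a one-line significance check: under chance, about 10% of sample sizes should end in zero. A sketch with a hypothetical count (the paper reports only that the excess was significant):

        from scipy.stats import binomtest

        n_studies, n_multiples = 560, 112   # count of multiples is invented
        result = binomtest(n_multiples, n_studies, p=0.10,
                           alternative="greater")
        print(f"observed {n_multiples / n_studies:.0%}, "
              f"p = {result.pvalue:.2e}")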

  18. Determination of the Isotope Ratio for Metal Samples Using a Laser Ablation/Ionization Time-of-flight Mass Spectrometry

    International Nuclear Information System (INIS)

    Song, Kyu Seok; Cha, Hyung Ki; Kim, Duk Hyeon; Min, Ki Hyun

    2004-01-01

    Laser ablation/ionization time-of-flight mass spectrometry was applied to the isotopic analysis of solid samples using a home-made instrument. The technique is convenient for solid sample analysis due to the one-step process of vaporization and ionization of the samples. The analyzed samples were lead, cadmium, molybdenum, and ytterbium. To optimize the analytical conditions of the technique, several parameters, such as laser energy, laser wavelength, size of the laser beam on the sample surface, and the high voltages applied to the ion source electrodes, were varied. Low laser energy was necessary to obtain optimal mass resolution. The 532 nm light generated mass spectra with a higher signal-to-noise ratio than the 355 nm light. The best mass resolution obtained in the present study is ∼1,500, for ytterbium.

  19. Lead isotope ratio analysis of bullet samples by using quadrupole ICP-MS

    International Nuclear Information System (INIS)

    Tamura, Shu-ichi; Hokura, Akiko; Nakai, Izumi; Oishi, Masahiro

    2006-01-01

    The measurement conditions for precise analysis of lead stable isotope ratios using an ICP-MS equipped with a quadrupole mass spectrometer were studied in order to apply the technique to the forensic identification of bullet samples. The relative standard deviations obtained for the ²⁰⁸Pb/²⁰⁶Pb, ²⁰⁷Pb/²⁰⁶Pb and ²⁰⁴Pb/²⁰⁶Pb ratios were lower than 0.2% after optimization of the analytical conditions, including an optimum lead concentration of the sample solution of about 70 ppb and an integration time of 15 s per m/z. This method was applied to the analysis of lead in bullets for rifles and handguns; the stable isotope ratio of lead was found to be suitable for the identification of bullets. This study has demonstrated that lead isotope ratios measured by using a quadrupole ICP-MS are useful for the practical analysis of bullet samples in forensic science. (author)

  20. Size ratio correlates with intracranial aneurysm rupture status: a prospective study.

    Science.gov (United States)

    Rahman, Maryam; Smietana, Janel; Hauck, Erik; Hoh, Brian; Hopkins, Nick; Siddiqui, Adnan; Levy, Elad I; Meng, Hui; Mocco, J

    2010-05-01

    The prediction of intracranial aneurysm (IA) rupture risk has generated significant controversy. The findings of the International Study of Unruptured Intracranial Aneurysms (ISUIA) that small anterior circulation aneurysms rarely rupture conflict with the observation that most ruptured IAs are small. These discrepancies have led to the search for better aneurysm parameters to predict rupture. We previously reported that size ratio (SR), IA size divided by parent vessel diameter, correlated strongly with IA rupture status (ruptured versus unruptured). Those data were all collected retrospectively from 3-dimensional angiographic images. Therefore, we performed a blinded prospective collection and evaluation of SR data from 2-dimensional angiographic images for a consecutive series of patients with ruptured and unruptured IAs. We prospectively enrolled 40 consecutive patients presenting to a single institution either with a ruptured IA or for first-time evaluation of an incidental IA. Blinded technologists acquired all measurements from 2-dimensional angiographic images. Aneurysm rupture status, location, IA maximum size, and parent vessel diameter were documented. The SR was calculated by dividing the aneurysm size (mm) by the average parent vessel size (mm). A 2-tailed Mann-Whitney test was performed to assess statistical significance between ruptured and unruptured groups. Fisher exact test was used to compare medical comorbidities between the ruptured and unruptured groups. Significant differences between the 2 groups were subsequently tested with logistic regression. SEs and probability values are reported. Forty consecutive patients with 24 unruptured and 16 ruptured aneurysms met the inclusion criteria. No significant differences were found in age, gender, smoking status, or medical comorbidities between ruptured and unruptured groups. The average maximum size of the unruptured IAs (6.18 ± 0.60 mm) was significantly smaller than that of the ruptured IAs (7.91 ± 0.47 mm; P=0.03), and the unruptured group had...
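
    The two core computations above (SR per patient and the two-tailed Mann-Whitney comparison) can be sketched as follows; all measurements are invented for illustration:

        import numpy as np
        from scipy.stats import mannwhitneyu

        # hypothetical maximum aneurysm sizes and parent vessel diameters (mm)
        size_unrupt = np.array([5.2, 6.8, 4.9, 7.1, 6.0])
        vessel_unrupt = np.array([3.1, 3.4, 2.9, 3.3, 3.0])
        size_rupt = np.array([7.5, 8.2, 7.9, 8.8, 7.4])
        vessel_rupt = np.array([2.6, 2.8, 2.5, 2.9, 2.7])

        sr_unrupt = size_unrupt / vessel_unrupt   # SR = IA size / vessel size
        sr_rupt = size_rupt / vessel_rupt

        stat, p = mannwhitneyu(sr_rupt, sr_unrupt, alternative="two-sided")
        print(f"median SR: ruptured {np.median(sr_rupt):.2f} vs "
              f"unruptured {np.median(sr_unrupt):.2f}, p = {p:.3f}")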

  1. Effect size measures in a two-independent-samples case with nonnormal and nonhomogeneous data.

    Science.gov (United States)

    Li, Johnson Ching-Hong

    2016-12-01

    In psychological science, the "new statistics" refer to the new statistical practices that focus on effect size (ES) evaluation instead of conventional null-hypothesis significance testing (Cumming, Psychological Science, 25, 7-29, 2014). In a two-independent-samples scenario, Cohen's (1988) standardized mean difference (d) is the most popular ES, but its accuracy relies on two assumptions: normality and homogeneity of variances. Five other ESs - the unscaled robust d (d_r*; Hogarty & Kromrey, 2001), scaled robust d (d_r; Algina, Keselman, & Penfield, Psychological Methods, 10, 317-328, 2005), point-biserial correlation (r_pb; McGrath & Meyer, Psychological Methods, 11, 386-401, 2006), common-language ES (CL; Cliff, Psychological Bulletin, 114, 494-509, 1993), and a nonparametric estimator for CL (A_w; Ruscio, Psychological Methods, 13, 19-30, 2008) - may be robust to violations of these assumptions, but no study has systematically evaluated their performance. Thus, in this simulation study the performance of these six ESs was examined across five factors: data distribution, sample, base rate, variance ratio, and sample size. The results showed that A_w and d_r were generally robust to these violations, and A_w slightly outperformed d_r. Implications for the use of A_w and d_r in real-world research are discussed.
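
    Two of the effect sizes compared above are easy to compute directly; a sketch of Cohen's d (pooled SD) and of the nonparametric common-language effect size (the probability that a random score from one group exceeds one from the other, ties counted half), with invented scores:

        import numpy as np

        def cohens_d(x, y):
            """Standardized mean difference with a pooled SD (Cohen, 1988)."""
            nx, ny = len(x), len(y)
            pooled_var = ((nx - 1) * np.var(x, ddof=1) +
                          (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
            return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

        def prob_superiority(x, y):
            """Nonparametric common-language effect size: P(x > y),
            counting ties as one half."""
            x, y = np.asarray(x), np.asarray(y)
            greater = (x[:, None] > y[None, :]).sum()
            ties = (x[:, None] == y[None, :]).sum()
            return (greater + 0.5 * ties) / (len(x) * len(y))

        x = [4.1, 5.3, 6.0, 5.5, 4.8, 6.2]
        y = [3.9, 4.2, 5.1, 3.5, 4.4, 4.0]
        print(f"d = {cohens_d(x, y):.2f}, A = {prob_superiority(x, y):.2f}")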

  2. Measures of effect size for chi-squared and likelihood-ratio goodness-of-fit tests.

    Science.gov (United States)

    Johnston, Janis E; Berry, Kenneth J; Mielke, Paul W

    2006-10-01

    A fundamental shift in editorial policy for psychological journals was initiated when the fourth edition of the Publication Manual of the American Psychological Association (1994) placed emphasis on reporting measures of effect size. This paper presents measures of effect size for the chi-squared and likelihood-ratio goodness-of-fit tests.

  3. A design aid for sizing filter strips using buffer area ratio

    Science.gov (United States)

    M.G. Dosskey; M.J. Helmers; D.E. Eisenhauer

    2011-01-01

    Nonuniform field runoff can reduce the effectiveness of filter strips that are a uniform size along a field margin. Effectiveness can be improved by placing more filter strip where the runoff load is greater and less where the load is smaller. A modeling analysis was conducted of the relationship between pollutant trapping efficiency and the ratio of filter strip area...

  4. Sample size allocation in multiregional equivalence studies.

    Science.gov (United States)

    Liao, Jason J Z; Yu, Ziji; Li, Yulan

    2018-06-17

    With the increasing globalization of drug development, the multiregional clinical trial (MRCT) has gained extensive use. Data from MRCTs could be accepted by regulatory authorities across regions and countries as the primary source of evidence to support global marketing drug approval simultaneously. The MRCT can speed up patient enrollment and drug approval, and it makes effective therapies available to patients all over the world simultaneously. However, there are many operational and scientific challenges in conducting drug development globally. One of many important questions to answer in the design of a multiregional study is how to partition the sample size among the individual regions. In this paper, two systematic approaches are proposed for sample size allocation in a multiregional equivalence trial. A numerical evaluation and a biosimilar trial are used to illustrate the characteristics of the proposed approaches. Copyright © 2018 John Wiley & Sons, Ltd.

  5. Revealing the influence of water-cement ratio on the pore size distribution in hydrated cement paste by using cyclohexane

    Science.gov (United States)

    Bede, Andrea; Ardelean, Ioan

    2017-12-01

    Varying the amount of water in a concrete mix will influence its final properties considerably due to the changes in capillary porosity. That is why a non-destructive technique is necessary for revealing the capillary pore distribution inside hydrated cement-based materials and linking the capillary porosity with the macroscopic properties of these materials. In the present work, we demonstrate a simple approach for revealing the differences in capillary pore size distributions introduced by preparing cement paste with different water-to-cement ratios. The approach relies on monitoring the nuclear magnetic resonance transverse relaxation distribution of cyclohexane molecules confined inside the cement paste pores. The technique reveals the whole spectrum of pores inside the hydrated cement pastes, allowing a qualitative and quantitative analysis of different pore sizes. The cement pastes with higher water-to-cement ratios show an increase in capillary porosity, while for all the samples the intra-C-S-H and inter-C-S-H pores (also known as gel pores) remain unchanged. The technique can be applied to various porous materials with internal mineral surfaces.
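
    Mapping a transverse relaxation (T2) distribution onto pore sizes typically uses the fast-exchange relation 1/T2 ≈ ρ2·(S/V); for a spherical pore, S/V = 3/r, so r ≈ 3·ρ2·T2. A sketch with an assumed (not paper-derived) surface relaxivity:

        import numpy as np

        rho2 = 3e-9                        # m/ms, assumed surface relaxivity
        t2_ms = np.array([0.5, 5.0, 50.0, 500.0])   # relaxation components
        radii_nm = 3 * rho2 * t2_ms * 1e9  # r = 3*rho2*T2, reported in nm

        for t2, r in zip(t2_ms, radii_nm):
            print(f"T2 = {t2:6.1f} ms -> pore radius ~ {r:8.1f} nm")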

  6. Sampling strategies for estimating brook trout effective population size

    Science.gov (United States)

    Andrew R. Whiteley; Jason A. Coombs; Mark Hudy; Zachary Robinson; Keith H. Nislow; Benjamin H. Letcher

    2012-01-01

    The influence of sampling strategy on estimates of effective population size (Ne) from single-sample genetic methods has not been rigorously examined, though these methods are increasingly used. For headwater salmonids, spatially close kin association among age-0 individuals suggests that sampling strategy (number of individuals and location from...

  7. Sample Size and Statistical Conclusions from Tests of Fit to the Rasch Model According to the Rasch Unidimensional Measurement Model (Rumm) Program in Health Outcome Measurement.

    Science.gov (United States)

    Hagell, Peter; Westergren, Albert

    Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors and under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
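
    One simple form of algebraic sample size adjustment rescales a chi-square fit statistic, which grows roughly linearly with N, to a nominal sample size; this is a hedged sketch of the idea, not necessarily the exact formula implemented in RUMM:

        def adjusted_chi_square(chi_sq, n, n_nominal):
            """Rescale an item chi-square from sample size n to n_nominal;
            downward adjustment damps Type I errors in very large samples."""
            return chi_sq * n_nominal / n

        # e.g. an item chi-square of 42.0 observed with N = 2500,
        # adjusted down to a nominal N = 500
        print(f"adjusted chi-square: {adjusted_chi_square(42.0, 2500, 500):.1f}")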

  8. Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride

    Science.gov (United States)

    2015-08-01

    ARL-RP-0528 ● AUG 2015 ● US Army Research Laboratory. Reprint: Sample Size Induced Brittle-to-Ductile Transition of Single-Crystal Aluminum Nitride.

  9. Sample size determination for logistic regression on a logit-normal distribution.

    Science.gov (United States)

    Kim, Seongho; Heath, Elisabeth; Heilbrun, Lance

    2017-06-01

    Although the sample size for simple logistic regression can be readily determined using currently available methods, the sample size calculation for multiple logistic regression requires some additional information, such as the coefficient of determination (R²) of a covariate of interest with the other covariates, which is often unavailable in practice. The response variable of logistic regression follows a logit-normal distribution which can be generated from a logistic transformation of a normal distribution. Using this property of logistic regression, we propose new methods of determining the sample size for simple and multiple logistic regressions using a normal transformation of outcome measures. Simulation studies and a motivating example show several advantages of the proposed methods over the existing methods: (i) no need for R² for multiple logistic regression, (ii) available interim or group-sequential designs, and (iii) much smaller required sample size.
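
    Alongside such analytic formulae, a brute-force simulation gives a useful cross-check of logistic regression power; the sketch below (a generic approach, not the paper's logit-normal method) estimates the power of the Wald test for the slope:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(3)

        def simulated_power(n, beta0=-1.0, beta1=0.5, n_sim=300, alpha=0.05):
            """Fraction of simulated datasets in which the slope's Wald
            test is significant; the beta values are illustrative assumptions."""
            hits = 0
            for _ in range(n_sim):
                x = rng.normal(size=n)
                p = 1 / (1 + np.exp(-(beta0 + beta1 * x)))
                y = rng.binomial(1, p)
                fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
                hits += fit.pvalues[1] < alpha
            return hits / n_sim

        for n in (100, 200, 400):
            print(f"n = {n}: power ~ {simulated_power(n):.2f}")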

  10. Sample size reassessment for a two-stage design controlling the false discovery rate.

    Science.gov (United States)

    Zehetmayer, Sonja; Graf, Alexandra C; Posch, Martin

    2015-11-01

    Sample size calculations for gene expression microarray and NGS-RNA-Seq experiments are challenging because the overall power depends on unknown quantities such as the proportion of true null hypotheses and the distribution of the effect sizes under the alternative. We propose a two-stage design with an adaptive interim analysis where these quantities are estimated from the interim data. The second-stage sample size is chosen based on these estimates to achieve a specific overall power. The proposed procedure controls the power in all considered scenarios except for very low first-stage sample sizes. The false discovery rate (FDR) is controlled despite the data-dependent choice of sample size. The two-stage design can be a useful tool to determine the sample size of high-dimensional studies if in the planning phase there is high uncertainty regarding the expected effect sizes and variability.

  11. Nomogram for sample size calculation on a straightforward basis for the kappa statistic.

    Science.gov (United States)

    Hong, Hyunsook; Choi, Yunhee; Hahn, Seokyung; Park, Sue Kyung; Park, Byung-Joo

    2014-09-01

    Kappa is a widely used measure of agreement. However, it may not be straightforward in some situations, such as sample size calculation, due to the kappa paradox: high agreement but low kappa. Hence, it seems reasonable in sample size calculation that the level of agreement under a certain marginal prevalence be considered in terms of a simple proportion of agreement rather than a kappa value. Therefore, sample size formulae and nomograms using a simple proportion of agreement rather than kappa under certain marginal prevalences are proposed. A sample size formula was derived using the kappa statistic under the common correlation model and a goodness-of-fit statistic. The nomogram for the sample size formula was developed using SAS 9.3. Sample size formulae using a simple proportion of agreement instead of a kappa statistic, and nomograms to eliminate the inconvenience of using a mathematical formula, were produced. A nomogram for sample size calculation with a simple proportion of agreement should be useful in the planning stages when the focus of interest is on testing the hypothesis of interobserver agreement involving two raters and nominal outcome measures. Copyright © 2014 Elsevier Inc. All rights reserved.
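
    The kappa paradox mentioned above follows directly from the definition kappa = (p_o - p_e)/(1 - p_e); a sketch assuming both raters share the same marginal prevalence shows identical agreement yielding very different kappas:

        def kappa_from_agreement(p_o, prevalence):
            """Cohen's kappa implied by an observed proportion of agreement
            p_o when both raters have the same marginal prevalence."""
            p = prevalence
            p_e = p ** 2 + (1 - p) ** 2      # agreement expected by chance
            return (p_o - p_e) / (1 - p_e)

        for prev in (0.5, 0.9):
            print(f"prevalence {prev}: same 85% agreement gives "
                  f"kappa = {kappa_from_agreement(0.85, prev):.2f}")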

  12. Sample size optimization in nuclear material control. 1

    International Nuclear Information System (INIS)

    Gladitz, J.

    1982-01-01

    Equations have been derived and exemplified which allow the determination of the minimum variables sample size for given false alarm and detection probabilities of nuclear material losses and diversions, respectively. (author)

  13. Sample to moderator volume ratio effects in neutron yield from a PGNAA setup

    Energy Technology Data Exchange (ETDEWEB)

    Naqvi, A.A. [Department of Physics, King Fahd University of Petroleum and Minerals, KFUPM Box 1815, Dhahran-31261 (Saudi Arabia)]. E-mail: aanaqvi@kfupm.edu.sa; Fazal-ur-Rehman [Department of Physics, King Fahd University of Petroleum and Minerals, KFUPM Box 1815, Dhahran-31261 (Saudi Arabia); Nagadi, M.M. [Department of Physics, King Fahd University of Petroleum and Minerals, KFUPM Box 1815, Dhahran-31261 (Saudi Arabia); Khateeb-ur-Rehman [Department of Physics, King Fahd University of Petroleum and Minerals, KFUPM Box 1815, Dhahran-31261 (Saudi Arabia)

    2007-02-15

    The performance of a prompt gamma ray neutron activation analysis (PGNAA) setup depends upon the thermal neutron yield at the PGNAA sample location. For a moderator which encloses a sample, the thermal neutron intensity depends upon the effective moderator volume, excluding the void volume due to the sample volume. A rectangular moderator assembly has been designed for the King Fahd University of Petroleum and Minerals (KFUPM) PGNAA setup. The thermal and fast neutron yields have been measured inside the sample cavity as a function of its front moderator thickness using the alpha-particle track density and recoil-proton track density inside CR-39 nuclear track detectors (NTDs). The thermal/fast neutron yield ratio, obtained from the ratio of alpha-particle track density to proton track density in the NTDs, shows an inverse correlation with the sample to moderator volume ratio. Comparison of the present results with the previously published results for smaller moderators of the KFUPM PGNAA setup confirms the observation.

  14. Impact of shoe size in a sample of elderly individuals

    Directory of Open Access Journals (Sweden)

    Daniel López-López

    Full Text Available Summary Introduction: The use of an improper shoe size is common in older people and is believed to have a detrimental effect on the quality of life related to foot health. The objective is to describe and compare, in a sample of participants, the impact of shoes that fit properly or improperly, as well as analyze the scores related to foot health and overall health. Method: A sample of 64 participants, with a mean age of 75.3±7.9 years, attended an outpatient center where self-report data were recorded, measurements of the size of the feet and footwear were determined, and scores were compared, using the Spanish version of the Foot Health Status Questionnaire, between the group wearing the correct shoe size and the group wearing an incorrect shoe size. Results: The group wearing an improper shoe size showed poorer quality of life regarding overall health and specifically foot health. Differences between groups were evaluated using a t-test for independent samples and were statistically significant (p<0.05) for the dimensions of pain, function, footwear, overall foot health, and social function. Conclusion: Inadequate shoe size has a significant negative impact on quality of life related to foot health. The degree of negative impact seems to be associated with age, sex, and body mass index (BMI).

  15. Direct uranium isotope ratio analysis of single micrometer-sized glass particles.

    Science.gov (United States)

    Kappel, Stefanie; Boulyga, Sergei F; Prohaska, Thomas

    2012-11-01

    We present the application of nanosecond laser ablation (LA) coupled to a 'Nu Plasma HR' multi-collector inductively coupled plasma mass spectrometer (MC-ICP-MS) for the direct analysis of U isotope ratios in single, 10-20 μm-sized, U-doped glass particles. Method development included studies with respect to (1) external correction of the measured U isotope ratios in glass particles, (2) the applied laser ablation carrier gas (i.e. Ar versus He) and (3) the accurate determination of the lower-abundance ²³⁶U/²³⁸U isotope ratios (i.e. 10⁻⁵). In addition, a data processing procedure was developed for evaluation of transient signals, which is of potential use for routine application of the developed method. We demonstrate that the developed method is reliable and well suited for determining U isotope ratios of individual particles. Analyses of twenty-eight S1 glass particles, measured under optimized conditions, yielded average biases of less than 0.6% from the certified values for ²³⁴U/²³⁸U and ²³⁵U/²³⁸U ratios. Experimental results obtained for ²³⁶U/²³⁸U isotope ratios deviated by less than -2.5% from the certified values. Expanded relative total combined standard uncertainties U_c (k = 2) of 2.6%, 1.4% and 5.8% were calculated for ²³⁴U/²³⁸U, ²³⁵U/²³⁸U and ²³⁶U/²³⁸U, respectively. Copyright © 2012 Elsevier Ltd. All rights reserved.

  16. Optimal sample to tracer ratio for isotope dilution mass spectrometry: the polyisotopic case

    International Nuclear Information System (INIS)

    Laszlo, G.; Ridder, P. de; Goldman, A.; Cappis, J.; Bievre, P. de

    1991-01-01

    The Isotope Dilution Mass Spectrometry (IDMS) measurement technique provides a means of determining the unknown amounts of various isotopes of an element in a sample solution of known mass. The sample solution is mixed with an auxiliary solution, or tracer, containing a known amount of the same element having the same isotopes but of different relative abundances or isotopic composition, and the induced change in the isotopic composition is measured by isotope mass spectrometry. The technique involves the measurement of the abundance ratio of each isotope to a (same) reference isotope in the sample solution, in the tracer solution and in the blend of the sample and tracer solutions. These isotope ratio measurements, the known element amount in the tracer and the known mass of the sample solution are used to calculate the unknown amount of one isotope in the sample solution. Subsequently the unknown amount of the element is determined. The purpose of this paper is to examine the optimization of the ratio of the estimated unknown amount of element in the sample solution to the known amount of element in the tracer solution, in order to minimize the relative uncertainty in the determination of the unknown amount of element.
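
    For the classic single-ratio case, the propagation of the blend-ratio uncertainty into the result has a well-known magnification factor that is minimized near the geometric mean of the sample and tracer ratios; the numeric sketch below illustrates this (the polyisotopic treatment in the paper generalizes it):

        import numpy as np

        def error_magnification(r_b, r_x, r_y):
            """Factor multiplying the relative uncertainty of the measured
            blend ratio r_b, for sample ratio r_x and tracer ratio r_y."""
            return r_b * (r_y - r_x) / ((r_b - r_x) * (r_y - r_b))

        r_x, r_y = 0.00725, 30.0   # e.g. natural vs highly enriched 235U/238U
        blends = np.linspace(0.05, 10.0, 2000)
        mags = np.abs(error_magnification(blends, r_x, r_y))
        best = blends[np.argmin(mags)]
        print(f"optimal blend ratio ~ {best:.3f} "
              f"(geometric mean = {np.sqrt(r_x * r_y):.3f})")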

  17. Matching Ge detector element geometry to sample size and shape: One does not fit all!

    International Nuclear Information System (INIS)

    Keyser, R.M.; Twomey, T.R.; Sangsingkeow, P.

    1998-01-01

    For 25 yr, coaxial germanium detector performance has been specified using the methods and values specified in Ref. 1. These specifications are the full-width at half-maximum (FWHM), FW.1M, FW.02M, peak-to-Compton ratio, and relative efficiency. All of these measurements are made with a ⁶⁰Co source 25 cm from the cryostat endcap and centered on the axis of the detector. These measurements are easy to reproduce, both because they are simple to set up and because they use a common source. These standard tests have been useful in guiding the user to an appropriate detector choice for the intended measurement. However, most users of germanium gamma-ray detectors do not make measurements in this simple geometry. Germanium detector manufacturers have worked over the years to make detectors with better resolution, better peak-to-Compton ratios, and higher efficiency--but all based on measurements using the IEEE standard. Advances in germanium crystal growth techniques have made it relatively easy to provide detector elements of different shapes and sizes. Many of these different shapes and sizes can give better results for a specific application than others, but then the detector specifications must be changed to correspond to the actual application: both the expected values and the actual parameters to be specified should change. In many cases, detection efficiency, peak shape, and minimum detectable limit for a particular detector/sample combination are valuable specifications of detector performance. For other situations, other parameters are important, such as peak shape as a function of count rate. In this work, different sample geometries were considered. The results show the variation in efficiency with energy for all of these sample and detector geometries. The measurement with a point source at 25 cm from the endcap allows the results to be compared with the current IEEE criteria. The best sample/detector configuration for a specific measurement requires more and...

  18. Threshold-dependent sample sizes for selenium assessment with stream fish tissue

    Science.gov (United States)

    Hitt, Nathaniel P.; Smith, David R.

    2015-01-01

    Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
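
    The parametric-bootstrap power calculation described above can be sketched compactly; the gamma model's coefficient of variation is fixed here as a stand-in for the paper's fitted mean-to-variance relationship:

        import numpy as np
        from scipy.stats import t as tdist

        rng = np.random.default_rng(1)

        def power_above_threshold(true_mean, threshold, n_fish, cv=0.2,
                                  alpha=0.05, n_boot=2000):
            """Probability that a one-sided t-test detects mean Se above
            a threshold, with fish Se modelled as gamma (assumed CV)."""
            var = (cv * true_mean) ** 2
            shape, scale = true_mean ** 2 / var, var / true_mean
            samples = rng.gamma(shape, scale, size=(n_boot, n_fish))
            means = samples.mean(axis=1)
            sds = samples.std(axis=1, ddof=1)
            t = (means - threshold) / (sds / np.sqrt(n_fish))
            return np.mean(t > tdist.ppf(1 - alpha, df=n_fish - 1))

        # e.g. 8 fish, true mean 1 mg/kg above a 4 mg Se/kg threshold
        print(f"power ~ {power_above_threshold(5.0, 4.0, 8):.2f}")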

  19. Optimum sample size to estimate mean parasite abundance in fish parasite surveys

    Directory of Open Access Journals (Sweden)

    Shvydka S.

    2018-03-01

    Full Text Available To reach ethically and scientifically valid mean abundance values in parasitological and epidemiological studies, this paper considers analytic and simulation approaches for sample size determination. The sample size estimation was carried out by applying a mathematical formula with a predetermined precision level and the parameter of the negative binomial distribution estimated from the empirical data. A simulation approach to optimum sample size determination, aimed at the estimation of the true value of the mean abundance and its confidence interval (CI), was based on the Bag of Little Bootstraps (BLB). The abundances of two species of monogenean parasites, Ligophorus cephali and L. mediterraneus, from Mugil cephalus across Azov-Black Sea localities were subjected to the analysis. The dispersion pattern of both helminth species could be characterized as a highly aggregated distribution, with the variance being substantially larger than the mean abundance. The holistic approach applied here offers a wide range of appropriate methods for searching for the optimum sample size and for understanding the expected precision level of the mean. Given the superior performance of the BLB relative to formulae, with its few assumptions, the bootstrap procedure is the preferred method. Two important assessments were performed in the present study: (i) based on CI width, a reasonable precision level for the mean abundance in parasitological surveys of Ligophorus spp. could be chosen between 0.8 and 0.5, corresponding to CI widths of 1.6 and 1 times the mean; and (ii) a sample size of 80 or more host individuals allows accurate and precise estimation of mean abundance. Meanwhile, for host sample sizes between 25 and 40 individuals, the median estimates showed minimal bias but the sampling distribution was skewed toward low values; a sample size of 10 host individuals yielded unreliable estimates.
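
    A minimal sketch of the Bag of Little Bootstraps for a mean-abundance confidence interval (subset size b = n**0.7, following the original BLB recipe; the aggregated counts are synthetic):

        import numpy as np

        rng = np.random.default_rng(2)

        def blb_mean_ci(data, n_subsets=5, n_boot=200, level=0.95):
            """Average the per-subset percentile CIs of the resampled mean;
            each subset of size b is reweighted to the full size n."""
            data = np.asarray(data)
            n = len(data)
            b = max(2, int(n ** 0.7))
            alpha = (1 - level) / 2
            lo, hi = [], []
            for _ in range(n_subsets):
                subset = rng.choice(data, size=b, replace=False)
                counts = rng.multinomial(n, np.full(b, 1 / b), size=n_boot)
                boot_means = counts @ subset / n
                lo.append(np.quantile(boot_means, alpha))
                hi.append(np.quantile(boot_means, 1 - alpha))
            return np.mean(lo), np.mean(hi)

        # synthetic, highly aggregated per-host parasite counts
        abundance = rng.negative_binomial(n=0.5, p=0.05, size=80)
        print("95% CI for mean abundance:", blb_mean_ci(abundance))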

  20. Sample size for post-marketing safety studies based on historical controls.

    Science.gov (United States)

    Wu, Yu-te; Makuch, Robert W

    2010-08-01

    As part of a drug's entire life cycle, post-marketing studies are an important part of the identification of rare, serious adverse events. Recently, the US Food and Drug Administration (FDA) has begun to implement new post-marketing safety mandates as a consequence of increased emphasis on safety. The purpose of this research is to provide an exact sample size formula for the proposed hybrid design, based on a two-group cohort study with incorporation of historical external data. An exact sample size formula based on the Poisson distribution is developed because the detection of rare events is the outcome of interest. Performance of the exact method is compared with its approximate large-sample-theory counterpart. The proposed hybrid design requires a smaller sample size compared to the standard, two-group prospective study design. In addition, the exact method reduces the number of subjects required in the treatment group by up to 30% compared to the approximate method for the study scenarios examined. The proposed hybrid design satisfies the advantages and rationale of the two-group design with smaller sample sizes generally required. 2010 John Wiley & Sons, Ltd.
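
    The exact-Poisson logic lends itself to a direct search: find the smallest exposure at which a one-sided exact test of the background rate achieves the target power. A sketch of that reasoning only (the paper's hybrid design with historical controls adds structure not reproduced here):

        from scipy.stats import poisson

        def exact_poisson_sample_size(rate0, rate1, alpha=0.05, power=0.8,
                                      step=100, max_n=200000):
            """Smallest exposure n (e.g. person-years) such that rejecting
            H0: rate = rate0 when X >= c has size <= alpha and power
            >= the target against rate1 > rate0."""
            n = step
            while n <= max_n:
                mu0, mu1 = n * rate0, n * rate1
                c = int(poisson.ppf(1 - alpha, mu0)) + 1  # critical count
                if poisson.sf(c - 1, mu1) >= power:
                    return n, c
                n += step
            raise ValueError("no n found below max_n")

        # e.g. background rate 1 per 10,000 person-years, tripled on drug
        n, c = exact_poisson_sample_size(1e-4, 3e-4)
        print(f"~{n} person-years; reject if >= {c} events")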

  1. Sample size computation for association studies using case–parents ...

    Indian Academy of Sciences (India)

    ...sample size needed to reach a given power (Knapp 1999; Schaid 1999; Chen and Deng 2001; Brown 2004). In their seminal paper, Risch and Merikangas (1996) showed that for a multiplicative mode of inheritance (MOI) for the susceptibility gene, sample size depends on two parameters: the frequency of the risk allele at the ...

  2. Absolute measurement of the isotopic ratio of a water sample with very low deuterium content

    International Nuclear Information System (INIS)

    Hagemann, R.; Nief, G.; Roth, E.

    1968-01-01

    The presence of H₃⁺ ions, which are indistinguishable from HD⁺ ions, presents the principal difficulty encountered in measuring the isotopic ratios of water samples with very low deuterium content using a mass spectrometer: when the sample contains no deuterium, the mass spectrometer does not indicate zero. By producing in situ, from the sample to be measured, water vapor with an isotopic ratio very close to zero using a small distilling column, this difficulty is overcome. This column, its operating parameters, and the way in which the measurements are made are described. An arrangement is employed in which the isotopic ratios can be measured with a sensitivity better than 0.01 × 10⁻⁶. The method is applied to the determination of the isotopic ratios of three low-deuterium-content water samples. The results obtained permit one to assign to the sample with the lowest deuterium content an absolute value equal to 1.71 ± 0.03 ppm. This water sample is a primary standard from which the isotopic ratio of a natural water sample serving as the laboratory standard is determined. (author)

  3. Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization

    OpenAIRE

    Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A.

    2017-01-01

    The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the co...

  4. Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.

    Science.gov (United States)

    Algina, James; Olejnik, Stephen

    2000-01-01

    Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)

  5. Sample size in psychological research over the past 30 years.

    Science.gov (United States)

    Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B

    2011-04-01

    The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.

  6. A flexible method for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Sanborn, J.B.; Teichmann, T.

    1997-01-01

    This paper gives a flexible method to determine sample sizes for both systematic and random error models (this pertains to sampling problems in nuclear safeguards). In addition, the method allows different attribute rejection limits. The new method could assist in achieving a higher detection probability and enhance inspection effectiveness.

  7. Direct uranium isotope ratio analysis of single micrometer-sized glass particles

    International Nuclear Information System (INIS)

    Kappel, Stefanie; Boulyga, Sergei F.; Prohaska, Thomas

    2012-01-01

    We present the application of nanosecond laser ablation (LA) coupled to a 'Nu Plasma HR' multi collector inductively coupled plasma mass spectrometer (MC-ICP-MS) for the direct analysis of U isotope ratios in single, 10–20 μm-sized, U-doped glass particles. Method development included studies with respect to (1) external correction of the measured U isotope ratios in glass particles, (2) the applied laser ablation carrier gas (i.e. Ar versus He) and (3) the accurate determination of the lower-abundance ²³⁶U/²³⁸U isotope ratios (i.e. 10⁻⁵). In addition, a data processing procedure was developed for evaluation of transient signals, which is of potential use for routine application of the developed method. We demonstrate that the developed method is reliable and well suited for determining U isotope ratios of individual particles. Analyses of twenty-eight S1 glass particles, measured under optimized conditions, yielded average biases of less than 0.6% from the certified values for ²³⁴U/²³⁸U and ²³⁵U/²³⁸U ratios. Experimental results obtained for ²³⁶U/²³⁸U isotope ratios deviated by less than −2.5% from the certified values. Expanded relative total combined standard uncertainties U_c (k = 2) of 2.6%, 1.4% and 5.8% were calculated for ²³⁴U/²³⁸U, ²³⁵U/²³⁸U and ²³⁶U/²³⁸U, respectively. - Highlights: ► LA-MC-ICP-MS was fully validated for the direct analysis of individual particles. ► Traceability was established by using an IRMM glass particle reference material. ► Measured U isotope ratios were in agreement with the certified range. ► A comprehensive total combined uncertainty evaluation was performed. ► The analysis of ²³⁶U/²³⁸U isotope ratios was improved by using a deceleration filter.

  8. Sample Size for Tablet Compression and Capsule Filling Events During Process Validation.

    Science.gov (United States)

    Charoo, Naseem Ahmad; Durivage, Mark; Rahman, Ziyaur; Ayad, Mohamad Haitham

    2017-12-01

    During solid dosage form manufacturing, the uniformity of dosage units (UDU) is ensured by testing samples at 2 stages, that is, the blend stage and the tablet compression or capsule/powder filling stage. The aim of this work is to propose a sample size selection approach based on quality risk management principles for the process performance qualification (PPQ) and continued process verification (CPV) stages, by linking UDU to potential formulation and process risk factors. The Bayes success-run theorem appeared to be the most appropriate approach among the various methods considered in this work for computing the sample size for PPQ. The sample sizes for high-risk (reliability level of 99%), medium-risk (reliability level of 95%), and low-risk factors (reliability level of 90%) were estimated to be 299, 59, and 29, respectively. Risk-based assignment of reliability levels was supported by the fact that at a low defect rate the ability to detect out-of-specification units decreases, which must be compensated by an increase in sample size to enhance confidence in the estimation. Based on the level of knowledge acquired during PPQ and the further knowledge required to comprehend the process, the sample size for CPV was calculated using Bayesian statistics to accomplish a reduced sampling design for CPV. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  9. Sample Size Calculation for Controlling False Discovery Proportion

    Directory of Open Access Journals (Sweden)

    Shulian Shang

    2012-01-01

    Full Text Available The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.

  10. A normative inference approach for optimal sample sizes in decisions from experience

    Science.gov (United States)

    Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph

    2015-01-01

    “Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720

  11. The 4-parameter Compressible Packing Model (CPM) including a critical cavity size ratio

    Science.gov (United States)

    Roquier, Gerard

    2017-06-01

    The 4-parameter Compressible Packing Model (CPM) has been developed to predict the packing density of mixtures constituted by bidisperse spherical particles. The four parameters are: the wall effect and the loosening effect coefficients, the compaction index and a critical cavity size ratio. The two geometrical interactions have been studied theoretically on the basis of a spherical cell centered on a secondary class bead. For the loosening effect, a critical cavity size ratio, below which a fine particle can be inserted into a small cavity created by touching coarser particles, is introduced. This is the only parameter which requires adaptation to extend the model to other types of particles. The 4-parameter CPM demonstrates its efficiency on frictionless glass beads (300 values), spherical particles numerically simulated (20 values), round natural particles (125 values) and crushed particles (335 values) with correlation coefficients equal to respectively 99.0%, 98.7%, 97.8%, 96.4% and mean deviations equal to respectively 0.007, 0.006, 0.007, 0.010.

  12. Determination of the stoichiometric ratio in uranium dioxide samples

    International Nuclear Information System (INIS)

    Moura, Sergio Carvalho

    1999-01-01

The O/U stoichiometric ratio in uranium dioxide is an important parameter for qualifying nuclear fuels. The excess oxygen in the crystallographic structure can cause changes in the physico-chemical properties of this compound, such as variations in thermal conductivity, fuel plasticity and others, affecting the efficiency of this material when it is utilized as nuclear fuel in the reactor core. The purpose of this work is to evaluate methods for the determination of this ratio in uranium dioxide samples from two different production processes, using gravimetric, voltammetric and X-ray diffraction techniques. After the evaluation of these techniques, the main aspect of this work is to define a reliable methodology in order to characterize the behavior of uranium dioxide. The methodology used in this work consisted of two different steps: utilization of gravimetric and voltammetric methods in order to determine the ratio in uranium dioxide samples; and utilization of the X-ray diffraction technique in order to determine the lattice parameters using patterns and application of the Rietveld method during refining of the structural data. As a result of the experimental part of this work it was found that the X-ray diffraction analysis performs better and detects the presence of more phases than the gravimetric and voltammetric techniques, which are not sensitive enough for this detection. (author)

  13. Rock sampling. [method for controlling particle size distribution]

    Science.gov (United States)

    Blum, P. (Inventor)

    1971-01-01

    A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.

  14. Effects of sample size on the second magnetization peak in ...

    Indian Academy of Sciences (India)

    8+ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in ...

  15. Sample size for estimation of the Pearson correlation coefficient in cherry tomato tests

    Directory of Open Access Journals (Sweden)

    Bruno Giacomini Sari

    2017-09-01

Full Text Available ABSTRACT: The aim of this study was to determine the required sample size for estimation of the Pearson coefficient of correlation between cherry tomato variables. Two uniformity tests were set up in a protected environment in the spring/summer of 2014. The observed variables in each plant were mean fruit length, mean fruit width, mean fruit weight, number of bunches, number of fruits per bunch, number of fruits, and total weight of fruits, with calculation of the Pearson correlation matrix between them. Sixty-eight sample sizes were planned for one greenhouse and 48 for another, with an initial sample size of 10 plants and the others obtained by adding five plants at a time. For each planned sample size, 3000 estimates of the Pearson correlation coefficient were obtained through bootstrap re-samplings with replacement. The sample size for each correlation coefficient was determined when the 95% confidence interval amplitude value was less than or equal to 0.4. Obtaining estimates of the Pearson correlation coefficient with high precision is difficult for parameters with a weak linear relation; accordingly, a larger sample size is necessary to estimate them. Linear relations involving variables dealing with the size and number of fruits per plant are estimated with less precision. To estimate the coefficient of correlation between productivity variables of cherry tomato, with a 95% confidence interval amplitude of 0.4, it is necessary to sample 275 plants in a 250m² greenhouse, and 200 plants in a 200m² greenhouse.
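
The resampling procedure described above is easy to reproduce on synthetic data; the sketch assumes a true correlation of 0.3 (a weak relation), uses 3,000 bootstrap resamples per planned size as in the study, and stops when the 95% percentile interval amplitude reaches 0.4.

```python
import numpy as np

rng = np.random.default_rng(42)

def n_for_ci_width(x, y, target=0.4, boot=3000):
    """Smallest planned n (starting at 10 plants, adding 5 at a time) whose
    bootstrap 95% percentile interval of Pearson's r is at most `target` wide."""
    for n in range(10, len(x) + 1, 5):
        idx = rng.integers(0, len(x), (boot, n))
        xs, ys = x[idx], y[idx]
        xc = xs - xs.mean(axis=1, keepdims=True)
        yc = ys - ys.mean(axis=1, keepdims=True)
        r = (xc * yc).sum(1) / np.sqrt((xc**2).sum(1) * (yc**2).sum(1))
        lo, hi = np.quantile(r, [0.025, 0.975])
        if hi - lo <= target:
            return n
    return None

x = rng.normal(size=300)                      # synthetic stand-in for one greenhouse
y = 0.3 * x + rng.normal(scale=np.sqrt(1 - 0.3**2), size=300)
print("required sample size:", n_for_ci_width(x, y))
```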

  16. Effect of sample size on bias correction performance

    Science.gov (United States)

    Reiter, Philipp; Gutjahr, Oliver; Schefczyk, Lukas; Heinemann, Günther; Casper, Markus C.

    2014-05-01

The output of climate models often shows a bias when compared to observed data, so that preprocessing is necessary before using it as climate forcing in impact modeling (e.g. hydrology, species distribution). A common bias correction method is the quantile matching approach, which adapts the cumulative distribution function of the model output to the one of the observed data by means of a transfer function. Especially for precipitation we expect the bias correction performance to strongly depend on sample size, i.e. the length of the period used for calibration of the transfer function. We carry out experiments using the precipitation output of ten regional climate model (RCM) hindcast runs from the EU-ENSEMBLES project and the E-OBS observational dataset for the period 1961 to 2000. The 40 years are split into a 30 year calibration period and a 10 year validation period. In the first step, for each RCM transfer functions are set up cell-by-cell, using the complete 30 year calibration period. The derived transfer functions are applied to the validation period of the respective RCM precipitation output and the mean absolute errors in reference to the observational dataset are calculated. These values are treated as "best fit" for the respective RCM. In the next step, this procedure is redone using subperiods out of the 30 year calibration period. The lengths of these subperiods are reduced from 29 years down to a minimum of 1 year, only considering subperiods of consecutive years. This leads to an increasing number of repetitions for smaller sample sizes (e.g. 2 for a length of 29 years). In the last step, the mean absolute errors are statistically tested against the "best fit" of the respective RCM to compare the performances. In order to analyze whether the intensity of the effect of sample size depends on the chosen correction method, four variations of the quantile matching approach (PTF, QUANT/eQM, gQM, GQM) are applied in this study. The experiments are further
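
Of the four variants, empirical quantile matching reduces to a short transfer-function construction. The sketch below uses synthetic gamma-distributed "precipitation" and a deliberately short calibration period to show the mechanics; it is not the study's exact PTF/gQM/GQM forms.

```python
import numpy as np

def empirical_quantile_map(model_cal, obs_cal, model_new):
    """Map new model values through the transfer function defined by matching
    the calibration-period CDFs of model output and observations."""
    model_cal, obs_cal = np.sort(model_cal), np.sort(obs_cal)
    probs = np.searchsorted(model_cal, model_new) / len(model_cal)
    return np.quantile(obs_cal, np.clip(probs, 0.0, 1.0))

rng = np.random.default_rng(0)
obs = rng.gamma(2.0, 4.0, 30 * 365)      # 30 years of "observed" daily precipitation
model = rng.gamma(2.5, 4.0, 30 * 365)    # biased model output
cal = 5 * 365                            # short 5-year calibration period
corrected = empirical_quantile_map(model[:cal], obs[:cal], model[cal:])
print("mean bias before:", model[cal:].mean() - obs.mean())
print("mean bias after: ", corrected.mean() - obs.mean())
```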

  17. Isotope analytics for the evaluation of the feeding influence on the isotope ratio in beef samples

    International Nuclear Information System (INIS)

    Herwig, Nadine

    2010-01-01

Information about the origin of food and associated production systems has a high significance for food control. An extremely promising approach to obtain such information is the determination of isotope ratios of different elements. In this study the correlation of the isotope ratios C-13/C-12, N-15/N-14, Mg-25/Mg-24, and Sr-87/Sr-86 in bovine samples (milk and urine) and the corresponding isotope ratios in feed was investigated. It was shown that in the bovine samples all four isotope ratios correlate with the isotope composition of the feed. The isotope ratios of strontium and magnesium have the advantage that they directly reflect the isotope ratios of the ingested feed, since there is no isotope fractionation in the bovine organism, in contrast to the case of carbon and nitrogen isotope ratios. From the present feeding study it is evident that a feed change leads to a significant change in the delta C-13 values in milk and urine within just 10 days. For the delta N-15 values the feed change was only visible in the bovine urine after 49 days. Investigations of cows from two different regions (Berlin/Germany and Goestling/Austria) kept at different feeding regimes revealed no differences in the N-15/N-14 and Mg-26/Mg-24 isotope ratios. The strongest correlation between the isotope ratio of the bovine samples and the kind of ingested feed was observed for the carbon isotope ratio. With this ratio even the smallest differences in the feed composition were traceable in the bovine samples. Since different regions usually coincide with different feeding regimes, carbon isotope ratios can be used to distinguish bovine samples from different regions if the delta C-13 values of the ingested feed are different. Furthermore, the determination of strontium isotope ratios revealed significant differences between bovine and feed samples of Berlin and Goestling due to the different geological conditions. Hence the carbon and strontium isotope ratios allow the best

  18. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (ie, one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling based approaches (eg, the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
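
The flavor of the sampling-based estimators can be shown with a simplified bootstrap over subjects within each time point; the data below are synthetic (true ratio 2.5), and this is not the authors' two-phase algorithm.

```python
import numpy as np

rng = np.random.default_rng(7)
times = np.array([0.5, 1, 2, 4, 8, 24])

def auc(y, x):
    """Trapezoidal area under a concentration-time profile."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x) / 2))

# one destructive sample per animal: 5 animals per time point and matrix
plasma = {t: rng.lognormal(np.log(10 * np.exp(-0.1 * t)), 0.3, 5) for t in times}
tissue = {t: rng.lognormal(np.log(25 * np.exp(-0.1 * t)), 0.3, 5) for t in times}

ratios = []
for _ in range(1000):
    # resample animals with replacement within each time point,
    # build mean profiles, then take the AUC ratio
    p = np.array([rng.choice(plasma[t], 5).mean() for t in times])
    q = np.array([rng.choice(tissue[t], 5).mean() for t in times])
    ratios.append(auc(q, times) / auc(p, times))

lo, hi = np.quantile(ratios, [0.05, 0.95])
print(f"tissue-to-plasma AUC ratio: {np.mean(ratios):.2f} "
      f"(90% interval {lo:.2f}-{hi:.2f})")
```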

  19. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  20. Test of methods for retrospective activity size distribution determination from filter samples

    International Nuclear Information System (INIS)

    Meisenberg, Oliver; Tschiersch, Jochen

    2015-01-01

Determining the activity size distribution of radioactive aerosol particles requires sophisticated and heavy equipment, which makes measurements at a large number of sites difficult and expensive. Therefore three methods for a retrospective determination of size distributions from aerosol filter samples in the laboratory were tested for their applicability. Extraction into a carrier liquid with subsequent nebulisation showed size distributions with a slight but correctable bias towards larger diameters compared with the original size distribution. Yields in the order of magnitude of 1% could be achieved. Sonication-assisted extraction into a carrier liquid caused a coagulation mode to appear in the size distribution. Sonication-assisted extraction into the air did not show acceptable results due to small yields. The method of extraction into a carrier liquid without sonication was applied to aerosol samples from Chernobyl in order to calculate inhalation dose coefficients for 137Cs based on the individual size distribution. The effective dose coefficient is about half of that calculated with a default reference size distribution. - Highlights: • Activity size distributions can be recovered after aerosol sampling on filters. • Extraction into a carrier liquid and subsequent nebulisation is appropriate. • This facilitates the determination of activity size distributions for individuals. • Size distributions from this method can be used for individual dose coefficients. • Dose coefficients were calculated for the workers at the new Chernobyl shelter

  1. Caution regarding the choice of standard deviations to guide sample size calculations in clinical trials.

    Science.gov (United States)

    Chen, Henian; Zhang, Nanhua; Lu, Xiaosun; Chen, Sophie

    2013-08-01

    The method used to determine choice of standard deviation (SD) is inadequately reported in clinical trials. Underestimations of the population SD may result in underpowered clinical trials. This study demonstrates how using the wrong method to determine population SD can lead to inaccurate sample sizes and underpowered studies, and offers recommendations to maximize the likelihood of achieving adequate statistical power. We review the practice of reporting sample size and its effect on the power of trials published in major journals. Simulated clinical trials were used to compare the effects of different methods of determining SD on power and sample size calculations. Prior to 1996, sample size calculations were reported in just 1%-42% of clinical trials. This proportion increased from 38% to 54% after the initial Consolidated Standards of Reporting Trials (CONSORT) was published in 1996, and from 64% to 95% after the revised CONSORT was published in 2001. Nevertheless, underpowered clinical trials are still common. Our simulated data showed that all minimal and 25th-percentile SDs fell below 44 (the population SD), regardless of sample size (from 5 to 50). For sample sizes 5 and 50, the minimum sample SDs underestimated the population SD by 90.7% and 29.3%, respectively. If only one sample was available, there was less than 50% chance that the actual power equaled or exceeded the planned power of 80% for detecting a median effect size (Cohen's d = 0.5) when using the sample SD to calculate the sample size. The proportions of studies with actual power of at least 80% were about 95%, 90%, 85%, and 80% when we used the larger SD, 80% upper confidence limit (UCL) of SD, 70% UCL of SD, and 60% UCL of SD to calculate the sample size, respectively. When more than one sample was available, the weighted average SD resulted in about 50% of trials being underpowered; the proportion of trials with power of 80% increased from 90% to 100% when the 75th percentile and the
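
The underpowering mechanism is straightforward to reproduce: plan a two-sample trial with the SD of a single small pilot, then evaluate the power actually attained against the true population SD. The sketch uses normal-approximation formulas and the population SD of 44 from the simulations above.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
pop_sd, d = 44.0, 0.5                     # population SD, median effect size
delta = d * pop_sd                        # true mean difference
z_a, z_b = 1.96, 0.84                     # alpha = 0.05 (two-sided), power = 0.80
print("n per arm with the SD known:", int(np.ceil(2 * ((z_a + z_b) / d) ** 2)))

underpowered = 0
for _ in range(5000):
    sd_pilot = np.std(rng.normal(0, pop_sd, 15), ddof=1)   # one pilot sample, n = 15
    n = int(np.ceil(2 * ((z_a + z_b) * sd_pilot / delta) ** 2))
    power = stats.norm.cdf(delta / (pop_sd * np.sqrt(2 / n)) - z_a)
    underpowered += power < 0.80

print(f"trials underpowered when planned from a single pilot SD: "
      f"{underpowered / 5000:.0%}")       # roughly half, as reported above
```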

  2. Sample sizes and model comparison metrics for species distribution models

    Science.gov (United States)

    B.B. Hanberry; H.S. He; D.C. Dey

    2012-01-01

    Species distribution models use small samples to produce continuous distribution maps. The question of how small a sample can be to produce an accurate model generally has been answered based on comparisons to maximum sample sizes of 200 observations or fewer. In addition, model comparisons often are made with the kappa statistic, which has become controversial....

  3. Influence of Sample Size on Automatic Positional Accuracy Assessment Methods for Urban Areas

    Directory of Open Access Journals (Sweden)

    Francisco J. Ariza-López

    2018-05-01

Full Text Available In recent years, new approaches aimed at increasing the automation level of positional accuracy assessment processes for spatial data have been developed. However, in such cases, an aspect as significant as sample size has not yet been addressed. In this paper, we study the influence of sample size when estimating the planimetric positional accuracy of urban databases by means of an automatic assessment using polygon-based methodology. Our study is based on a simulation process, which extracts pairs of homologous polygons from the assessed and reference data sources and applies two buffer-based methods. The parameter used for determining the different sizes (which range from 5 km up to 100 km) has been the length of the polygons’ perimeter, and for each sample size 1000 simulations were run. After completing the simulation process, the comparisons between the estimated distribution functions for each sample and the population distribution function were carried out by means of the Kolmogorov–Smirnov test. Results show a significant reduction in the variability of estimations when sample size increased from 5 km to 100 km.

  4. Sample size determination for disease prevalence studies with partially validated data.

    Science.gov (United States)

    Qiu, Shi-Fang; Poon, Wai-Yin; Tang, Man-Lai

    2016-02-01

Disease prevalence is an important topic in medical research, and its study is based on data that are obtained by classifying subjects according to whether a disease has been contracted. Classification can be conducted with high-cost gold standard tests or low-cost screening tests, but the latter are subject to the misclassification of subjects. As a compromise between the two, many research studies use partially validated datasets in which all data points are classified by fallible tests, and some of the data points are validated in the sense that they are also classified by the completely accurate gold-standard test. In this article, we investigate the determination of sample sizes for disease prevalence studies with partially validated data. We use two approaches. The first is to find sample sizes that can achieve a pre-specified power of a statistical test at a chosen significance level, and the second is to find sample sizes that can control the width of a confidence interval with a pre-specified confidence level. Empirical studies have been conducted to demonstrate the performance of various testing procedures with the proposed sample sizes. The applicability of the proposed methods is illustrated by a real-data example. © The Author(s) 2012.

  5. Volume-of-fluid simulations in microfluidic T-junction devices: Influence of viscosity ratio on droplet size

    Science.gov (United States)

    Nekouei, Mehdi; Vanapalli, Siva A.

    2017-03-01

We used the volume-of-fluid (VOF) method to perform three-dimensional numerical simulations of droplet formation of Newtonian fluids in microfluidic T-junction devices. To evaluate the performance of the VOF method we examined the regimes of drop formation and determined droplet size as a function of system parameters. Comparison of the simulation results with four sets of experimental data from the literature showed good agreement, validating the VOF method. Motivated by the lack of adequate studies investigating the influence of viscosity ratio (λ) on the generated droplet size, we mapped the dependence of drop volume on capillary number (Ca > 0.001) and viscosity ratio. In addition, we find that at a given capillary number, the size of droplets does not vary appreciably when λ < 1. We develop an analytical model for predicting the droplet size that includes a viscosity-dependent breakup time for the dispersed phase. This improved model successfully predicts the effects of the viscosity ratio observed in simulations. Results from this study are useful for the design of lab-on-chip technologies and manufacture of microfluidic emulsions, where there is a need to know how system parameters influence the droplet size.

  6. Optimal Sample Size for Probability of Detection Curves

    International Nuclear Information System (INIS)

    Annis, Charles; Gandossi, Luca; Martin, Oliver

    2012-01-01

The use of Probability of Detection (POD) curves to quantify NDT reliability is common in the aeronautical industry, but relatively less so in the nuclear industry. The European Network for Inspection Qualification's (ENIQ) Inspection Qualification Methodology is based on the concept of Technical Justification, a document assembling all the evidence to assure that the NDT system in focus is indeed capable of finding the flaws for which it was designed. This methodology has become widely used in many countries, but the assurance it provides is usually of qualitative nature. The need to quantify the output of inspection qualification has become more important, especially as structural reliability modelling and quantitative risk-informed in-service inspection methodologies become more widely used. To credit the inspections in structural reliability evaluations, a measure of the NDT reliability is necessary. A POD curve provides such a metric. In 2010 ENIQ developed a technical report on POD curves, reviewing the statistical models used to quantify inspection reliability. Further work was subsequently carried out to investigate the issue of optimal sample size for deriving a POD curve, so that adequate guidance could be given to the practitioners of inspection reliability. Manufacturing of test pieces with cracks that are representative of real defects found in nuclear power plants (NPP) can be very expensive. Thus there is a tendency to reduce sample sizes and in turn reduce the conservatism associated with the POD curve derived. Not much guidance on the correct sample size can be found in the published literature, where often qualitative statements are given with no further justification. The aim of this paper is to summarise the findings of such work. (author)
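
For orientation, a standard hit/miss POD model (logistic in log flaw size, in the spirit of MIL-HDBK-1823 analyses) can be fitted in a few lines; the flaw sizes and parameters below are invented. Re-running with fewer flaws shows how the estimated a90 scatters as the sample shrinks.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
a = rng.uniform(0.5, 10.0, 60)                    # hypothetical flaw sizes (mm)
true_pod = 1 / (1 + np.exp(-(np.log(a) - np.log(3.0)) / 0.4))
hit = (rng.random(60) < true_pod).astype(float)   # hit/miss inspection outcomes

fit = sm.Logit(hit, sm.add_constant(np.log(a))).fit(disp=0)
b0, b1 = fit.params
a90 = np.exp((np.log(9.0) - b0) / b1)             # logit(0.9) = ln 9
print(f"estimated a90 = {a90:.2f} mm from n = 60 flaws")
```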

  7. On Using a Pilot Sample Variance for Sample Size Determination in the Detection of Differences between Two Means: Power Consideration

    Science.gov (United States)

    Shieh, Gwowen

    2013-01-01

    The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
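
For reference, the per-group n for a two-sample t test at 80% power is a one-liner with statsmodels; the effect sizes below are illustrative. If the pilot variance underestimates sigma, the standardized effect d is overestimated and the resulting n is too small, which is the power problem the abstract analyzes.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8):   # standardized effect size d = (mu1 - mu2) / sigma
    n = analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
    print(f"d = {d}: n per group = {n:.0f}")
```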

  8. Improving CT detection sensitivity for nodal metastases in oesophageal cancer with combination of smaller size and lymph node axial ratio

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Jianfang [Chinese Academy of Medical Sciences and Peking Union Medical College, National Cancer Center/Cancer Hospital, Beijing (China); Capital Medical University Electric Power Teaching Hospital, Beijing (China); Wang, Zhu; Qu, Dong; Yao, Libo [Chinese Academy of Medical Sciences and Peking Union Medical College, National Cancer Center/Cancer Hospital, Beijing (China); Shao, Huafei [Affiliated Yantai Yuhuangding Hospital of Qingdao University Medical College, Yantai (China); Liu, Jian [Meitan General Hospital, Beijing (China)

    2018-01-15

    To investigate the value of CT with inclusion of smaller lymph node (LN) sizes and axial ratio to improve the sensitivity in diagnosis of regional lymph node metastases in oesophageal squamous cell carcinoma (OSCC). The contrast-enhanced multidetector row spiral CT (MDCT) multiplanar reconstruction images of 204 patients with OSCC were retrospectively analysed. The long-axis and short-axis diameters of the regional LNs were measured and axial ratios were calculated (short-axis/long-axis diameters). Nodes were considered round if the axial ratio exceeded the optimal LN axial ratio, which was determined by receiver operating characteristic analysis. A positive predictive value (PPV) exceeding 50% is needed. This was achieved only with LNs larger than 9 mm in short-axis diameter, but nodes of this size were rare (sensitivity 37.3%, specificity 96.4%, accuracy 85.8%). If those round nodes (axial ratio exceeding 0.66) between 7 mm and 9 mm in size were considered metastases as well, it might improve the sensitivity to 67.2% with a PPV of 63.9% (specificity 91.6%, accuracy 87.2%). Combination of a smaller size and axial ratio for LNs in MDCT as criteria improves the detection sensitivity for LN metastases in OSCC. (orig.)

  9. What is the optimum sample size for the study of peatland testate amoeba assemblages?

    Science.gov (United States)

    Mazei, Yuri A; Tsyganov, Andrey N; Esaulov, Anton S; Tychkov, Alexander Yu; Payne, Richard J

    2017-10-01

    Testate amoebae are widely used in ecological and palaeoecological studies of peatlands, particularly as indicators of surface wetness. To ensure data are robust and comparable it is important to consider methodological factors which may affect results. One significant question which has not been directly addressed in previous studies is how sample size (expressed here as number of Sphagnum stems) affects data quality. In three contrasting locations in a Russian peatland we extracted samples of differing size, analysed testate amoebae and calculated a number of widely-used indices: species richness, Simpson diversity, compositional dissimilarity from the largest sample and transfer function predictions of water table depth. We found that there was a trend for larger samples to contain more species across the range of commonly-used sample sizes in ecological studies. Smaller samples sometimes failed to produce counts of testate amoebae often considered minimally adequate. It seems likely that analyses based on samples of different sizes may not produce consistent data. Decisions about sample size need to reflect trade-offs between logistics, data quality, spatial resolution and the disturbance involved in sample extraction. For most common ecological applications we suggest that samples of more than eight Sphagnum stems are likely to be desirable. Copyright © 2017 Elsevier GmbH. All rights reserved.

  10. [Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].

    Science.gov (United States)

    Fu, Yingkun; Xie, Yanming

    2011-10-01

In recent years, as the Chinese government and public have paid more attention to the post-marketing research of Chinese medicine, some traditional Chinese medicines have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also assures that the intended trials will have the desired power to correctly detect a clinically meaningful difference of the medicine under study if such a difference truly exists. Up to now, there is no systematic method of sample size calculation tailored to traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of clinical evaluation of traditional Chinese medicine, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in Chinese medicine research.

  11. Determining Plane-Sweep Sampling Points in Image Space Using the Cross-Ratio for Image-Based Depth Estimation

    Science.gov (United States)

    Ruf, B.; Erdnuess, B.; Weinmann, M.

    2017-08-01

With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that the relative
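
The inverse sampling favoured by the experiments can be sketched as uniform spacing in inverse depth, the image-space equivalent of the cross-ratio construction for a fronto-parallel sweep; the depth range below is arbitrary.

```python
import numpy as np

def linear_depth_planes(d_min: float, d_max: float, n: int) -> np.ndarray:
    """Equidistant sampling in scene space (constant step in depth)."""
    return np.linspace(d_min, d_max, n)

def inverse_depth_planes(d_min: float, d_max: float, n: int) -> np.ndarray:
    """Sampling uniform in image space: constant pixel-displacement steps
    correspond to equidistant steps in inverse depth (disparity)."""
    return 1.0 / np.linspace(1.0 / d_min, 1.0 / d_max, n)

print(np.round(linear_depth_planes(2.0, 100.0, 8), 1))   # even in depth
print(np.round(inverse_depth_planes(2.0, 100.0, 8), 1))  # dense near the camera
```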

  12. DETERMINING PLANE-SWEEP SAMPLING POINTS IN IMAGE SPACE USING THE CROSS-RATIO FOR IMAGE-BASED DEPTH ESTIMATION

    Directory of Open Access Journals (Sweden)

    B. Ruf

    2017-08-01

Full Text Available With the emergence of small consumer Unmanned Aerial Vehicles (UAVs), the importance and interest of image-based depth estimation and model generation from aerial images has greatly increased in the photogrammetric society. In our work, we focus on algorithms that allow an online image-based dense depth estimation from video sequences, which enables the direct and live structural analysis of the depicted scene. Therefore, we use a multi-view plane-sweep algorithm with a semi-global matching (SGM) optimization which is parallelized for general purpose computation on a GPU (GPGPU), reaching sufficient performance to keep up with the key-frames of input sequences. One important aspect to reach good performance is the way to sample the scene space, creating plane hypotheses. A small step size between consecutive planes, which is needed to reconstruct details in the near vicinity of the camera, may lead to ambiguities in distant regions, due to the perspective projection of the camera. Furthermore, an equidistant sampling with a small step size produces a large number of plane hypotheses, leading to high computational effort. To overcome these problems, we present a novel methodology to directly determine the sampling points of plane-sweep algorithms in image space. The use of the perspective invariant cross-ratio allows us to derive the location of the sampling planes directly from the image data. With this, we efficiently sample the scene space, achieving higher sampling density in areas which are close to the camera and a lower density in distant regions. We evaluate our approach on a synthetic benchmark dataset for quantitative evaluation and on a real-image dataset consisting of aerial imagery. The experiments reveal that an inverse sampling achieves equal and better results than a linear sampling, with less sampling points and thus less runtime. Our algorithm allows an online computation of depth maps for subsequences of five frames, provided that

  13. Isotope ratio measurements of pg-size plutonium samples using TIMS in combination with the 'Multiple Ion Counting' and filament carburization

    Energy Technology Data Exchange (ETDEWEB)

    Jakopic, Rozle; Richter, Stephan; Kuehn, Heinz; Aregbe, Yetunde [European Commission, Directorate General Joint Research Centre Institute for Reference Materials and Measurements, IRMM Retieseweg 111, B-2440 Geel (Belgium)

    2008-07-01

    A new sample preparation procedure for isotopic measurements using the Triton TIMS (Thermal Ionization Mass Spectrometer) was developed which employed the technique of carburization of rhenium filaments. Carburized filaments were prepared in a special vacuum chamber in which the filaments were heated and exposed to benzene vapor. Ionization efficiency was improved by an order of magnitude. Additionally, a new 'multi-dynamic' measurement technique was developed for Pu isotope ratio measurements using the 'multiple ion counting' (MIC) system. This technique was further combined with the filament carburization technique and applied to the NBL-137 isotopic standard and samples of the NUSIMEP 5 inter-laboratory comparison campaign. The results clearly show an improved precision and accuracy for the 'multi-dynamic' measurement procedure, compared to measurements carried out either in peak-jumping or in static mode using the MIC system with non-carburized filaments. (authors)

  14. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Directory of Open Access Journals (Sweden)

    Ian J Fiske

Full Text Available BACKGROUND: Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. METHODOLOGY/PRINCIPAL FINDINGS: Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. CONCLUSIONS/SIGNIFICANCE: We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high

  15. Effects of sample size on estimates of population growth rates calculated with matrix models.

    Science.gov (United States)

    Fiske, Ian J; Bruna, Emilio M; Bolker, Benjamin M

    2008-08-28

    Matrix models are widely used to study the dynamics and demography of populations. An important but overlooked issue is how the number of individuals sampled influences estimates of the population growth rate (lambda) calculated with matrix models. Even unbiased estimates of vital rates do not ensure unbiased estimates of lambda-Jensen's Inequality implies that even when the estimates of the vital rates are accurate, small sample sizes lead to biased estimates of lambda due to increased sampling variance. We investigated if sampling variability and the distribution of sampling effort among size classes lead to biases in estimates of lambda. Using data from a long-term field study of plant demography, we simulated the effects of sampling variance by drawing vital rates and calculating lambda for increasingly larger populations drawn from a total population of 3842 plants. We then compared these estimates of lambda with those based on the entire population and calculated the resulting bias. Finally, we conducted a review of the literature to determine the sample sizes typically used when parameterizing matrix models used to study plant demography. We found significant bias at small sample sizes when survival was low (survival = 0.5), and that sampling with a more-realistic inverse J-shaped population structure exacerbated this bias. However our simulations also demonstrate that these biases rapidly become negligible with increasing sample sizes or as survival increases. For many of the sample sizes used in demographic studies, matrix models are probably robust to the biases resulting from sampling variance of vital rates. However, this conclusion may depend on the structure of populations or the distribution of sampling effort in ways that are unexplored. We suggest more intensive sampling of populations when individual survival is low and greater sampling of stages with high elasticities.
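
The Jensen's-inequality effect is easy to reproduce with a toy two-stage matrix model; the vital rates below are invented, and adult survival is held at its true value for brevity.

```python
import numpy as np

rng = np.random.default_rng(11)
s, g, f = 0.5, 0.3, 1.2           # juvenile survival, growth, fecundity

def lambda_hat(n: int) -> float:
    """Estimate lambda from vital rates observed on n sampled individuals."""
    s_hat = rng.binomial(n, s) / n
    g_hat = rng.binomial(n, g) / n
    f_hat = rng.poisson(f * n) / n
    A = np.array([[s_hat * (1 - g_hat), f_hat],
                  [s_hat * g_hat,       s    ]])   # adult survival kept at truth
    return np.max(np.abs(np.linalg.eigvals(A)))

true_A = np.array([[s * (1 - g), f], [s * g, s]])
true_lambda = np.max(np.abs(np.linalg.eigvals(true_A)))
for n in (10, 25, 50, 100, 500):
    est = np.mean([lambda_hat(n) for _ in range(3000)])
    # the gap between est and true_lambda is Jensen-type bias; it shrinks with n
    print(f"n = {n:4d}: mean lambda = {est:.4f} (true {true_lambda:.4f})")
```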

  16. Determining sample size for assessing species composition in ...

    African Journals Online (AJOL)

    Species composition is measured in grasslands for a variety of reasons. Commonly, observations are made using the wheel-point apparatus, but the problem of determining optimum sample size has not yet been satisfactorily resolved. In this study the wheel-point apparatus was used to record 2 000 observations in each of ...

  17. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
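
A common closed-form way to budget for varying cluster sizes is the design-effect approximation DE = 1 + ((CV² + 1)·m̄ − 1)·ρ attributed to Eldridge and colleagues; it is not the second-order PQL correction derived in the paper, and the numbers below are illustrative.

```python
def design_effect(mean_cluster_size: float, icc: float, cv: float = 0.0) -> float:
    """Design effect for cluster randomization; cv is the coefficient of
    variation of cluster sizes (cv = 0 recovers the equal-size formula)."""
    return 1 + ((cv**2 + 1) * mean_cluster_size - 1) * icc

m, icc, cv = 20, 0.05, 0.6
de_equal, de_varying = design_effect(m, icc), design_effect(m, icc, cv)
print(f"equal clusters:   DE = {de_equal:.2f}")
print(f"varying clusters: DE = {de_varying:.2f}")
print(f"extra clusters needed: {de_varying / de_equal - 1:.0%}")
```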

  18. The effect of clustering on lot quality assurance sampling: a probabilistic model to calculate sample sizes for quality assessments.

    Science.gov (United States)

    Hedt-Gauthier, Bethany L; Mitsunaga, Tisha; Hund, Lauren; Olives, Casey; Pagano, Marcello

    2013-10-26

Traditional Lot Quality Assurance Sampling (LQAS) designs assume observations are collected using simple random sampling. Alternatively, randomly sampling clusters of observations and then individuals within clusters reduces costs but decreases the precision of the classifications. In this paper, we develop a general framework for designing the cluster (C)-LQAS system and illustrate the method with the design of data quality assessments for the community health worker program in Rwanda. To determine sample size and decision rules for C-LQAS, we use the beta-binomial distribution to account for inflated risk of errors introduced by sampling clusters at the first stage. We present general theory and code for sample size calculations. The C-LQAS sample sizes provided in this paper constrain misclassification risks below user-specified limits. Multiple C-LQAS systems meet the specified risk requirements, but numerous considerations, including per-cluster versus per-individual sampling costs, help identify optimal systems for distinct applications. We show the utility of C-LQAS for data quality assessments, but the method generalizes to numerous applications. This paper provides the necessary technical detail and supplemental code to support the design of C-LQAS for specific programs.
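
A sketch of the beta-binomial risk calculation; for brevity it treats all n sampled individuals as exchangeably correlated with intra-cluster correlation icc, a simplification of the paper's explicit two-stage cluster model, and the decision rule and parameters are invented.

```python
from scipy.stats import betabinom, binom

def lqas_risks(n: int, d: int, p_good: float, p_bad: float, icc: float = 0.0):
    """Misclassification risks of the rule 'accept if at least d of n units
    are correct'. icc > 0 inflates the count variance via a beta-binomial."""
    def cdf(k, p):
        if icc == 0:
            return binom.cdf(k, n, p)
        s = (1 - icc) / icc            # gives pairwise correlation exactly icc
        return betabinom.cdf(k, n, p * s, (1 - p) * s)
    alpha = cdf(d - 1, p_good)         # good lot wrongly classified as bad
    beta = 1 - cdf(d - 1, p_bad)       # bad lot wrongly classified as good
    return alpha, beta

for icc in (0.0, 0.05, 0.10):
    a, b = lqas_risks(n=50, d=40, p_good=0.9, p_bad=0.6, icc=icc)
    print(f"icc = {icc:.2f}: alpha = {a:.3f}, beta = {b:.3f}")
```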

  19. The impact of sample size on the reproducibility of voxel-based lesion-deficit mappings.

    Science.gov (United States)

    Lorca-Puls, Diego L; Gajardo-Vidal, Andrea; White, Jitrachote; Seghier, Mohamed L; Leff, Alexander P; Green, David W; Crinion, Jenny T; Ludersdorfer, Philipp; Hope, Thomas M H; Bowman, Howard; Price, Cathy J

    2018-07-01

This study investigated how sample size affects the reproducibility of findings from univariate voxel-based lesion-deficit analyses (e.g., voxel-based lesion-symptom mapping and voxel-based morphometry). Our effect of interest was the strength of the mapping between brain damage and speech articulation difficulties, as measured in terms of the proportion of variance explained. First, we identified a region of interest by searching on a voxel-by-voxel basis for brain areas where greater lesion load was associated with poorer speech articulation using a large sample of 360 right-handed English-speaking stroke survivors. We then randomly drew thousands of bootstrap samples from this data set that included either 30, 60, 90, 120, 180, or 360 patients. For each resample, we recorded effect size estimates and p values after conducting exactly the same lesion-deficit analysis within the previously identified region of interest and holding all procedures constant. The results show (1) how often small effect sizes in a heterogeneous population fail to be detected; (2) how effect size and its statistical significance varies with sample size; (3) how low-powered studies (due to small sample sizes) can greatly over-estimate as well as under-estimate effect sizes; and (4) how large sample sizes (N ≥ 90) can yield highly significant p values even when effect sizes are so small that they become trivial in practical terms. The implications of these findings for interpreting the results from univariate voxel-based lesion-deficit analyses are discussed. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.
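
Findings (3) and (4), inflated effect sizes in small significant samples and trivial but significant effects in large ones, can be reproduced with a plain correlation simulation; the true R² of 0.05 is assumed for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2018)
true_r2, reps = 0.05, 4000            # a small true "lesion-deficit" effect

for n in (30, 90, 360):
    all_r2, sig_r2 = [], []
    for _ in range(reps):
        x = rng.standard_normal(n)    # e.g. lesion load
        y = np.sqrt(true_r2) * x + np.sqrt(1 - true_r2) * rng.standard_normal(n)
        r, p = stats.pearsonr(x, y)
        all_r2.append(r * r)
        if p < 0.05:
            sig_r2.append(r * r)
    print(f"n = {n:3d}: mean R2 = {np.mean(all_r2):.3f}, "
          f"mean R2 if significant = {np.mean(sig_r2):.3f}, "
          f"power = {len(sig_r2) / reps:.0%}")
```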

  20. Sampling considerations when analyzing micrometric-sized particles in a liquid jet using laser induced breakdown spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Faye, C.B.; Amodeo, T.; Fréjafon, E. [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France); Delepine-Gilon, N. [Institut des Sciences Analytiques, 5 rue de la Doua, 69100 Villeurbanne (France); Dutouquet, C., E-mail: christophe.dutouquet@ineris.fr [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France)

    2014-01-01

Pollution of water is a matter of concern all over the earth. Particles are known to play an important role in the transportation of pollutants in this medium. In addition, the emergence of new materials such as NOAA (Nano-Objects, their Aggregates and their Agglomerates) emphasizes the need to develop adapted instruments for their detection. Surveillance of pollutants in particulate form in waste waters in industries involved in nanoparticle manufacturing and processing is a telling example of possible applications of such instrumental development. The LIBS (laser-induced breakdown spectroscopy) technique coupled with the liquid jet as sampling mode for suspensions was deemed a potential candidate for on-line and real-time monitoring. With the final aim of obtaining the best detection limits, the interaction of nanosecond laser pulses with the liquid jet was examined. The evolution of the volume sampled by laser pulses was estimated as a function of the laser energy applying conditional analysis when analyzing a suspension of micrometric-sized particles of borosilicate glass. An estimation of the sampled depth was made. Along with the estimation of the sampled volume, the evolution of the SNR (signal to noise ratio) as a function of the laser energy was investigated as well. Eventually, the laser energy and the corresponding fluence optimizing both the sampling volume and the SNR were determined. The obtained results highlight intrinsic limitations of the liquid jet sampling mode when using 532 nm nanosecond laser pulses with suspensions. - Highlights: • Micrometric-sized particles in suspensions are analyzed using LIBS and a liquid jet. • The evolution of the sampling volume is estimated as a function of laser energy. • The sampling volume happens to saturate beyond a certain laser fluence. • Its value was found much lower than the beam diameter times the jet thickness. • Particles proved not to be entirely vaporized.

  1. Does increasing the size of bi-weekly samples of records influence results when using the Global Trigger Tool? An observational study of retrospective record reviews of two different sample sizes.

    Science.gov (United States)

    Mevik, Kjersti; Griffin, Frances A; Hansen, Tonje E; Deilkås, Ellen T; Vonen, Barthold

    2016-04-25

To investigate the impact of increasing the sample of records reviewed bi-weekly with the Global Trigger Tool method to identify adverse events in hospitalised patients. Retrospective observational study. A Norwegian 524-bed general hospital trust. 1920 medical records selected from 1 January to 31 December 2010. Rate, type and severity of adverse events identified in two different sample sizes of records, selected as 10 and 70 records bi-weekly. In the large sample, 1.45 (95% CI 1.07 to 1.97) times more adverse events per 1000 patient days (39.3 adverse events/1000 patient days) were identified than in the small sample (27.2 adverse events/1000 patient days). Hospital-acquired infections were the most common category of adverse events in both the samples, and the distributions of the other categories of adverse events did not differ significantly between the samples. The distribution of severity level of adverse events did not differ between the samples. The findings suggest that while the distribution of categories and severity are not dependent on the sample size, the rate of adverse events is. Further studies are needed to conclude if the optimal sample size may need to be adjusted based on the hospital size in order to detect a more accurate rate of adverse events. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  2. Estimating age ratios and size of pacific walrus herds on coastal haulouts using video imaging.

    Directory of Open Access Journals (Sweden)

    Daniel H Monson

Full Text Available During Arctic summers, sea ice provides resting habitat for Pacific walruses as it drifts over foraging areas in the eastern Chukchi Sea. Climate-driven reductions in sea ice have recently created ice-free conditions in the Chukchi Sea by late summer causing walruses to rest at coastal haulouts along the Chukotka and Alaska coasts, which provides an opportunity to study walruses at relatively accessible locations. Walrus age can be determined from the ratio of tusk length to snout dimensions. We evaluated use of images obtained from a gyro-stabilized video system mounted on a helicopter flying at high altitudes (to avoid disturbance) to classify the sex and age of walruses hauled out on Alaska beaches in 2010-2011. We were able to classify 95% of randomly selected individuals to either an 8- or 3-category age class, and we found measurement-based age classifications were more repeatable than visual classifications when using images presenting the correct head profile. Herd density at coastal haulouts averaged 0.88 walruses/m(2) (std. err. = 0.02), herd size ranged from 8,300 to 19,400 (CV 0.03-0.06) and we documented ∼30,000 animals along ∼1 km of beach in 2011. Within the herds, dependent walruses (0-2 yr-olds) tended to be located closer to water, and this tendency became more pronounced as the herd spent more time on the beach. Therefore, unbiased estimation of herd age-ratios will require a sampling design that allows for spatial and temporal structuring. In addition, randomly sampling walruses available at the edge of the herd for other purposes (e.g., tagging, biopsying) will not sample walruses with an age structure representative of the herd. Sea ice losses are projected to continue, and population age structure data collected with aerial videography at coastal haulouts may provide demographic information vital to ongoing efforts to understand effects of climate change on this species.

  3. Predictors of Citation Rate in Psychology: Inconclusive Influence of Effect and Sample Size.

    Science.gov (United States)

    Hanel, Paul H P; Haase, Jennifer

    2017-01-01

In the present article, we investigate predictors of how often a scientific article is cited. Specifically, we focus on the influence of two often neglected predictors of citation rate: effect size and sample size, using samples from two psychological topical areas. Both can be considered indicators of an article's importance and of post hoc (or observed) statistical power, and should, especially in applied fields, predict citation rates. In Study 1, effect size did not have an influence on citation rates across a topical area, both with and without controlling for numerous variables that have been previously linked to citation rates. In contrast, sample size predicted citation rates, but only while controlling for other variables. In Study 2, sample size and, in part, effect size predicted citation rates, indicating that the relations vary even between scientific topical areas. Statistically significant results had more citations in Study 2 but not in Study 1. The results indicate that the importance (or power) of scientific findings may not be as strongly related to citation rate as is generally assumed.

  4. An approach for measuring the {sup 129}I/{sup 127}I ratio in fish samples

    Energy Technology Data Exchange (ETDEWEB)

    Kusuno, Haruka, E-mail: kusuno@um.u-tokyo.ac.jp [The University Museum, The University of Tokyo, 3-7-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Matsuzaki, Hiroyuki [The University Museum, The University of Tokyo, 3-7-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Nagata, Toshi; Miyairi, Yosuke; Yokoyama, Yusuke [Atmosphere and Ocean Research Institute, The University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa-shi, Chiba 277-8564 (Japan); Ohkouchi, Naohiko [Japan Agency for Marine-Earth Science and Technology, 2-15, Natsushima-cho, Yokosuka-city, Kanagawa 237-0061 (Japan)

    2015-10-15

    The {sup 129}I/{sup 127}I ratio in marine fish samples was measured employing accelerator mass spectrometry. The measurement was successful because of the low experimental background of {sup 129}I. Pyrohydrolysis was applied to extract iodine from fish samples. The experimental background of pyrohydrolysis was checked carefully and evaluated as 10{sup 4}–10{sup 5} atoms {sup 129}I/combustion. The methodology employed in the present study thus required only 0.05–0.2 g of dried fish samples. The methodology was then applied to obtain the {sup 129}I/{sup 127}I ratio of marine fish samples collected from the Western Pacific Ocean as (0.63–1.2) × 10{sup −10}. These values were similar to the ratio for the surface seawater collected at the same station, 0.4 × 10{sup −10}. The {sup 129}I/{sup 127}I ratio of IAEA-414, which was a mix of fish from the Irish Sea and the North Sea, was also measured and determined as 1.82 × 10{sup −7}. Consequently, fish from the Western Pacific Ocean and the North Sea were distinguished by their {sup 129}I/{sup 127}I ratios. The {sup 129}I/{sup 127}I ratio is thus a direct indicator of the area of habitat of fish.

  5. Selection Of Suitable Particle Size And Particle Ratio For Japanese Cucumber Cucumis Sativus L. Plants

    Directory of Open Access Journals (Sweden)

    Galahitigama GAH

    2015-08-01

Full Text Available This study was conducted to select the best particle size of coco peat for cucumber nurseries as well as the best particle ratio for optimum growth and development of cucumber plants. The experiment was carried out at the International Foodstuff Company and the Faculty of Agriculture, University of Ruhuna, Sri Lanka, during 2015 to 2016. In the first experiment, three different particle sizes were used as treatments, namely fine (≤0.5 mm; T2), medium (0.5-3 mm; T3) and coarse (4 mm; T4), with normal coco peat as the control (T1). A completely randomized design (CRD) with five replicates was used. Germination percentage, number of leaves per seedling and seedling height were recorded at frequent intervals as growth parameters. Analysis of variance was applied to the data at the 5% probability level. The results revealed that the medium particle size (sieve size 0.5-3 mm) of coco peat was the best particle size for cucumber nursery practice, considering the physical and chemical properties of the medium particles. In the experiment on selecting a suitable particle ratio for cucumber plants, the compressed mixture containing 70% w/w unsieved coco peat, 20% w/w coarse particles and 10% w/w coconut husk chips (5-12 mm) gave the best growth performance compared to the other treatments, and cucumber grown in this mixture showed maximum growth and yield.

  6. Sample size calculation to externally validate scoring systems based on logistic regression models.

    Directory of Open Access Journals (Sweden)

    Antonio Palazón-Bru

    Full Text Available A sample size containing at least 100 events and 100 non-events has been suggested for validating a predictive model, regardless of the model being validated, even though certain factors (discrimination, parameterization and incidence) can influence the calibration of the predictive model. Scoring systems based on binary logistic regression models are a specific type of predictive model. The aim of this study was to develop an algorithm to determine the sample size for validating a scoring system based on a binary logistic regression model, and to apply it to a case study. The algorithm was based on bootstrap samples in which the area under the ROC curve, the observed event probabilities through smooth curves, and a measure of the lack of calibration (the estimated calibration index) were calculated. To illustrate its use for interested researchers, the algorithm was applied to a scoring system, based on a binary logistic regression model, for determining mortality in intensive care units. In the case study provided, the algorithm yielded a sample size with 69 events, which is lower than the value suggested in the literature. An algorithm is provided for finding the appropriate sample size to validate scoring systems based on binary logistic regression models. This could be applied to determine the sample size in other similar cases.
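    The record describes the algorithm only at a high level. As a rough illustration of the underlying idea (bootstrap resampling at increasing event counts until a precision criterion stabilizes), here is a minimal Python sketch. The stopping rule (bootstrap SD of the AUC below target_sd), the step size, and the use of AUC spread in place of the authors' estimated calibration index are simplifying assumptions, not the published algorithm.

```python
import numpy as np

def auc(y, p):
    """Area under the ROC curve via the Mann-Whitney rank statistic
    (ties broken arbitrarily; fine for continuous scores)."""
    ranks = np.empty(len(p))
    ranks[np.argsort(p)] = np.arange(1, len(p) + 1)
    n1 = y.sum()
    n0 = len(y) - n1
    return (ranks[y == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

def events_for_stable_auc(score, outcome, target_sd=0.02,
                          n_boot=500, seed=0):
    """Smallest number of events at which bootstrap validation samples
    give an AUC standard deviation below target_sd."""
    rng = np.random.default_rng(seed)
    ev = np.flatnonzero(outcome == 1)
    nev = np.flatnonzero(outcome == 0)
    ratio = len(nev) / len(ev)              # keep the case mix fixed
    for n_ev in range(20, len(ev) + 1, 5):
        aucs = []
        for _ in range(n_boot):
            idx = np.concatenate([rng.choice(ev, n_ev),
                                  rng.choice(nev, int(n_ev * ratio))])
            aucs.append(auc(outcome[idx], score[idx]))
        if np.std(aucs) < target_sd:
            return n_ev
    return len(ev)

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                   # a hypothetical risk score
y = (rng.random(2000) < 1 / (1 + np.exp(-x))).astype(int)
print(events_for_stable_auc(x, y))
```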

  7. Differences in Movement Pattern and Detectability between Males and Females Influence How Common Sampling Methods Estimate Sex Ratio.

    Directory of Open Access Journals (Sweden)

    João Fabrício Mota Rodrigues

    Full Text Available Sampling biodiversity is an essential step for conservation, and understanding the efficiency of sampling methods allows us to estimate the quality of our biodiversity data. Sex ratio is an important population characteristic, but until now, no study has evaluated how efficient the sampling methods commonly used in biodiversity surveys are at estimating the sex ratio of populations. We used a virtual ecologist approach to investigate whether active and passive capture methods are able to accurately sample a population's sex ratio, and whether differences in movement pattern and detectability between males and females produce biased estimates of sex ratios when using these methods. Our simulation allowed the recognition of individuals, similar to mark-recapture studies. We found that differences in both movement patterns and detectability between males and females produce biased estimates of sex ratios. However, increasing the sampling effort or the number of sampling days improves the ability of passive or active capture methods to properly sample the sex ratio. Thus, prior knowledge of movement patterns and detectability for a species is important information to guide field studies aiming to understand sex-ratio-related patterns.
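    The simulation itself is not spelled out in the abstract. A toy version of the passive-capture case (only the detectability difference is modeled; movement is left out, and all parameters are invented) could look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_sex_ratio(n_males=500, n_females=500,
                       p_detect_male=0.6, p_detect_female=0.3,
                       n_days=10):
    """Passive-capture toy model: each day every individual is detected
    independently with a sex-specific probability; repeat detections of
    the same individual are collapsed, as in a mark-recapture survey."""
    seen_m = np.zeros(n_males, dtype=bool)
    seen_f = np.zeros(n_females, dtype=bool)
    for _ in range(n_days):
        seen_m |= rng.random(n_males) < p_detect_male
        seen_f |= rng.random(n_females) < p_detect_female
    return seen_m.sum() / max(seen_f.sum(), 1)

# True sex ratio is 1:1, but unequal detectability inflates the estimate;
# more sampling days shrink the bias as both detection curves saturate.
for days in (1, 5, 20):
    est = np.mean([simulate_sex_ratio(n_days=days) for _ in range(200)])
    print(f"{days:2d} days: estimated M:F ratio ~ {est:.2f}")
```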

  9. Predicted versus observed cosmic-ray-produced noble gases in lunar samples: improved Kr production ratios

    International Nuclear Information System (INIS)

    Regnier, S.; Hohenberg, C.M.; Marti, K.; Reedy, R.C.

    1979-01-01

    New sets of cross sections for the production of krypton isotopes from targets of Rb, Sr, Y, and Zr were constructed, based primarily on experimental excitation functions for Kr production from Y. These cross sections were used to calculate galactic-cosmic-ray and solar-proton production rates for Kr isotopes in the moon. Spallation Kr data obtained from ilmenite separates of rocks 10017 and 10047 are reported. Production rates and isotopic ratios for cosmogenic Kr observed in ten well-documented lunar samples, and in ilmenite separates and bulk samples from several lunar rocks with long but unknown irradiation histories, were compared with predicted rates and ratios. The agreement was generally quite good. Erosion of rock surfaces affected rates or ratios only for near-surface samples, where solar-proton production is important. There were considerable spreads in predicted-to-observed production rates of 83Kr, due at least in part to uncertainties in chemical abundances. The 78Kr/83Kr ratios were predicted quite well for samples with a wide range of Zr/Sr abundance ratios. The calculated 80Kr/83Kr ratios were greater than the observed ratios when production by the 79Br(n,γ) reaction was included, but were slightly undercalculated if the Br reaction was omitted; these results suggest that Br(n,γ)-produced Kr is not retained well by lunar rocks. The production of 81Kr and 82Kr was overcalculated by approximately 10% relative to 83Kr. Predicted-to-observed 84Kr/83Kr ratios scattered considerably, possibly because of uncertainties in corrections for trapped and fission components and in cross sections for 84Kr production. Most predicted 84Kr and 86Kr production rates were lower than observed. Shielding depths of several Apollo 11 rocks were determined from the measured 78Kr/83Kr ratios of ilmenite separates. 4 figures, 5 tables

  10. Influence of content and particle size of waste PET bottles on concrete behavior at different w/c ratios

    International Nuclear Information System (INIS)

    Albano, C.; Camacho, N.; Hernandez, M.; Matheus, A.; Gutierrez, A.

    2009-01-01

    The goal of this work was to study the mechanical behavior of concrete with recycled polyethylene terephthalate (PET), varying the water/cement ratio (0.50 and 0.60), the PET content (10 and 20 vol%) and the particle size. The influence of thermal degradation of the PET in the concrete was also studied, when the blends were exposed to different temperatures (200, 400, 600 °C). Results indicate that PET-filled concrete showed a decrease in compressive strength, splitting tensile strength, modulus of elasticity and ultrasonic pulse velocity as the volume proportion and particle size of PET increased; however, water absorption increased. On the other hand, the flexural strength of PET concrete exposed to a heat source was strongly dependent on the temperature and water/cement ratio, as well as on the PET content and particle size. Moreover, the activation energy was affected by the temperature, the location of the PET particles in the slabs, and the water/cement ratio.

  12. Dynamic effect of total solid content, low substrate/inoculum ratio and particle size on solid-state anaerobic digestion.

    Science.gov (United States)

    Motte, J-C; Escudié, R; Bernet, N; Delgenes, J-P; Steyer, J-P; Dumas, C

    2013-09-01

    Among the process parameters of solid-state anaerobic digestion (SS-AD), total solid (TS) content, inoculation (S/X ratio) and the size of the organic solid particles can be optimized to improve methane yield and process stability. To evaluate the effects of each parameter and their interactions on methane production, a three-level Box-Behnken experimental design was implemented in SS-AD batch tests degrading wheat straw, adjusting TS content from 15% to 25%, S/X ratio (in volatile solids) between 28 and 47, and particle size with a mean diameter ranging from 0.1 to 1.4 mm. A dynamic analysis of the methane production indicates that the S/X ratio has an effect only during the start-up phase of the SS-AD. During the growth phase, TS content becomes the main parameter governing methane production, and its strong interaction with particle size suggests an important role of water compartmentation in SS-AD. Copyright © 2013 Elsevier Ltd. All rights reserved.
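    For readers unfamiliar with the design, the coded three-factor Box-Behnken matrix is easy to generate; the sketch below decodes it to the factor ranges quoted above. The number of centre points is an assumption, and this is not the authors' actual run list.

```python
import itertools
import numpy as np

def box_behnken_3(levels, n_center=3):
    """Coded three-factor Box-Behnken design: for each pair of factors,
    all four (+/-1, +/-1) combinations with the third factor at 0,
    plus replicated centre points (12 + n_center runs)."""
    runs = []
    for i, j in itertools.combinations(range(3), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0, 0, 0]
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0, 0, 0]] * n_center
    coded = np.array(runs, dtype=float)
    lo, hi = np.array(levels).T            # decode to physical units
    return lo + (coded + 1) / 2 * (hi - lo)

# Factor ranges from the abstract: TS %, S/X ratio, particle size (mm)
design = box_behnken_3([(15, 25), (28, 47), (0.1, 1.4)])
print(design)
```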

  13. Intrapopulational body size variation and cranial capacity variation in Middle Pleistocene humans: the Sima de los Huesos sample (Sierra de Atapuerca, Spain).

    Science.gov (United States)

    Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I

    1998-05-01

    A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date at a single site, with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. The anatomical regions investigated are the scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shafts; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample, only the humeral midshaft perimeter shows unusually high variation (and only when expressed by the maximum ratio, not by the coefficient of variation). In spite of that, the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range. The maximum ratio is in the central part of the distribution of modern human samples. Thus, the hypothesis of a greater sexual dimorphism in Middle Pleistocene populations than in modern populations is supported by neither the cranial nor the postcranial evidence from Sima de los Huesos.
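    The bootstrap comparison described (resampling a modern reference population at the fossil sample's size and asking where the fossil's maximum ratio falls) can be sketched as follows; the measurement values are placeholders, not the Sima de los Huesos data.

```python
import numpy as np

def max_ratio(x):
    """Maximum ratio: largest value over smallest, a dimorphism index
    usable in small samples that cannot be sexed a priori."""
    return x.max() / x.min()

def bootstrap_percentile(fossil, modern, n_boot=10_000, seed=None):
    """Where the fossil sample's max ratio sits in the distribution of
    max ratios of same-sized resamples from a modern population."""
    rng = np.random.default_rng(seed)
    obs = max_ratio(fossil)
    boots = np.array([
        max_ratio(rng.choice(modern, size=fossil.size, replace=True))
        for _ in range(n_boot)
    ])
    return (boots < obs).mean()   # e.g. >0.95 flags unusually high variation

# Placeholder data: hypothetical humeral midshaft perimeters (mm)
rng = np.random.default_rng(0)
modern = rng.normal(62, 5, size=200)
fossil = rng.normal(62, 7, size=12)
print(f"fossil max ratio at the {bootstrap_percentile(fossil, modern):.0%} "
      "percentile of modern resamples")
```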

  14. Size selective isocyanate aerosols personal air sampling using porous plastic foams

    International Nuclear Information System (INIS)

    Cong Khanh Huynh; Trinh Vu Duc

    2009-01-01

    As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane foam (PUF) selected according to its porosity to act as both sampling substrate and particle size separator. In this study, the authors present an original application of size-selective personal air sampling, using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.

  15. An integrated approach for multi-level sample size determination

    International Nuclear Information System (INIS)

    Lu, M.S.; Teichmann, T.; Sanborn, J.B.

    1997-01-01

    Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often, and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach, the proposed method elucidates the conceptual understanding of what goes on and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by "attributes", involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization.
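    The integrated derivation is not reproduced in the record, but the attribute-sampling relation such schemes build on is compact: find the smallest sample whose probability of missing every defective item is below the accepted risk. A sketch, refining the familiar closed-form approximation n ≈ N(1 - beta^(1/M)) against the exact hypergeometric probability (illustrative only, not the authors' algorithm):

```python
from math import ceil, comb

def detection_prob(N, M, n):
    """Probability that a random sample of n from N items contains at
    least one of the M defective (diverted) items - hypergeometric."""
    if n > N - M:
        return 1.0
    return 1 - comb(N - M, n) / comb(N, n)

def attribute_sample_size(N, M, beta=0.05):
    """Smallest n whose non-detection probability is at most beta,
    starting from the approximation n ~ N * (1 - beta**(1/M))."""
    n = ceil(N * (1 - beta ** (1 / M)))
    while detection_prob(N, M, n) < 1 - beta:
        n += 1
    while n > 1 and detection_prob(N, M, n - 1) >= 1 - beta:
        n -= 1
    return n

print(attribute_sample_size(N=500, M=20, beta=0.05))
```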

  16. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Concepts of Population, Sample, and Sampling. Initial Ramifications: Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem. More Intricacies: Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  17. Speeding Up Non-Parametric Bootstrap Computations for Statistics Based on Sample Moments in Small/Moderate Sample Size Applications.

    Directory of Open Access Journals (Sweden)

    Elias Chaibub Neto

    Full Text Available In this paper we propose a vectorized implementation of the non-parametric bootstrap for statistics based on sample moments. Basically, we adopt the multinomial sampling formulation of the non-parametric bootstrap, and compute bootstrap replications of sample moment statistics by simply weighting the observed data according to multinomial counts, instead of evaluating the statistic on a resampled version of the observed data. Using this formulation we can generate a matrix of bootstrap weights and compute the entire vector of bootstrap replications with a few matrix multiplications. Vectorization is particularly important for matrix-oriented programming languages such as R, where matrix/vector calculations tend to be faster than scalar operations implemented in a loop. We illustrate the application of the vectorized implementation on real and simulated data sets, bootstrapping Pearson's sample correlation coefficient, and compare its performance against two state-of-the-art R implementations of the non-parametric bootstrap, as well as a straightforward one based on a for loop. Our investigations spanned varying sample sizes and numbers of bootstrap replications. The vectorized bootstrap compared favorably against the state-of-the-art implementations in all cases tested, and was remarkably faster for small sample sizes and considerably faster for moderate ones. The same results were observed in the comparison with the straightforward implementation, except for large sample sizes, where the vectorized bootstrap was slightly slower due to the increased time spent generating weight matrices via multinomial sampling.
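    The published implementation is in R, but the multinomial-weighting trick carries over to any matrix-oriented environment. A NumPy transcription for Pearson's correlation (a sketch of the idea, not the authors' code):

```python
import numpy as np

def vectorized_bootstrap_corr(x, y, n_boot=10_000, seed=None):
    """Multinomial-weight bootstrap of Pearson's r: instead of resampling
    rows, draw multinomial counts and compute weighted sample moments,
    so all replications reduce to a few matrix products."""
    rng = np.random.default_rng(seed)
    n = x.size
    w = rng.multinomial(n, np.full(n, 1 / n), size=n_boot) / n   # (B, n)
    mx, my = w @ x, w @ y                     # weighted means
    mxx, myy, mxy = w @ (x * x), w @ (y * y), w @ (x * y)
    cov = mxy - mx * my
    sx = np.sqrt(mxx - mx ** 2)
    sy = np.sqrt(myy - my ** 2)
    return cov / (sx * sy)                    # B replications of r

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = 0.6 * x + rng.normal(size=50)
reps = vectorized_bootstrap_corr(x, y)
print(np.percentile(reps, [2.5, 97.5]))      # percentile CI for r
```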

  18. Effects of crystalline grain size and packing ratio of self-forming core/shell nanoparticles on magnetic properties at up to GHz bands

    International Nuclear Information System (INIS)

    Suetsuna, Tomohiro; Suenaga, Seiichi; Sakurada, Shinya; Harada, Koichi; Tomimatsu, Maki; Takahashi, Toshihide

    2011-01-01

    Self-forming core/shell nanoparticles of magnetic metal/oxide with a crystalline grain size of less than 40 nm were synthesized. The nanoparticles were highly concentrated in an insulating matrix to fabricate a nanocomposite, whose magnetic properties were investigated. The crystalline grain size of the nanoparticles strongly influenced the magnetic anisotropy field, magnetic coercivity, relative permeability, and loss factor (tan δ = μ''/μ') at high frequency. The packing ratio of the magnetic metallic phase in the nanocomposite also influenced those properties. High permeability with a low tan δ of less than 1.5% at up to 1 GHz was obtained for nanoparticles with a crystalline grain size of around 15 nm at a large packing ratio. Research highlights: self-forming core/shell nanoparticles of magnetic metal/oxide were synthesized; the crystalline grain size of the nanoparticles and their packing ratio were controlled; the magnetic properties changed according to grain size and packing ratio.

  19. Influence of Ba/Fe mole ratios on magnetic properties, crystallite size and shifting of X-ray diffraction peaks of nanocrystalline BaFe12O19 powder, prepared by sol gel auto combustion

    Science.gov (United States)

    Suastiyanti, Dwita; Sudarmaji, Arif; Soegijono, Bambang

    2012-06-01

    Barium hexaferrite BaFe12O19 (BFO) is of great importance for permanent magnets, particularly for magnetic recording as well as in microwave devices. Nanocrystalline BFO powders were prepared by the sol gel auto combustion method in a citric acid-metal nitrates system. The Ba/Fe mole ratios were varied at 1:12, 1:11.5 and 1:11, with the cation-to-fuel ratio fixed at 1:1. An appropriate amount of ammonia solution was added dropwise to the solution with constant stirring until the pH reached 7 in all cases. Each sample was heated at 850 °C for 10 hours to complete the formation of nanocrystalline BFO. The XRD data, showing the lattice parameters a and c and the unit-cell volume V, confirm that BFO with ratio 1:12 has the same crystal parameters as the 1:11 ratio, and the 1:12 and 1:11 Ba/Fe ratios have similar diffraction patterns at almost every 2θ. The Ba/Fe ratio of 1:11.5 has the finest crystallite size, 22 nm. Most diffraction peaks of Ba/Fe 1:11.5 shift to the left of those of Ba/Fe 1:12, and return to the Ba/Fe 1:12 pattern for Ba/Fe 1:11. SEM observations show particle sizes of less than 100 nm and the same shape for each sample. The Ba/Fe ratio of 1:12 gives the highest intrinsic coercivity, Hc = 427.3 kA/m; the highest remanent magnetization, Mr = 0.170 T, occurs at the 1:11 ratio. BFO with a mole ratio of 1:11.5 has the finest grains (22 nm), good magnetic properties and the highest figure of merit (FoM), 89%.

  20. Computing Confidence Bounds for Power and Sample Size of the General Linear Univariate Model

    OpenAIRE

    Taylor, Douglas J.; Muller, Keith E.

    1995-01-01

    The power of a test, the probability of rejecting the null hypothesis in favor of an alternative, may be computed using estimates of one or more distributional parameters. Statisticians frequently fix mean values and calculate power or sample size using a variance estimate from an existing study. Hence computed power becomes a random variable for a fixed sample size. Likewise, the sample size necessary to achieve a fixed power varies randomly. Standard statistical practice requires reporting ...
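    The record's point, that power computed from an estimated variance is itself random, can be illustrated by propagating a chi-square confidence interval for the variance through a power function. The sketch below uses a one-sample t-test instead of the paper's general linear univariate model, and all numbers are placeholders.

```python
import numpy as np
from scipy import stats

def power_one_sample_t(delta, sigma, n, alpha=0.05):
    """Power of a two-sided one-sample t-test via the noncentral t."""
    nc = delta / (sigma / np.sqrt(n))
    tcrit = stats.t.ppf(1 - alpha / 2, df=n - 1)
    return (1 - stats.nct.cdf(tcrit, df=n - 1, nc=nc)
            + stats.nct.cdf(-tcrit, df=n - 1, nc=nc))

def power_ci(delta, s2, n_pilot, n_planned, alpha=0.05, level=0.95):
    """Confidence bounds on power induced by the chi-square CI for the
    variance estimated from a pilot study with n_pilot observations."""
    df = n_pilot - 1
    lo = df * s2 / stats.chi2.ppf((1 + level) / 2, df)  # lower bound on var
    hi = df * s2 / stats.chi2.ppf((1 - level) / 2, df)  # upper bound on var
    return (power_one_sample_t(delta, np.sqrt(hi), n_planned, alpha),  # pessimistic
            power_one_sample_t(delta, np.sqrt(s2), n_planned, alpha),  # point
            power_one_sample_t(delta, np.sqrt(lo), n_planned, alpha))  # optimistic

print(power_ci(delta=0.5, s2=1.0, n_pilot=20, n_planned=40))
```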

  1. Estimation of sample size and testing power (Part 3).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2011-12-01

    This article introduces the definitions and sample size estimation for three special tests (namely, the non-inferiority test, equivalence test and superiority test) for qualitative data, under a design with one two-level factor and a binary response variable. A non-inferiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. An equivalence test refers to a research design whose objective is to verify that the experimental drug and the control drug have clinically equivalent efficacy. A superiority test refers to a research design whose objective is to verify that the efficacy of the experimental drug is clinically superior to that of the control drug. Through specific examples, this article introduces the sample size estimation formulas for the three special tests and their SAS realization in detail.
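    The formulas themselves are not in the record; one common textbook normal-approximation version for the non-inferiority case with a binary endpoint (not necessarily the SAS formulation the article presents) is:

```python
from math import ceil
from scipy.stats import norm

def n_per_group_noninferiority(p_e, p_c, margin, alpha=0.025, power=0.80):
    """Per-group n for the one-sided non-inferiority test of two
    proportions, H0: p_e - p_c <= -margin, unpooled normal approximation."""
    za, zb = norm.ppf(1 - alpha), norm.ppf(power)
    var = p_e * (1 - p_e) + p_c * (1 - p_c)
    return ceil((za + zb) ** 2 * var / (p_e - p_c + margin) ** 2)

# Equivalence requires two one-sided tests; a common shortcut when
# p_e == p_c is to rerun the same formula with beta halved.
print(n_per_group_noninferiority(p_e=0.85, p_c=0.85, margin=0.10))  # ~201
```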

  2. DNA-based hair sampling to identify road crossings and estimate population size of black bears in Great Dismal Swamp National Wildlife Refuge, Virginia

    OpenAIRE

    Wills, Johnny

    2008-01-01

    The planned widening of U.S. Highway 17 along the east boundary of Great Dismal Swamp National Wildlife Refuge (GDSNWR) and a lack of knowledge about the refuge's bear population created the need to identify potential sites for wildlife crossings and estimate the size of the refuge's bear population. I collected black bear hair in order to obtain DNA samples to estimate population size, density, and sex ratio, and to determine road crossing locations for black bears (Ursus americanus) in GDSNWR.
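    The thesis abstract does not give its estimator; the textbook starting point for a population estimate from two DNA hair-sampling occasions is Chapman's bias-corrected Lincoln-Petersen estimator, sketched here with made-up counts.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimator of population
    size: n1 individuals genotyped on occasion 1, n2 on occasion 2,
    m detected on both (the 'recaptures')."""
    n_hat = (n1 + 1) * (n2 + 1) / (m + 1) - 1
    var = ((n1 + 1) * (n2 + 1) * (n1 - m) * (n2 - m)
           / ((m + 1) ** 2 * (m + 2)))
    return n_hat, var ** 0.5

n_hat, se = chapman_estimate(n1=40, n2=35, m=12)   # hypothetical counts
print(f"N ~ {n_hat:.0f} +/- {1.96 * se:.0f}")
```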

  3. Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.

    Science.gov (United States)

    Youssef, Noha H; Elshahed, Mostafa S

    2008-09-01

    Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
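    As a simplified stand-in for the authors' two-step curve-fitting (the exact functional form and procedure are theirs; this sketch fits a saturating curve to richness estimates at nested library sizes and reads off the asymptote, with invented data):

```python
import numpy as np
from scipy.optimize import curve_fit

def unbiased_richness(sizes, richness):
    """Fit S(n) = S_max * n / (k + n) to richness estimates from nested
    clone-library subsets and return the asymptote S_max as the
    sample-size-unbiased richness."""
    popt, _ = curve_fit(lambda n, s_max, k: s_max * n / (k + n),
                        sizes, richness,
                        p0=[max(richness) * 2, max(sizes)])
    return popt[0]

# Hypothetical richness estimates from subsets of a 16S clone library
sizes = np.array([1000, 2000, 4000, 8000, 13000])
richness = np.array([6200, 9100, 12000, 14200, 15000])
print(f"{unbiased_richness(sizes, richness):.0f}")
```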

  4. THE EFFECT OF NPM, FDR, THE AUDIT COMMITTEE, BUSINESS GROWTH, LEVERAGE AND SIZE ON EARNINGS MANAGEMENT

    Directory of Open Access Journals (Sweden)

    Mahfudzotun Nahar

    2017-04-01

    Full Text Available The purpose of this study was to determine the influence of NPM, FDR, the Audit Committee, sales growth, leverage and company size on the earnings management practices of Islamic banking in Indonesia. The dependent variable, earnings management, was calculated using discretionary accruals. The independent variables were the net profit margin (NPM) ratio, the Financing to Deposit Ratio (FDR), the Audit Committee, sales growth, leverage and firm size. The sample consisted of Islamic banks, comprising both full Sharia banks and Sharia business units within commercial banks, according to Financial Services Authority statistics as of June 2015. The sample was selected using purposive sampling, yielding 6 Islamic banks and 12 Sharia units for this study. The results indicate a significant influence of the NPM ratio on earnings management in Islamic banking, whereas the FDR ratio, the Audit Committee, growth, leverage and company size had no significant effect on earnings management practices in Islamic banking. Keywords: earnings management, NPM, FDR, audit committee, growth, leverage, company size

  5. [Formal sample size calculation and its limited validity in animal studies of medical basic research].

    Science.gov (United States)

    Mayer, B; Muche, R

    2013-01-01

    Animal studies are highly relevant for basic medical research, although their use is discussed controversially in public. Thus, from a biometrical point of view, an optimal sample size should be sought for these projects. Statistical sample size calculation is usually the appropriate methodology for planning medical research projects. However, the required information is often not valid, or becomes available only during the course of the animal experiment. This article critically discusses the validity of formal sample size calculation for animal studies. Within the discussion, some requirements are formulated to fundamentally regulate the process of sample size determination for animal experiments.

  6. The Effect of Small Sample Size on Measurement Equivalence of Psychometric Questionnaires in MIMIC Model: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Jamshid Jamali

    2017-01-01

    Full Text Available Evaluating measurement equivalence (also known as differential item functioning, DIF) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when the latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of the MIMIC model for detecting uniform DIF were investigated under different combinations of reference-to-focal-group sample size ratio, magnitude of the uniform DIF effect, scale length, number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to decreases of 0.33% and 0.47%, respectively, in the power of the MIMIC model for detecting uniform DIF. The findings indicated that increasing the scale length, the number of response categories and the DIF magnitude improved the power of the MIMIC model by 3.47%, 4.83%, and 20.35%, respectively; it also decreased the Type I error of the MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that the power of the MIMIC model was at an acceptable level when latent trait distributions were skewed. However, the empirical Type I error rate was slightly greater than the nominal significance level. Consequently, the MIMIC model is recommended for the detection of uniform DIF when the latent construct distribution is nonnormal and the focal group sample size is small.

  8. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  9. Mapping soil particle-size fractions: A comparison of compositional kriging and log-ratio kriging

    Science.gov (United States)

    Wang, Zong; Shi, Wenjiao

    2017-03-01

    Soil particle-size fractions (psf), as basic physical variables, frequently need to be predicted accurately for regional hydrological, ecological, geological, agricultural and environmental studies. Several methods have been proposed to interpolate the spatial distribution of soil psf, but the relative performance of compositional kriging and different log-ratio kriging methods is still unclear. Four log-ratio transformations, including additive log-ratio (alr), centered log-ratio (clr), isometric log-ratio (ilr), and symmetry log-ratio (slr), combined with ordinary kriging (log-ratio kriging: alr_OK, clr_OK, ilr_OK and slr_OK) were selected to be compared with compositional kriging (CK) for the spatial prediction of soil psf in Tianlaochi of Heihe River Basin, China. Root mean squared error (RMSE), Aitchison's distance (AD), standardized residual sum of squares (STRESS) and the right ratio of the predicted soil texture types (RR) were chosen to evaluate the accuracy of the different interpolators. The results showed that CK had better accuracy than the four log-ratio kriging methods. The RMSE (sand, 9.27%; silt, 7.67%; clay, 4.17%), AD (0.45) and STRESS (0.60) of CK were the lowest, and the RR (58.65%) was the highest among the five interpolators. The clr_OK achieved relatively better performance than the other log-ratio kriging methods. In addition, CK presented reasonable and smooth transitions on maps of soil psf according to the environmental factors. The study provides insights into mapping soil psf accurately by comparing different methods for compositional data interpolation. Further research on methods combined with ancillary variables is needed to improve interpolation performance.
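    The log-ratio transformations being compared are one-liners. The sketch below shows clr and alr with closure back onto the simplex; the kriging of each transformed component (any ordinary kriging routine) is omitted.

```python
import numpy as np

def clr(x):
    """Centered log-ratio: log of parts over their geometric mean."""
    g = np.exp(np.log(x).mean(axis=-1, keepdims=True))
    return np.log(x / g)

def clr_inv(z):
    """Back-transform and re-close so sand + silt + clay sums to 1."""
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def alr(x):
    """Additive log-ratio against the last part as reference."""
    return np.log(x[..., :-1] / x[..., -1:])

psf = np.array([[0.55, 0.30, 0.15]])   # sand, silt, clay fractions
z = clr(psf)                            # krige each clr component, then...
print(clr_inv(z))                       # ...back-transform the predictions
```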

  10. Hafnium isotope ratios of nine GSJ reference samples

    International Nuclear Information System (INIS)

    Hanyu, Takeshi; Nakai, Shun'ichi; Tatsuta, Riichiro

    2005-01-01

    176Hf/177Hf ratios of nine geochemical reference rocks from the Geological Survey of Japan, together with BIR-1 and BCR-2, were determined using multi-collector inductively coupled plasma mass spectrometry. Our data for BIR-1, BCR-2 and JB-1 are in agreement with those previously reported, demonstrating the appropriateness of the chemical procedure and isotopic measurement employed in this study. The reference rocks have a wide range of 176Hf/177Hf, covering the field defined by various volcanic rocks such as mid-ocean ridge basalts, ocean island basalts, and subduction-related volcanic rocks. They are therefore suitable as rock standards for Hf isotope measurement of geological samples. (author)

  11. On sample size and different interpretations of snow stability datasets

    Science.gov (United States)

    Schirmer, M.; Mitterer, C.; Schweizer, J.

    2009-04-01

    Interpretations of snow stability variations need an assessment of the stability itself, independent of the scale investigated in the study. Studies of stability variations at the regional scale have often chosen stability tests such as the Rutschblock test, or combinations of various tests, in order to detect differences with aspect and elevation. The question arose of how capable such stability interpretations are of supporting conclusions. There are at least three possible error sources: (i) the variance of the stability test itself; (ii) the stability variance at the underlying slope scale; and (iii) the possibility that the stability interpretation is not directly related to the probability of skier triggering. Various stability interpretations have been proposed in the past that give partly different results. We compared a subjective one based on expert knowledge with a more objective one based on a measure derived from comparing skier-triggered slopes with slopes that had been skied but not triggered. In this study, the uncertainties are discussed and their effects on regional-scale stability variations are quantified in a pragmatic way. An existing dataset with very large sample sizes was revisited. This dataset contained the variance of stability at a regional scale for several situations. The stability in this dataset was determined using the subjective interpretation scheme based on expert knowledge. The question to be answered was how many measurements are needed to obtain similar results (mainly stability differences with aspect or elevation) as with the complete dataset. The optimal sample size was obtained in several ways: (i) assuming a nominal data scale, the sample size was determined for a given test, significance level and power, by calculating the mean and standard deviation of the complete dataset; with this method it can also be determined whether the complete dataset has an appropriate sample size. (ii) Smaller subsets were created with similar

  12. Support vector regression to predict porosity and permeability: Effect of sample size

    Science.gov (United States)

    Al-Anazi, A. F.; Gates, I. D.

    2012-02-01

    Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques, such as linear regression or neural networks trained with core and geophysical logs, generalize poorly to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle, by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. In particular, the impacts of Vapnik's ε-insensitive loss function and the least-modulus loss function on generalization performance were empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of porosity and permeability with small sample sizes than the MLP method. Also, the performance of SVR depends on both kernel function
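    As a toy illustration of the comparison described (ε-insensitive SVR versus an MLP as the training set shrinks), the following scikit-learn sketch uses synthetic data standing in for core measurements; kernel and network settings are arbitrary, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic stand-in for log-permeability as a function of two well logs
X = rng.uniform(size=(500, 2))
y = 3 * X[:, 0] - 2 * X[:, 1] ** 2 + rng.normal(0, 0.2, 500)

for n in (20, 50, 200):                          # shrinking training sets
    Xt, yt = X[:n], y[:n]
    svr = SVR(kernel="rbf", C=10, epsilon=0.1).fit(Xt, yt)
    mlp = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                       random_state=0).fit(Xt, yt)
    # R^2 on a held-out block; SVR tends to degrade more gracefully
    print(n, round(svr.score(X[300:], y[300:]), 3),
             round(mlp.score(X[300:], y[300:]), 3))
```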

  13. The PowerAtlas: a power and sample size atlas for microarray experimental design and research

    Directory of Open Access Journals (Sweden)

    Wang Jelai

    2006-02-01

    Full Text Available Abstract Background Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size, or number of replicate chips, needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results To address this challenge, we have developed the Microarray PowerAtlas. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO. The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC. Conclusion This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.

  14. Size effects in foams : Experiments and modeling

    NARCIS (Netherlands)

    Tekoglu, C.; Gibson, L. J.; Pardoen, T.; Onck, P. R.

    Mechanical properties of cellular solids depend on the ratio of the sample size to the cell size at length scales where the two are of the same order of magnitude. Considering that the cell size of many cellular solids used in engineering applications is between 1 and 10 mm, it is not uncommon to

  15. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replicating sample size calculations, as well as on the accuracy of those calculations. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median (interquartile range) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n = 62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.

  16. Bayesian sample size determination for cost-effectiveness studies with censored data.

    Directory of Open Access Journals (Sweden)

    Daniel P Beavers

    Full Text Available Cost-effectiveness models are commonly utilized to determine the combined clinical and economic impact of one treatment compared to another. However, most methods for sample size determination of cost-effectiveness studies assume fully observed costs and effectiveness outcomes, which presents challenges for survival-based studies in which censoring exists. We propose a Bayesian method for the design and analysis of cost-effectiveness data in which costs and effectiveness may be censored, and the sample size is approximated for both power and assurance. We explore two parametric models and demonstrate the flexibility of the approach to accommodate a variety of modifications to study assumptions.

  17. The measurement of mass spectrometric peak height ratio of helium isotope in trace samples

    International Nuclear Information System (INIS)

    Sun Mingliang

    1989-01-01

    An experimental study on the measurement of the mass spectrometric peak height ratio of helium isotopes in trace gaseous samples is discussed, using a gas purification line designed by the authors, an imported model VG-5400 static-vacuum noble gas mass spectrometer, and air helium as a standard. The results show that, after purification, He and Ne make up 99% of the natural gas sample. When the amount of He in the mass spectrometer is more than 4 × 10⁻⁷ cm³ STP, its sensitivity remains stable, at about 10⁻⁴ A/cm³ STP He, and the precision of the 3He/4He ratio over the following 17 days is 1.32%. The 'ABA' pattern and the experimental conditions for the measurement of the mass spectrometric peak height ratio of He isotopes are presented

  18. Development of sample size allocation program using hypergeometric distribution

    International Nuclear Information System (INIS)

    Kim, Hyun Tae; Kwack, Eun Ho; Park, Wan Soo; Min, Kyung Soo; Park, Chan Sik

    1996-01-01

    The objective of this research is the development of a sample allocation program using the hypergeometric distribution, built with an object-oriented method. When the IAEA (International Atomic Energy Agency) performs an inspection, it simply applies a standard binomial distribution, which describes sampling with replacement, instead of a hypergeometric distribution, which describes sampling without replacement, when allocating samples to up to three verification methods. The objective of an IAEA inspection is the timely detection of diversion of significant quantities of nuclear material; therefore game theory is applied to its sampling plan. It is necessary to use the hypergeometric distribution directly, or an approximating distribution, to secure statistical accuracy. The improved binomial approximation developed by Mr. J. L. Jaech and a correctly applied binomial approximation are closer to the hypergeometric distribution in sample size calculation than the simply applied binomial approximation of the IAEA. Object-oriented programs for (1) sample approximate-allocation with the correctly applied standard binomial approximation, (2) sample approximate-allocation with the improved binomial approximation, and (3) sample allocation with the hypergeometric distribution were developed with Visual C++, and corresponding programs were developed with EXCEL (using Visual Basic for Applications). 8 tabs., 15 refs. (Author)
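    The difference the program addresses is easy to demonstrate for the simplest attribute goal (detect at least one defective item with probability 1 - β): sampling without replacement (hypergeometric) needs fewer items than the with-replacement binomial approximation. A sketch, not the program described above:

```python
from scipy.stats import binom, hypergeom

def min_n(N, M, beta=0.05, exact=True):
    """Smallest sample size whose probability of missing all M
    defectives among N items is at most beta; exact uses the
    hypergeometric, otherwise the binomial approximation."""
    for n in range(1, N + 1):
        p_miss = (hypergeom.pmf(0, N, M, n) if exact
                  else binom.pmf(0, n, M / N))
        if p_miss <= beta:
            return n
    return N

N, M = 200, 10
print("hypergeometric:", min_n(N, M, exact=True))    # smaller n
print("binomial approx:", min_n(N, M, exact=False))  # conservative n
```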

  19. Determination of 240Pu/239Pu ratio in environmental samples based on the measurement of Lx/α-ray activity ratio

    Energy Technology Data Exchange (ETDEWEB)

    Komura, K.; Sakanoue, M.; Yamamoto, M.

    1984-06-01

    The determination of the 240Pu/239Pu isotopic ratio in environmental samples has been attempted by the measurement of the Lx/α-ray activity ratio using a Ge-LEPS (low-energy photon spectrometer) and a surface-barrier Si detector. By this method, interesting data were obtained for various samples collected from Thule, Greenland, Bikini Atoll and Nagasaki, as well as for some soils collected from near and off-site locations of atomic power stations.

  20. High aspect ratio problem in simulation of a fault current limiter based on superconducting tapes

    Energy Technology Data Exchange (ETDEWEB)

    Velichko, A V; Coombs, T A [Electrical Engineering Division, University of Cambridge (United Kingdom)

    2006-06-15

    We offer a solution for the high-aspect-ratio problem relevant to the numerical simulation of AC loss in superconductors and metals with a high aspect (width-to-thickness) ratio. This is particularly relevant to the simulation of fault current limiters (FCLs) based on second-generation YBCO tapes on RABiTS. By assuming a linear scaling of the electric and thermal properties with the size of the structure, we can replace the real sample with an effective sample of reduced aspect ratio by introducing size multipliers into the equations that govern the physics of the system. The simulation is performed using both proprietary equivalent-circuit software and commercial FEM software. The correctness of the procedure is verified by simulating temperature and current distributions for samples with all three dimensions varying within 10⁻³-10³ of the original size. Qualitatively, the distributions for the original and scaled samples are indistinguishable, whereas quantitative differences in the worst case do not exceed 10%.

  2. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2018-03-01

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method) or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size, reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test, as the number of patients needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates of projected sample sizes for future studies, reflecting the uncertainty in our study's ES^ values. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using the ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that the sample size for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
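    The sample size arithmetic here is a standard one-sample t-test power calculation evaluated at ES^ and at its confidence limits. A sketch (the effect size values are placeholders, not the study's estimates):

```python
from scipy.stats import nct, t as t_dist

def n_for_effect_size(es, alpha=0.05, power=0.80):
    """Smallest n so a two-sided one-sample t-test has the target power
    against Cohen's d = es (search on the noncentral t)."""
    n = 2
    while True:
        df = n - 1
        tcrit = t_dist.ppf(1 - alpha / 2, df)
        nc = es * n ** 0.5
        achieved = 1 - nct.cdf(tcrit, df, nc) + nct.cdf(-tcrit, df, nc)
        if achieved >= power:
            return n
        n += 1

# Plugging in ES^ and its CI limits gives a point estimate and an
# interval for the required sample size (values are hypothetical).
for es in (0.95, 0.62, 0.40):
    print(es, n_for_effect_size(es))
```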

  3. Effects of sample size on robustness and prediction accuracy of a prognostic gene signature

    Directory of Open Access Journals (Sweden)

    Kim Seon-Young

    2009-05-01

    Full Text Available Abstract Background The small overlap between independently developed gene signatures and the poor inter-study applicability of gene signatures are two major concerns raised in the development of microarray-based prognostic gene signatures. One recent study suggested that thousands of samples are needed to generate a robust prognostic gene signature. Results A data set of 1,372 samples was generated by combining eight breast cancer gene expression data sets produced using the same microarray platform, and the effects of varying sample sizes on several performance measures of a prognostic gene signature were investigated. The overlap between independently developed gene signatures increased linearly with more samples, attaining an average overlap of 16.56% with 600 samples. The concordance between outcomes predicted by different gene signatures also increased with more samples, up to 94.61% with 300 samples. The accuracy of outcome prediction likewise increased with more samples. Finally, analysis using only estrogen receptor-positive (ER+ patients attained higher prediction accuracy than using all patients, suggesting that subtype-specific analysis can lead to the development of better prognostic gene signatures. Conclusion Increasing sample sizes generated gene signatures with better stability, better concordance in outcome prediction, and better prediction accuracy. However, the degree of performance improvement with increased sample size differed between the degree of overlap and the degree of concordance in outcome prediction, suggesting that the sample size required for a study should be determined according to the specific aims of the study.

  4. A more powerful test based on ratio distribution for retention noninferiority hypothesis.

    Science.gov (United States)

    Deng, Ling; Chen, Gang

    2013-03-11

    Rothmann et al. (2003) proposed a method for the statistical inference of the fraction retention non-inferiority (NI) hypothesis. A fraction retention hypothesis is defined in terms of the ratio of the new treatment effect versus the control effect in the context of a time-to-event endpoint. One of the major concerns in using this method in the design of an NI trial is that, with a limited sample size, the power of the study is usually very low. This makes an NI trial impractical, particularly with a time-to-event endpoint. To improve power, Wang et al. (2006) proposed a ratio test based on asymptotic normality theory. Under a strong assumption (equal variance of the NI test statistic under the null and alternative hypotheses), the sample size using Wang's test was much smaller than that using Rothmann's test. However, in practice, the assumption of equal variance is generally questionable for an NI trial design. This assumption is removed in the ratio test proposed in this article, which is derived directly from a Cauchy-like ratio distribution. In addition, using this method, the fundamental assumption of Rothmann's test, that the observed control effect is always positive (i.e., the observed hazard ratio for placebo over the control is greater than 1), is no longer necessary. Without assuming equal variance under the null and alternative hypotheses, the sample size required for an NI trial can be significantly reduced by using the proposed ratio test for a fraction retention NI hypothesis.

  5. Volatile and non-volatile elements in grain-size separated samples of Apollo 17 lunar soils

    International Nuclear Information System (INIS)

    Giovanoli, R.; Gunten, H.R. von; Kraehenbuehl, U.; Meyer, G.; Wegmueller, F.; Gruetter, A.; Wyttenbach, A.

    1977-01-01

    Three samples of Apollo 17 lunar soils (75081, 72501 and 72461) were separated into 9 grain-size fractions between 540 and 1 μm mean diameter. In order to detect mineral fractionations caused during the separation procedures, major elements were determined by instrumental neutron activation analyses performed on small aliquots of the separated samples. Twenty elements were measured in each size fraction using instrumental and radiochemical neutron activation techniques. The concentrations of the main elements in sample 75081 do not change with grain-size, with the exceptions of Fe and Ti, which decrease slightly, and Al, which increases slightly, with decreasing grain-size. These changes in main-element composition suggest a decrease in ilmenite and an increase in anorthite with decreasing grain-size. However, it can be concluded that the mineral composition of the fractions changes by less than a factor of 2. Samples 72501 and 72461 have not yet been analyzed for the main elements. (Auth.)

  6. A modified approach to estimating sample size for simple logistic regression with one continuous covariate.

    Science.gov (United States)

    Novikov, I; Fund, N; Freedman, L S

    2010-01-15

    Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
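
    A minimal sketch of the kind of closed-form calculation discussed here, using the commonly cited Hsieh-style approximation for simple logistic regression with one standard-normal covariate. The example inputs are assumptions; the paper's modification (Schouten's unequal-variance t-test formula) would replace this simpler expression.

```python
import math
from scipy.stats import norm

def n_simple_logistic(p1, beta_star, alpha=0.05, power=0.8):
    """Approximate N for simple logistic regression with one standard-normal
    covariate (Hsieh-style formula). p1 is the event probability at the
    covariate mean; beta_star is the log odds ratio per 1 SD of the covariate."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return (z_a + z_b) ** 2 / (p1 * (1 - p1) * beta_star ** 2)

# e.g. event rate 0.1 at the covariate mean, odds ratio 1.5 per SD (assumed)
print(round(n_simple_logistic(0.1, math.log(1.5))))   # roughly 530
```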

  7. Effects of sampling design on age ratios of migrants captured at stopover sites

    Science.gov (United States)

    Jeffrey F. Kelly; Deborah M. Finch

    2000-01-01

    Age classes of migrant songbirds often differ in migration timing. This difference creates the potential for age-ratios recorded at stopover sites to vary with the amount and distribution of sampling effort used. To test for these biases, we sub-sampled migrant capture data from the Middle Rio Grande Valley of New Mexico. We created data sets that reflected the age...

  8. Three-year-olds obey the sample size principle of induction: the influence of evidence presentation and sample size disparity on young children's generalizations.

    Science.gov (United States)

    Lawson, Chris A

    2014-07-01

    Three experiments with 81 3-year-olds (M = 3.62 years) examined the conditions that enable young children to use the sample size principle (SSP) of induction, the inductive rule that facilitates generalizations from large rather than small samples of evidence. In Experiment 1, children exhibited the SSP when exemplars were presented sequentially but not when exemplars were presented simultaneously. Results from Experiment 3 suggest that the advantage of sequential presentation is not due to the additional time to process the available input from the two samples but instead may be linked to better memory for specific individuals in the large sample. In addition, findings from Experiments 1 and 2 suggest that adherence to the SSP is mediated by the disparity between presented samples. Overall, these results reveal that the SSP appears early in development and is guided by basic cognitive processes triggered during the acquisition of input. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Sample size methods for estimating HIV incidence from cross-sectional surveys.

    Science.gov (United States)

    Konikoff, Jacob; Brookmeyer, Ron

    2015-12-01

    Understanding HIV incidence, the rate at which new infections occur in populations, is critical for tracking and surveillance of the epidemic. In this article, we derive methods for determining sample sizes for cross-sectional surveys to estimate incidence with sufficient precision. We further show how to specify sample sizes for two successive cross-sectional surveys to detect changes in incidence with adequate power. In these surveys biomarkers such as CD4 cell count, viral load, and recently developed serological assays are used to determine which individuals are in an early disease stage of infection. The total number of individuals in this stage, divided by the number of people who are uninfected, is used to approximate the incidence rate. Our methods account for uncertainty in the durations of time spent in the biomarker defined early disease stage. We find that failure to account for this uncertainty when designing surveys can lead to imprecise estimates of incidence and underpowered studies. We evaluated our sample size methods in simulations and found that they performed well in a variety of underlying epidemics. Code for implementing our methods in R is available with this article at the Biometrics website on Wiley Online Library. © 2015, The International Biometric Society.
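
    For intuition about why these surveys need to be large, here is a back-of-envelope precision calculation for the incidence estimator described above (recent-stage count divided by uninfected count times mean duration). The Poisson treatment of the recent-stage count, the quadrature handling of duration uncertainty, and all numerical inputs are simplifying assumptions, not the authors' derivation.

```python
import math

def survey_size_for_incidence(incidence, mu_years, prevalence,
                              target_cv=0.25, cv_mu=0.10):
    """Rough survey size so the incidence estimate I ~ n_recent/(n_neg * mu)
    reaches a target coefficient of variation. Treats n_recent as Poisson and
    adds the assay mean-duration uncertainty (cv_mu) in quadrature."""
    # fraction of all survey participants expected to be in the early stage
    p_recent = incidence * mu_years * (1 - prevalence)
    cv2_budget = target_cv ** 2 - cv_mu ** 2
    if cv2_budget <= 0:
        raise ValueError("duration uncertainty alone exceeds the target CV")
    return math.ceil(1 / (p_recent * cv2_budget))

# e.g. 1%/yr incidence, 130-day mean recency window, 15% prevalence (assumed)
print(survey_size_for_incidence(0.01, 130 / 365, 0.15))   # several thousand
```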

  10. Estimating the ratio of pond size to irrigated soybean land in Mississippi: a case study

    Science.gov (United States)

    Ying Ouyang; G. Feng; J. Read; T. D. Leininger; J. N. Jenkins

    2016-01-01

    Although more on-farm storage ponds have been constructed in recent years to mitigate groundwater resources depletion in Mississippi, little effort has been devoted to estimating the ratio of on-farm water storage pond size to irrigated crop land based on pond metric and its hydrogeological conditions.  In this study, two simulation scenarios were chosen to...

  11. Variation of the 18O/16O ratio in water samples from branches

    International Nuclear Information System (INIS)

    Foerstel, H.; Huetzen, H.

    1979-06-01

    Studies of the water turnover of plants can use the natural variation of the 18O/16O ratio as a label. The baseline for such a study is the isotope ratio of soil water, which is also represented by the 18O/16O ratio of water samples from stems and branches. During water transport from the soil water reservoir to the leaves of trees, no fractionation of the oxygen isotopes occurs. The oxygen isotope ratio varies only slightly within a single twig (variation given as the standard deviation of the δ values) and by about ±2‰ within the stem of a large tree. Results for water from the stems of different trees at the site of the Nuclear Research Center Juelich scatter by about ±1‰. The δ values from a larger area (Rur valley, Eifel hills, Mosel valley), collected in October 1978 at the end of the vegetation period, showed standard deviations between ±2.2‰ (Rur valley) and ±3.6‰ (Eifel hills). The 18O/16O δ values of a beech wood at the Juelich site lie in the range of -7.3 to -10.1‰ (mean local precipitation 1974-1977: -7.4‰). At the hill site near Cologne (Bergisches Land, late September 1978) we observed an oxygen isotope ratio of -9.1‰ (groundwater in the neighbourhood between -7.6 and -8.7‰). In October 1978, across an area from the Netherlands to the Mosel valley, we found δ values of branch water between -13.9‰ (lower Ruhr valley) and -13.1‰ (Eifel hills to Mosel valley), compared with groundwater samples from the same region of -7.55 and -8.39‰. There was no significant difference between δ values from various species or locations within this area. Groundwater samples should normally represent the 18O/16O ratio of local precipitation. The low δ values of branch water could be due to the rapid uptake into the plant water transport system of autumn precipitation with a low 18O content. (orig.)

  12. The Macdonald and Savage titrimetric procedure scaled down to 4 mg sized plutonium samples. P. 1

    International Nuclear Information System (INIS)

    Kuvik, V.; Lecouteux, C.; Doubek, N.; Ronesch, K.; Jammet, G.; Bagliano, G.; Deron, S.

    1992-01-01

    The original Macdonald and Savage amperometric method, scaled down to milligram-sized plutonium samples, was further modified. The electrochemical process of each redox step and the end-point of the final titration were monitored potentiometrically. The method is designed to determine 4 mg of plutonium dissolved in nitric acid solution and is suitable for the direct determination of plutonium in non-irradiated fuel with a uranium-to-plutonium ratio of up to 30. The precision and accuracy are ca. 0.05-0.1% (relative standard deviation). Although the procedure is very selective, the following species interfere: vanadyl(IV) and vanadate (almost quantitatively), neptunium (one electron exchange per mole), nitrites, fluorosilicates (milligram amounts yield a slight bias) and iodates. (author). 15 refs.; 8 figs.; 7 tabs

  13. Evaluation of pump pulsation in respirable size-selective sampling: part II. Changes in sampling efficiency.

    Science.gov (United States)

    Lee, Eun Gyung; Lee, Taekhee; Kim, Seung Won; Lee, Larry; Flemmer, Michael M; Harper, Martin

    2014-01-01

    This second, and concluding, part of this study evaluated changes in sampling efficiency of respirable size-selective samplers due to air pulsations generated by the selected personal sampling pumps characterized in Part I (Lee E, Lee L, Möhlmann C et al. Evaluation of pump pulsation in respirable size-selective sampling: Part I. Pulsation measurements. Ann Occup Hyg 2013). Nine particle sizes of monodisperse ammonium fluorescein (from 1 to 9 μm mass median aerodynamic diameter) were generated individually by a vibrating orifice aerosol generator from dilute solutions of fluorescein in aqueous ammonia and then injected into an environmental chamber. To collect these particles, 10-mm nylon cyclones, also known as Dorr-Oliver (DO) cyclones, were used with five medium volumetric flow rate pumps. Those were the Apex IS, HFS513, GilAir5, Elite5, and Basic5 pumps, which were found in Part I to generate pulsations of 5% (the lowest), 25%, 30%, 56%, and 70% (the highest), respectively. GK2.69 cyclones were used with the Legacy [pump pulsation (PP) = 15%] and Elite12 (PP = 41%) pumps for collection at high flows. The DO cyclone was also used to evaluate changes in sampling efficiency due to pulse shape. The HFS513 pump, which generates a more complex pulse shape, was compared to a single sine wave fluctuation generated by a piston. The luminescent intensity of the fluorescein extracted from each sample was measured with a luminescence spectrometer. Sampling efficiencies were obtained by dividing the intensity of the fluorescein extracted from the filter placed in a cyclone with the intensity obtained from the filter used with a sharp-edged reference sampler. Then, sampling efficiency curves were generated using a sigmoid function with three parameters and each sampling efficiency curve was compared to that of the reference cyclone by constructing bias maps. In general, no change in sampling efficiency (bias under ±10%) was observed until pulsations exceeded 25% for the
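
    The abstract mentions fitting sampling-efficiency curves with a three-parameter sigmoid and comparing them to a reference curve via bias maps. A hedged sketch of that workflow is below; the logistic parameterization, the efficiency values, and the reference-curve parameters are all invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, d50, slope, emax):
    """Three-parameter logistic penetration curve: efficiency falls from emax
    toward 0 around the cut-point d50 (this parameterization is an assumption)."""
    return emax / (1 + np.exp(slope * (d - d50)))

# Hypothetical efficiencies for 1-9 um monodisperse test aerosols
d = np.arange(1, 10)
eff = np.array([0.97, 0.95, 0.88, 0.62, 0.35, 0.18, 0.08, 0.04, 0.02])

popt, _ = curve_fit(sigmoid, d, eff, p0=[4.0, 1.0, 1.0])
print("d50 = %.2f um, slope = %.2f, emax = %.2f" % tuple(popt))

# One row of a bias map: relative difference vs an assumed reference fit
ref = sigmoid(d, 4.1, 1.1, 1.0)
bias_pct = (sigmoid(d, *popt) - ref) / ref * 100
print(np.round(bias_pct, 1))
```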

  14. Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples

    International Nuclear Information System (INIS)

    Smith, D.L.

    1975-09-01

    The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)

  15. Subclinical delusional ideation and appreciation of sample size and heterogeneity in statistical judgment.

    Science.gov (United States)

    Galbraith, Niall D; Manktelow, Ken I; Morris, Neil G

    2010-11-01

    Previous studies demonstrate that people high in delusional ideation exhibit a data-gathering bias on inductive reasoning tasks. The current study set out to investigate the factors that may underpin such a bias by examining healthy individuals, classified as either high or low scorers on the Peters et al. Delusions Inventory (PDI). More specifically, whether high PDI scorers have a relatively poor appreciation of sample size and heterogeneity when making statistical judgments. In Expt 1, high PDI scorers made higher probability estimates when generalizing from a sample of 1 with regard to the heterogeneous human property of obesity. In Expt 2, this effect was replicated and was also observed in relation to the heterogeneous property of aggression. The findings suggest that delusion-prone individuals are less appreciative of the importance of sample size when making statistical judgments about heterogeneous properties; this may underpin the data gathering bias observed in previous studies. There was some support for the hypothesis that threatening material would exacerbate high PDI scorers' indifference to sample size.

  16. Page sample size in web accessibility testing: how many pages is enough?

    NARCIS (Netherlands)

    Velleman, Eric Martin; van der Geest, Thea

    2013-01-01

    Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This

  17. Sensitivity of Mantel Haenszel Model and Rasch Model as Viewed From Sample Size

    OpenAIRE

    ALWI, IDRUS

    2011-01-01

    The aim of this research is to compare the sensitivity of the Mantel-Haenszel method and the Rasch Model for detecting differential item functioning (DIF), observed across sample sizes. The two DIF methods were compared using simulated binary item response data sets of varying sample sizes; 200 and 400 examinees were used in the analyses, with DIF detection based on gender difference. These test conditions were replicated 4 times...

  18. Long‐term trends in fall age ratios of black brant

    Science.gov (United States)

    Ward, David H.; Amundson, Courtney L.; Stehn, Robert A.; Dau, Christian P.

    2018-01-01

    Accurate estimates of the age composition of populations can inform past reproductive success and future population trajectories. We examined fall age ratios (juveniles:total birds) of black brant (Branta bernicla nigricans; brant) staging at Izembek National Wildlife Refuge near the tip of the Alaska Peninsula, southwest Alaska, USA, 1963 to 2015. We also investigated variation in fall age ratios associated with sampling location, an index of flock size, survey effort, day of season, observer, survey platform (boat‐ or land‐based) and tide stage. We analyzed data using logistic regression models implemented in a Bayesian framework. Mean predicted fall age ratio controlling for survey effort, day of year, and temporal and spatial variation was 0.24 (95% CL = 0.23, 0.25). Overall trend in age ratios was −0.6% per year (95% CL = −1.3%, 0.2%), resulting in an approximate 26% decline in the proportion of juveniles over the study period. We found evidence for variation across a range of variables implying that juveniles are not randomly distributed in space and time within Izembek Lagoon. Age ratios varied by location within the study area and were highly variable among years. They decreased with the number of birds aged (an index of flock size) and increased throughout September before leveling off in early October and declining in late October. Age ratios were similar among tide stages and observers and were lower during boat‐based (offshore) than land‐based (nearshore) surveys. Our results indicate surveys should be conducted annually during early to mid‐October to ensure the entire population is present and available for sampling, and throughout Izembek Lagoon to account for spatiotemporal variation in age ratios. Sampling should include a wide range of flock sizes representative of their distribution and occur in flocks located near and off shore. Further research evaluating the cause of declining age ratios in the fall population is necessary

  19. Research Note Pilot survey to assess sample size for herbaceous ...

    African Journals Online (AJOL)

    A pilot survey to determine sub-sample size (number of point observations per plot) for herbaceous species composition assessments, using a wheel-point apparatus applying the nearest-plant method, was conducted. Three plots differing in species composition on the Zululand coastal plain were selected, and on each plot ...

  20. Measuring Sulfur Isotope Ratios from Solid Samples with the Sample Analysis at Mars Instrument and the Effects of Dead Time Corrections

    Science.gov (United States)

    Franz, H. B.; Mahaffy, P. R.; Kasprzak, W.; Lyness, E.; Raaen, E.

    2011-01-01

    The Sample Analysis at Mars (SAM) instrument suite comprises the largest science payload on the Mars Science Laboratory (MSL) "Curiosity" rover. SAM will perform chemical and isotopic analysis of volatile compounds from atmospheric and solid samples to address questions pertaining to habitability and geochemical processes on Mars. Sulfur is a key element of interest in this regard, as sulfur compounds have been detected on the Martian surface by both in situ and remote sensing techniques. Their chemical and isotopic composition can help constrain environmental conditions and mechanisms at the time of formation. A previous study examined the capability of the SAM quadrupole mass spectrometer (QMS) to determine sulfur isotope ratios of SO2 gas from a statistical perspective. Here we discuss the development of a method for determining sulfur isotope ratios with the QMS by sampling SO2 generated from heating of solid sulfate samples in SAM's pyrolysis oven. This analysis, which was performed with the SAM breadboard system, also required development of a novel treatment of the QMS dead time to accommodate the characteristics of an aging detector.

  1. A simple nomogram for sample size for estimating sensitivity and specificity of medical tests

    Directory of Open Access Journals (Sweden)

    Malhotra Rajeev

    2010-01-01

    Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to estimate the validity of a diagnostic test precisely. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, with the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
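
    The nomogram is built on the standard closed-form sample size formula for estimating a proportion, scaled by prevalence (a Buderer-type calculation). A small sketch, with illustrative inputs:

```python
from math import ceil
from scipy.stats import norm

def n_for_sensitivity(se, prev, precision, conf=0.95):
    """Total subjects needed so the CI half-width for sensitivity is at most
    `precision`; the case count is scaled up by disease prevalence."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n_cases = z ** 2 * se * (1 - se) / precision ** 2
    return ceil(n_cases / prev)

def n_for_specificity(sp, prev, precision, conf=0.95):
    """Same calculation for specificity, scaled by the non-diseased fraction."""
    z = norm.ppf(1 - (1 - conf) / 2)
    n_controls = z ** 2 * sp * (1 - sp) / precision ** 2
    return ceil(n_controls / (1 - prev))

# anticipated Se = 0.90, prevalence 20%, absolute precision +/- 0.05 (assumed)
print(n_for_sensitivity(0.90, 0.20, 0.05))   # -> 692
```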

  2. Estimating sample size for a small-quadrat method of botanical ...

    African Journals Online (AJOL)

    Reports the results of a study conducted to determine an appropriate sample size for a small-quadrat method of botanical survey for application in the Mixed Bushveld of South Africa. Species density and grass density were measured using a small-quadrat method in eight plant communities in the Nylsvley Nature Reserve.

  3. Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests

    Science.gov (United States)

    Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.

    2015-01-01

    The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…

  4. Sample size for monitoring sirex populations and their natural enemies

    Directory of Open Access Journals (Sweden)

    Susete do Rocio Chiarello Penteado

    2016-09-01

    Full Text Available The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest of pine plantations. It has spread over about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is carried out mainly with a nematode, Deladenus siricidicola Bedding (Nematoda: Neothylenchidae). Evaluating the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, and the system was found to be perfectly adequate.

  5. Collection of size fractionated particulate matter sample for neutron activation analysis in Japan

    International Nuclear Information System (INIS)

    Otoshi, Tsunehiko; Nakamatsu, Hiroaki; Oura, Yasuji; Ebihara, Mitsuru

    2004-01-01

    According to the decision of the 2001 Workshop on Utilization of Research Reactors (Neutron Activation Analysis (NAA) Section), collection of size-fractionated particulate matter for NAA was started in 2002 at two sites in Japan. The two monitoring sites, 'Tokyo' and 'Sakata', were classified as 'urban' and 'rural', respectively. At each site, two size fractions, namely PM2-10 and PM2 particles (aerodynamic particle size between 2 and 10 micrometers, and less than 2 micrometers, respectively), were collected every month on polycarbonate membrane filters. The average concentration of PM10 (the sum of the PM2-10 and PM2 samples) during the common sampling period of August to November 2002 was 0.031 mg/m3 in Tokyo and 0.022 mg/m3 in Sakata. (author)

  6. Assessing the precision of a time-sampling-based study among GPs: balancing sample size and measurement frequency.

    Science.gov (United States)

    van Hassel, Daniël; van der Velden, Lud; de Bakker, Dinny; van der Hoek, Lucas; Batenburg, Ronald

    2017-12-04

    Our research is based on a time-sampling technique, an innovative method for measuring the working hours of Dutch general practitioners (GPs), which was deployed in an earlier study. In that study, 1051 GPs were questioned about their activities in real time by sending them one SMS text message every 3 h during 1 week. The sample size required for this type of study is important for health workforce planners to know, whether they want to apply the method to target groups that are hard to reach or have fewer resources available. For this time-sampling method, however, standard power analysis is not sufficient for calculating the required sample size, because it accounts only for sample fluctuation and not for the fluctuation of measurements taken from every participant. We investigated the impact of the number of participants and the frequency of measurements per participant on the confidence intervals (CIs) for the hours worked per week. Statistical analyses of the time-use data we obtained from GPs were performed, and 95% CIs were calculated, using equations and simulation techniques, for different numbers of GPs included in the dataset and for various frequencies of measurements per participant. Our results showed that the one-tailed CI, including sample and measurement fluctuation, decreased from 21 h to 3 h as the number of GPs increased from one to 50. Beyond that point, precision continued to increase with additional GPs, but with diminishing returns. Likewise, the analyses showed how the required number of participants decreases when more measurements are taken per participant. For example, one measurement per 3-h time slot during the week requires 300 GPs to achieve a CI of 1 h, while one measurement per hour requires 100 GPs to obtain the same result. The sample size needed for time-use research based on a time-sampling technique depends on the design and aim of the study. In this paper, we showed how the precision of the
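
    The key point, that the CI must absorb both between-GP variation and within-GP measurement fluctuation, can be captured with a simple two-level variance model. The variance components below are illustrative guesses, not the study's estimates:

```python
import math

def ci_half_width(n_gps, m_per_gp, sd_between=10.0, sd_within=15.0, z=1.96):
    """Half-width of the CI for mean weekly hours under a two-level model:
    GP-to-GP variation plus within-GP measurement fluctuation."""
    var = sd_between ** 2 / n_gps + sd_within ** 2 / (n_gps * m_per_gp)
    return z * math.sqrt(var)

for n in (10, 50, 100, 300):
    # one SMS per 3-h slot (~40 answers/week) vs one per hour (~120/week)
    print(n, round(ci_half_width(n, 40), 2), round(ci_half_width(n, 120), 2))
```

    The output illustrates the trade-off reported above: more measurements per participant shrink the CI for a fixed number of GPs, so fewer participants are needed for the same precision.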

  7. A two-stage Bayesian design with sample size reestimation and subgroup analysis for phase II binary response trials.

    Science.gov (United States)

    Zhong, Wei; Koopmeiners, Joseph S; Carlin, Bradley P

    2013-11-01

    Frequentist sample size determination for binary outcome data in a two-arm clinical trial requires initial guesses of the event probabilities for the two treatments. Misspecification of these event rates may lead to a poor estimate of the necessary sample size. In contrast, the Bayesian approach, which considers the treatment effect to be a random variable having some distribution, may offer a better, more flexible approach. The Bayesian sample size proposed by Whitehead et al. (2008) for exploratory studies on efficacy justifies the acceptable minimum sample size by a "conclusiveness" condition. In this work, we introduce a new two-stage Bayesian design with sample size reestimation at the interim stage. Our design inherits the properties of good interpretation and easy implementation from Whitehead et al. (2008), generalizes their method to a two-sample setting, and uses a fully Bayesian predictive approach to reduce an overly large initial sample size when necessary. Moreover, our design can be extended to allow patient-level covariates via logistic regression, now adjusting sample size within each subgroup based on interim analyses. We illustrate the benefits of our approach with a design in non-Hodgkin lymphoma with a simple binary covariate (patient gender), offering an initial step toward within-trial personalized medicine. Copyright © 2013 Elsevier Inc. All rights reserved.
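
    To make the "fully Bayesian predictive approach" concrete, here is a single-arm, binary-outcome sketch of a predictive probability calculation at an interim look. It is a simplification of the two-stage, two-sample design in the paper, and the prior, threshold, and success criterion are assumptions:

```python
from scipy.stats import beta, betabinom

def predictive_success(x, n1, n_final, p0=0.2, cut=0.95, a=1, b=1):
    """Probability, given x responders among n1 interim patients, that the
    completed trial will declare success, where success means
    Pr(p > p0 | all data) > cut under a Beta(a, b) prior."""
    n2 = n_final - n1
    pp = 0.0
    for y in range(n2 + 1):                            # future responders
        post_tail = beta.sf(p0, a + x + y, b + n_final - x - y)
        if post_tail > cut:                            # trial would succeed
            pp += betabinom.pmf(y, n2, a + x, b + n1 - x)
    return pp

# e.g. 8/20 responders at interim, planned maximum of 50 patients (assumed)
print(round(predictive_success(8, 20, 50), 3))
```

    A design in this spirit would shrink the second-stage sample size when the predictive probability is already high, and stop or expand when it is low.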

  8. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    Full Text Available A generalized modified method is proposed to control the sum of the error probabilities in the sequential probability ratio test. The aim is to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, subject to the restriction that the sum of the error probabilities is a pre-assigned constant, and thereby to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to the cases where the random variate follows a normal law as well as a Bernoulli law.
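
    For reference, Wald's classical SPRT for a Bernoulli stream looks like the sketch below. Note that the classical boundaries control alpha and beta separately; the modification proposed in the paper instead constrains their sum, which would change how the two thresholds are derived.

```python
import math

def sprt_bernoulli(xs, p0=0.5, p1=0.7, alpha=0.1, beta=0.1):
    """Wald's SPRT for H0: p = p0 vs H1: p = p1 on a Bernoulli stream.
    Returns ('accept H0' | 'accept H1' | 'continue', observations used)."""
    lo = math.log(beta / (1 - alpha))        # lower (accept-H0) boundary
    hi = math.log((1 - beta) / alpha)        # upper (accept-H1) boundary
    llr = 0.0
    for n, x in enumerate(xs, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr <= lo:
            return "accept H0", n
        if llr >= hi:
            return "accept H1", n
    return "continue", len(xs)

print(sprt_bernoulli([1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1]))  # ('accept H1', 12)
```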

  9. Effects of sample size and sampling frequency on studies of brown bear home ranges and habitat use

    Science.gov (United States)

    Arthur, Steve M.; Schwartz, Charles C.

    1999-01-01

    We equipped 9 brown bears (Ursus arctos) on the Kenai Peninsula, Alaska, with collars containing both conventional very-high-frequency (VHF) transmitters and global positioning system (GPS) receivers programmed to determine an animal's position at 5.75-hr intervals. We calculated minimum convex polygon (MCP) and fixed and adaptive kernel home ranges for randomly selected subsets of the GPS data to examine the effects of sample size on the accuracy and precision of home range estimates. We also compared results obtained by weekly aerial radiotracking versus more frequent GPS locations to test for biases in conventional radiotracking data. Home ranges based on the MCP were 20-606 km2 (mean = 201) for aerial radiotracking data (n = 12-16 locations/bear) and 116-1,505 km2 (mean = 522) for the complete GPS data sets (n = 245-466 locations/bear). Fixed kernel home ranges were 34-955 km2 (mean = 224) for radiotracking data and 16-130 km2 (mean = 60) for the GPS data. Differences between means for radiotracking and GPS data were due primarily to the larger samples provided by the GPS data. Means did not differ between radiotracking data and equivalent-sized subsets of GPS data (P > 0.10). For the MCP, home range area increased and variability decreased asymptotically with the number of locations. For the kernel models, both area and variability decreased with increasing sample size. Simulations suggested that the MCP and kernel models required >60 and >80 locations, respectively, for estimates to be both accurate (change in area ...) and precise for these bears. Our results suggest that the usefulness of conventional radiotracking data may be limited by potential biases and variability due to small samples. Investigators who use home range estimates in statistical tests should consider the effects of variability of those estimates. Use of GPS-equipped collars can facilitate obtaining larger samples of unbiased data and improve the accuracy and precision of home range estimates.
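
    The asymptotic growth of MCP home range area with the number of locations is easy to reproduce with resampling. The sketch below uses synthetic fixes from an assumed bivariate-normal utilization distribution, not the bear data:

```python
import numpy as np
from scipy.spatial import ConvexHull

rng = np.random.default_rng(2)

# Stand-in for GPS fixes: 400 locations (km) from an assumed bivariate-normal
# utilization distribution; real collar data would replace this.
locs = rng.normal(0, 5, size=(400, 2))

for n in (15, 60, 120, 240):
    areas = [ConvexHull(locs[rng.choice(len(locs), n, replace=False)]).volume
             for _ in range(200)]   # ConvexHull "volume" is area in 2-D
    print(n, round(float(np.mean(areas)), 1), round(float(np.std(areas)), 1))
```

    Mean MCP area rises toward an asymptote while its spread shrinks as n grows, mirroring the pattern reported above.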

  10. Modified FlowCAM procedure for quantifying size distribution of zooplankton with sample recycling capacity.

    Directory of Open Access Journals (Sweden)

    Esther Wong

    Full Text Available We have developed a modified FlowCAM procedure for efficiently quantifying the size distribution of zooplankton. The modified method offers the following new features: (1) it prevents animals from settling and clogging, with constant bubbling in the sample container; (2) it prevents damage to sample animals and facilitates recycling by replacing the built-in peristaltic pump with an external syringe pump that generates negative pressure and creates a steady flow by drawing air from the receiving conical flask (i.e., acting as a vacuum pump), transferring plankton from the sample container through the main flowcell of the imaging system and finally into the receiving flask; (3) it aligns samples in advance of imaging and prevents clogging, with an additional flowcell placed ahead of the main flowcell. These modifications were designed to overcome the difficulties of applying the standard FlowCAM procedure to studies where the number of individuals per sample is small and the FlowCAM can image only a subset of a sample. Our effective recycling procedure allows users to pass the same sample through the FlowCAM many times (i.e., bootstrapping the sample) in order to generate a good size distribution. Although more advanced FlowCAM models are equipped with a syringe pump and Field of View (FOV) flowcells that can image all particles passing through the flow field, these advanced setups are very expensive, offer limited syringe and flowcell sizes, and do not guarantee recycling. In contrast, our modifications are inexpensive and flexible. Finally, we compared the biovolumes estimated by automated FlowCAM image analysis with conventional manual measurements, and found that the size of an individual zooplankter can be estimated by the FlowCAM imaging system after ground truthing.

  11. Estimation of sample size and testing power (part 6).

    Science.gov (United States)

    Hu, Liang-ping; Bao, Xiao-lei; Guan, Xue; Zhou, Shi-guo

    2012-03-01

    The design of one factor with k levels (k ≥ 3) refers to the research that only involves one experimental factor with k levels (k ≥ 3), and there is no arrangement for other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data and qualitative data having a binary response variable with the design of one factor with k levels (k ≥ 3).

  12. On the Structure of Cortical Microcircuits Inferred from Small Sample Sizes.

    Science.gov (United States)

    Vegué, Marina; Perin, Rodrigo; Roxin, Alex

    2017-08-30

    The structure in cortical microcircuits deviates from what would be expected in a purely random network, which has been seen as evidence of clustering. To address this issue, we sought to reproduce the nonrandom features of cortical circuits by considering several distinct classes of network topology, including clustered networks, networks with distance-dependent connectivity, and those with broad degree distributions. To our surprise, we found that all of these qualitatively distinct topologies could account equally well for all reported nonrandom features despite being easily distinguishable from one another at the network level. This apparent paradox was a consequence of estimating network properties given only small sample sizes. In other words, networks that differ markedly in their global structure can look quite similar locally. This makes inferring network structure from small sample sizes, a necessity given the technical difficulty inherent in simultaneous intracellular recordings, problematic. We found that a network statistic called the sample degree correlation (SDC) overcomes this difficulty. The SDC depends only on parameters that can be estimated reliably given small sample sizes and is an accurate fingerprint of every topological family. We applied the SDC criterion to data from rat visual and somatosensory cortex and discovered that the connectivity was not consistent with any of these main topological classes. However, we were able to fit the experimental data with a more general network class, of which all previous topologies were special cases. The resulting network topology could be interpreted as a combination of physical spatial dependence and nonspatial, hierarchical clustering. SIGNIFICANCE STATEMENT The connectivity of cortical microcircuits exhibits features that are inconsistent with a simple random network. Here, we show that several classes of network models can account for this nonrandom structure despite qualitative differences in

  13. Size ratio performance in detecting cerebral aneurysm rupture status is insensitive to small vessel removal.

    Science.gov (United States)

    Lauric, Alexandra; Baharoglu, Merih I; Malek, Adel M

    2013-04-01

    The variable definition of size ratio (SR) for sidewall (SW) vs bifurcation (BIF) aneurysms raises confusion for lesions harboring small branches, such as those at carotid-ophthalmic or posterior communicating locations. These aneurysms are considered SW by many clinicians, but SR methodology classifies them as BIF. The aims were to evaluate the effect of ignoring small vessels, and of SW vs stringent BIF labeling, on SR performance in detecting rupture status among borderline aneurysms with small branches, and to reconcile SR-based labeling with clinical SW/BIF classification. Catheter rotational angiographic datasets of 134 consecutive aneurysms (60 ruptured) were automatically measured in 3 dimensions. Stringent BIF labeling was applied to clinically labeled aneurysms, with 21 aneurysms switching label from SW to BIF. Parent vessel size was evaluated both taking into account, and ignoring, small vessels, and SR was defined accordingly as the ratio between aneurysm and parent vessel sizes. Univariate and multivariate statistics identified significant features. Regardless of the SW/BIF labeling method, SR was equally significant in discriminating aneurysm rupture status (P ...). Bivariate analysis of the alternative SR calculations showed a high correlation, with R2 = 0.94 on the whole dataset and R2 = 0.98 on the 21 borderline aneurysms. Ignoring small branches in the SR calculation maintains rupture status detection performance, while reducing postprocessing complexity and removing labeling ambiguity. Aneurysms adjacent to these vessels can be considered SW for morphometric analysis. It is reasonable to use the clinical SW/BIF labeling when using SR for rupture risk evaluation.

  14. Particle Sampling and Real Time Size Distribution Measurement in H2/O2/TEOS Diffusion Flame

    International Nuclear Information System (INIS)

    Ahn, K.H.; Jung, C.H.; Choi, M.; Lee, J.S.

    2001-01-01

    Growth characteristics of silica particles have been studied experimentally using an in situ particle sampling technique from an H2/O2/tetraethylorthosilicate (TEOS) diffusion flame with a carefully devised sampling probe. Particle morphology and size are compared between particles sampled by the local thermophoretic method from inside the flame and by the electrostatic collector method after a dilution sampling probe. The Transmission Electron Microscope (TEM) image-processed data from these two sampling techniques are compared with Scanning Mobility Particle Sizer (SMPS) measurements and show good agreement. The effects of flame conditions and TEOS flow rates on silica particle size distributions were also investigated using the new particle dilution sampling probe. It is found that the particle size distribution characteristics and morphology are governed mostly by coagulation and sintering in the flame. As the flame temperature increases, coalescence or sintering becomes an important particle growth mechanism, which reduces coagulation. However, if the flame temperature is not high enough to sinter the aggregated particles, coagulation is the dominant particle growth mechanism. Under certain flame conditions, secondary particle formation is observed, resulting in a bimodal particle size distribution.

  15. Influence of template/functional monomer/cross‐linking monomer ratio on particle size and binding properties of molecularly imprinted nanoparticles

    DEFF Research Database (Denmark)

    Yoshimatsu, Keiichi; Yamazaki, Tomohiko; Chronakis, Ioannis S.

    2012-01-01

    A series of molecularly imprinted polymer nanoparticles has been synthesized employing various template/functional monomer/cross-linking monomer ratios and characterized in detail to elucidate the correlation between the synthetic conditions used and the resulting properties (e.g., particle size and template binding) ... tuning of particle size and binding properties is required to fit practical applications. © 2011 Wiley Periodicals, Inc. J Appl Polym Sci, 2012

  16. The Sample Size Influence in the Accuracy of the Image Classification of the Remote Sensing

    Directory of Open Access Journals (Sweden)

    Thomaz C. e C. da Costa

    2004-12-01

    Full Text Available Land-use/land-cover maps produced by classification of remote sensing images incorporate uncertainty, which is measured by accuracy indices computed from reference samples. The size of the reference sample is often defined by a binomial approximation without the use of a pilot sample; in that case the accuracy is not estimated but fixed a priori. If the estimated accuracy diverges from the a priori value, the sampling error will deviate from the expected error. Sizing based on a pilot sample (the theoretically correct procedure) is justified when no accuracy estimate exists for the work area, with reference to the utility of the remote sensing product.
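
    The a priori (no-pilot-sample) approach referred to above reduces to the binomial sample size formula. A minimal sketch, with an assumed anticipated accuracy and precision:

```python
import math

def n_reference_pixels(p_expected, half_width, z=1.96):
    """Binomial approximation for the reference-sample size that fixes the
    accuracy assessment a priori: n = z^2 * p * (1 - p) / d^2."""
    return math.ceil(z ** 2 * p_expected * (1 - p_expected) / half_width ** 2)

# anticipated overall accuracy 85%, +/- 3 points at 95% confidence (assumed)
print(n_reference_pixels(0.85, 0.03))   # -> 545 reference samples
```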

  17. Determination of Light Water Reactor Fuel Burnup with the Isotope Ratio Method

    International Nuclear Information System (INIS)

    Gerlach, David C.; Mitchell, Mark R.; Reid, Bruce D.; Gesh, Christopher J.; Hurley, David E.

    2007-01-01

    For the current project, to demonstrate that isotope ratio measurements can be extended to zirconium alloys used in LWR fuel assemblies, we report new analyses of irradiated samples obtained from a reactor. Zirconium alloys are used for the structural elements of fuel assemblies and for the fuel element cladding. This report covers new measurements on irradiated and unirradiated zirconium alloys. Unirradiated zircaloy samples serve as reference samples and indicate starting or natural values for the measured Ti isotope ratio. New measurements of irradiated samples include results for 3 samples provided by AREVA. The new results indicate: 1. Titanium isotope ratios were measured again in unirradiated samples to obtain reference or starting values at the same time the irradiated samples were analyzed. In particular, 49Ti/48Ti ratios were indistinguishably close to values determined several months earlier and to expected natural values. 2. 49Ti/48Ti ratios were measured in 3 irradiated samples thus far and demonstrate marked departures from natural or initial ratios, well beyond analytical uncertainty, and the ratios vary with reported fluence values. The irradiated samples appear to have significant surface contamination or radiation damage, which required more time for SIMS analyses. 3. Other activated impurity elements still limit the sample size for SIMS analysis of irradiated samples. The sub-samples chosen for SIMS analysis, although smaller than optimal, were still analyzed successfully without violating the conditions of the applicable Radiological Work Permit.

  18. Measurement and application of purine derivatives: Creatinine ratio in spot urine samples of ruminants

    International Nuclear Information System (INIS)

    Chen, X.B.; Jayasuriya, M.C.N.; Makkar, H.P.S.

    2004-01-01

    The daily excretion of purine derivatives in urine has been used to estimate the supply of microbial protein to ruminant animals. The method provides a simple and non-invasive tool for indicating the nutritional status of farm animals. However, due to the need for complete collection of urine, its potential application at farm level is restricted. Research conducted under the FAO/IAEA Co-ordinated Research Project has indicated that it is possible to use the purine derivatives:creatinine ratio, measured in several spot urine samples collected within a day, as an index of microbial protein supply in a banding system for farm application. Some theoretical and experimental aspects of the measurement of the purine derivatives:creatinine ratio in spot urine samples, and the possible application of the banding system at the farm level, are discussed. (author)

  19. Preliminary notes concerning the uranium-gold ratio and the gradient of heavy-mineral size distribution as factors of transport distance down the paleoslope of the Proterozoic Steyn Reef placer deposit, Orange Free State Goldfield, Witwatersrand, South Africa

    International Nuclear Information System (INIS)

    Minter, W.E.L.

    1981-01-01

    The size decrease of quartz pebbles, pyrite nodules, and zircon grains, evident from samples of Steyn Reef taken from various positions down a paleoslope indicated by crossbedding data, confirms their detrital origin. An increase in the ratio of uranium to gold, which appears to be related to their original size-frequency distribution, also indicates the paleoslope direction and effectively distinguishes between blanket and carbon-seam reefs

  20. Development of pre-concentration procedure for the determination of Hg isotope ratios in seawater samples

    International Nuclear Information System (INIS)

    Štrok, Marko; Hintelmann, Holger; Dimock, Brian

    2014-01-01

    Highlights: • A method for the quantitative pre-concentration of Hg from seawater was developed. • The first report of Hg isotope ratios in seawater is presented. • A unique mass-independent 200Hg isotope fractionation was observed. • This fractionation has unique potential to distinguish anthropogenic and natural Hg. - Abstract: Hg concentrations in seawater are usually too low to allow direct measurement of its isotope ratios (without pre-concentration and removal of the salt matrix) by multicollector inductively coupled plasma mass spectrometry (MC-ICP-MS). Therefore, a new method for the pre-concentration of Hg from large volumes of seawater was developed. The final method allows relatively fast (about 2.5 L/h) and quantitative pre-concentration of Hg from seawater samples, with an average Hg recovery of 98 ± 6%. Using this newly developed method, we determined Hg isotope ratios in seawater. Reference seawater samples were compared to samples potentially impacted by anthropogenic activity. The results show negative mass-dependent fractionation relative to the NIST 3133 Hg standard, with δ202Hg values in the range from -0.50‰ to -1.50‰. In addition, positive mass-independent fractionation of 200Hg was observed for samples from the reference sites, while the impacted sites did not show significant Δ200Hg values. Although the influence of the impacted sediments is limited to the seawater and particulate matter in very close proximity to the sediment, this observation raises the possibility of using Δ200Hg to distinguish between samples from impacted and reference sites
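
    For readers unfamiliar with the notation, δ values express mass-dependent fractionation relative to the NIST 3133 standard, and Δ values express the mass-independent anomaly left after removing the mass-dependent trend. A small sketch, using a commonly cited scaling constant for 200Hg (about 0.5024, e.g. Blum and Bergquist 2007) and invented example values:

```python
def delta_permil(r_sample, r_standard):
    """Delta value in per mil relative to a bracketing standard (NIST 3133)."""
    return (r_sample / r_standard - 1) * 1000

def capital_delta_200(d200, d202, k=0.5024):
    """Mass-independent anomaly: measured delta200Hg minus the value predicted
    from delta202Hg by the kinetic mass-dependent fractionation law."""
    return d200 - k * d202

# Illustrative numbers in the range reported above (not the paper's data)
d202, d200 = -1.00, -0.40
print(round(capital_delta_200(d200, d202), 3))   # -> 0.102 per mil
```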

  1. Nonsphericity Index and Size Ratio Identify Morphologic Differences between Growing and Stable Aneurysms in a Longitudinal Study of 93 Cases.

    Science.gov (United States)

    Chien, A; Xu, M; Yokota, H; Scalzo, F; Morimoto, E; Salamon, N

    2018-01-25

    Recent studies have strongly associated intracranial aneurysm growth with increased risk of rupture. Identifying aneurysms that are likely to grow would be beneficial for planning more effective monitoring and intervention strategies. Our hypothesis is that, for unruptured intracranial aneurysms of similar size, morphologic characteristics differ between aneurysms that continue to grow and those that do not. From aneurysms in our medical center with follow-up imaging dates in 2015, ninety-three intracranial aneurysms (23 growing, 70 stable) were selected. All CTA images for the aneurysm diagnosis and follow-up were collected, a total of 348 3D imaging studies. The 3D geometry of each aneurysm was reconstructed for each imaging study, and morphologic characteristics, including volume, surface area, nonsphericity index, aspect ratio, and size ratio, were calculated. Morphologic characteristics were found to differ between the growing and stable groups. For aneurysms of 7 mm, volume (P ...) differed between those that are growing and those that are stable. The nonsphericity index, in particular, was found to be higher among growing aneurysms. The size ratio was found to be the second most significant parameter associated with growth. © 2018 by American Journal of Neuroradiology.

  2. Control of size and aspect ratio in hydroquinone-based synthesis of gold nanorods

    International Nuclear Information System (INIS)

    Morasso, Carlo; Picciolini, Silvia; Schiumarini, Domitilla; Mehn, Dora; Ojea-Jiménez, Isaac; Zanchetta, Giuliano; Vanna, Renzo; Bedoni, Marzia; Prosperi, Davide; Gramatica, Furio

    2015-01-01

    In this article, we describe how the size and aspect ratio of gold nanorods obtained with a highly efficient protocol based on hydroquinone as the reducing agent can be tuned by varying the amounts of CTAB and silver ions present in the seed-growth solution. Our approach not only increases the Au3+ reduction yield four-fold compared with the commonly used protocol based on ascorbic acid, but also allows a remarkable 50-60% reduction in the amount of CTAB needed. In fact, according to our findings, the concentration of CTAB in the seed-growth solution does not linearly influence the final aspect ratio of the obtained nanorods, and an optimal concentration range between 30 and 50 mM was identified as the one able to generate particles with more elongated shapes. For the optimized protocol, the effect of the concentration of Ag+ ions in the seed-growth solution and the stability of the obtained particles were also investigated.

  3. Assessing terpene content variability of whitebark pine in order to estimate representative sample size

    Directory of Open Access Journals (Sweden)

    Stefanović Milena

    2013-01-01

    Full Text Available In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of an initial sample, using a whitebark pine population as an example. The statistical analysis included the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the underlying population with a probability higher than 95%. Determining the lower limit of the representative sample size that guarantees satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Project of the Ministry of Science of the Republic of Serbia, no. OI-173011, no. TR-37002 and no. III-43007]

  4. Methodology for sample preparation and size measurement of commercial ZnO nanoparticles

    Directory of Open Access Journals (Sweden)

    Pei-Jia Lu

    2018-04-01

    Full Text Available This study discusses strategies for sample preparation to acquire images of sufficient quality for size characterization by scanning electron microscope (SEM), using two commercial ZnO nanoparticles of different surface properties as a demonstration. The central idea is that micrometer-sized aggregates of ZnO in powdered form first need to be broken down to nanosized particles through an appropriate process to generate a nanoparticle dispersion before being deposited on a flat surface for SEM observation. Analytical tools such as contact angle, dynamic light scattering and zeta potential measurements were utilized to optimize the procedure for sample preparation and to check the quality of the results. Meanwhile, measurements of zeta potential values on flat surfaces also provide critical information, and save much time and effort, in the selection of a suitable substrate onto which particles of different properties are attracted and kept without further aggregation. This simple, low-cost methodology can be applied generally to size characterization of commercial ZnO nanoparticles with limited information from vendors. Keywords: Zinc oxide, Nanoparticles, Methodology

  5. Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.

    Science.gov (United States)

    Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang

    2018-02-01

    To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small, we simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test from generalized estimating equations (GEE), and the F-test from a linear mixed effects model (LMM). We compared type I error rates and statistical powers, and demonstrated the analysis approaches on two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (on the average of the two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test on the average of the two eyes performs best when the two eyes are in the same comparison group. When selecting the appropriate analysis approach, the study design should be considered.
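
    A toy version of this type of simulation, under the null hypothesis, is sketched below. The bivariate-normal data model, sample size, and correlation are assumptions; it simply contrasts the paired t-test with a naive unpaired t-test on the same simulated pairs of eyes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def empirical_type1(n_subj=10, rho=0.5, reps=5000, alpha=0.05):
    """Empirical type I error for two tests applied to two eyes per subject
    under no true effect. A toy re-creation of the kind of simulation
    described, not the authors' code."""
    cov = [[1, rho], [rho, 1]]
    rej_paired = rej_unpaired = 0
    for _ in range(reps):
        eyes = rng.multivariate_normal([0, 0], cov, size=n_subj)  # null data
        rej_paired += stats.ttest_rel(eyes[:, 0], eyes[:, 1]).pvalue < alpha
        # unpaired test ignoring the inter-eye correlation, for contrast
        rej_unpaired += stats.ttest_ind(eyes[:, 0], eyes[:, 1]).pvalue < alpha
    return rej_paired / reps, rej_unpaired / reps

print(empirical_type1())  # paired near 0.05; unpaired conservative for rho > 0
```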

  6. A Meta-Analysis of Class Sizes and Ratios in Early Childhood Education Programs: Are Thresholds of Quality Associated with Greater Impacts on Cognitive, Achievement, and Socioemotional Outcomes?

    Science.gov (United States)

    Bowne, Jocelyn Bonnes; Magnuson, Katherine A.; Schindler, Holly S.; Duncan, Greg J.; Yoshikawa, Hirokazu

    2017-01-01

    This study uses data from a comprehensive database of U.S. early childhood education program evaluations published between 1960 and 2007 to evaluate the relationship between class size, child-teacher ratio, and program effect sizes for cognitive, achievement, and socioemotional outcomes. Both class size and child-teacher ratio showed nonlinear…

  7. Closure and ratio correlation analysis of lunar chemical and grain size data

    Science.gov (United States)

    Butler, J. C.

    1976-01-01

    Major element and major element plus trace element analyses were selected from the lunar data base for Apollo 11, 12 and 15 basalt and regolith samples. Summary statistics for each of the six data sets were compiled, and the effects of closure on the Pearson product moment correlation coefficient were investigated using the Chayes and Kruskal approximation procedure. In general, there are two types of closure effects evident in these data sets: negative correlations of intermediate size which are solely the result of closure, and correlations of small absolute value which depart significantly from their expected closure correlations which are of intermediate size. It is shown that a positive closure correlation will arise only when the product of the coefficients of variation is very small (less than 0.01 for most data sets) and, in general, trace elements in the lunar data sets exhibit relatively large coefficients of variation.

  8. Impact of sample size on principal component analysis ordination of an environmental data set: effects on eigenstructure

    Directory of Open Access Journals (Sweden)

    Shaukat S. Shahid

    2016-06-01

    Full Text Available In this study, we used bootstrap simulation of a real data set to investigate the impact of sample size (N = 20, 30, 40 and 50) on the eigenvalues and eigenvectors resulting from principal component analysis (PCA). For each sample size, 100 bootstrap samples were drawn from an environmental data matrix of water quality variables (p = 22) from a small data set comprising 55 samples (stations) at which water samples were collected. Because data sets in ecology and the environmental sciences are invariably small, owing to the high cost of collection and analysis of samples, we restricted our study to relatively small sample sizes. We focused attention on comparison of the first 6 eigenvectors and the first 10 eigenvalues. Data sets were compared using agglomerative cluster analysis with Ward's method, which does not require any stringent distributional assumptions.
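
    The bootstrap procedure itself is compact: resample stations, recompute the correlation-matrix eigenvalues, and track how their mean and spread change with N. The sketch below uses a random stand-in matrix rather than the actual water-quality data:

```python
import numpy as np

rng = np.random.default_rng(4)

def pca_eigvals(X):
    """Eigenvalues of the correlation matrix, in descending order."""
    R = np.corrcoef(X, rowvar=False)
    return np.sort(np.linalg.eigvalsh(R))[::-1]

# Stand-in for the 55 x 22 water-quality matrix (random data for illustration)
X_full = rng.normal(size=(55, 22))

for n in (20, 30, 40, 50):
    boots = np.array([pca_eigvals(X_full[rng.integers(0, 55, n)])
                      for _ in range(100)])      # 100 bootstrap samples
    lead = boots[:, 0]                           # first eigenvalue
    print(n, round(float(lead.mean()), 2), round(float(lead.std()), 2))
```

    With random data the first eigenvalue is inflated at small N and stabilizes as N grows, which is the eigenstructure effect the study quantifies on real data.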

  9. B-graph sampling to estimate the size of a hidden population

    NARCIS (Netherlands)

    Spreen, M.; Bogaerts, S.

    2015-01-01

    Link-tracing designs are often used to estimate the size of hidden populations by utilizing the relational links between their members. A major problem in studies of hidden populations is the lack of a convenient sampling frame. The most frequently applied design in studies of hidden populations is

  10. Brood size and sex ratio in response to host quality and wasp traits in the gregarious parasitoid Oomyzus sokolowskii (Hymenoptera: Eulophidae

    Directory of Open Access Journals (Sweden)

    Xianwei Li

    2017-01-01

    Full Text Available This laboratory study investigated whether females of the larval-pupal parasitoid Oomyzus sokolowskii adjust their brood size and sex ratio in response to the body size and stage of Plutella xylostella larval hosts, as well as to their own body size and the order of oviposition. These factors were analyzed using multiple regression with simultaneous entry of the factors and their two-way interactions. Parasitoid brood size tended to increase with host body size at parasitism when 4th-instar larval hosts were attacked, but did not change when 2nd- and 3rd-instar larvae were attacked. Brood size did not vary with parasitoid body size, but decreased linearly across bouts of oviposition, from 10 adult offspring emerging per host in the first bout down to eight in the third. Offspring sex ratio did not change with host instar, host body weight, wasp body size, or oviposition bout. The proportion of male offspring per brood ranged from 11% to 13% for attacks on 2nd- to 4th-instar larvae and from 13% to 16% across three successive bouts of oviposition, with large variation for smaller host larvae and wasps. When fewer than 12 offspring emerged from a host, one male was most frequently produced; when more than 12 offspring emerged, two or more males were produced. Our study suggests that O. sokolowskii females may optimize their clutch size in response to the body size of mature P. xylostella larvae, and their sex allocation in response to clutch size.

  11. Factors influencing the dividend payout ratio in financial services companies

    Directory of Open Access Journals (Sweden)

    Sutoyo Sutoyo

    2017-03-01

    The objective of this research was to analyze the factors influencing the dividend payout ratio on the Indonesian Stock Exchange (ISE). The survey method was used, with 82 issuers on the ISE selected as the sample based on purposive sampling. The first and second hypotheses were tested using multiple regression. The first analysis showed that profitability, liquidity, debt policy, institutional ownership, growth, and firm size simultaneously influence the dividend payout ratio. The second analysis showed that only growth influences the dividend payout ratio.

  12. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  13. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    International Nuclear Information System (INIS)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A.

    2013-01-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves in Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  14. Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence

    Energy Technology Data Exchange (ETDEWEB)

    Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)

    2013-07-01

    Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique with the possibility of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves in Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 deg C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)

  15. Successful performance of the EU-AltTF sample, a large size Nb3Sn cable-in-conduit conductor with rectangular geometry

    Energy Technology Data Exchange (ETDEWEB)

    Della Corte, A; Corato, V; Di Zenobio, A; Fiamozzi Zignani, C; Muzzi, L; Polli, G M; Reccia, L; Turtu, S [Associazione EURATOM-ENEA sulla Fusione, Via E Fermi 45, 00044 Frascati, Rome (Italy); Bruzzone, P [EPFL-CRPP, Fusion Technology, 5232 Villigen PSI (Switzerland); Salpietro, E [European Fusion Development Agreement, Close Support Unit, Boltzmannstrasse 2, 85748 Garching (Germany); Vostner, A, E-mail: antonio.dellacorte@enea.i [Fusion for Energy, c/ Josep Pla 2, Edificio B3, 08019 Barcelona (Spain)

    2010-04-15

    One of the design features that still offers interesting margins for performance optimization of cable-in-conduit conductors (CICCs) is their geometry. For relatively small Nb3Sn CICCs operating at high electromagnetic pressure, such as those for the EDIPO project, it has been experimentally shown that a design based on a rectangular layout with a higher aspect ratio leads to the best performance, especially in terms of degradation under electromagnetic loads. To extend this analysis to larger Nb3Sn CICCs, we manufactured and tested, in the SULTAN facility, an ITER toroidal field (TF) cable, inserted into a thick stainless steel tube and then compacted to a high-aspect-ratio rectangular shape. Besides establishing a new record in Nb3Sn CICC performance for ITER TF type cables, the very good test results confirmed that conductor properties improve not only by lowering the void fraction and raising the cable twist pitch, as already shown during the ITER TFPRO and EDIPO test campaigns, but also by proper optimization of the conductor shape with respect to the electromagnetic force distribution. The sample manufacturing steps, along with the main test results, are presented here.

  16. Sample size calculation while controlling false discovery rate for differential expression analysis with RNA-sequencing experiments.

    Science.gov (United States)

    Bi, Ran; Liu, Peng

    2016-03-31

    RNA-Sequencing (RNA-seq) experiments have been popularly applied to transcriptome studies in recent years. Such experiments are still relatively costly. As a result, RNA-seq experiments often employ a small number of replicates. Power analysis and sample size calculation are challenging in the context of differential expression analysis with RNA-seq data. One challenge is that there are no closed-form formulae to calculate power for the popularly applied tests for differential expression analysis. In addition, false discovery rate (FDR), instead of family-wise type I error rate, is controlled for the multiple testing error in RNA-seq data analysis. So far, there are very few proposals on sample size calculation for RNA-seq experiments. In this paper, we propose a procedure for sample size calculation while controlling FDR for RNA-seq experimental design. Our procedure is based on the weighted linear model analysis facilitated by the voom method which has been shown to have competitive performance in terms of power and FDR control for RNA-seq differential expression analysis. We derive a method that approximates the average power across the differentially expressed genes, and then calculate the sample size to achieve a desired average power while controlling FDR. Simulation results demonstrate that the actual power of several popularly applied tests for differential expression is achieved and is close to the desired power for RNA-seq data with sample size calculated based on our method. Our proposed method provides an efficient algorithm to calculate sample size while controlling FDR for RNA-seq experimental design. We also provide an R package ssizeRNA that implements our proposed method and can be downloaded from the Comprehensive R Archive Network (http://cran.r-project.org).
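
    The paper's method is built on voom-transformed weighted linear models; as a rough illustration of the general idea of tying the per-test significance level to a target FDR, the hedged sketch below uses a plain two-group z-test and a standard m0/m1 approximation (the effect sizes and gene counts are hypothetical, and this is not the ssizeRNA algorithm):

    ```python
    import numpy as np
    from scipy import stats

    def avg_power(n, effects, alpha):
        # Approximate two-sided, two-group z-test power for each standardized effect.
        z_crit = stats.norm.ppf(1 - alpha / 2)
        ncp = effects * np.sqrt(n / 2)            # n subjects per group
        return (stats.norm.sf(z_crit - ncp) + stats.norm.cdf(-z_crit - ncp)).mean()

    def sample_size_for_fdr(effects, m0, fdr=0.05, target=0.8, max_n=500):
        # Tie the per-test alpha to the FDR via FDR ~ m0*alpha/(m0*alpha + m1*power).
        m1 = len(effects)
        for n in range(2, max_n):
            power = target
            for _ in range(20):                   # fixed-point iteration: alpha <-> power
                alpha = fdr * m1 * power / (m0 * (1 - fdr))
                power = avg_power(n, effects, alpha)
            if power >= target:
                return n, power
        return None

    effects = np.random.default_rng(1).uniform(0.5, 2.0, 200)   # hypothetical DE genes
    print(sample_size_for_fdr(effects, m0=9800))                # 9800 non-DE genes
    ```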

  17. Estimating sample size for landscape-scale mark-recapture studies of North American migratory tree bats

    Science.gov (United States)

    Ellison, Laura E.; Lukacs, Paul M.

    2014-01-01

    Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but sample size requirements to produce reliable estimates have not been estimated. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture probabilities and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately determine 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing the recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty in attaining reliable estimates. We make recommendations for which techniques show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.

  18. Sample size determination for a three-arm equivalence trial of Poisson and negative binomial responses.

    Science.gov (United States)

    Chang, Yu-Wei; Tsong, Yi; Zhao, Zhigen

    2017-01-01

    Assessing equivalence or similarity has drawn much attention recently, as many drug products have lost or will lose their patents in the next few years, especially certain best-selling biologics. To claim equivalence between the test treatment and the reference treatment when assay sensitivity is well established from historical data, one has to demonstrate both superiority of the test treatment over placebo and equivalence between the test treatment and the reference treatment. Thus, there is urgency for practitioners to derive a practical way to calculate sample size for a three-arm equivalence trial. The primary endpoints of a clinical trial may not always be continuous, but may be discrete. In this paper, the authors derive the power function and discuss the sample size requirement for a three-arm equivalence trial with Poisson and negative binomial clinical endpoints. In addition, the authors examine the effect of the dispersion parameter on the power and the sample size by varying its coefficient from small to large. In extensive numerical studies, the authors demonstrate that the required sample size heavily depends on the dispersion parameter. Therefore, misusing a Poisson model for negative binomial data may easily lose up to 20% power, depending on the value of the dispersion parameter.

  19. Sequential boundaries approach in clinical trials with unequal allocation ratios

    Directory of Open Access Journals (Sweden)

    Ayatollahi Seyyed

    2006-01-01

    Background: In clinical trials, both unequal randomization designs and sequential analyses have ethical and economic advantages. In the single-stage design (SSD), however, if the sample size is not adjusted based on unequal randomization, the power of the trial will decrease, whereas with sequential analysis the power always remains constant. Our aim was to compare the sequential boundaries approach with the SSD when the allocation ratio (R) was not equal. Methods: We evaluated, by multiple simulations, the influence of R, the ratio of patients in the experimental group to the standard group, on the statistical properties of two-sided tests, including the two-sided single triangular test (TT), the double triangular test (DTT) and the SSD. The average sample numbers (ASNs) and power (1-β) were evaluated for all tests. Results: Our simulation study showed that choosing R = 2 instead of R = 1 increases the sample size of the SSD by 12% and the ASN of the TT and DTT by the same proportion. Moreover, when R = 2, using the TT or DTT recovers, relative to the adjusted SSD, the well-known reductions in ASN observed for R = 1 relative to the SSD. In addition, when R = 2 the TT and DTT yield smaller reductions in ASN than when R = 1, but maintain the power of the test at its planned value. Conclusion: This study indicates that when the allocation ratio is not equal among the treatment groups, sequential analysis can indeed serve as a compromise between ethicists, economists and statisticians.
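
    The quoted 12% figure for R = 2 can be checked directly: with allocation ratio R, the variance of the estimated group difference is proportional to (1 + R)^2/(R·N), so the total sample size must grow by (1 + R)^2/(4R) relative to equal allocation. A two-line check (ours, not the paper's code):

    ```python
    # Total-sample-size inflation of a single-stage design under unequal allocation,
    # relative to 1:1 allocation.
    def inflation(R):
        return (1 + R) ** 2 / (4 * R)

    for R in (1, 2, 3):
        print(f"R = {R}: total sample size x {inflation(R):.3f}")
    # R = 2 gives 1.125, i.e. the ~12% increase quoted above.
    ```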

  20. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  1. Development of pre-concentration procedure for the determination of Hg isotope ratios in seawater samples.

    Science.gov (United States)

    Štrok, Marko; Hintelmann, Holger; Dimock, Brian

    2014-12-03

    Hg concentrations in seawater are usually too low to allow direct (without pre-concentration and removal of the salt matrix) measurement of its isotope ratios with multicollector inductively coupled plasma mass spectrometry (MC-ICP-MS). Therefore, a new method for the pre-concentration of Hg from large volumes of seawater was developed. The final method allows for relatively fast (about 2.5 L h(-1)) and quantitative pre-concentration of Hg from seawater samples, with an average Hg recovery of 98±6%. Using this newly developed method we determined Hg isotope ratios in seawater. Reference seawater samples were compared to samples potentially impacted by anthropogenic activity. The results show negative mass-dependent fractionation relative to the NIST 3133 Hg standard, with δ(202)Hg values in the range from -0.50‰ to -1.50‰. In addition, positive mass-independent fractionation of (200)Hg was observed for samples from reference sites, while impacted sites did not show significant Δ(200)Hg values. Although the influence of the impacted sediments is limited to the seawater and particulate matter in very close proximity to the sediment, this observation may raise the possibility of using Δ(200)Hg to distinguish between samples from impacted and reference sites. Copyright © 2014 Elsevier B.V. All rights reserved.
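
    The reported δ(202)Hg and Δ(200)Hg values follow standard delta notation, the permil deviation of a sample isotope ratio from the bracketing standard (here NIST 3133); a small sketch of the convention, with made-up ratio values:

    ```python
    def delta_permil(r_sample, r_standard):
        # Permil deviation of a sample isotope ratio from a standard's ratio.
        return (r_sample / r_standard - 1) * 1000.0

    # A sample whose 202Hg/198Hg ratio is 0.1% below the standard's:
    print(delta_permil(0.999, 1.000))   # -> about -1.0 permil
    ```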

  2. Postoperative neutrophil-to-lymphocyte ratio of living-donor liver transplant: Association with graft size

    Directory of Open Access Journals (Sweden)

    Hironori Hayashi

    2016-04-01

    Issues related to small-for-size grafts in living donor liver transplantation (LDLT) are highly important. The neutrophil-to-lymphocyte ratio (NLR) has been reported to be an inexpensive index of systemic inflammation in various diseases. We retrospectively evaluated the relationship between NLR and the clinical course of 61 adult LDLT recipients at our institute up to post-operative day 14. Patients were classified into two groups based on the ratio of graft volume to standard liver volume (GV/SLV): over 35% (Group L; n = 55) and under 35% (Group S; n = 6). No differences were seen in patient background between the two groups, and absolute neutrophil, lymphocyte and platelet counts in both groups showed no significant differences. In contrast, the NLR differed significantly between the groups from post-operative day 3 to 10, being higher in Group S. In addition, the incidence of prolonged hyperbilirubinemia and small-for-size graft syndrome differed significantly between the two groups. The elevation of post-operative NLR in the smaller-graft group therefore suggests a pathophysiology of endothelial injury related to small-for-size graft syndrome in LDLT.

  3. Measurement of the natural variation of 13C/12C isotope ratio in organic samples

    International Nuclear Information System (INIS)

    Ducatti, C.

    1977-01-01

    The isotope ratio analysis of 13C/12C by mass spectrometry using a working standard allows the study of the natural variation of 13C in organic material with a total analytical error of less than 0.2%. Equations were derived to determine 13C/12C and 18O/16O ratios relative to the CENA-std working standard and to the international standard PDB. Isotope ratio values obtained with samples prepared in two different combustion apparatuses were compared, and the values obtained by preparing samples through acid decomposition of carbonaceous materials were compared with values obtained in different international laboratories. Using the proposed methodology, leaves collected at different heights from different plant species, 'inside' and 'outside' the Ducke Forest Reserve in the Amazon region, were analysed. The natural variation of 13C was found to depend on metabolic processes and environmental factors, both of which may be regarded as partial influences on the CO2 cycle in the forest. (author) [pt]
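
    The conversion between the working-standard and PDB scales mentioned in the abstract follows from multiplying ratio factors; the sketch below implements the exact chain relation (our illustration with hypothetical delta values, not the paper's derived equations):

    ```python
    def chain_delta(d_sample_ws, d_ws_ref):
        # Exact delta of the sample vs the reference (e.g. PDB), given its delta vs
        # the working standard and the working standard's delta vs the reference.
        return ((1 + d_sample_ws / 1000.0) * (1 + d_ws_ref / 1000.0) - 1) * 1000.0

    # To first order this is d1 + d2 + d1*d2/1000, the familiar approximation:
    print(chain_delta(-10.0, -15.0))    # -> -24.85 permil
    ```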

  4. Crystallite size variation of TiO2 samples as a function of heat treatment time

    International Nuclear Information System (INIS)

    Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.

    2016-01-01

    Titanium dioxide (TiO2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of the residence time at a given temperature on the physical properties of TiO2 powder was studied. After powder synthesis, the samples were divided and heat treated at 650 °C with a ramp of up to 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the diffraction patterns showed that, from a residence time of 5 hours onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
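
    The abstract does not state how the average crystallite size was obtained; a common choice for X-ray diffraction data is the Scherrer equation, sketched here for illustration only (the peak parameters are hypothetical):

    ```python
    import math

    def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
        # D = K * lambda / (beta * cos(theta)), with beta (FWHM) in radians.
        beta = math.radians(fwhm_deg)
        theta = math.radians(two_theta_deg / 2.0)
        return K * wavelength_nm / (beta * math.cos(theta))

    # Cu K-alpha radiation and a peak near the anatase (101) reflection:
    print(f"{scherrer_size(0.15406, 0.5, 25.3):.1f} nm")   # -> ~16 nm
    ```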

  5. 206Pb/207Pb ratios in dry deposit samples from the Metropolitan Zone of Mexico Valley

    International Nuclear Information System (INIS)

    Martinez, T.; Lartigue, J.; Marquez, C.

    2007-01-01

    206Pb/207Pb isotope ratios of dry deposit samples in the Metropolitan Zone of Mexico Valley (MZMV) were determined and correlated with contemporary environmental materials such as gasoline and urban dust as possible pollution sources, each presenting a different signature. 206Pb/207Pb ratios were determined in samples 'as is' by ICP-MS, using an Elan-6100. The standard material NIST-981 was used to monitor accuracy and to correct for mass fractionation. The calculated enrichment factors of lead (taking rubidium as a conservative endogenous element) show its anthropogenic origin, with percentages higher than 97.65%. The 206Pb/207Pb ratio in dry deposit samples ranges from 0.816 to a maximum of 1.154, following a normal distribution. The arithmetic mean, 0.9967±0.0864, was lower than the ratios of the possible pollution sources: 1.1395±0.0165 for gasoline, 1.071±0.008 for industrially derived lead and, for the more radiogenic natural soil and urban dust, values ranging from 1.2082±0.022 to 1.211±0.108. The possible origin of lead in gasoline used prior to 1960 is discussed. (author)
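
    The enrichment-factor logic described in the abstract (Rb as the conservative endogenous element) can be sketched as follows; all concentration values here are placeholders, not the study's data:

    ```python
    def enrichment_factor(pb_sample, rb_sample, pb_background, rb_background):
        # EF of Pb normalized to Rb, relative to the natural background.
        return (pb_sample / rb_sample) / (pb_background / rb_background)

    def anthropogenic_percent(ef):
        # Share of the element not explained by the natural background.
        return (1.0 - 1.0 / ef) * 100.0

    ef = enrichment_factor(120.0, 30.0, 20.0, 100.0)    # placeholder concentrations
    print(ef, anthropogenic_percent(ef))                # EF = 20 -> 95% anthropogenic
    ```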

  6. Control of size and aspect ratio in hydroquinone-based synthesis of gold nanorods

    Energy Technology Data Exchange (ETDEWEB)

    Morasso, Carlo, E-mail: cmorasso@dongnocchi.it; Picciolini, Silvia; Schiumarini, Domitilla [Fondazione Don Carlo Gnocchi ONLUS, Laboratory of Nanomedicine and Clinical Biophotonics (LABION) (Italy); Mehn, Dora; Ojea-Jiménez, Isaac [European Commission Joint Research Centre, Institute for Health and Consumer Protection (IHCP) (Italy); Zanchetta, Giuliano [Universitá degli Studi di Milano, Dipartimento di Biotecnologie Mediche e Medicina Traslazionale (Italy); Vanna, Renzo; Bedoni, Marzia [Fondazione Don Carlo Gnocchi ONLUS, Laboratory of Nanomedicine and Clinical Biophotonics (LABION) (Italy); Prosperi, Davide [Università degli Studi di Milano Bicocca, NanoBioLab, Dipartimento di Biotecnologie e Bioscienze (Italy); Gramatica, Furio [Fondazione Don Carlo Gnocchi ONLUS, Laboratory of Nanomedicine and Clinical Biophotonics (LABION) (Italy)

    2015-08-15

    In this article, we describe how the size and aspect ratio of gold nanorods obtained with a highly efficient protocol based on hydroquinone as a reducing agent can be tuned by varying the amounts of CTAB and silver ions present in the seed-growth solution. Our approach not only allows us to prepare nanorods with a four-fold increase in Au3+ reduction yield compared with the commonly used protocol based on ascorbic acid, but also allows a remarkable 50-60% reduction in the amount of CTAB needed. In fact, according to our findings, the concentration of CTAB in the seed-growth solution does not linearly influence the final aspect ratio of the obtained nanorods, and an optimal concentration range between 30 and 50 mM has been identified as the one able to generate particles with more elongated shapes. For the optimized protocol, the effect of the concentration of Ag+ ions in the seed-growth solution and the stability of the obtained particles have also been investigated.

  7. Technical note: Alternatives to reduce adipose tissue sampling bias.

    Science.gov (United States)

    Cruz, G D; Wang, Y; Fadel, J G

    2014-10-01

    Understanding the mechanisms by which nutritional and pharmaceutical factors can manipulate adipose tissue growth and development in production animals has direct and indirect effects on the profitability of an enterprise. Adipocyte cellularity (number and size) is a key biological response that is commonly measured in animal science research. The variability and sampling of adipocyte cellularity within a muscle have been addressed in previous studies, but no attempt to critically investigate these issues has been made in the literature. The present study evaluated 2 sampling techniques (random and systematic) in an attempt to minimize sampling bias, and determined the minimum number of samples, from 1 to 15, needed to represent the overall adipose tissue in the muscle. Both sampling procedures were applied to adipose tissue samples dissected from 30 longissimus muscles from cattle finished either on grass or grain. Briefly, adipose tissue samples were fixed with osmium tetroxide, and the size and number of adipocytes were determined with a Coulter Counter. These results were then fitted with a finite mixture model to obtain distribution parameters for each sample. To evaluate the benefits of increasing the number of samples and the advantage of the new sampling technique, the concept of acceptance ratio was used; simply stated, the higher the acceptance ratio, the better the representation of the overall population. As expected, a great improvement in the estimation of the overall adipocyte cellularity parameters was observed with both sampling techniques as sample size increased from 1 to 15, with both techniques' acceptance ratios increasing from approximately 3% to 25%. When comparing sampling techniques, the systematic procedure slightly improved parameter estimation. The results suggest that more detailed research using other sampling techniques may provide better estimates for minimum sampling.

  8. Measurement of regional cerebral blood flow using one-point arterial blood sampling and microsphere model with 123I-IMP. Correction of one-point arterial sampling count by whole brain count ratio

    International Nuclear Information System (INIS)

    Makino, Kenichi; Masuda, Yasuhiko; Gotoh, Satoshi

    1998-01-01

    The experimental subjects were 189 patients with cerebrovascular disorders. 123I-IMP, 222 MBq, was administered by intravenous infusion. Continuous arterial blood sampling was carried out for 5 minutes, and arterial blood was also sampled once at 5 minutes after 123I-IMP administration. The whole-blood count of the one-point arterial sample was then compared with the octanol-extracted count of the continuous arterial sampling; a positive correlation was found between the two values. The ratio of the continuous-sampling octanol-extracted count (OC) to the one-point-sampling whole-blood count (TC5) was compared with the whole-brain count ratio (the 5:29 ratio, Cn) obtained from 1-minute planar SPECT images centered on 5 and 29 minutes after 123I-IMP administration. A correlation was found between the two values, giving the relationship OC/TC5 = 0.390969 × Cn − 0.08924. Based on this correlation equation, we calculated the theoretical continuous arterial sampling octanol-extracted count (COC): COC = TC5 × (0.390969 × Cn − 0.08924). There was good correlation between the value calculated with this equation and the actually measured value; the correlation coefficient improved from r = 0.87 before correction to r = 0.94 when the 5:29 ratio was used for correction. For 23 of these 189 cases, additional one-point arterial samples were taken at 6, 7, 8, 9 and 10 minutes after the administration of 123I-IMP. The correlation coefficient for these other sampling points was also improved when the correction method using the 5:29 ratio was applied. It was concluded that it is possible to obtain highly accurate input functions, i.e., calculated continuous arterial sampling octanol-extracted counts, from one-point arterial sampling whole-blood counts by applying the 5:29-ratio correction. (K.H.)
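
    The correction quoted in the abstract reduces to two lines of arithmetic; the sketch below implements the published equations directly (the input values are hypothetical):

    ```python
    def corrected_input_count(tc5, cn):
        # COC = TC5 * (0.390969 * Cn - 0.08924), as derived in the abstract.
        return tc5 * (0.390969 * cn - 0.08924)

    # Hypothetical whole-blood count of 1000 at 5 min and a 5:29 ratio of 1.5:
    print(corrected_input_count(1000.0, 1.5))   # -> 497.2
    ```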

  9. How Sample Size Affects a Sampling Distribution

    Science.gov (United States)

    Mulekar, Madhuri S.; Siegel, Murray H.

    2009-01-01

    If students are to understand inferential statistics successfully, they must have a profound understanding of the nature of the sampling distribution. Specifically, they must comprehend the determination of the expected value and standard error of a sampling distribution as well as the meaning of the central limit theorem. Many students in a high…
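
    The core fact the article builds on, that the standard error of the sample mean is σ/√n, is easy to demonstrate by simulation; a small illustrative sketch (not from the article):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    population = rng.exponential(scale=2.0, size=100_000)   # a skewed population

    for n in (5, 30, 100):
        # 10,000 samples of size n, each reduced to its mean
        means = population[rng.integers(0, population.size, (10_000, n))].mean(axis=1)
        print(f"n={n:3d}: sd of sample means {means.std():.3f} "
              f"vs sigma/sqrt(n) {population.std() / np.sqrt(n):.3f}")
    ```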

  10. Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions

    NARCIS (Netherlands)

    Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.

    2013-01-01

    Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of

  11. PIXE–PIGE analysis of size-segregated aerosol samples from remote areas

    Energy Technology Data Exchange (ETDEWEB)

    Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)

    2014-01-01

    The chemical characterization of size-segregated samples is helpful for studying aerosol effects on both human health and the environment. Sampling with multi-stage cascade impactors (e.g., the Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE-PIGE analysis of aerosol samples. PIGE is routinely used alongside PIXE to correct the underestimation by PIXE of the concentrations of the lightest detectable elements, like Na or Al, caused by X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to derive appropriate attenuation correction factors for SDI samples: relevant attenuation effects were observed even for stages collecting smaller particles, and the consequent implications for the retrieved aerosol modal structure are discussed.

  12. The one-sample PARAFAC approach reveals molecular size distributions of fluorescent components in dissolved organic matter

    DEFF Research Database (Denmark)

    Wünsch, Urban; Murphy, Kathleen R.; Stedmon, Colin

    2017-01-01

    Molecular size plays an important role in dissolved organic matter (DOM) biogeochemistry, but its relationship with the fluorescent fraction of DOM (FDOM) remains poorly resolved. Here high-performance size exclusion chromatography (HPSEC) was coupled to fluorescence emission-excitation (EEM...... but not their spectral properties. Thus, in contrast to absorption measurements, bulk fluorescence is unlikely to reliably indicate the average molecular size of DOM. The one-sample approach enables robust and independent cross-site comparisons without large-scale sampling efforts and introduces new analytical...... opportunities for elucidating the origins and biogeochemical properties of FDOM...

  13. 14CO2 analysis of soil gas: Evaluation of sample size limits and sampling devices

    Science.gov (United States)

    Wotte, Anja; Wischhöfer, Philipp; Wacker, Lukas; Rethemeyer, Janet

    2017-12-01

    Radiocarbon (14C) analysis of CO2 respired from soils or sediments is a valuable tool to identify different carbon sources. The collection and processing of the CO2, however, is challenging and prone to contamination. We thus continuously improve our handling procedures and present a refined method for the collection of even small amounts of CO2 in molecular sieve cartridges (MSCs) for accelerator mass spectrometry 14C analysis. Using a modified vacuum rig and an improved desorption procedure, we were able to increase the CO2 recovery from the MSC (95%) as well as the sample throughput compared to our previous study. By processing series of different sample sizes, we show that our MSCs can be used for CO2 samples as small as 50 μg C. The contamination by exogenous carbon determined in these laboratory tests was less than 2.0 μg C from fossil and less than 3.0 μg C from modern sources. Additionally, we tested two sampling devices for the collection of CO2 released from soils or sediments, including a respiration chamber and a depth sampler, which are connected to the MSC. We obtained a very promising, low process blank for the entire CO2 sampling and purification procedure of ∼0.004 F14C (equal to 44,000 yrs BP) and ∼0.003 F14C (equal to 47,000 yrs BP). In contrast to previous studies, we observed no isotopic fractionation towards lighter δ13C values during passive sampling with the depth samplers.
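
    The quoted blank values can be converted to the stated radiocarbon ages with the standard age relation based on the Libby mean life (8033 yr); a quick check:

    ```python
    import math

    def c14_age(f14c):
        # Conventional radiocarbon age from F14C (Libby mean life 8033 yr).
        return -8033.0 * math.log(f14c)

    print(round(c14_age(0.004)))   # ~44,400 yr BP, matching the quoted ~44,000
    print(round(c14_age(0.003)))   # ~46,700 yr BP, matching the quoted ~47,000
    ```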

  14. Applying a low energy HPGe detector gamma ray spectrometric technique for the evaluation of Pu/Am ratio in biological samples.

    Science.gov (United States)

    Singh, I S; Mishra, Lokpati; Yadav, J R; Nadar, M Y; Rao, D D; Pradeepkumar, K S

    2015-10-01

    The estimation of the Pu/(241)Am ratio in biological samples is an important input for the assessment of the internal dose received by workers. Radiochemical separation of Pu isotopes and (241)Am in a sample followed by alpha spectrometry is a widely used technique for determining the Pu/(241)Am ratio. However, this method is time-consuming, and a quick estimate is often required. In this work, the Pu/(241)Am ratio in biological samples was estimated with HPGe detector based measurements using the gamma/X-rays emitted by these radionuclides. These results were compared with those obtained from alpha spectrometry of the samples after radiochemical analysis and found to be in good agreement. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Radionuclide ratios in wet and dry deposition samples from June 1976 through December 1977

    International Nuclear Information System (INIS)

    Gavini, M.B.

    1978-01-01

    238Pu, 239Pu and 137Cs in rain and dry fallout and 90Sr in rain samples were measured at Woods Hole, Massachusetts, from June 1976 through December 1977. The dry fallout was estimated to be about 7.8% of the total deposition of 239Pu and 137Cs. 239Pu/137Cs ratios, almost constant at about 0.011 in rain or dry fallout, February through December 1977, suggested that fractionation between the refractory and volatile radionuclides is insignificant in stratospheric fallout. This supports the idea of regional homogeneity of radionuclide ratios in fallout. (Auth.)

  16. The attention-weighted sample-size model of visual short-term memory: Attention capture predicts resource allocation and memory load.

    Science.gov (United States)

    Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren

    2016-09-01

    We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of Σd′², the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
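
    The invariance prediction can be illustrated numerically: if a fixed pool of noisy samples is divided among the display items, each item's squared sensitivity scales as 1/m and the sum of squared sensitivities stays constant. A small sketch (our illustration with arbitrary constants, not the authors' model code):

    ```python
    S, k = 120.0, 0.1      # hypothetical total sample pool and scaling constant

    for m in (1, 2, 4):    # display set sizes
        d_prime = (k * S / m) ** 0.5            # per-item sensitivity
        print(f"set size {m}: d' = {d_prime:.3f}, sum of d'^2 = {m * d_prime ** 2:.1f}")
    ```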

  17. The influence of the negative-positive ratio and screening database size on the performance of machine learning-based virtual screening.

    Science.gov (United States)

    Kurczab, Rafał; Bojarski, Andrzej J

    2017-01-01

    The machine learning-based virtual screening of molecular databases is a commonly used approach to identify hits. However, many aspects associated with training predictive models can influence the final performance and, consequently, the number of hits found. Thus, we performed a systematic study of the simultaneous influence of the proportion of negatives to positives in the testing set, the size of screening databases and the type of molecular representations on the effectiveness of classification. The results obtained for eight protein targets, five machine learning algorithms (SMO, Naïve Bayes, Ibk, J48 and Random Forest), two types of molecular fingerprints (MACCS and CDK FP) and eight screening databases with different numbers of molecules confirmed our previous findings that increases in the ratio of negative to positive training instances greatly influenced most of the investigated parameters of the ML methods in simulated virtual screening experiments. However, the performance of screening was shown to also be highly dependent on the molecular library dimension. Generally, with the increasing size of the screened database, the optimal training ratio also increased, and this ratio can be rationalized using the proposed cost-effectiveness threshold approach. To increase the performance of machine learning-based virtual screening, the training set should be constructed in a way that considers the size of the screening database.
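
    A generic way to probe the effect of the negative:positive training ratio, in the spirit of (but much simpler than) the study's protocol, is sketched below with synthetic features in place of molecular fingerprints; the classifier, metric and set sizes are all illustrative choices:

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import matthews_corrcoef

    rng = np.random.default_rng(0)

    def make_set(n_pos, n_neg, dim=20):
        # Actives drawn from a shifted distribution; features stand in for fingerprints.
        X = np.vstack([rng.normal(0.5, 1.0, (n_pos, dim)),
                       rng.normal(0.0, 1.0, (n_neg, dim))])
        y = np.array([1] * n_pos + [0] * n_neg)
        return X, y

    X_test, y_test = make_set(100, 10_000)        # large, imbalanced "screening" set
    for ratio in (1, 5, 10, 50):                  # negatives per positive in training
        X_tr, y_tr = make_set(200, 200 * ratio)
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
        mcc = matthews_corrcoef(y_test, clf.predict(X_test))
        print(f"neg:pos = {ratio:2d}:1  MCC = {mcc:.3f}")
    ```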

  18. Statistical characterization of a large geochemical database and effect of sample size

    Science.gov (United States)

    Zhang, C.; Manheim, F.T.; Hinde, J.; Grossman, J.N.

    2005-01-01

    smaller numbers of data points showed that few elements passed standard statistical tests for normality or log-normality until sample size decreased to a few hundred data points. Large sample size enhances the power of statistical tests, and leads to rejection of most statistical hypotheses for real data sets. For large sample sizes (e.g., n > 1000), graphical methods such as histogram, stem-and-leaf, and probability plots are recommended for rough judgement of probability distribution if needed. © 2005 Elsevier Ltd. All rights reserved.

  19. A note on power and sample size calculations for the Kruskal-Wallis test for ordered categorical data.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui

    2012-01-01

    Although the Kruskal-Wallis test has been widely used to analyze ordered categorical data, power and sample size methods for this test have been investigated to a much lesser extent when the underlying multinomial distributions are unknown. This article generalizes the power and sample size procedures proposed by Fan et al. (2011) for continuous data to ordered categorical data, when estimates from a pilot study are used in place of knowledge of the true underlying distribution. Simulations show that the proposed power and sample size formulas perform well. A myelin oligodendrocyte glycoprotein (MOG) induced experimental autoimmune encephalomyelitis (EAE) mouse study is used to demonstrate the application of the methods.

  20. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    Directory of Open Access Journals (Sweden)

    Mark Heckmann

    2017-01-01

    The repertory grid is a psychological data collection technique used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so-called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to (a) discover all attribute categories relevant to the field and (b) yield a predefined minimal number of attributes per category. For most applied researchers who collect multiple repertory grid data, programming a numerical simulation to answer these questions is not feasible. The gridsampler software facilitates determining the required sample size by providing a GUI for conducting the necessary numerical simulations. Researchers can supply a set of parameters suitable for the specific research situation, determine the required sample size, and easily explore the effects of changes in the parameter set.

  1. Anomalies in the detection of change: When changes in sample size are mistaken for changes in proportions.

    Science.gov (United States)

    Fiedler, Klaus; Kareev, Yaakov; Avrahami, Judith; Beier, Susanne; Kutzner, Florian; Hütter, Mandy

    2016-01-01

    Detecting changes, in performance, sales, markets, risks, social relations, or public opinions, constitutes an important adaptive function. In a sequential paradigm devised to investigate detection of change, every trial provides a sample of binary outcomes (e.g., correct vs. incorrect student responses). Participants have to decide whether the proportion of a focal feature (e.g., correct responses) in the population from which the sample is drawn has decreased, remained constant, or increased. Strong and persistent anomalies in change detection arise when changes in proportional quantities vary orthogonally to changes in absolute sample size. Proportional increases are readily detected and nonchanges are erroneously perceived as increases when absolute sample size increases. Conversely, decreasing sample size facilitates the correct detection of proportional decreases and the erroneous perception of nonchanges as decreases. These anomalies are however confined to experienced samples of elementary raw events from which proportions have to be inferred inductively. They disappear when sample proportions are described as percentages in a normalized probability format. To explain these challenging findings, it is essential to understand the inductive-learning constraints imposed on decisions from experience.

  2. On sample size of the kruskal-wallis test with application to a mouse peritoneal cavity study.

    Science.gov (United States)

    Fan, Chunpeng; Zhang, Donghui; Zhang, Cun-Hui

    2011-03-01

    As the nonparametric generalization of the one-way analysis of variance model, the Kruskal-Wallis test applies when the goal is to test the difference between multiple samples and the underlying population distributions are nonnormal or unknown. Although the Kruskal-Wallis test has been widely used for data analysis, power and sample size methods for this test have been investigated to a much lesser extent. This article proposes new power and sample size calculation methods for the Kruskal-Wallis test based on a pilot study, in either a completely nonparametric model or a semiparametric location model. No assumption is made on the shape of the underlying population distributions. Simulation results show that, in terms of sample size calculation for the Kruskal-Wallis test, the proposed methods are more reliable and preferable to some more traditional methods. A mouse peritoneal cavity study is used to demonstrate the application of the methods. © 2010, The International Biometric Society.
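
    The two Kruskal-Wallis papers above derive analytic power formulas from pilot data; when no formula is at hand, power can also be approximated by brute-force simulation under an assumed location-shift model, as in this illustrative sketch (our code, not the papers' method):

    ```python
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(7)

    def kw_power(n_per_group, shifts, n_sim=1000, alpha=0.05):
        # Fraction of simulated data sets in which the test rejects at level alpha.
        hits = 0
        for _ in range(n_sim):
            groups = [rng.normal(loc=s, size=n_per_group) for s in shifts]
            if kruskal(*groups).pvalue < alpha:
                hits += 1
        return hits / n_sim

    for n in (10, 20, 40):
        print(f"n per group = {n}: power ~ {kw_power(n, shifts=(0.0, 0.5, 1.0)):.2f}")
    ```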

  3. Analysis of tin-ore samples by the ratio of Rayleigh to Compton backscattering

    International Nuclear Information System (INIS)

    Ao Qi; Cao Liguo; Ding Yimin

    1990-01-01

    The relationship between the ratio of gamma-ray Rayleigh to Compton backscattering intensities (R/C) and the weight fraction of heavy elements in light matrices was investigated. An improved (R/C)eff analytical technique for tin-ore samples is described. The technique can be regarded as a substitute for the XRF method, in which the self-absorption process worsens the analytical accuracy for heavy elements.

  4. Pre-analytical sample quality: metabolite ratios as an intrinsic marker for prolonged room temperature exposure of serum samples.

    Directory of Open Access Journals (Sweden)

    Gabriele Anton

    Advances in the "omics" field bring about the need for a high number of good-quality samples. Many omics studies take advantage of biobanked samples to meet this need. Most laboratory errors occur in the pre-analytical phase. Therefore, evidence-based standard operating procedures for the pre-analytical phase, as well as markers to distinguish between 'good' and 'bad' quality samples taking into account the desired downstream analysis, are urgently needed. We studied concentration changes of metabolites in serum samples due to pre-storage handling conditions as well as due to repeated freeze-thaw cycles. We collected fasting serum samples and subjected aliquots to up to four freeze-thaw cycles and to pre-storage handling delays of 12, 24 and 36 hours at room temperature (RT) and on wet and dry ice. For each treated aliquot, we quantified 127 metabolites through a targeted metabolomics approach. We found a clear signature of degradation in samples kept at RT. Storage on wet ice led to less pronounced concentration changes. 24 metabolites showed significant concentration changes at RT; in 22 of these, changes were already visible after only 12 hours of storage delay. Especially pronounced were increases in lysophosphatidylcholines and decreases in phosphatidylcholines. We showed that the ratio between the concentrations of these molecule classes could serve as a measure to distinguish between 'good' and 'bad' quality samples in our study. In contrast, we found quite stable metabolite concentrations during up to four freeze-thaw cycles. We concluded that pre-analytical RT handling of serum samples should be strictly avoided and that serum samples should always be handled on wet ice or in cooling devices after centrifugation. Moreover, serum samples should be frozen at or below -80°C as soon as possible after centrifugation.

  5. Inferring Population Size History from Large Samples of Genome-Wide Molecular Data - An Approximate Bayesian Computation Approach.

    Directory of Open Access Journals (Sweden)

    Simon Boitard

    2016-03-01

    Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.

  6. A study on the effect of size and ratio of book value to market value on excessive return

    Directory of Open Access Journals (Sweden)

    Seyed Mohsen Tabatabaei Mozdabadi

    2012-09-01

    The stock market plays an important role in signaling the direction of the economy, and it provides good opportunities for people who wish to purchase small portions of different firms' shares. In this paper, we propose an empirical study to measure the impact of firm size and the ratio of book value to market value (BV/MV) on excessive return. The study gathers the necessary information from actively traded shares on the Tehran Stock Exchange over the period 2010-2011. The proposed model uses linear regression analysis to investigate the relationship between excessive return and the other factors. The study divides the data into seven equal groups and fits the regression model using the ordinary least squares technique. The results indicate a negative relationship between size and excessive return and a positive relationship between the BV/MV ratio and excessive return. Although the results of both tests are positive, we have to be more cautious about what has been reported for the second hypothesis.

  7. Determination of extremely low 236U/238U isotope ratios in environmental samples by sector-field inductively coupled plasma mass spectrometry using high-efficiency sample introduction

    International Nuclear Information System (INIS)

    Boulyga, Sergei F.; Heumann, Klaus G.

    2006-01-01

    A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of 236U at concentrations down to 3 x 10^-14 g g^-1 and extremely low 236U/238U isotope ratios, around 10^-7, in soil samples. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5000 counts fg^-1 uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions UH+/U+ down to a level of 10^-6. An abundance sensitivity of 3 x 10^-7 was observed for 236U/238U isotope ratio measurements at mass resolution 4000. The detection limit for 236U and the lowest detectable 236U/238U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values obtained by alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the 236U/238U isotope ratio is a much more sensitive and accurate marker for environmental contamination by spent uranium than the 235U/238U isotope ratio. The ICP-MS technique allowed for the first time the detection of irradiated uranium in soil samples even at distances of more than 200 km north of the Chernobyl NPP (Mogilev region). The concentration of 236U in the upper 0-10 cm soil layers varied from 2 x 10^-9 g g^-1 within radioactive spots close to the Chernobyl NPP to 3 x 10^-13 g g^-1 at a sampling site located >200 km from Chernobyl

  8. Determination of extremely low (236)U/(238)U isotope ratios in environmental samples by sector-field inductively coupled plasma mass spectrometry using high-efficiency sample introduction.

    Science.gov (United States)

    Boulyga, Sergei F; Heumann, Klaus G

    2006-01-01

    A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of (236)U at concentrations down to 3 x 10(-14) g g(-1) and extremely low (236)U/(238)U isotope ratios, around 10(-7), in soil samples. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5,000 counts fg(-1) uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions UH(+)/U(+) down to a level of 10(-6). An abundance sensitivity of 3 x 10(-7) was observed for (236)U/(238)U isotope ratio measurements at mass resolution 4000. The detection limit for (236)U and the lowest detectable (236)U/(238)U isotope ratio were improved by more than two orders of magnitude compared with the corresponding values obtained by alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the (236)U/(238)U isotope ratio is a much more sensitive and accurate marker for environmental contamination by spent uranium than the (235)U/(238)U isotope ratio. The ICP-MS technique allowed for the first time the detection of irradiated uranium in soil samples even at distances of more than 200 km north of the Chernobyl NPP (Mogilev region). The concentration of (236)U in the upper 0-10 cm soil layers varied from 2 x 10(-9) g g(-1) within radioactive spots close to the Chernobyl NPP to 3 x 10(-13) g g(-1) at a sampling site located >200 km from Chernobyl.

  9. Atmospheric aerosol sampling campaign in Budapest and K-puszta. Part 1. Elemental concentrations and size distributions

    International Nuclear Information System (INIS)

    Dobos, E.; Borbely-Kiss, I.; Kertesz, Zs.; Szabo, Gy.; Salma, I.

    2004-01-01

    Complete text of publication follows. Atmospheric aerosol samples were collected in a sampling campaign from 24 July to 1 August, 2003 in Hungary. The sampling was performed at two sites simultaneously: in Budapest (urban site) and at K-puszta (remote area). Two PIXE International 7-stage cascade impactors were used for aerosol sampling, each with a 24-hour collection time. These impactors separate the aerosol into 7 size ranges. The elemental concentrations of the samples were obtained by proton-induced X-ray emission (PIXE) analysis. Size distributions of S, Si, Ca, W, Zn, Pb and Fe were investigated at K-puszta and in Budapest. Average rates of the elemental concentrations were calculated for each stage (in %) from the obtained distributions (shown in Table 1). The elements can be grouped into two parts on the basis of these data. The majority of the particles containing Fe, Si, Ca and (Ti) are in the 2-8 μm size range (first group). These soil-origin elements were usually found in higher concentrations in Budapest than at K-puszta (Fig. 1). The second group consisted of S, Pb and (W). The majority of these elements was found in the 0.25-1 μm size range, and their concentrations were much higher in Budapest than at K-puszta. W was measured only in samples collected in Budapest. Zn had a uniform distribution in Budapest and does not belong to either of the above-mentioned groups. This work was supported by the National Research and Development Program (NRDP 3/005/2001). (author)

  10. Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies

    Science.gov (United States)

    McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.

    2010-01-01

    This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. Graphs compare the volume distribution with the number distribution for naturally occurring dust, jet-mill-ground dust, and ball-mill-ground dust.

  11. Size Matters: Assessing Optimum Soil Sample Size for Fungal and Bacterial Community Structure Analyses Using High Throughput Sequencing of rRNA Gene Amplicons

    Directory of Open Access Journals (Sweden)

    Christopher Ryan Penton

    2016-06-01

    Full Text Available We examined the effect of different soil sample sizes obtained from an agricultural field, under a single cropping system uniform in soil properties and aboveground crop responses, on bacterial and fungal community structure and microbial diversity indices. DNA extracted from soil sample sizes of 0.25, 1, 5 and 10 g using MoBIO kits and from 10 and 100 g sizes using a bead-beating method (SARDI) were used as templates for high-throughput sequencing of 16S and 28S rRNA gene amplicons for bacteria and fungi, respectively, on the Illumina MiSeq and Roche 454 platforms. Sample size significantly affected overall bacterial and fungal community structure, replicate dispersion and the number of operational taxonomic units (OTUs) retrieved. Richness, evenness and diversity were also significantly affected. The largest diversity estimates were always associated with the 10 g MoBIO extractions, with a corresponding reduction in replicate dispersion. For the fungal data, smaller MoBIO extractions identified more unclassified Eukaryota incertae sedis and unclassified Glomeromycota, while the SARDI method retrieved more abundant OTUs containing unclassified Pleosporales and the fungal genera Alternaria and Cercophora. Overall, these findings indicate that a 10 g soil DNA extraction is most suitable for both soil bacterial and fungal communities, retrieving optimal diversity while still capturing rarer taxa and decreasing replicate variation.

  12. Evaluating sampling strategy for DNA barcoding study of coastal and inland halo-tolerant Poaceae and Chenopodiaceae: A case study for increased sample size.

    Directory of Open Access Journals (Sweden)

    Peng-Cheng Yao

    Full Text Available Environmental conditions in coastal salt marsh habitats have led to the development of specialist genetic adaptations. We evaluated six DNA barcode loci of the 53 species of Poaceae and 15 species of Chenopodiaceae from China's coastal salt marsh area and inland area. Our results indicate that the optimum DNA barcode was ITS for coastal salt-tolerant Poaceae and matK for the Chenopodiaceae. Sampling strategies for ten common species of Poaceae and Chenopodiaceae were analyzed according to the optimum barcode. We found that by increasing the number of samples collected from the coastal salt marsh area on the basis of inland samples, the number of haplotypes of Arundinella hirta, Digitaria ciliaris, Eleusine indica, Imperata cylindrica, Setaria viridis, and Chenopodium glaucum increased, with a principal coordinate plot clearly showing increased distribution points. The results of a Mann-Whitney test showed that for Digitaria ciliaris, Eleusine indica, Imperata cylindrica, and Setaria viridis, the distribution of intraspecific genetic distances was significantly different when samples from the coastal salt marsh area were included (P < 0.01). These results suggest that increasing the sample size in specialist habitats can improve measurements of intraspecific genetic diversity, and will have a positive effect on the application of DNA barcodes in widely distributed species. The results of random sampling showed that when the sample size reached 11 for Chloris virgata, Chenopodium glaucum, and Dysphania ambrosioides, 13 for Setaria viridis, and 15 for Eleusine indica, Imperata cylindrica and Chenopodium album, the average intraspecific distance tended to reach stability. These results indicate that the sample size for DNA barcoding of globally distributed species should be increased to 11-15.

  13. Adaptive clinical trial designs with pre-specified rules for modifying the sample size: understanding efficient types of adaptation.

    Science.gov (United States)

    Levin, Gregory P; Emerson, Sarah C; Emerson, Scott S

    2013-04-15

    Adaptive clinical trial design has been proposed as a promising new approach that may improve the drug discovery process. Proponents of adaptive sample size re-estimation promote its ability to avoid 'up-front' commitment of resources, better address the complicated decisions faced by data monitoring committees, and minimize accrual to studies having delayed ascertainment of outcomes. We investigate aspects of adaptation rules, such as timing of the adaptation analysis and magnitude of sample size adjustment, that lead to greater or lesser statistical efficiency. Owing in part to the recent Food and Drug Administration guidance that promotes the use of pre-specified sampling plans, we evaluate alternative approaches in the context of well-defined, pre-specified adaptation. We quantify the relative costs and benefits of fixed sample, group sequential, and pre-specified adaptive designs with respect to standard operating characteristics such as type I error, maximal sample size, power, and expected sample size under a range of alternatives. Our results build on others' prior research by demonstrating in realistic settings that simple and easily implemented pre-specified adaptive designs provide only very small efficiency gains over group sequential designs with the same number of analyses. In addition, we describe optimal rules for modifying the sample size, providing efficient adaptation boundaries on a variety of scales for the interim test statistic for adaptation analyses occurring at several different stages of the trial. We thus provide insight into what are good and bad choices of adaptive sampling plans when the added flexibility of adaptive designs is desired. Copyright © 2012 John Wiley & Sons, Ltd.

  14. Determining Sample Size with a Given Range of Mean Effects in One-Way Heteroscedastic Analysis of Variance

    Science.gov (United States)

    Shieh, Gwowen; Jan, Show-Li

    2013-01-01

    The authors examined 2 approaches for determining the required sample size of Welch's test for detecting equality of means when the greatest difference between any 2 group means is given. It is shown that the actual power obtained with the sample size of the suggested approach is consistently at least as great as the nominal power. However, the…
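    For reference, the two-group special case behind such calculations is Welch's unpooled statistic with Welch-Satterthwaite degrees of freedom. A minimal sketch of that textbook test (not the authors' k-group sample-size algorithm):

    ```python
    import numpy as np
    from scipy import stats

    def welch_t(x, y):
        """Welch's two-sample t statistic, Welch-Satterthwaite df, p-value."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        v1, v2 = x.var(ddof=1) / len(x), y.var(ddof=1) / len(y)
        t = (x.mean() - y.mean()) / np.sqrt(v1 + v2)
        df = (v1 + v2) ** 2 / (v1**2 / (len(x) - 1) + v2**2 / (len(y) - 1))
        return t, df, 2 * stats.t.sf(abs(t), df)

    print(welch_t([5.1, 4.9, 5.6, 5.0], [4.2, 4.0, 4.8, 3.9, 4.1]))
    ```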

  15. Modified energy-deposition model, for the computation of the stopping-power ratio for small cavity sizes

    International Nuclear Information System (INIS)

    Janssens, A.C.A.

    1981-01-01

    This paper presents a modification to the Spencer-Attix theory, which allows application of the theory to larger cavity sizes. The modified theory is in better agreement with the actual process of energy deposition by delta rays. In the first part of the paper it is recalled how the Spencer-Attix theory can be derived from basic principles, which allows a physical interpretation of the theory in terms of a function describing the space and direction average of the deposited energy. A realistic model for the computation of this function is described and the resulting expression for the stopping-power ratio is calculated. For the comparison between the Spencer-Attix theory and this modified expression, a correction factor to the ''Bragg-Gray inhomogeneous term'' has been defined. This factor has been computed as a function of cavity size for different source energies and mean excitation energies; thus, general properties of this factor have been elucidated. The computations have been extended to include the density effect. It has been shown that the computation of the inhomogeneous term can be performed for any expression describing the energy loss per unit distance of the electrons as a function of their energy. Thus an expression has been calculated which is in agreement with a quadratic range-energy relationship. In conclusion, the concrete procedure for computing the stopping-power ratio is reviewed.

  16. Activity ratios of {sup 137}Cs, {sup 90}Sr and {sup 239+240}Pu in environmental samples

    Energy Technology Data Exchange (ETDEWEB)

    Bossew, P. [European Commission - DG Joint Research Centre, Institute for Environment and Sustainability (IES), I-21020 Ispra (Italy)], E-mail: peter.bossew@jrc.it; Lettner, H. [Institute of Physics and Biophysics, University of Salzburg, Hellbrunner Strasse 34, A-5020 Salzburg (Austria)], E-mail: herbert.lettner@sbg.ac.at; Hubmer, A.; Erlinger, C.; Gastberger, M. [Institute of Physics and Biophysics, University of Salzburg, Hellbrunner Strasse 34, A-5020 Salzburg (Austria)

    2007-09-15

    Both global and Chernobyl fallout have resulted in environmental contamination with radionuclides such as {sup 137}Cs, {sup 90}Sr and {sup 239+240}Pu. In environmental samples, {sup 137}Cs and {sup 239+240}Pu can be divided into the contributions of either source, provided the isotopes {sup 134}Cs and {sup 238}Pu are also measurable, based on the known isotopic ratios in global and Chernobyl fallout. No analogous method is available for {sup 90}Sr. The activity ratios of Sr to Cs and Pu, respectively, are known for the original fallout mainly from air filter measurements; but due to the high mobility of Sr in the environment, compared to Cs and Pu, these ratios generally do not hold for the inventory many years after deposition. In this paper we suggest a method to identify the mean contributions of global and Chernobyl fallout to total Sr in soil, sediment and cryoconite samples from Alpine and pre-Alpine regions of Austria, based on a statistical evaluation of Sr/Cs/Pu radionuclide activity ratios. Results are given for Sr:Cs, Sr:Pu and Cs:Pu ratios. Comparison with fallout data shows a strong depletion of Sr relative to Cs and Pu.
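    The division into source contributions mentioned above is two-component mixing: with known reference ratios for the two fallout sources, a measured ratio fixes the mixing fraction. A sketch using an assumed (238)Pu/(239+240)Pu activity ratio; the reference values below are placeholders, not the paper's, and site-specific literature values must be used.

    ```python
    R_GLOBAL = 0.03     # assumed 238Pu/239+240Pu ratio, global fallout
    R_CHERNOBYL = 0.47  # assumed ratio for Chernobyl-derived fallout

    def chernobyl_fraction(r_sample, r_global=R_GLOBAL, r_chern=R_CHERNOBYL):
        """Fraction of 239+240Pu activity attributable to Chernobyl fallout."""
        f = (r_sample - r_global) / (r_chern - r_global)
        return min(max(f, 0.0), 1.0)  # clip to the physically meaningful range

    print(chernobyl_fraction(0.12))  # ~0.20, i.e. about 20 % Chernobyl-derived
    ```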

  17. The Overall Odds Ratio as an Intuitive Effect Size Index for Multiple Logistic Regression: Examination of Further Refinements

    Science.gov (United States)

    Le, Huy; Marcus, Justin

    2012-01-01

    This study used Monte Carlo simulation to examine the properties of the overall odds ratio (OOR), which was recently introduced as an index for overall effect size in multiple logistic regression. It was found that the OOR was relatively independent of study base rate and performed better than most commonly used R-square analogs in indexing model…

  18. In Situ Sampling of Relative Dust Devil Particle Loads and Their Vertical Grain Size Distributions.

    Science.gov (United States)

    Raack, Jan; Reiss, Dennis; Balme, Matthew R; Taj-Eddine, Kamal; Ori, Gian Gabriele

    2017-04-19

    During a field campaign in the Sahara Desert in southern Morocco in spring 2012, we sampled the vertical grain size distribution of two active dust devils that exhibited different dimensions and intensities. With these in situ samples of grains in the vortices, it was possible to derive detailed vertical grain size distributions and measurements of the lifted relative particle load. Measurements of the two dust devils show that the majority of all lifted particles were lifted only within the first meter (∼46.5% and ∼61% of all particles; ∼76.5 wt % and ∼89 wt % of the relative particle load). Furthermore, ∼69% and ∼82% of all lifted sand grains occurred in the first meter of the dust devils, indicating the occurrence of "sand skirts." Both sampled dust devils were relatively small (∼15 m and ∼4-5 m in diameter) compared to dust devils in surrounding regions; nevertheless, measurements show that ∼58.5% to 73.5% of all lifted particles were small enough to go into suspension (by grain size classification). This relatively high share represents only ∼0.05 to 0.15 wt % of the lifted particle load. Larger dust devils probably entrain larger amounts of fine-grained material into the atmosphere, which can have an influence on the climate. Furthermore, our results indicate that the composition of the surface on which the dust devils evolved also had an influence on the particle load composition of the dust devil vortices. The internal particle load structure of the two sampled dust devils was comparable with respect to their vertical grain size distribution and relative particle load, although the dust devils differed in dimensions and intensity. A general trend of decreasing grain sizes with height was also detected. Key Words: Mars-Dust devils-Planetary science-Desert soils-Atmosphere-Grain sizes. Astrobiology 17, xxx-xxx.

  19. Sensitivity and specificity of normality tests and consequences on reference interval accuracy at small sample size: a computer-simulation study.

    Science.gov (United States)

    Le Boedec, Kevin

    2016-12-01

    According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30. Applying nonparametric or robust methods (after Box-Cox transformation) to all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
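    The specificity finding can be reproduced in outline with a small simulation: draw repeated samples from a non-Gaussian population and record how often a normality test correctly rejects. A sketch under assumed population parameters and trial counts (the study's own populations differ):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def shapiro_specificity(n, trials=1000, alpha=0.05):
        """Share of lognormal samples of size n that Shapiro-Wilk rejects,
        i.e. the true-negative rate when 'positive' means 'Gaussian'."""
        reject = 0
        for _ in range(trials):
            w, p = stats.shapiro(rng.lognormal(0.0, 0.5, size=n))
            reject += p < alpha
        return reject / trials

    for n in (30, 60):
        print(n, shapiro_specificity(n))  # specificity drops at small n
    ```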

  20. Bolton's tooth size discrepancy in Malaysian orthodontic patients: Are occlusal characteristics such as overjet, overbite, midline, and crowding related to tooth size discrepancy in specific malocclusions and ethnicities?

    Directory of Open Access Journals (Sweden)

    Priti Subhash Mulimani

    2018-01-01

    Full Text Available Introduction: Tooth size, occlusal traits, and ethnicity are closely interrelated, and their impact on desirable orthodontic treatment outcome cannot be underestimated. This study was undertaken to assess the occlusal characteristics and ethnic variations in occlusion of Malaysian orthodontic patients and evaluate their correlation with Bolton's tooth size discrepancy. Materials and Methods: On 112 pretreatment study models of orthodontic patients, molar relationship, overjet, overbite, spacing, crowding, midline shift, and Bolton's ratios were assessed. ANOVA, one-sample t-test, Chi-squared test, and Spearman's rho correlation coefficient were used for statistical analysis. Results: A significant difference between the anterior ratio of our study and Bolton's ideal values was found for the entire study sample and the Chinese ethnic group. Differences between races and malocclusion groups were not statistically significant (P > 0.05). Significant correlations were found as follows: in Angle's Class I malocclusion, between (1) anterior ratio and overbite and (2) overall ratio and maxillary crowding and spacing; in Angle's Class II malocclusion, between (1) anterior ratio and overjet and midline shift and (2) overall ratio and mandibular crowding; in Angle's Class III malocclusion, between (1) anterior ratio and mandibular crowding and both maxillary and mandibular spacing and (2) overall ratio and mandibular crowding. Conclusions: Significant differences between the anterior ratio and Bolton's ideal values for the Malaysian population were found, indicating variations in anterior tooth size as compared to Caucasians. Statistically significant correlations existed between Bolton's ratios and occlusal traits. These findings can be applied clinically in diagnosis and treatment planning by keeping in mind the specific discrepancies that can occur in certain malocclusions and addressing them accordingly.

  1. Evaluating the performance of species richness estimators: sensitivity to sample grain size

    DEFF Research Database (Denmark)

    Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara

    2006-01-01

    …and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874) and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar… Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of using…

  2. Study of Zn/Cu ratio and oligoelements in serum samples for cancer diagnosis

    International Nuclear Information System (INIS)

    Lue-Meru, M. P.; Jimenez, E.; Hernandez, E.; Rojas, A.; Greaves, E.

    2000-01-01

    The aim of this work was to study methods for cancer diagnosis based on trace element determination in serum blood samples. The TXRF technique was selected for the analysis due to its simultaneous and multi-elemental character, the very small amount of sample required, and its high sensitivity. For the study, blood samples were collected from normal individuals (blood donors and students), classified by age and sex, in order to obtain reference normal values for the elements Zn, Cu, Fe, Mn, Se and, additionally, Ca and K. Samples from cancer patients before treatment and under treatment were collected at the Oncological Service (BADAN-Lara) and were also classified by age and sex. The TXRF procedure used was developed in a previous work and involves direct analysis and the use of the Compton peak as internal standard. All the samples were analyzed by the routine clinical test (blood chemistry). Elemental concentrations and clinical data were processed with the statistical package Minitab-Windows in order to establish the respective correlations. Concerning elemental concentrations, significant differences in the Zn/Cu ratio were found between the normal individuals group and the cancer patients group. (author)

  3. Statistical Power and Optimum Sample Allocation Ratio for Treatment and Control Having Unequal Costs Per Unit of Randomization

    Science.gov (United States)

    Liu, Xiaofeng

    2003-01-01

    This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
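    The classical starting point for such problems: with equal variances and a fixed budget, power is maximized when units are allocated in inverse proportion to the square roots of the per-unit costs. A hedged sketch of that textbook result (the article's multilevel formula involves further design parameters):

    ```python
    import math

    def optimal_allocation_ratio(c_treatment, c_control):
        """Optimal n_treatment/n_control under equal variances and a fixed
        budget: the square root of the inverse cost ratio, so the cheaper
        arm receives more units."""
        return math.sqrt(c_control / c_treatment)

    # Treatment units cost 4x control units -> 1 treated per 2 controls
    print(optimal_allocation_ratio(c_treatment=4.0, c_control=1.0))  # 0.5
    ```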

  4. Considerations for Sample Preparation Using Size-Exclusion Chromatography for Home and Synchrotron Sources.

    Science.gov (United States)

    Rambo, Robert P

    2017-01-01

    The success of a SAXS experiment for structural investigations depends on two precise measurements: the sample and the buffer background. Buffer matching between the sample and background can be achieved using dialysis methods, but in biological SAXS of monodisperse systems, sample preparation is routinely performed with size-exclusion chromatography (SEC). SEC is the most reliable method for SAXS sample preparation, as the method not only purifies the sample for SAXS but also almost guarantees ideal buffer matching. Here, I will highlight the use of SEC for SAXS sample preparation and demonstrate, using example proteins, that SEC purification does not always provide ideal samples. Scrutiny of the SEC elution peak using quasi-elastic and multi-angle light scattering techniques can reveal hidden features (heterogeneity) of the sample that should be considered during SAXS data analysis. In some cases, sample heterogeneity can be controlled using a small-molecule additive, and I outline a simple additive screening method for sample preparation.

  5. Discrepancies in sample size calculations and data analyses reported in randomised trials: comparison of publications with protocols

    DEFF Research Database (Denmark)

    Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.

    2008-01-01

    OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between protocols and publications… The method of handling missing data was described in 16 protocols and 49 publications. 39/49 protocols and 42/43 publications reported the statistical test used to analyse primary outcome measures. Unacknowledged discrepancies between protocols and publications were found for sample size calculations (18/34 trials…

  6. A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies

    Directory of Open Access Journals (Sweden)

    Hojin Moon

    2002-12-01

    Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.

  7. Utilization of AHWR critical facility for research and development work on large sample NAA

    International Nuclear Information System (INIS)

    Acharya, R.; Dasari, K.B.; Pujari, P.K.; Swain, K.K.; Reddy, A.V.R.; Verma, S.K.; De, S.K.

    2014-01-01

    The graphite reflector position of the AHWR critical facility (CF) was utilized for the analysis of large (g-kg scale) samples using internal monostandard neutron activation analysis (IM-NAA). The irradiation position was characterized by the cadmium ratio method with an In monitor, giving the total flux and the sub-cadmium-to-epithermal flux ratio (f). Large sample neutron activation analysis (LSNAA) work was carried out for samples of stainless steel, ancient and new clay potteries, and dross. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated. The radioactivity assay was carried out using high-resolution gamma-ray spectrometry. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of Andhra Pradesh, India. Concentrations of Au and Ag were determined in three large, inhomogeneous samples of dross. An X-Z rotary scanning unit has been installed for counting large and inhomogeneous samples. (author)
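    In the usual k0 convention, the flux ratio f follows from the monitor's measured cadmium ratio R_Cd as f = (F_Cd·R_Cd − 1)·Q0(α), with F_Cd ≈ 1 the cadmium transmission factor. A minimal sketch; the Q0 value and the measured cadmium ratio below are illustrative, not the facility's reported numbers.

    ```python
    def flux_ratio_f(r_cd, q0_alpha, f_cd=1.0):
        """Sub-cadmium-to-epithermal flux ratio from a cadmium ratio:
        f = (F_Cd * R_Cd - 1) * Q0(alpha)."""
        return (f_cd * r_cd - 1.0) * q0_alpha

    # Illustrative In monitor: Q0 ~ 16.8, measured cadmium ratio 3.4
    print(flux_ratio_f(r_cd=3.4, q0_alpha=16.8))  # ~40, well-thermalized spot
    ```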

  8. Generalized procedures for determining inspection sample sizes (related to quantitative measurements). Vol. 1: Detailed explanations

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1986-11-01

    Generalized procedures have been developed to determine sample sizes in connection with the planning of inspection activities. These procedures are based on different measurement methods. They are applied mainly to Bulk Handling Facilities and Physical Inventory Verifications. The present report attempts (i) to assign to appropriate statistical testers (viz. testers for gross, partial and small defects) the measurement methods to be used, and (ii) to associate the measurement uncertainties with the sample sizes required for verification. Working papers are also provided to assist in the application of the procedures. This volume contains the detailed explanations concerning the above mentioned procedures

  9. (I Can't Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research.

    Science.gov (United States)

    van Rijnsoever, Frank J

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: "random chance," which is based on probability sampling, "minimal information," which yields at least one new code per sampling step, and "maximum information," which yields the largest number of new codes per sampling step. Next, I use simulations to assess the minimum sample size for each scenario for systematically varying hypothetical populations. I show that theoretical saturation is more dependent on the mean probability of observing codes than on the number of codes in a population. Moreover, the minimal and maximal information scenarios are significantly more efficient than random chance, but yield fewer repetitions per code to validate the findings. I formulate guidelines for purposive sampling and recommend that researchers follow a minimum information scenario.

  10. Analysis of femtogram-sized plutonium samples by thermal ionization mass spectrometry

    International Nuclear Information System (INIS)

    Smith, D.H.; Duckworth, D.C.; Bostick, D.T.; Coleman, R.M.; McPherson, R.L.; McKown, H.S.

    1994-01-01

    The goal of this investigation was to extend the ability to perform isotopic analysis of plutonium to samples as small as possible. Plutonium ionizes thermally with quite good efficiency (first ionization potential 5.7 eV). Sub-nanogram-sized samples can be analyzed on a near-routine basis given the necessary instrumentation. Efforts in this laboratory have been directed at rhenium-carbon systems; solutions of carbon in rhenium provide surfaces with work functions higher than pure rhenium (5.8 vs. ∼5.4 eV). Using a single resin bead as a sample loading medium both concentrates the sample nearly to a point and, due to its interaction with rhenium, produces the desired composite surface. Earlier work in this area showed that a layer of rhenium powder slurried in a solution containing carbon substantially enhanced the precision of isotopic measurements for uranium. Isotopic fractionation was virtually eliminated, and ionization efficiencies 2-5 times better than previously measured were attained for both Pu and U (1.7 and 0.5%, respectively). The other side of this coin should be the ability to analyze smaller samples, which is the subject of this report.

  11. Sample Size and Robustness of Inferences from Logistic Regression in the Presence of Nonlinearity and Multicollinearity

    OpenAIRE

    Bergtold, Jason S.; Yeager, Elizabeth A.; Featherstone, Allen M.

    2011-01-01

    The logistic regression model has been widely used in the social and natural sciences, and results from studies using this model can have significant impact. Thus, confidence in the reliability of inferences drawn from these models is essential. The robustness of such inferences is dependent on sample size. The purpose of this study is to examine the impact of sample size on the mean estimated bias and efficiency of parameter estimation and inference for the logistic regression model. A numbe...

  12. Bias in segmented gamma scans arising from size differences between calibration standards and assay samples

    International Nuclear Information System (INIS)

    Sampson, T.E.

    1991-01-01

    Recent advances in segmented gamma scanning have emphasized software corrections for gamma-ray self-absorption in particulates or lumps of special nuclear material in the sample. Another feature of this software is an attenuation correction factor formalism that explicitly accounts for differences in sample container size and composition between the calibration standards and the individual items being measured. Software without this container-size correction produces biases when the unknowns are not packaged in the same containers as the calibration standards. This new software allows the use of containers of different size and composition for standards and unknowns, an enormous savings considering the expense of the multiple calibration standard sets otherwise needed. This paper presents calculations of the bias resulting from not using this new formalism. These calculations may be used to estimate bias corrections for segmented gamma scanners that do not incorporate these advanced concepts.

  13. Sample Size Estimation for Negative Binomial Regression Comparing Rates of Recurrent Events with Unequal Follow-Up Time.

    Science.gov (United States)

    Tang, Yongqiang

    2015-01-01

    A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up time. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
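    For orientation, a common equal-follow-up approximation in this setting sizes each arm from the variance of the log rate ratio, 1/μ0 + 1/μ1 + 2κ, with μ the expected events per subject and κ the negative binomial dispersion. A hedged sketch in that spirit; it is not the paper's unequal-follow-up bounds, and the inputs are illustrative.

    ```python
    import math
    from scipy.stats import norm

    def nb_sample_size(rate0, rate1, follow_up, dispersion,
                       alpha=0.05, power=0.9):
        """Approximate per-arm size for comparing event rates under negative
        binomial outcomes with common follow-up time and 1:1 allocation."""
        mu0, mu1 = rate0 * follow_up, rate1 * follow_up
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        var = 1.0 / mu0 + 1.0 / mu1 + 2.0 * dispersion
        return math.ceil(z**2 * var / math.log(rate1 / rate0) ** 2)

    # Illustrative: 0.8 vs 0.6 events/year, 2 years follow-up, dispersion 0.4
    print(nb_sample_size(0.8, 0.6, follow_up=2.0, dispersion=0.4))
    ```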

  14. Rule-of-thumb adjustment of sample sizes to accommodate dropouts in a two-stage analysis of repeated measurements.

    Science.gov (United States)

    Overall, John E; Tonidandel, Scott; Starbuck, Robert R

    2006-01-01

    Recent contributions to the statistical literature have provided elegant model-based solutions to the problem of estimating sample sizes for testing the significance of differences in mean rates of change across repeated measures in controlled longitudinal studies with differentially correlated error and missing data due to dropouts. However, the mathematical complexity and model specificity of these solutions make them generally inaccessible to most applied researchers who actually design and undertake treatment evaluation research in psychiatry. In contrast, this article relies on a simple two-stage analysis in which dropout-weighted slope coefficients, fitted to the available repeated measurements for each subject separately, serve as the dependent variable for a familiar ANCOVA test of significance for differences in mean rates of change. This article shows how a sample size that is estimated or calculated to provide the desired power for testing that hypothesis without considering dropouts can be adjusted appropriately to take dropouts into account. Empirical results support the conclusion that, whatever reasonable level of power would be provided by a given sample size in the absence of dropouts, essentially the same power can be realized in the presence of dropouts simply by adding to the original dropout-free sample size the number of subjects who would be expected to drop from a sample of that original size under the conditions of the proposed study.
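    The rule of thumb itself is one line: inflate the dropout-free sample size by the number of subjects expected to drop from a sample of that size. A sketch; the dropout rate is an assumed input.

    ```python
    import math

    def adjust_for_dropouts(n_dropout_free, dropout_rate):
        """Add to the dropout-free size the number of subjects expected to
        drop from a sample of that original size."""
        return n_dropout_free + math.ceil(n_dropout_free * dropout_rate)

    print(adjust_for_dropouts(120, 0.15))  # 120 + 18 = 138
    ```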

  15. Uncertainty budget in internal monostandard NAA for small and large size samples analysis

    International Nuclear Information System (INIS)

    Dasari, K.B.; Acharya, R.

    2014-01-01

    Evaluation of the total uncertainty budget of a determined concentration value is important under a quality assurance programme. Concentration calculation in NAA is carried out by relative NAA or by the k0-based internal monostandard NAA (IM-NAA) method. The IM-NAA method has been used for the analysis of small and large samples of clay potteries. An attempt was made to identify the uncertainty components in IM-NAA, and the uncertainty budget for La in both small and large samples has been evaluated and compared. (author)

  16. A contemporary decennial global Landsat sample of changing agricultural field sizes

    Science.gov (United States)

    White, Emma; Roy, David

    2014-05-01

    Agriculture has caused significant human-induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite-based agricultural applications are less reliable when the field size is small relative to the sensor spatial resolution. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30 m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by…

  17. Addressing small sample size bias in multiple-biomarker trials: Inclusion of biomarker-negative patients and Firth correction.

    Science.gov (United States)

    Habermehl, Christina; Benner, Axel; Kopp-Schneider, Annette

    2018-03-01

    In recent years, numerous approaches for biomarker-based clinical trials have been developed. One of these developments are multiple-biomarker trials, which aim to investigate multiple biomarkers simultaneously in independent subtrials. For low-prevalence biomarkers, small sample sizes within the subtrials have to be expected, as well as many biomarker-negative patients at the screening stage. The small sample sizes may make it unfeasible to analyze the subtrials individually. This imposes the need to develop new approaches for the analysis of such trials. With an expected large group of biomarker-negative patients, it seems reasonable to explore options to benefit from including them in such trials. We consider advantages and disadvantages of the inclusion of biomarker-negative patients in a multiple-biomarker trial with a survival endpoint. We discuss design options that include biomarker-negative patients in the study and address the issue of small sample size bias in such trials. We carry out a simulation study for a design where biomarker-negative patients are kept in the study and are treated with standard of care. We compare three different analysis approaches based on the Cox model to examine if the inclusion of biomarker-negative patients can provide a benefit with respect to bias and variance of the treatment effect estimates. We apply the Firth correction to reduce the small sample size bias. The results of the simulation study suggest that for small sample situations, the Firth correction should be applied to adjust for the small sample size bias. Additional to the Firth penalty, the inclusion of biomarker-negative patients in the analysis can lead to further but small improvements in bias and standard deviation of the estimates. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
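    For reference, the Firth correction applied here maximizes, in its standard formulation (Firth, 1993), a log-likelihood penalized by half the log determinant of the Fisher information, which is equivalent to a Jeffreys-prior penalty and removes the first-order small-sample bias of the maximum likelihood estimates:

    ```latex
    \ell^{*}(\beta) \;=\; \ell(\beta) \;+\; \tfrac{1}{2}\,\log\bigl|\,I(\beta)\,\bigr|
    ```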

  18. Numerical evaluation of droplet sizing based on the ratio of fluorescent and scattered light intensities (LIF/Mie technique)

    International Nuclear Information System (INIS)

    Charalampous, Georgios; Hardalupas, Yannis

    2011-01-01

    The dependence of fluorescent and scattered light intensities from spherical droplets on droplet diameter was evaluated using Mie theory. The emphasis is on the evaluation of droplet sizing based on the ratio of laser-induced fluorescence and scattered light intensities (LIF/Mie technique). A parametric study is presented, which includes the effects of scattering angle, the real part of the refractive index, and the dye concentration in the liquid (which determines the imaginary part of the refractive index). The assumption that the fluorescent and scattered light intensities are proportional to the volume and surface area of the droplets, on which accurate sizing measurements rely, is not generally valid. More accurate sizing measurements can be performed with minimal dye concentration in the liquid and by collecting light at a scattering angle of 60 deg. rather than the commonly used angle of 90 deg. Unfavorable to the sizing accuracy are oscillations of the scattered light intensity with droplet diameter, which are pronounced in the sidescatter direction (90 deg.) and for droplets with refractive indices around 1.4.
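    Under the idealized assumption the paper scrutinizes (fluorescence proportional to droplet volume, d^3; scattering proportional to surface area, d^2), the LIF/Mie intensity ratio is linear in diameter and sizing reduces to one calibration constant. A sketch; the calibration value is hypothetical, and the paper's point is precisely that Mie oscillations make such a constant angle- and index-dependent.

    ```python
    def droplet_diameter_um(i_lif, i_mie, k_cal=12.5):
        """LIF/Mie sizing under the idealized d^3/d^2 assumption:
        d = k_cal * (I_LIF / I_Mie), with k_cal (in um) obtained by
        calibration against droplets of known size."""
        return k_cal * i_lif / i_mie

    print(droplet_diameter_um(i_lif=8.0e3, i_mie=2.5e3))  # ~40 um
    ```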

  19. Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size

    Directory of Open Access Journals (Sweden)

    Zhihua Wang

    2014-01-01

    Full Text Available Reliable prediction is of significant practical value in the analysis of stochastic and unstable time series with small or limited sample sizes. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls on by adding the most recent derived prediction result while deleting the first value of the formerly used sample data set. This rolling mechanism is an efficient technique owing to its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and the requirement of little computational effort. The general performance, influence of sample size, nonlinear dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
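    A minimal sketch of the described rolling mechanism, assuming an ordinary least-squares AR(p) fit at each step; the order, window length, and data are illustrative.

    ```python
    import numpy as np

    def fit_ar(x, p):
        """Least-squares AR(p) fit; returns intercept + lag coefficients."""
        X = np.column_stack([x[p - i - 1 : len(x) - i - 1] for i in range(p)])
        y = x[p:]
        coef, *_ = np.linalg.lstsq(
            np.column_stack([np.ones(len(y)), X]), y, rcond=None)
        return coef

    def rolling_ar_forecast(data, p=2, steps=3):
        """1-step-ahead forecasts: refit, predict, then roll the window by
        appending the new prediction and deleting the first value."""
        window = list(data)
        preds = []
        for _ in range(steps):
            c = fit_ar(np.asarray(window, float), p)
            lags = window[-1 : -p - 1 : -1]   # x_{t-1}, ..., x_{t-p}
            pred = c[0] + float(np.dot(c[1:], lags))
            preds.append(pred)
            window.append(pred)   # roll: add the newest prediction ...
            window.pop(0)         # ... and delete the first value
        return preds

    print(rolling_ar_forecast([1.0, 1.2, 1.5, 1.9, 2.4, 3.0]))
    ```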

  20. Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA.

    Science.gov (United States)

    Kelly, Brendan J; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D; Collman, Ronald G; Bushman, Frederic D; Li, Hongzhe

    2015-08-01

    The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence-absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω²). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
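    The pseudo-F that PERMANOVA permutes can be computed directly from the distance matrix via Anderson's sums-of-squares partitioning; in a power study, a permutation test like the one sketched below is the inner loop run over many simulated matrices. This is an illustrative re-implementation, not the authors' R package.

    ```python
    import numpy as np

    def pseudo_f(dist, labels):
        """PERMANOVA pseudo-F from a pairwise distance matrix."""
        d2 = np.asarray(dist, float) ** 2
        labels = np.asarray(labels)
        n = len(labels)
        ss_total = d2[np.triu_indices(n, k=1)].sum() / n
        groups = np.unique(labels)
        ss_within = 0.0
        for g in groups:
            idx = np.flatnonzero(labels == g)
            sub = d2[np.ix_(idx, idx)]
            ss_within += sub[np.triu_indices(len(idx), k=1)].sum() / len(idx)
        a = len(groups)
        return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

    def permanova_pvalue(dist, labels, n_perm=999, seed=0):
        """Permutation p-value for the observed pseudo-F."""
        rng = np.random.default_rng(seed)
        f_obs = pseudo_f(dist, labels)
        hits = sum(pseudo_f(dist, rng.permutation(labels)) >= f_obs
                   for _ in range(n_perm))
        return (1 + hits) / (1 + n_perm)

    # Toy example: 6 samples in two groups, Euclidean distances
    rng = np.random.default_rng(1)
    pts = np.vstack([rng.normal(0, 1, (3, 4)), rng.normal(1.5, 1, (3, 4))])
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    print(permanova_pvalue(d, np.array([0, 0, 0, 1, 1, 1])))
    ```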

  1. The quantitative LOD score: test statistic and sample size for exclusion and linkage of quantitative traits in human sibships.

    Science.gov (United States)

    Page, G P; Amos, C I; Boerwinkle, E

    1998-04-01

    We present a test statistic, the quantitative LOD (QLOD) score, for the testing of both linkage and exclusion of quantitative-trait loci in randomly selected human sibships. As with the traditional LOD score, the boundary values of 3, for linkage, and -2, for exclusion, can be used for the QLOD score. We investigated the sample sizes required for inferring exclusion and linkage, for various combinations of linked genetic variance, total heritability, recombination distance, and sibship size, using fixed-size sampling. The sample sizes required for both linkage and exclusion were not qualitatively different and depended on the percentage of variance being linked or excluded and on the total genetic variance. Information regarding linkage and exclusion in sibships larger than size 2 increased approximately as the number of possible pairs, n(n-1)/2, up to sibships of size 6. Increasing the recombination (theta) distance between the marker and the trait loci empirically reduced the power for both linkage and exclusion, approximately as a function of (1-2theta)^4.

  2. The importance of plot size and the number of sampling seasons on capturing macrofungal species richness.

    Science.gov (United States)

    Li, Huili; Ostermann, Anne; Karunarathna, Samantha C; Xu, Jianchu; Hyde, Kevin D; Mortimer, Peter E

    2018-07-01

    The species-area relationship is an important factor in the study of species diversity, conservation biology, and landscape ecology. A deeper understanding of this relationship is necessary in order to provide recommendations on how to improve the quality of data collection on macrofungal diversity in different land use systems in future studies; this requires a systematic assessment of methodological parameters, in particular optimal plot sizes. The species-area relationship of macrofungi in tropical and temperate climatic zones and four different land use systems was investigated by determining the macrofungal species richness in plot sizes ranging from 100 m2 to 10 000 m2 over two sampling seasons. We found that the effect of plot size on recorded species richness significantly differed between land use systems, with the exception of monoculture systems. For both climate zones, the land use system needs to be considered when determining optimal plot size. Using an optimal plot size was more important than temporal replication (over two sampling seasons) in accurately recording species richness. Copyright © 2018 British Mycological Society. Published by Elsevier Ltd. All rights reserved.

  3. Improved ASTM G72 Test Method for Ensuring Adequate Fuel-to-Oxidizer Ratios

    Science.gov (United States)

    Juarez, Alfredo; Harper, Susana Tapia

    2016-01-01

    The ASTM G72/G72M-15 Standard Test Method for Autogenous Ignition Temperature of Liquids and Solids in a High-Pressure Oxygen-Enriched Environment is currently used to evaluate materials for the ignition susceptibility driven by exposure to external heat in an enriched oxygen environment. Testing performed on highly volatile liquids such as cleaning solvents has proven problematic due to inconsistent test results (non-ignitions). Non-ignition results can be misinterpreted as favorable oxygen compatibility, although they are more likely associated with inadequate fuel-to-oxidizer ratios. Forced evaporation during purging and inadequate sample size were identified as two potential causes for inadequate available sample material during testing. In an effort to maintain adequate fuel-to-oxidizer ratios within the reaction vessel during test, several parameters were considered, including sample size, pretest sample chilling, pretest purging, and test pressure. Tests on a variety of solvents exhibiting a range of volatilities are presented in this paper. A proposed improvement to the standard test protocol as a result of this evaluation is also presented. Execution of the final proposed improved test protocol outlines an incremental step method of determining optimal conditions using increased sample sizes while considering test system safety limits. The proposed improved test method increases confidence in results obtained by utilizing the ASTM G72 autogenous ignition temperature test method and can aid in the oxygen compatibility assessment of highly volatile liquids and other conditions that may lead to false non-ignition results.

  4. Re-estimating sample size in cluster randomized trials with active recruitment within clusters

    NARCIS (Netherlands)

    van Schie, Sander; Moerbeek, Mirjam

    2014-01-01

    Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster

  5. A simple approach to power and sample size calculations in logistic regression and Cox regression models.

    Science.gov (United States)

    Vaeth, Michael; Skovlund, Eva

    2004-06-15

    For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
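    The recipe just described can be sketched directly for logistic regression: form two pseudo-groups whose logits differ by the slope times twice the SD of the covariate, then apply a standard two-proportion sample size formula. In the sketch the two logits are centered on the overall logit, which only approximately preserves the overall expected number of events; names and defaults are illustrative.

    ```python
    import math
    from scipy.stats import norm

    def logistic_total_n(beta, sd_x, p_overall, alpha=0.05, power=0.8):
        """Total N for testing slope beta via the equivalent two-sample
        problem: two equal groups whose log-odds differ by beta * 2 * sd_x."""
        l_mid = math.log(p_overall / (1 - p_overall))
        p0 = 1 / (1 + math.exp(-(l_mid - beta * sd_x)))
        p1 = 1 / (1 + math.exp(-(l_mid + beta * sd_x)))
        z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        n_group = z**2 * (p0 * (1 - p0) + p1 * (1 - p1)) / (p0 - p1) ** 2
        return 2 * math.ceil(n_group)

    # Illustrative: slope 0.5 per SD of x, 30 % overall response rate
    print(logistic_total_n(beta=0.5, sd_x=1.0, p_overall=0.3))
    ```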

  6. Enhanced Sampling in Free Energy Calculations: Combining SGLD with the Bennett's Acceptance Ratio and Enveloping Distribution Sampling Methods.

    Science.gov (United States)

    König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R

    2012-10-09

    One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
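    For context, a minimal statement of the BAR estimator in the equal-sample-size case (Bennett, 1976): with forward energy differences ΔU^F = U1 − U0 sampled in state 0 and reverse differences ΔU^R = U0 − U1 sampled in state 1, the free energy difference ΔF solves the self-consistency condition below. The reweighting needed to remove the SGLD guiding bias, discussed above, enters through modified sample weights not shown here.

    ```latex
    \sum_{i=1}^{n} \frac{1}{1+\exp\!\left[\beta\left(\Delta U_i^{F}-\Delta F\right)\right]}
    \;=\;
    \sum_{j=1}^{n} \frac{1}{1+\exp\!\left[\beta\left(\Delta U_j^{R}+\Delta F\right)\right]}
    ```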

  7. PET/CT in cancer: moderate sample sizes may suffice to justify replacement of a regional gold standard

    DEFF Research Database (Denmark)

    Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten

    2009-01-01

    PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study… of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.

  9. Validation Of Intermediate Large Sample Analysis (With Sizes Up to 100 G) and Associated Facility Improvement

    International Nuclear Information System (INIS)

    Bode, P.; Koster-Ammerlaan, M.J.J.

    2018-01-01

    Pragmatic rather than physical correction factors for neutron and gamma-ray shielding were studied for samples of intermediate size, i.e. in the 10-100 gram range. It was found that for most biological and geological materials, the neutron self-shielding is less than 5 % and the gamma-ray self-attenuation can easily be estimated. A trueness control material of 1 kg size was made from left-over materials used in laboratory intercomparisons. A design study for a large sample pool-side facility, handling plate-type volumes, had to be stopped because of a reduction in the human resources available for this CRP. The large sample NAA facilities were made available to guest scientists from Greece and Brazil. The laboratory for neutron activation analysis participated in the world's first laboratory intercomparison utilizing large samples. (author)
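    The "easily estimated" gamma-ray self-attenuation has a classical closed form for a uniform slab viewed face-on; a minimal sketch (the attenuation coefficient and thickness below are illustrative):

    ```python
    import math

    def slab_self_attenuation(mu_cm, t_cm):
        """Average self-attenuation factor (1 - exp(-mu*t)) / (mu*t) for a
        uniform slab of thickness t and linear attenuation coefficient mu."""
        x = mu_cm * t_cm
        return (1.0 - math.exp(-x)) / x if x > 0 else 1.0

    print(slab_self_attenuation(mu_cm=0.02, t_cm=5.0))  # ~0.95, ~5 % effect
    ```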

  10. Effect of dislocation pile-up on size-dependent yield strength in finite single-crystal micro-samples

    Energy Technology Data Exchange (ETDEWEB)

    Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi'an Jiaotong University, Xi'an 710049 (China)

    2015-07-07

    Recent research has shown that the yield strength of metals increases steeply with decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation sources and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors in the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation "pile-up" effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts sample strengths that agree well with experimental data, and our model can give a more precise prediction than the current single-arm source model, especially for materials with low stacking fault energy.

  11. Optimizing the triple-axis spectrometer PANDA at the MLZ for small samples and complex sample environment conditions

    Science.gov (United States)

    Utschick, C.; Skoulatos, M.; Schneidewind, A.; Böni, P.

    2016-11-01

    The cold-neutron triple-axis spectrometer PANDA at the neutron source FRM II has been serving an international user community studying condensed matter physics problems. We report on a new setup that improves the signal-to-noise ratio for small samples and pressure cell setups. Analytical and numerical Monte Carlo methods are used for the optimization of elliptic and parabolic focusing guides. They are placed between the monochromator and sample positions, and the flux at the sample is compared to the one achieved by standard monochromator focusing techniques. A 25-times-smaller spot size is achieved, associated with a factor-of-2 increase in intensity, within the same divergence limits of ±2°. This optional neutron focusing guide shall establish a top-class spectrometer for studying novel exotic properties of matter in combination with more stringent sample environment conditions such as extreme pressures associated with small sample sizes.

  12. Calcium availability influences litter size and sex ratio in white-footed mice (Peromyscus leucopus).

    Directory of Open Access Journals (Sweden)

    Christina M Schmidt

    Full Text Available The production of offspring typically requires investment of resources derived from both the environment and maternal somatic reserves. As such, the availability of either of these types of resources has the potential to limit the degree to which resources are allocated to reproduction. Theory and empirical studies have argued that mothers modify reproductive performance relative to exogenous resource availability and maternal condition by adjusting size, number or sex of offspring produced. These relationships have classically been defined relative to availability of energy sources; however, in vertebrates, calcium also plays a critical role in offspring production, as a considerable amount of calcium is required to support the development of offspring skeleton(s). We tested whether the availability of calcium influences reproductive output by providing female white-footed mice with a low-calcium or standard diet from reproductive maturity to senescence. We then compared maternal skeletal condition and reproductive output, based on offspring mass, offspring number and litter sex ratio, between dietary treatments. Mothers on the low-calcium diet exhibited diminished skeletal condition at senescence and produced smaller and strongly female-biased litters. We show that skeletal condition and calcium intake can influence sex ratio and reproductive output following general theoretical models of resource partitioning during reproduction.

  13. Soot Particle Size Distribution Functions in a Turbulent Non-Premixed Ethylene-Nitrogen Flame

    KAUST Repository

    Boyette, Wesley

    2017-02-21

    A scanning mobility particle sizer with a nano differential mobility analyzer was used to measure nanoparticle size distribution functions in a turbulent non-premixed flame. The burner utilizes a premixed pilot flame which anchors a C2H4/N2 (35/65) central jet with ReD = 20,000. Nanoparticles in the flame were sampled through a N2-filled tube with a 500-μm orifice. Previous studies have shown that insufficient dilution of the nanoparticles can lead to coagulation in the sampling line and skewed particle size distribution functions. A system of mass flow controllers and valves was used to vary the dilution ratio. Single-stage and two-stage dilution systems were investigated. A parametric study on the effect of the dilution ratio on the observed particle size distribution function indicates that particle coagulation in the sampling line can be eliminated using a two-stage dilution process. Carbonaceous nanoparticle (soot) concentration particle size distribution functions along the flame centerline at multiple heights in the flame are presented. The resulting distributions reveal a pattern of increasing mean particle diameters as the distance from the nozzle along the centerline increases.

  15. Size-Resolved Penetration Through High-Efficiency Filter Media Typically Used for Aerosol Sampling

    Czech Academy of Sciences Publication Activity Database

    Zíková, Naděžda; Ondráček, Jakub; Ždímal, Vladimír

    2015-01-01

    Vol. 49, No. 4 (2015), pp. 239-249 ISSN 0278-6826 R&D Projects: GA ČR(CZ) GBP503/12/G147 Institutional support: RVO:67985858 Keywords: filters * size-resolved penetration * atmospheric aerosol sampling Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.953, year: 2015

  16. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An
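
    For individually randomized trials, the textbook version of this idea is that adjusting for a baseline measurement with correlation ρ to the outcome multiplies the required sample size by (1 − ρ²); the paper extends this reasoning to cluster randomized designs. A sketch of the individual-level calculation (not the authors' cluster formula):

```python
from scipy.stats import norm

def n_per_arm_followup(delta, sd, alpha=0.05, power=0.80):
    """n per arm for a two-sample comparison of follow-up scores only."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def n_per_arm_ancova(delta, sd, rho, alpha=0.05, power=0.80):
    """Baseline adjustment shrinks the residual variance by (1 - rho^2)."""
    return n_per_arm_followup(delta, sd, alpha, power) * (1 - rho ** 2)

print(round(n_per_arm_followup(delta=5, sd=10)))           # ~63 per arm
print(round(n_per_arm_ancova(delta=5, sd=10, rho=0.6)))    # ~40 per arm
```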

  17. Creation of a predictive equation to estimate fat-free mass and the ratio of fat-free mass to skeletal size using morphometry in lean working farm dogs.

    Science.gov (United States)

    Leung, Y M; Cave, N J; Hodgson, B A S

    2018-06-27

    To develop an equation that accurately estimates fat-free mass (FFM) and the ratio of FFM to skeletal size or mass, using morphometric measurements in lean working farm dogs, and to examine the association between FFM derived from body condition score (BCS) and FFM measured using isotope dilution. Thirteen Huntaway and seven Heading working dogs from sheep and beef farms in the Waikato region of New Zealand were recruited based on BCS (BCS 4) using a nine-point scale. Bodyweight, BCS, and morphometric measurements (head length and circumference, body length, thoracic girth, and fore and hind limb length) were recorded for each dog, and body composition was measured using an isotopic dilution technique. A new variable using morphometric measurements, termed skeletal size, was created using principal component analysis. Models for predicting FFM, leanST (FFM minus skeletal mass) and ratios of FFM and leanST to skeletal size or mass were generated using multiple linear regression analysis. Mean FFM of the 20 dogs, measured by isotope dilution, was 22.1 (SD 4.4) kg and the percentage FFM of bodyweight was 87.0 (SD 5.0)%. Median BCS was 3.0 (min 1, max 6). Bodyweight, breed, age and skeletal size or mass were associated with measured FFM (p<0.05). Correlation was high between predicted FFM and measured FFM (R2=0.96), and between the predicted ratio of FFM to skeletal size and measured values (R2=0.99). Correlation coefficients were higher for the ratios of FFM and leanST to skeletal size than for ratios using skeletal mass. There was a positive correlation between BCS-derived fat mass as a percentage of bodyweight and fat mass percentage determined using isotope dilution (R2=0.65). As expected, the predictive equation was accurate in estimating FFM when tested on the same group of dogs used to develop the equation. The significance of breed, independent of skeletal size, in predicting FFM indicates that individual breed formulae may be required. Future studies that apply these equations on a greater population of

  18. Analysis of small sample size studies using nonparametric bootstrap test with pooled resampling method.

    Science.gov (United States)

    Dwivedi, Alok Kumar; Mallawaarachchi, Indika; Alvarado, Luis A

    2017-06-30

    Experimental studies in biomedical research frequently pose analytical problems related to small sample size. In such studies, there are conflicting findings regarding the choice of parametric and nonparametric analysis, especially with non-normal data. In such instances, some methodologists questioned the validity of parametric tests and suggested nonparametric tests. In contrast, other methodologists found nonparametric tests to be too conservative and less powerful and thus preferred using parametric tests. Some researchers have recommended using a bootstrap test; however, this method also has limitations with small sample sizes. We used a pooled method in a nonparametric bootstrap test that may overcome the problems related to small samples in hypothesis testing. The present study compared the nonparametric bootstrap test with pooled resampling method to corresponding parametric, nonparametric, and permutation tests through extensive simulations under various conditions and using real data examples. The nonparametric pooled bootstrap t-test provided equal or greater power for comparing two means as compared with the unpaired t-test, Welch t-test, Wilcoxon rank sum test, and permutation test, while maintaining type I error probability for all conditions except for Cauchy and extreme variable lognormal distributions. In such cases, we suggest using an exact Wilcoxon rank sum test. The nonparametric bootstrap paired t-test also provided better performance than other alternatives. The nonparametric bootstrap test provided a benefit over the exact Kruskal-Wallis test. We suggest using the nonparametric bootstrap test with pooled resampling method for comparing paired or unpaired means and for validating one-way analysis of variance test results for non-normal data in small sample size studies. Copyright © 2017 John Wiley & Sons, Ltd.
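
    A generic pooled-resampling bootstrap t-test, in the spirit of the method described (a sketch of the general technique, not necessarily the authors' exact algorithm): both groups are resampled from the pooled data, so the null hypothesis of equal means holds in the resampling world.

```python
import numpy as np

def pooled_bootstrap_t_test(x, y, n_boot=10_000, seed=0):
    """Two-sample bootstrap t-test with pooled resampling under H0.

    Both groups are resampled with replacement from the pooled data, so
    the null hypothesis of equal means holds in the resampling world.
    Returns a two-sided p-value.
    """
    rng = np.random.default_rng(seed)
    x, y = np.asarray(x, float), np.asarray(y, float)

    def t_stat(a, b):  # Welch-type statistic
        se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
        return (a.mean() - b.mean()) / se

    t_obs = t_stat(x, y)
    pooled = np.concatenate([x, y])
    t_null = np.array([
        t_stat(rng.choice(pooled, size=len(x)), rng.choice(pooled, size=len(y)))
        for _ in range(n_boot)
    ])
    return float(np.mean(np.abs(t_null) >= abs(t_obs)))

x = [4.1, 5.3, 2.8, 6.0, 4.7]
y = [7.2, 8.1, 6.5, 9.0, 7.7, 8.4]
print(f"p = {pooled_bootstrap_t_test(x, y):.4f}")
```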

  19. Normative data for uterine size according to age and gravidity and possible role of the classical golden ratio.

    Science.gov (United States)

    Verguts, J; Ameye, L; Bourne, T; Timmerman, D

    2013-12-01

    To document normal measurements (length, width, anteroposterior (AP) diameter) and proportions of the non-pregnant uterus according to age and gravidity. We hypothesized that uterine proportions conform to the classical 'golden ratio' (1.618). This was a retrospective study of ultrasonographic measurements of the length, width and AP diameter of non-pregnant uteri recorded in our database between 1 January 2000 and 31 July 2012. All patients for whom abnormal findings were reported were excluded and only the first set of measurements for each patient was retained for analysis. Loess (local regression) analysis was performed using age and gravidity as explanatory variables. Measurements of 5466 non-pregnant uteri were retrieved for analysis. The mean length was found to increase to 72 mm at the age of 40 and decrease to 42 mm at the age of 80 years. Gravidity was associated with greater uterine length, width and AP diameter. Mean length/width ratio was found to be 1.857 at birth, decreasing to 1.452 at the age of 91 years. At the age of 21 years, the mean ratio was found to be 1.618, i.e. equal to the golden ratio. Increasing gravidity was associated with lower mean length/width ratio. Uterine size in non-pregnant women varies in relation to age and gravidity. Mean length/width ratio conformed to the golden ratio at the age of 21, coinciding with peak fertility. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.

  20. Determination of extremely low {sup 236}U/{sup 238}U isotope ratios in environmental samples by sector-field inductively coupled plasma mass spectrometry using high-efficiency sample introduction

    Energy Technology Data Exchange (ETDEWEB)

    Boulyga, Sergei F. [Institute of Inorganic Chemistry and Analytical Chemistry, Johannes Gutenberg-University Mainz, Duesbergweg 10-14, 55099 Mainz (Germany)]. E-mail: sergei.boulyga@univie.ac.at; Heumann, Klaus G. [Institute of Inorganic Chemistry and Analytical Chemistry, Johannes Gutenberg-University Mainz, Duesbergweg 10-14, 55099 Mainz (Germany)

    2006-07-01

    A method based on inductively coupled plasma mass spectrometry (ICP-MS) was developed which allows the measurement of {sup 236}U at concentration ranges down to 3 x 10{sup -14} g g{sup -1} and extremely low {sup 236}U/{sup 238}U isotope ratios in soil samples of 10{sup -7}. By using the high-efficiency solution introduction system APEX in connection with a sector-field ICP-MS, a sensitivity of more than 5000 counts fg{sup -1} uranium was achieved. The use of an aerosol desolvating unit reduced the formation rate of uranium hydride ions UH{sup +}/U{sup +} down to a level of 10{sup -6}. An abundance sensitivity of 3 x 10{sup -7} was observed for {sup 236}U/{sup 238}U isotope ratio measurements at mass resolution 4000. The detection limit for {sup 236}U and the lowest detectable {sup 236}U/{sup 238}U isotope ratio were improved by more than two orders of magnitude compared with corresponding values by alpha spectrometry. Determination of uranium in soil samples collected in the vicinity of the Chernobyl nuclear power plant (NPP) showed that the {sup 236}U/{sup 238}U isotope ratio is a much more sensitive and accurate marker for environmental contamination by spent uranium than the {sup 235}U/{sup 238}U isotope ratio. The ICP-MS technique allowed, for the first time, the detection of irradiated uranium in soil samples even at distances of more than 200 km to the north of the Chernobyl NPP (Mogilev region). The concentration of {sup 236}U in the upper 0-10 cm soil layers varied from 2 x 10{sup -9} g g{sup -1} within radioactive spots close to the Chernobyl NPP to 3 x 10{sup -13} g g{sup -1} at a sampling site located >200 km from Chernobyl.

  1. Determination of 129I/127I isotope ratios in liquid solutions and environmental soil samples by ICP-MS with hexapole collision cell

    OpenAIRE

    Izmer, A. V.; Boulyga, S. F.; Becker, J. S.

    2003-01-01

    The determination of I-129 in environmental samples at ultratrace levels is very difficult by ICP-MS due to the high background caused by Xe impurities in the argon plasma gas (interference of Xe-129+), possible IH2+ interference from I-127, and an insufficient abundance ratio sensitivity of the ICP mass spectrometer for I-129/I-127 isotope ratio measurement. A sensitive, powerful and fast analytical technique for iodine isotope ratio measurements in aqueous solutions and contaminated soil samples directl...

  2. Sample sizes to control error estimates in determining soil bulk density in California forest soils

    Science.gov (United States)

    Youzhi Han; Jianwei Zhang; Kim G. Mattson; Weidong Zhang; Thomas A. Weber

    2016-01-01

    Characterizing forest soil properties with high variability is challenging, sometimes requiring large numbers of soil samples. Soil bulk density is a standard variable needed along with element concentrations to calculate nutrient pools. This study aimed to determine the optimal sample size, the number of observations (n), for predicting the soil bulk density with a...

  3. Size-segregated urban aerosol characterization by electron microscopy and dynamic light scattering and influence of sample preparation

    Science.gov (United States)

    Marvanová, Soňa; Kulich, Pavel; Skoupý, Radim; Hubatka, František; Ciganek, Miroslav; Bendl, Jan; Hovorka, Jan; Machala, Miroslav

    2018-04-01

    Size-segregated particulate matter (PM) is frequently used in chemical and toxicological studies. Nevertheless, toxicological in vitro studies working with the whole particles often lack a proper evaluation of PM real size distribution and characterization of agglomeration under the experimental conditions. In this study, changes in particle size distributions during the PM sample manipulation and also semiquantitative elemental composition of single particles were evaluated. Coarse (1-10 μm), upper accumulation (0.5-1 μm), lower accumulation (0.17-0.5 μm), and ultrafine (<0.17 μm) fractions were evaluated in water and in cell culture media. The PM suspension of the lower accumulation fraction in water agglomerated after freezing/thawing the sample, and the agglomerates were disrupted by subsequent sonication. The ultrafine fraction did not agglomerate after freezing/thawing the sample. Both lower accumulation and ultrafine fractions were stable in cell culture media with fetal bovine serum, while high agglomeration occurred in media without fetal bovine serum, as measured during 24 h.

  4. Clustering for high-dimension, low-sample size data using distance vectors

    OpenAIRE

    Terada, Yoshikazu

    2013-01-01

    In high-dimension, low-sample size (HDLSS) data, it is not always true that closeness of two objects reflects a hidden cluster structure. We point out the important fact that it is not the closeness, but the "values" of distance that contain information of the cluster structure in high-dimensional space. Based on this fact, we propose an efficient and simple clustering approach, called distance vector clustering, for HDLSS data. Under the assumptions given in the work of Hall et al. (2005), w...

  5. Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient

    Science.gov (United States)

    Krishnamoorthy, K.; Xia, Yanping

    2008-01-01

    The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…

  6. Type-II generalized family-wise error rate formulas with application to sample size determination.

    Science.gov (United States)

    Delorme, Phillipe; de Micheaux, Pierre Lafaye; Liquet, Benoit; Riou, Jérémie

    2016-07-20

    Multiple endpoints are increasingly used in clinical trials. The significance of some of these clinical trials is established if at least r null hypotheses are rejected among m that are simultaneously tested. The usual approach in multiple hypothesis testing is to control the family-wise error rate, which is defined as the probability that at least one type-I error is made. More recently, the q-generalized family-wise error rate has been introduced to control the probability of making at least q false rejections. For procedures controlling this global type-I error rate, we define a type-II r-generalized family-wise error rate, which is directly related to the r-power defined as the probability of rejecting at least r false null hypotheses. We obtain very general power formulas that can be used to compute the sample size for single-step and step-wise procedures. These are implemented in our R package rPowerSampleSize available on the CRAN, making them directly available to end users. Complexities of the formulas are presented to gain insight into computation time issues. Comparison with Monte Carlo strategy is also presented. We compute sample sizes for two clinical trials involving multiple endpoints: one designed to investigate the effectiveness of a drug against acute heart failure and the other for the immunogenicity of a vaccine strategy against pneumococcus. Copyright © 2016 John Wiley & Sons, Ltd.
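
    The r-power itself is straightforward to estimate by Monte Carlo once a multiple-testing procedure is fixed. The sketch below uses a single-step Bonferroni rule and equicorrelated normal test statistics (an illustrative setup, not the package's exact formulas):

```python
import numpy as np
from scipy.stats import norm

def r_power_bonferroni(m, r, effect, n_per_arm, corr=0.3,
                       alpha=0.05, n_sim=20_000, seed=1):
    """Monte Carlo r-power: P(reject >= r of m false nulls) for a
    single-step Bonferroni rule and equicorrelated z-statistics.
    Assumes unit-variance outcomes, so the noncentrality per endpoint
    is effect * sqrt(n_per_arm / 2)."""
    rng = np.random.default_rng(seed)
    cov = np.full((m, m), corr) + (1 - corr) * np.eye(m)
    shift = effect * np.sqrt(n_per_arm / 2)
    z = rng.multivariate_normal(np.full(m, shift), cov, size=n_sim)
    crit = norm.ppf(1 - alpha / (2 * m))  # Bonferroni two-sided critical value
    return ((np.abs(z) > crit).sum(axis=1) >= r).mean()

print(r_power_bonferroni(m=4, r=2, effect=0.4, n_per_arm=80))
```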

  7. Size and shape characteristics of drumlins, derived from a large sample, and associated scaling laws

    Science.gov (United States)

    Clark, Chris D.; Hughes, Anna L. C.; Greenwood, Sarah L.; Spagnolo, Matteo; Ng, Felix S. L.

    2009-04-01

    Ice sheets flowing across a sedimentary bed usually produce a landscape of blister-like landforms streamlined in the direction of the ice flow and with each bump of the order of 10^2 to 10^3 m in length and 10^1 m in relief. Such landforms, known as drumlins, have mystified investigators for over a hundred years. A satisfactory explanation for their formation, and thus an appreciation of their glaciological significance, has remained elusive. A recent advance has been in numerical modelling of the land-forming process. In anticipation of future modelling endeavours, this paper is motivated by the requirement for robust data on drumlin size and shape for model testing. From a systematic programme of drumlin mapping from digital elevation models and satellite images of Britain and Ireland, we used a geographic information system to compile a range of statistics on length L, width W, and elongation ratio E (where E = L/W) for a large sample. Mean L is found to be 629 m (n = 58,983), mean W is 209 m and mean E is 2.9 (n = 37,043). Most drumlins are between 250 and 1000 metres in length; between 120 and 300 metres in width; and between 1.7 and 4.1 times as long as they are wide. Analysis of such data and plots of drumlin width against length reveals some new insights. All frequency distributions are unimodal, from which we infer that the geomorphological label of 'drumlin' is fair in that this is a true single population of landforms, rather than an amalgam of different landform types. Drumlin size shows a clear minimum bound of around 100 m (horizontal). Maybe drumlins are generated at many scales and this is the minimum, or this value may be an indication of the fundamental scale of bump generation ('proto-drumlins') prior to them growing and elongating. A relationship between drumlin width and length is found (with r^2 = 0.48) and is approximately W = 7L^(1/2) when measured in metres. A surprising and sharply-defined line bounds the data cloud plotted in E-W
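
    The reported scaling W ≈ 7·L^(1/2) can be recovered from such data with a log-log least-squares fit; a sketch on synthetic drumlins (the numbers below are fabricated for illustration, not the mapped data):

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic drumlins roughly obeying W = 7 * L**0.5, with lognormal scatter.
L = rng.uniform(250, 1000, size=5_000)            # lengths in metres
W = 7 * np.sqrt(L) * rng.lognormal(0.0, 0.25, L.size)

# Fit log W = log a + b log L by ordinary least squares.
b, log_a = np.polyfit(np.log(L), np.log(W), 1)
print(f"W ~ {np.exp(log_a):.2f} * L^{b:.2f}")     # close to 7 * L^0.50
```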

  8. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell short of the nominal 80%), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
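
    The retro-fitting step can be illustrated for a continuous outcome with the usual normal-approximation formulas: compute N from an assumed standard deviation, then evaluate the power actually delivered if the true standard deviation differs by a relative error (a sketch of the general idea, not the authors' exact simulation):

```python
from scipy.stats import norm

def required_n(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sample comparison of means."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * (z * sd / delta) ** 2

def real_power(n_per_arm, delta, true_sd, alpha=0.05):
    """Power actually delivered when the true SD differs from the guess."""
    ncp = delta / (true_sd * (2 / n_per_arm) ** 0.5)
    return norm.cdf(ncp - norm.ppf(1 - alpha / 2))

n = required_n(delta=5, sd=10)              # planned with assumed SD = 10
for rel_err in (-0.2, 0.0, 0.2):            # relative error of the SD guess
    true_sd = 10 * (1 + rel_err)
    print(f"SD off by {rel_err:+.0%}: real power = {real_power(n, 5, true_sd):.2f}")
```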

  9. Comparison of IRT Likelihood Ratio Test and Logistic Regression DIF Detection Procedures

    Science.gov (United States)

    Atar, Burcu; Kamata, Akihito

    2011-01-01

    The Type I error rates and the power of IRT likelihood ratio test and cumulative logit ordinal logistic regression procedures in detecting differential item functioning (DIF) for polytomously scored items were investigated in this Monte Carlo simulation study. For this purpose, 54 simulation conditions (combinations of 3 sample sizes, 2 sample…

  10. What Makes Jessica Rabbit Sexy? Contrasting Roles of Waist and Hip Size

    Directory of Open Access Journals (Sweden)

    William D. Lassek

    2016-04-01

    Full Text Available While waist/hip ratio (WHR) and body mass index (BMI) have been the most studied putative determinants of female bodily attractiveness, BMI is not directly observable, and few studies have considered the independent roles of waist and hip size. The range of attractiveness in many studies is also quite limited, with none of the stimuli rated as highly attractive. To explore the relationships of these anthropometric parameters with attractiveness across a much broader spectrum of attractiveness, we employ three quite different samples: a large sample of college women, a larger sample of Playboy Playmates of the Month than has previously been examined, and a large pool of imaginary women (e.g., cartoon, video game, and graphic novel characters) chosen as the “most attractive” by university students. Within-sample and between-sample comparisons agree in indicating that waist size is the key determinant of female bodily attractiveness and accounts for the relationship of both BMI and WHR with attractiveness, with between-sample effect sizes of 2.4–3.2. In contrast, hip size is much more similar across attractiveness groups and is unrelated to attractiveness when BMI or waist size is controlled.

  11. Size matters: relationships between body size and body mass of common coastal, aquatic invertebrates in the Baltic Sea

    Directory of Open Access Journals (Sweden)

    Johan Eklöf

    2017-01-01

    Full Text Available Background: Organism biomass is one of the most important variables in ecological studies, making biomass estimation one of the most common laboratory tasks. Biomass of small macroinvertebrates is usually estimated as dry mass or ash-free dry mass (hereafter ‘DM’ and ‘AFDM’) per sample; a laborious and time-consuming process that often can be speeded up using easily measured and reliable proxy variables like body size or wet (fresh) mass. Another common way of estimating AFDM (one of the most accurate but also time-consuming estimates of biologically active tissue mass) is the use of AFDM/DM ratios as conversion factors. So far, however, these ratios typically ignore the possibility that the relative mass of biologically active vs. non-active support tissue (e.g., protective exoskeleton or shell), and therefore also AFDM/DM ratios, may change with body size, as previously shown for taxa like spiders, vertebrates and trees. Methods: We collected aquatic, epibenthic macroinvertebrates (>1 mm) in 32 shallow bays along a 360 km stretch of the Swedish coast of the Baltic Sea, one of the largest brackish water bodies on Earth. We then estimated statistical relationships between the body size (length or height in mm), body dry mass and ash-free dry mass for 14 of the most common taxa: five gastropods, three bivalves, three crustaceans and three insect larvae. Finally, we statistically estimated the potential influence of body size on the AFDM/DM ratio per taxon. Results: For most taxa, non-linear regression models describing the power relationship between body size and (i) DM and (ii) AFDM fit the data well (as indicated by low SE and high R2). Moreover, for more than half of the taxa studied (including the vast majority of the shelled molluscs), body size had a negative influence on organism AFDM/DM ratios. Discussion: The good fit of the modelled power relationships suggests that the constants reported here can be used to quickly estimate

  12. Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2004-03-01

    The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)

  14. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  15. Spatial Variability and Application of Ratios between BTEX in Two Canadian Cities

    Directory of Open Access Journals (Sweden)

    Lindsay Miller

    2011-01-01

    Full Text Available Spatial monitoring campaigns of volatile organic compounds were carried out in two similarly sized urban industrial cities, Windsor and Sarnia, ON, Canada. For Windsor, data were obtained for all four seasons (winter, spring, summer, and fall) at approximately 50 sites in each season over a three-year period (2004, 2005, and 2006), for a total of 12 sampling sessions. Sampling in Sarnia took place at 37 monitoring sites in fall 2005. In both cities, passive sampling was done using 3M 3500 organic vapor samplers. This paper characterizes benzene, toluene, ethylbenzene, o-, and (m + p)-xylene (BTEX) concentrations and relationships among BTEX species in the two cities during the fall sampling periods. BTEX concentration levels and rank order among the species were similar between the two cities. In Sarnia, the relationships between the BTEX species varied depending on location. Correlation analysis between land use and concentration ratios showed a strong influence from local industries. Using one of the ratios between the BTEX species to diagnose photochemical age may be biased by point source emissions, for example, 53 tonnes of benzene and 86 tonnes of toluene in Sarnia. However, considering multiple ratios leads to better conclusions regarding photochemical aging. Ratios obtained in the sampling campaigns showed significant deviation from those obtained at central monitoring stations, with less difference in the (m + p)/E ratio but better overall agreement in Windsor than in Sarnia.

  16. A study on the effect of free cash flow and profitability current ratio on dividend payout ratio: Evidence from Tehran Stock Exchange

    Directory of Open Access Journals (Sweden)

    Hosein Parsian

    2014-01-01

    Full Text Available Decision making about dividend payout is one of the most important decisions that companies encounter. Identifying factors that influence dividends can help managers make an appropriate dividend policy. On the other hand, stable dividend payouts over time may influence stock price, future earnings growth and, ultimately, investors' evaluation of owners' equity. Hence, investigating the factors influencing the dividend payout ratio is of high importance. In this research, we investigate the effects of various factors on the dividend payout ratio of Tehran Stock Exchange (TSE) listed companies. We use time series regression (panel data) in order to test the hypotheses of this study. This study provides empirical evidence from a sample of 102 companies over the time span of 2005-2010. The results show that the independent variables of free cash flow and profitability current ratio have a negative and significant impact on the dividend payout ratio, whereas the leverage ratio has a positive and significant impact. The other independent variables, such as company size, growth opportunities and systematic risk, do not have any significant influence on the dividend payout ratio.

  17. Effects of growth rate, size, and light availability on tree survival across life stages: a demographic analysis accounting for missing values and small sample sizes.

    Science.gov (United States)

    Moustakas, Aristides; Evans, Matthew R

    2015-02-28

    Plant survival is a key factor in forest dynamics, and survival probabilities often vary across life stages. Studies specifically aimed at assessing tree survival are unusual, and so data initially designed for other purposes often need to be used; such data are more likely to contain errors than data collected for this specific purpose. We investigate the survival rates of ten tree species in a dataset designed to monitor growth rates. As some individuals were not included in the census at some time points, we use capture-mark-recapture methods both to allow us to account for missing individuals and to estimate relocation probabilities. Growth rates, size, and light availability were included as covariates in the model predicting survival rates. The study demonstrates that tree mortality is best described as constant between years, size-dependent at early life stages, and size-independent at later life stages for most UK hardwood species. We have demonstrated that even with a twenty-year dataset it is possible to discern variability both between individuals and between species. Our work illustrates the potential utility of the method applied here for calculating plant population dynamics parameters in time-replicated datasets with small sample sizes and missing individuals, without any loss of sample size, and including explanatory covariates.

  18. Development of a sampling strategy and sample size calculation to estimate the distribution of mammographic breast density in Korean women.

    Science.gov (United States)

    Jun, Jae Kwan; Kim, Mi Jin; Choi, Kui Son; Suh, Mina; Jung, Kyu-Won

    2012-01-01

    Mammographic breast density is a known risk factor for breast cancer. To conduct a survey to estimate the distribution of mammographic breast density in Korean women, appropriate sampling strategies for representative and efficient sampling design were evaluated through simulation. Using the target population from the National Cancer Screening Programme (NCSP) for breast cancer in 2009, we verified the distribution estimate by repeating the simulation 1,000 times using stratified random sampling to investigate the distribution of breast density of 1,340,362 women. According to the simulation results, using a sampling design stratifying the nation into three groups (metropolitan, urban, and rural), with a total sample size of 4,000, we estimated the distribution of breast density in Korean women at a level of 0.01% tolerance. Based on the results of our study, a nationwide survey for estimating the distribution of mammographic breast density among Korean women can be conducted efficiently.
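
    A sketch of the kind of design evaluated here: proportional allocation of a fixed total sample of 4,000 across three strata, repeated many times to check the sampling error of a stratified estimate (all stratum sizes and prevalences below are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical strata: name -> (population size, true prevalence).
strata = {"metropolitan": (800_000, 0.52),
          "urban":        (400_000, 0.47),
          "rural":        (140_000, 0.41)}
n_total = 4_000
N = sum(size for size, _ in strata.values())

estimates = []
for _ in range(1_000):                       # repeat the whole survey
    est = 0.0
    for size, p in strata.values():
        n_h = round(n_total * size / N)      # proportional allocation
        x = rng.binomial(n_h, p)             # sampled cases in the stratum
        est += (size / N) * (x / n_h)        # stratum-weighted estimate
    estimates.append(est)

print(f"mean estimate {np.mean(estimates):.4f}, SE {np.std(estimates):.4f}")
```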

  19. Likelihood ratio sequential sampling models of recognition memory.

    Science.gov (United States)

    Osth, Adam F; Dennis, Simon; Heathcote, Andrew

    2017-02-01

    The mirror effect - a phenomenon whereby a manipulation produces opposite effects on hit and false alarm rates - is a benchmark regularity of recognition memory. A likelihood ratio decision process, basing recognition on the relative likelihood that a stimulus is a target or a lure, naturally predicts the mirror effect, and so has been widely adopted in quantitative models of recognition memory. Glanzer, Hilford, and Maloney (2009) demonstrated that likelihood ratio models, assuming Gaussian memory strength, are also capable of explaining regularities observed in receiver-operating characteristics (ROCs), such as greater target than lure variance. Despite its central place in theorising about recognition memory, however, this class of models has not been tested using response time (RT) distributions. In this article, we develop a linear approximation to the likelihood ratio transformation, which we show predicts the same regularities as the exact transformation. This development enabled us to develop a tractable model of recognition-memory RT based on the diffusion decision model (DDM), with inputs (drift rates) provided by an approximate likelihood ratio transformation. We compared this "LR-DDM" to a standard DDM where all targets and lures receive their own drift rate parameters. Both were implemented as hierarchical Bayesian models and applied to four datasets. Model selection taking into account parsimony favored the LR-DDM, which requires fewer parameters than the standard DDM but still fits the data well. These results support log-likelihood based models as providing an elegant explanation of the regularities of recognition memory, not only in terms of choices made but also in terms of the times it takes to make them. Copyright © 2016 Elsevier Inc. All rights reserved.
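
    The way a likelihood ratio decision rule produces the mirror effect can be shown in a few lines for the equal-variance Gaussian case (a simplification of the models discussed, which also allow unequal target and lure variances):

```python
import numpy as np
from scipy.stats import norm

def hit_fa_rates(d_prime, criterion_lr=1.0):
    """Hit/false-alarm rates when responding 'old' iff LR(x) > criterion.

    Equal-variance Gaussian model: targets ~ N(d', 1), lures ~ N(0, 1),
    so log LR(x) = d' * x - d'**2 / 2 and LR > c is a threshold on x.
    """
    x_crit = (np.log(criterion_lr) + d_prime ** 2 / 2) / d_prime
    hit = 1 - norm.cdf(x_crit, loc=d_prime)
    fa = 1 - norm.cdf(x_crit, loc=0.0)
    return hit, fa

for d in (0.5, 1.0, 1.5, 2.0):   # stronger memory: hits up, false alarms down
    h, f = hit_fa_rates(d)
    print(f"d' = {d:.1f}: hit rate = {h:.3f}, false-alarm rate = {f:.3f}")
```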

  20. Sample size calculations based on a difference in medians for positively skewed outcomes in health care studies

    Directory of Open Access Journals (Sweden)

    Aidan G. O’Keeffe

    2017-12-01

    Full Text Available Background: In healthcare research, outcomes with skewed probability distributions are common. Sample size calculations for such outcomes are typically based on estimates on a transformed scale (e.g. log), which may sometimes be difficult to obtain. In contrast, estimates of median and variance on the untransformed scale are generally easier to pre-specify. The aim of this paper is to describe how to calculate a sample size for a two group comparison of interest based on median and untransformed variance estimates for log-normal outcome data. Methods: A log-normal distribution for outcome data is assumed, and a sample size calculation approach for a two-sample t-test that compares log-transformed outcome data is demonstrated, where the change of interest is specified as a difference in median values on the untransformed scale. A simulation study is used to compare the method with a non-parametric alternative (Mann-Whitney U test) in a variety of scenarios, and the method is applied to a real example in neurosurgery. Results: The method attained the nominal power value in simulation studies and was favourable in comparison to a Mann-Whitney U test and a two-sample t-test of untransformed outcomes. In addition, the method can be adjusted and used in some situations where the outcome distribution is not strictly log-normal. Conclusions: We recommend the use of this sample size calculation approach for outcome data that are expected to be positively skewed and where a two group comparison on a log-transformed scale is planned. An advantage of this method over usual calculations based on estimates on the log-transformed scale is that it allows clinical efficacy to be specified as a difference in medians and requires a variance estimate on the untransformed scale. Such estimates are often easier to obtain and more interpretable than those for log-transformed outcomes.
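
    Assuming strictly log-normal outcomes, the conversion the paper describes can be sketched as follows: recover the log-scale variance from the raw-scale median and variance, take the log of the ratio of medians as the mean difference, and plug both into a standard two-sample formula. The constants and example numbers here are illustrative, and the exact formula should be checked against the paper before use.

```python
import numpy as np
from scipy.stats import norm

def lognormal_sigma2(median, variance):
    """Log-scale variance from a raw-scale median and variance.

    For LN(mu, s2): median = exp(mu) and var = (exp(s2) - 1) * exp(2*mu + s2),
    so with k = var / median**2 we get exp(s2)**2 - exp(s2) - k = 0.
    """
    k = variance / median ** 2
    return np.log((1 + np.sqrt(1 + 4 * k)) / 2)

def n_per_arm_median_diff(median1, median2, variance, alpha=0.05, power=0.80):
    """n per arm for a t-test on logs, with the effect given as a
    difference in raw-scale medians (common log-scale variance assumed,
    derived here from the control-arm median and variance)."""
    s2 = lognormal_sigma2(median1, variance)
    delta = abs(np.log(median2) - np.log(median1))
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * s2 * (z / delta) ** 2

print(round(n_per_arm_median_diff(median1=10, median2=13, variance=40)))  # ~61
```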

  1. Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling

    Directory of Open Access Journals (Sweden)

    Hyun-Joo Oh

    2017-01-01

    Full Text Available This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total, and each time it was applied to the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences repeatedly, ten times. The ten landslide susceptibility maps were obtained from the integration of causative factors that assigned FR scores. The landslide susceptibility maps were validated by using each validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%, i.e., consistently higher than 89%. Moreover, the ten-fold iterative FR modeling may contribute to a better understanding of a regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis, and the approach can also be extensively applied to other areas.
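
    The FR score itself is a simple proportion-of-proportions. A sketch for one causative factor on a synthetic raster (arrays and probabilities below are made up): classes with FR > 1 are relatively susceptible, and a susceptibility index is obtained by summing FR scores over all factors.

```python
import numpy as np

def frequency_ratio(factor_classes, landslide_mask):
    """FR per class = (% of landslide cells in the class) /
    (% of all cells in the class); FR > 1 marks susceptible classes."""
    scores = {}
    total_cells = factor_classes.size
    total_slides = landslide_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_slides = landslide_mask[in_class].sum() / total_slides
        pct_cells = in_class.sum() / total_cells
        scores[int(c)] = round(pct_slides / pct_cells, 2)
    return scores

rng = np.random.default_rng(4)
slope_class = rng.integers(0, 4, size=10_000)               # binned slope angle
landslide = rng.random(10_000) < 0.02 * (1 + slope_class)   # steeper -> more slides
print(frequency_ratio(slope_class, landslide))              # FR rises with class
```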

  2. Mortality, fertility, and the OY ratio in a model hunter-gatherer system.

    Science.gov (United States)

    White, Andrew A

    2014-06-01

    An agent-based model (ABM) is used to explore how the ratio of old to young adults (the OY ratio) in a sample of dead individuals is related to aspects of mortality, fertility, and longevity experienced by the living population from which the sample was drawn. The ABM features representations of rules, behaviors, and constraints that affect person- and household-level decisions about marriage, reproduction, and infant mortality in hunter-gatherer systems. The demographic characteristics of the larger model system emerge through human-level interactions playing out in the context of "global" parameters that can be adjusted to produce a range of mortality and fertility conditions. Model data show a relationship between the OY ratios of living populations (the living OY ratio) and assemblages of dead individuals drawn from those populations (the dead OY ratio) that is consistent with that from empirically known ethnographic hunter-gatherer cases. The dead OY ratio is clearly related to the mean ages, mean adult mortality rates, and mean total fertility rates experienced by living populations in the model. Sample size exerts a strong effect on the accuracy with which the calculated dead OY ratio reflects the actual dead OY ratio of the complete assemblage. These results demonstrate that the dead OY ratio is a potentially useful metric for paleodemographic analysis of changes in mortality and mean age, and suggest that, in general, hunter-gatherer populations with higher mortality, higher fertility, and lower mean ages are characterized by lower dead OY ratios. Copyright © 2014 Wiley Periodicals, Inc.

  3. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
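
    For the median/minimum/maximum scenario, the size-aware estimators proposed by Wan et al. take the form X̄ ≈ (a + 2m + b)/4 and S ≈ (b − a)/ξ(n) with ξ(n) = 2Φ⁻¹((n − 0.375)/(n + 0.25)). A sketch (the constants follow the published formulas as understood here and should be verified against the paper):

```python
from scipy.stats import norm

def mean_sd_from_median_range(a, m, b, n):
    """Estimate the sample mean and SD from min a, median m, max b, size n."""
    mean = (a + 2 * m + b) / 4                    # Hozo-style mean estimate
    xi = 2 * norm.ppf((n - 0.375) / (n + 0.25))   # size-dependent range divisor
    return mean, (b - a) / xi

print(mean_sd_from_median_range(a=2.0, m=7.5, b=14.0, n=50))
```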

  4. Head-body ratio as a visual cue for stature in people and sculptural art

    OpenAIRE

    Mather, George

    2010-01-01

    Body size is crucial for determining the outcome of competition for resources and mates. Many species use acoustic cues to measure caller body size. Vision is the pre-eminent sense for humans, but visual depth cues are of limited utility in judgments of absolute body size. The reliability of internal body proportion as a potential cue to stature was assessed with a large sample of anthropometric data, and the ratio of head height to body height (HBR) was found to be highly correlated with sta...

  5. Performance and separation occurrence of binary probit regression estimator using the maximum likelihood method and Firth's approach under different sample sizes

    Science.gov (United States)

    Lusiana, Evellin Dewi

    2017-12-01

    The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, the MLE method has a limitation if the binary data contain separation. Separation is the condition where one or several independent variables exactly group the categories of the binary response. It causes the MLE estimators to fail to converge, so that they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in binary probit regression between the MLE method and Firth's approach. Second, to compare the performance of the binary probit regression model estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are performed using simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method for small sample sizes is higher than with Firth's approach. On the other hand, for larger sample sizes, the probability decreases and is nearly identical between the MLE method and Firth's approach. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes. For larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperformed the MLE estimators.
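
    Complete separation by a single covariate can be detected with a quick range-overlap check, which also shows why it is more common in small samples. A sketch (quasi-complete separation and separation by linear combinations of covariates require more care):

```python
import numpy as np

def separated_by_single_covariate(X, y):
    """Flag covariates whose value ranges for y=0 and y=1 do not overlap
    (complete separation); probit/logit MLE then fails to converge."""
    flags = []
    for j in range(X.shape[1]):
        x0, x1 = X[y == 0, j], X[y == 1, j]
        flags.append(bool(x0.max() < x1.min() or x1.max() < x0.min()))
    return flags

rng = np.random.default_rng(5)
n = 12                                  # small sample: separation is likely
X = rng.normal(size=(n, 2))
y = (X[:, 0] > 0).astype(int)           # response perfectly split by column 0
print(separated_by_single_covariate(X, y))   # expect [True, False]
```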

  6. Sample size estimation to substantiate freedom from disease for clustered binary data with a specific risk profile

    DEFF Research Database (Denmark)

    Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.

    2013-01-01

    and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity......, thus, optimizing resource allocation. A VPC-based predictive simulation method for sample size estimation to substantiate freedom from disease is presented. To illustrate the benefits of the proposed approach we give two examples with the analysis of data from a risk factor study on Mycobacterium avium...

  7. Analysis of time series and size of equivalent sample

    International Nuclear Information System (INIS)

    Bernal, Nestor; Molina, Alicia; Pabon, Daniel; Martinez, Jorge

    2004-01-01

    In a meteorological context, a first approach to the modeling of time series is to use models of autoregressive type. This allows one to take into account the meteorological persistence or temporal behavior, thereby identifying the memory of the analyzed process. This article presents the concept of the size of an equivalent sample, which helps to identify sub-periods with a similar structure in a data series. Moreover, in this article we examine the alternative of adjusting the variance of the series, keeping in mind its temporal structure, as well as an adjustment to the covariance of two time series. This article presents two examples, the first corresponding to seven simulated series with an autoregressive structure of first order, and the second corresponding to seven meteorological series of anomalies of the air temperature at the surface in two Colombian regions.
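
    For an AR(1) process with lag-one autocorrelation ρ, a classical expression for the equivalent (effective) sample size is n_eff ≈ n(1 − ρ)/(1 + ρ). The sketch below assumes this standard approximation rather than the authors' exact procedure:

```python
import numpy as np

def effective_sample_size_ar1(series):
    """Equivalent sample size n(1 - rho)/(1 + rho), with rho the lag-1
    autocorrelation, for an AR(1)-like series."""
    x = np.asarray(series, float) - np.mean(series)
    rho = np.dot(x[:-1], x[1:]) / np.dot(x, x)
    return len(x) * (1 - rho) / (1 + rho), rho

rng = np.random.default_rng(6)
n, phi = 500, 0.7
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):                    # simulate an AR(1) anomaly series
    x[t] = phi * x[t - 1] + rng.normal()
print(effective_sample_size_ar1(x))      # roughly 500 * 0.3 / 1.7 ~ 88
```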

  8. Size dependence of non-magnetic thickness in YIG nanoparticles

    Energy Technology Data Exchange (ETDEWEB)

    Niyaifar, M., E-mail: md.niyaifar@gmail.com; Mohammadpour, H.; Dorafshani, M.; Hasanpour, A.

    2016-07-01

    This study is focused on the particle size dependence of structural and magnetic properties in yttrium iron garnet (Y{sub 3}Fe{sub 5}O{sub 12}) nanoparticles. A series of YIG samples with different particle sizes was produced by varying the annealing temperatures. The X-ray analysis revealed an inverse correlation between the lattice parameter and the crystallite size. A normal distribution is used for fitting the particle size distribution, which is extracted from scanning electron micrographs. Also, using the results of a vibrating sample magnetometer, the magnetic diameter was calculated based on the Langevin model in order to investigate the variation of the dead layer thickness. Furthermore, the observed line broadening in Mössbauer spectra confirmed the increase of non-magnetic thickness due to the reduction of particle size. - Highlights: • Pure-phase Y{sub 3}Fe{sub 5}O{sub 12} nanoparticles are fabricated with different particle sizes by thermal treatment. • The size effect on magnetic properties is studied with a core/shell (magnetic/nonmagnetic) model. • The logarithmic variation of the (dead layer thickness)/(particle size) ratio with the particle size is investigated. • The results of Mössbauer spectroscopy are explained based on the correlation between lattice constant and particle size variation.

  9. Sample size requirements for studies of treatment effects on beta-cell function in newly diagnosed type 1 diabetes.

    Science.gov (United States)

    Lachin, John M; McGee, Paula L; Greenbaum, Carla J; Palmer, Jerry; Pescovitz, Mark D; Gottlieb, Peter; Skyler, Jay

    2011-01-01

    Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analysis of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects that allow sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to accurately

  11. Sample size for comparing negative binomial rates in noninferiority and equivalence trials with unequal follow-up times.

    Science.gov (United States)

    Tang, Yongqiang

    2017-05-25

    We derive the sample size formulae for comparing two negative binomial rates based on both the relative and absolute rate difference metrics in noninferiority and equivalence trials with unequal follow-up times, and establish an approximate relationship between the sample sizes required for the treatment comparison based on the two treatment effect metrics. The proposed method allows the dispersion parameter to vary by treatment group. The accuracy of these methods is assessed by simulations. It is demonstrated that ignoring the between-subject variation in follow-up time by setting the follow-up time for all individuals to be the mean follow-up time may greatly underestimate the required sample size, resulting in underpowered studies. Methods are provided for back-calculating the dispersion parameter based on published summary results.
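
    A minimal sketch of a normal-approximation version of this kind of calculation: sample size for a noninferiority comparison of two negative binomial rates on the log rate-ratio scale, with arm-specific dispersion. This is a generic textbook approximation with hypothetical parameter names, not Tang's exact formulae; note that it commits exactly the simplification the paper warns about by plugging in a single mean follow-up time for every subject.

```python
import math
from scipy.stats import norm

def nb_noninferiority_n(rate_c, rate_t, disp_c, disp_t, t_mean,
                        margin, alpha=0.025, power=0.80, ratio=1.0):
    """n in the control arm (treatment arm enrols ratio * n).

    rate_c, rate_t : event rates per unit follow-up time
    disp_c, disp_t : negative binomial dispersion parameters
    t_mean         : mean follow-up time per subject (a simplification
                     that can underestimate n when follow-up varies)
    margin         : noninferiority margin on the rate-ratio scale (> 1)
    """
    z = norm.ppf(1 - alpha) + norm.ppf(power)
    # Approximate per-subject contribution to Var(log rate ratio).
    v = (1 / (rate_c * t_mean) + disp_c) + (1 / (rate_t * t_mean) + disp_t) / ratio
    delta = math.log(margin) - math.log(rate_t / rate_c)
    return math.ceil(z ** 2 * v / delta ** 2)

# Equal true rates, margin 1.25, one year of mean follow-up (illustrative):
print(nb_noninferiority_n(0.8, 0.8, 0.6, 0.6, 1.0, margin=1.25))
```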

  12. Bioethanol Production by Calcium Alginate-Immobilised St1 Yeast System: Effects of Size of Beads, Ratio and Concentration

    Directory of Open Access Journals (Sweden)

    Masniroszaime Md Zain

    2011-12-01

    Full Text Available Immobilized yeast-cell technology possesses several advantages in bioethanol production because it can increase the ethanol yield by eliminating unit processes, so expenses for cell recovery and reutilization can be minimised. The aim of this study was to investigate the influence of three parameters (substrate concentration, alginate bead size, and ratio of bead volume to medium volume) on a locally isolated yeast (ST1) immobilized in a calcium alginate fermentation system. The parameters that most affected ethanol production by the calcium alginate-immobilised ST1 yeast system were the ratio of bead volume to substrate volume and the LBS concentration. The highest theoretical yield, 78%, was obtained with ST1-alginate beads of 0.5 cm size, a bead-to-LBS-medium volume ratio of 0.4, and an LBS concentration of 150 g/l.

  13. Sampling of illicit drugs for quantitative analysis--part II. Study of particle size and its influence on mass reduction.

    Science.gov (United States)

    Bovens, M; Csesztregi, T; Franc, A; Nagy, J; Dujourdy, L

    2014-01-01

    The basic goal in sampling for the quantitative analysis of illicit drugs is to maintain the average concentration of the drug in the material from its original seized state (the primary sample) all the way through to the analytical sample, where the effect of particle size is most critical. The size of the largest particles of different authentic illicit drug materials, in their original state and after homogenisation, using manual or mechanical procedures, was measured using a microscope with a camera attachment. The comminution methods employed included pestle and mortar (manual) and various ball and knife mills (mechanical). The drugs investigated were amphetamine, heroin, cocaine and herbal cannabis. It was shown that comminution of illicit drug materials using these techniques reduces the nominal particle size from approximately 600 μm down to between 200 and 300 μm. It was demonstrated that the choice of 1 g increments for the primary samples of powdered drugs and cannabis resin, which were used in the heterogeneity part of our study (Part I) was correct for the routine quantitative analysis of illicit seized drugs. For herbal cannabis we found that the appropriate increment size was larger. Based on the results of this study we can generally state that: An analytical sample weight of between 20 and 35 mg of an illicit powdered drug, with an assumed purity of 5% or higher, would be considered appropriate and would generate an RSDsampling in the same region as the RSDanalysis for a typical quantitative method of analysis for the most common, powdered, illicit drugs. For herbal cannabis, with an assumed purity of 1% THC (tetrahydrocannabinol) or higher, an analytical sample weight of approximately 200 mg would be appropriate. In Part III we will pull together our homogeneity studies and particle size investigations and use them to devise sampling plans and sample preparations suitable for the quantitative instrumental analysis of the most common illicit drugs.

  14. Evaluation of species richness estimators based on quantitative performance measures and sensitivity to patchiness and sample grain size

    Science.gov (United States)

    Willie, Jacob; Petre, Charles-Albert; Tagg, Nikki; Lens, Luc

    2012-11-01

    Data from forest herbaceous plants in a site of known species richness in Cameroon were used to test the performance of rarefaction and eight species richness estimators (ACE, ICE, Chao1, Chao2, Jack1, Jack2, Bootstrap and MM). Bias, accuracy, precision and sensitivity to patchiness and sample grain size were the evaluation criteria. An evaluation of the effects of sampling effort and patchiness on diversity estimation is also provided. Stems were identified and counted in linear series of 1-m2 contiguous square plots distributed in six habitat types. Initially, 500 plots were sampled in each habitat type. The sampling process was monitored using rarefaction and a set of richness estimator curves. Curves from the first dataset suggested adequate sampling in riparian forest only. Additional plots ranging from 523 to 2143 were subsequently added in the undersampled habitats until most of the curves stabilized. Jack1 and ICE, the non-parametric richness estimators, performed better, being more accurate and less sensitive to patchiness and sample grain size, and significantly reducing biases that could not be detected by rarefaction and other estimators. This study confirms the usefulness of non-parametric incidence-based estimators, and recommends Jack1 or ICE alongside rarefaction while describing taxon richness and comparing results across areas sampled using similar or different grain sizes. As patchiness varied across habitat types, accurate estimations of diversity did not require the same number of plots. The number of samples needed to fully capture diversity is not necessarily the same across habitats, and can only be known when taxon sampling curves have indicated adequate sampling. Differences in observed species richness between habitats were generally due to differences in patchiness, except between two habitats where they resulted from differences in abundance. We suggest that communities should first be sampled thoroughly.

  15. Radiographic measurement of the cardiothoracic ratio in pet macaques from Sulawesi, Indonesia

    Energy Technology Data Exchange (ETDEWEB)

    Schillaci, Michael A. [Department of Social Sciences, University of Toronto Scarborough, 1265 Military Trail, Toronto, Ontario M1C 1A4 (Canada)], E-mail: schillaci@utsc.utoronto.ca; Parish, Stephanie [Department of Social Sciences, University of Toronto Scarborough, 1265 Military Trail, Toronto, Ontario M1C 1A4 (Canada); Jones-Engel, Lisa [National Primate Research Center, University of Washington, 1705 N.E. Pacific Street, Seattle, WA 98195 (United States)

    2009-11-15

    The relative size of the heart, as measured by the cardiothoracic ratio, is often used as an index of ventricular hypertrophy, an important measure of myocardial pathophysiology in human primates. Despite its widespread use in human medicine, use of the cardiothoracic ratio in nonhuman primate veterinary medicine has been poorly documented. This report describes the results of our radiographic study of the cardiothoracic ratio in a sample of pet monkeys from Sulawesi, Indonesia. We assessed the effects of age and sex on cardiothoracic ratios, and compared our estimates with those presented in the literature for the Formosan macaque (Macaca cyclopis). Our results indicated a significant difference between the Sulawesi macaque species groupings in cardiothoracic ratios. Sex and age-related differences were not significant. Comparisons of cardiothoracic ratios with published ratios indicated similarity between M. cyclopis and Macaca nigra, but not between M. cyclopis and Macaca tonkeana.

  16. Radiographic measurement of the cardiothoracic ratio in pet macaques from Sulawesi, Indonesia

    International Nuclear Information System (INIS)

    Schillaci, Michael A.; Parish, Stephanie; Jones-Engel, Lisa

    2009-01-01

    The relative size of the heart, as measured by the cardiothoracic ratio, is often used as an index of ventricular hypertrophy, an important measure of myocardial pathophysiology in human primates. Despite its widespread use in human medicine, use of the cardiothoracic ratio in nonhuman primate veterinary medicine has been poorly documented. This report describes the results of our radiographic study of the cardiothoracic ratio in a sample of pet monkeys from Sulawesi, Indonesia. We assessed the effects of age and sex on cardiothoracic ratios, and compared our estimates with those presented in the literature for the Formosan macaque (Macaca cyclopis). Our results indicated a significant difference between the Sulawesi macaque species groupings in cardiothoracic ratios. Sex and age-related differences were not significant. Comparisons of cardiothoracic ratios with published ratios indicated similarity between M. cyclopis and Macaca nigra, but not between M. cyclopis and Macaca tonkeana.

  17. Estimating the Effective Sample Size of Tree Topologies from Bayesian Phylogenetic Analyses

    Science.gov (United States)

    Lanfear, Robert; Hua, Xia; Warren, Dan L.

    2016-01-01

    Bayesian phylogenetic analyses estimate posterior distributions of phylogenetic tree topologies and other parameters using Markov chain Monte Carlo (MCMC) methods. Before making inferences from these distributions, it is important to assess their adequacy. To this end, the effective sample size (ESS) estimates how many truly independent samples of a given parameter the output of the MCMC represents. The ESS of a parameter is frequently much lower than the number of samples taken from the MCMC because sequential samples from the chain can be non-independent due to autocorrelation. Typically, phylogeneticists use a rule of thumb that the ESS of all parameters should be greater than 200. However, we have no method to calculate an ESS of tree topology samples, despite the fact that the tree topology is often the parameter of primary interest and is almost always central to the estimation of other parameters. That is, we lack a method to determine whether we have adequately sampled one of the most important parameters in our analyses. In this study, we address this problem by developing methods to estimate the ESS for tree topologies. We combine these methods with two new diagnostic plots for assessing posterior samples of tree topologies, and compare their performance on simulated and empirical data sets. Combined, the methods we present provide new ways to assess the mixing and convergence of phylogenetic tree topologies in Bayesian MCMC analyses. PMID:27435794
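
    The scalar-parameter ESS that this work generalizes to tree topologies is the chain length divided by the integrated autocorrelation time. A minimal sketch for a scalar trace, truncating the autocorrelation sum at the first non-positive lag (a simple, common cut-off; the paper's topological methods are more involved):

```python
import numpy as np

def effective_sample_size(x):
    """ESS of a scalar MCMC trace: n / (1 + 2 * sum of autocorrelations),
    with the sum truncated at the first non-positive lag."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (x.var() * n)
    tau = 1.0
    for k in range(1, n):
        if acf[k] <= 0:
            break
        tau += 2.0 * acf[k]
    return n / tau

# A strongly autocorrelated AR(1) chain has ESS far below n:
rng = np.random.default_rng(0)
chain = np.zeros(5000)
for i in range(1, chain.size):
    chain[i] = 0.95 * chain[i - 1] + rng.normal()
print(effective_sample_size(chain))  # roughly n * (1 - 0.95) / (1 + 0.95)
```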

  18. Lift-off process for deep-submicron-size junctions using supercritical CO2

    International Nuclear Information System (INIS)

    Fukushima, A.; Kubota, H.; Yuasa, S.; Takahachi, T.; Kadoriku, S.; Miyake, K.

    2007-01-01

    Deep-submicron-size (∼100-nm-size) junctions are a key element for investigating spin-torque transfer phenomena such as current-induced magnetization reversal or the spin-torque diode effect. In the fabrication of submicron-size junctions using an etching method, the lift-off process after the etching step becomes more difficult as the size of the junctions shrinks. In this study, we present a new lift-off process using supercritical CO 2 . In this process, the samples were immersed in solvent (a mixture of N-Methyl-2-pyrrolidone and isopropanol) and pressurized with CO 2 gas. The CO 2 then entered the supercritical phase and the solvent was removed by a continuous flow of CO 2 . We obtained a considerable yield rate (success ratio of the lift-off process) of more than 50% for samples down to 100-nm-size junctions.

  19. Breeding sex ratio and population size of loggerhead turtles from Southwestern Florida.

    Directory of Open Access Journals (Sweden)

    Jacob A Lasala

    Full Text Available Species that display temperature-dependent sex determination are at risk as a result of increasing global temperatures. For marine turtles, high incubation temperatures can skew sex ratios towards females. There are concerns that temperature increases may result in highly female-biased offspring sex ratios, which would drive a future sex ratio skew. Studying the sex ratios of adults in the ocean is logistically very difficult because individuals are widely distributed and males are inaccessible because they remain in the ocean. Breeding sex ratios (BSR) are sought as a functional alternative to study adult sex ratios. One way to examine BSR is to determine the number of males that contribute to nests. Our goal was to evaluate the BSR for loggerhead turtles (Caretta caretta) nesting along the eastern Gulf of Mexico in Florida, from 2013-2015, encompassing three nesting seasons. We genotyped 64 nesting females (approximately 28% of all turtles nesting at that time) and up to 20 hatchlings from their nests (n = 989) using 7 polymorphic microsatellite markers. We identified multiple paternal contributions in 70% of the nests analyzed and 126 individual males. The breeding sex ratio was approximately 1 female for every 2.5 males. We did not find repeat males in any of our nests. The sex ratio and lack of repeating males were surprising because of female-biased primary sex ratios. We hypothesize that females mate offshore of their nesting beaches as well as en route. We recommend further comparisons of subsequent nesting events and of other beaches as it is imperative to establish baseline breeding sex ratios to understand how growing populations behave before extreme environmental effects are evident.

  20. Internal jugular vein:peripheral vein adrenocorticotropic hormone ratio in patients with adrenocorticotropic hormone-dependent Cushing's syndrome: Ratio calculated from one adrenocorticotropic hormone sample each from right and left internal jugular vein during corticotrophin releasing hormone stimulation test

    Directory of Open Access Journals (Sweden)

    Sachin Chittawar

    2013-01-01

    Full Text Available Background: Demonstration of a central:peripheral adrenocorticotropic hormone (ACTH) gradient is important for the diagnosis of Cushing's disease. Aim: The aim was to assess the utility of the internal jugular vein (IJV):peripheral vein ACTH ratio for diagnosis of Cushing's disease. Materials and Methods: Patients with ACTH-dependent Cushing's syndrome (CS) were the subjects of this study. One blood sample each was collected from the right and left IJV following intravenous hCRH at 3 and 5 min, respectively. A simultaneous peripheral vein sample was also collected with each IJV sample for calculation of the IJV:peripheral vein ACTH ratio. IJV sample collection was done under ultrasound guidance. ACTH was assayed using electrochemiluminescence immunoassay (ECLIA). Results: Thirty-two patients participated in this study. The IJV:peripheral vein ACTH ratio ranged from 1.07 to 6.99 (n = 32). It was more than 1.6 in 23 patients. Cushing's disease could be confirmed in 20 of the 23 cases with an IJV:peripheral vein ratio more than 1.6. Four patients with Cushing's disease and 2 patients with ectopic ACTH syndrome had an IJV:peripheral vein ACTH ratio less than 1.6. Six cases with an unknown ACTH source were excluded from the calculation of sensitivity and specificity of the test. Conclusion: The IJV:peripheral vein ACTH ratio calculated from a single sample from each IJV obtained after hCRH had 83% sensitivity and 100% specificity for diagnosis of CD.

  1. Comparison of measurement of 99mTc-MAG3 plasma clearance by single plasma sample and renal uptake ratio

    International Nuclear Information System (INIS)

    Ushijima, Yo; Sugihara, Hiroki; Okuyama, Chio; Okitsu, Sigeyuki; Nii, Takeshi; Nishida, Takuji; Okamoto, Kunio; Maeda, Tomoho

    1997-01-01

    Measurement of 99m Tc-MAG 3 plasma clearance based on a one-compartment model (MPC method) is a non-invasive method using the renal uptake ratio. We evaluated the clinical usefulness of this method compared with effective renal plasma flow (ERPF) using 123 I-OIH and two single-plasma-sample methods using 99m Tc-MAG 3 (the Russell method and the Bubeck method). The ratio of 99m Tc-MAG 3 clearance to ERPF was 1.00±0.26. The MPC method correlated well with the Russell and Bubeck methods (r=0.904, r=0.897). We conclude that the MPC method is a suitable replacement for the single-plasma-sample methods in routine clinical use. (author)

  2. Sex-specific effects of altered competition on nestling growth and survival: an experimental manipulation of brood size and sex ratio.

    Science.gov (United States)

    Nicolaus, Marion; Michler, Stephanie P M; Ubels, Richard; van der Velde, Marco; Komdeur, Jan; Both, Christiaan; Tinbergen, Joost M

    2009-03-01

    1. An increase of competition among adults or nestlings usually negatively affects breeding output. Yet little is known about the differential effects that competition has on the offspring sexes. This could be important because it may influence parental reproductive decisions. 2. In sexually size-dimorphic species, two main contradictory mechanisms are proposed regarding sex-specific effects of competition on nestling performance, assuming that parents do not feed their chicks differentially: (i) the larger sex requires more resources to grow and is more sensitive to a deterioration of the rearing conditions ('costly sex hypothesis'); (ii) the larger sex has a competitive advantage in intra-brood competition and performs better under adverse conditions ('competitive advantage hypothesis'). 3. In the present study, we manipulated the level of sex-specific sibling competition in a great tit population (Parus major) by simultaneously altering the brood size and the brood sex ratio on two levels: the nest (competition for food among nestlings) and the woodlot where the parents breed (competition for food among adults). We investigated whether altered competition during the nestling phase affected nestling growth traits and survival in the nest and whether the effects differed between males, the larger sex, and females. 4. We found a strong negative and sex-specific effect of experimental brood size on all the nestling traits. In enlarged broods, sexual size dimorphism was smaller, which may have resulted from mortality biased towards the less competitive individuals, i.e. females of low condition. No effect of brood sex ratio on nestling growth traits was found. 5. Negative brood size effects on nestling traits were stronger in natural high-density areas, but we could not confirm this experimentally. 6. Our results did not support the 'costly sex hypothesis' because males did not suffer from higher mortality under harsh conditions.

  3. Efficient isotope ratio analysis of uranium particles in swipe samples by total-reflection x-ray fluorescence spectrometry and secondary ion mass spectrometry

    International Nuclear Information System (INIS)

    Esaka, Fumitaka; Watanabe, Kazuo; Fukuyama, Hiroyasu; Onodera, Takashi; Esaka, Konomi T.; Magara, Masaaki; Sakurai, Satoshi; Usuda, Shigekazu

    2004-01-01

    A new particle recovery method and a sensitive screening method were developed for subsequent isotope ratio analysis of uranium particles in safeguards swipe samples. The particles in the swipe sample were recovered onto a carrier by means of a vacuum suction-impact collection method. When a grease coating was applied to the carrier, the recovery efficiency was improved to 48±9%, which is superior to that of the conventionally used ultrasonication method. Prior to isotope ratio analysis with secondary ion mass spectrometry (SIMS), total reflection X-ray fluorescence spectrometry (TXRF) was applied to screen the sample for the presence of uranium particles. By the use of Si carriers in TXRF analysis, a detection limit of 22 pg was achieved for uranium. By combining these methods with SIMS, the isotope ratios of 235 U/ 238 U for individual uranium particles were efficiently determined. (author)

  4. SIZES AND TEMPERATURE PROFILES OF QUASAR ACCRETION DISKS FROM CHROMATIC MICROLENSING

    International Nuclear Information System (INIS)

    Blackburne, Jeffrey A.; Pooley, David; Rappaport, Saul; Schechter, Paul L.

    2011-01-01

    Microlensing perturbations to the flux ratios of gravitationally lensed quasar images can vary with wavelength because of the chromatic dependence of the accretion disk's apparent size. Multiwavelength observations of microlensed quasars can thus constrain the temperature profiles of their accretion disks, a fundamental test of an important astrophysical process which is not currently possible using any other method. We present single-epoch broadband flux ratios for 12 quadruply lensed quasars in 8 bands ranging from 0.36 to 2.2 μm, as well as Chandra 0.5-8 keV flux ratios for five of them. We combine the optical/IR and X-ray ratios, together with X-ray ratios from the literature, using a Bayesian approach to constrain the half-light radii of the quasars in each filter. Comparing the overall disk sizes and wavelength slopes to those predicted by the standard thin accretion disk model, we find that on average the disks are larger than predicted by nearly an order of magnitude, with sizes that grow with wavelength with an average slope of ∼0.2 rather than the slope of 4/3 predicted by the standard thin disk theory. Though the error bars on the slope are large for individual quasars, the large sample size lends weight to the overall result. Our results present severe difficulties for a standard thin accretion disk as the main source of UV/optical radiation from quasars.

  5. Uniform deposition of size-selected clusters using Lissajous scanning

    International Nuclear Information System (INIS)

    Beniya, Atsushi; Watanabe, Yoshihide; Hirata, Hirohito

    2016-01-01

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt n (n = 7, 15, 20) clusters uniformly deposited on the Al 2 O 3 /NiAl(110) surface and demonstrated the importance of uniform deposition.

  6. Uniform deposition of size-selected clusters using Lissajous scanning

    Energy Technology Data Exchange (ETDEWEB)

    Beniya, Atsushi; Watanabe, Yoshihide, E-mail: e0827@mosk.tytlabs.co.jp [Toyota Central R&D Labs., Inc., 41-1 Yokomichi, Nagakute, Aichi 480-1192 (Japan); Hirata, Hirohito [Toyota Motor Corporation, 1200 Mishuku, Susono, Shizuoka 410-1193 (Japan)

    2016-05-15

    Size-selected clusters can be deposited on a surface using size-selected cluster ion beams. However, because of the cross-sectional intensity distribution of the ion beam, it is difficult to define the coverage of the deposited clusters. The aggregation probability of the clusters depends on coverage, so the cluster size on the surface depends on position even though size-selected clusters are deposited. It is crucial, therefore, to deposit clusters uniformly on the surface. In this study, size-selected clusters were deposited uniformly on surfaces by scanning the cluster ions in the form of a Lissajous pattern. Two sets of deflector electrodes set in orthogonal directions were placed in front of the sample surface. Triangular waves were applied to the electrodes with an irrational frequency ratio to ensure that the ion trajectory filled the sample surface. The advantages of this method are the simplicity and low cost of the setup compared with the raster scanning method. The authors further investigated CO adsorption on size-selected Pt n (n = 7, 15, 20) clusters uniformly deposited on the Al 2 O 3 /NiAl(110) surface and demonstrated the importance of uniform deposition.

  7. The case for cases B and C: intrinsic hydrogen line ratios of the broad-line region of active galactic nuclei, reddenings, and accretion disc sizes

    Science.gov (United States)

    Gaskell, C. Martin

    2017-05-01

    Low-redshift active galactic nuclei (AGNs) with extremely blue optical spectral indices are shown to have a mean, velocity-averaged, broad-line Hα/Hβ ratio of ≈2.72 ± 0.04, consistent with a Baker-Menzel Case B value. Comparison of a wide range of properties of the very bluest AGNs with those of a luminosity-matched subset of the Dong et al. blue AGN sample indicates that the only difference is the internal reddening. Ultraviolet fluxes are brighter for the bluest AGNs by an amount consistent with the flat AGN reddening curve of Gaskell et al. The lack of a significant difference in the GALEX (far-ultraviolet minus near-ultraviolet) colour index strongly rules out a steep Small Magellanic Cloud-like reddening curve and also argues against an intrinsically harder spectrum for the bluest AGNs. For very blue AGNs, the Ly α/Hβ ratio is also consistent with being the Case B value. The Case B ratios provide strong support for the self-shielded broad-line model of Gaskell, Klimek & Nazarova. It is proposed that the greatly enhanced Ly α/Hβ ratio at very high velocities is a consequence of continuum fluorescence in the Lyman lines (Case C). Reddenings of AGNs mean that the far-UV luminosity is often underestimated by up to an order of magnitude. This is a major factor causing the discrepancies between measured accretion disc sizes and the predictions of simple accretion disc theory. Dust covering fractions for most AGNs are lower than has been estimated. The total mass in lower mass supermassive black holes must be greater than hitherto estimated.

  8. A Bayesian approach for incorporating economic factors in sample size design for clinical trials of individual drugs and portfolios of drugs.

    Science.gov (United States)

    Patel, Nitin R; Ankolekar, Suresh

    2007-11-30

    Classical approaches to clinical trial design ignore economic factors that determine economic viability of a new drug. We address the choice of sample size in Phase III trials as a decision theory problem using a hybrid approach that takes a Bayesian view from the perspective of a drug company and a classical Neyman-Pearson view from the perspective of regulatory authorities. We incorporate relevant economic factors in the analysis to determine the optimal sample size to maximize the expected profit for the company. We extend the analysis to account for risk by using a 'satisficing' objective function that maximizes the chance of meeting a management-specified target level of profit. We extend the models for single drugs to a portfolio of clinical trials and optimize the sample sizes to maximize the expected profit subject to budget constraints. Further, we address the portfolio risk and optimize the sample sizes to maximize the probability of achieving a given target of expected profit.

  9. The net effect of alternative allocation ratios on recruitment time and trial cost.

    Science.gov (United States)

    Vozdolska, Ralitza; Sano, Mary; Aisen, Paul; Edland, Steven D

    2009-04-01

    Increasing the proportion of subjects allocated to the experimental treatment in controlled clinical trials is often advocated as a method of increasing recruitment rates and improving the performance of trials. The presumption is that the higher likelihood of randomization to the experimental treatment will be perceived by potential study enrollees as an added benefit of participation and will increase recruitment rates and speed the completion of trials. However, studies with alternative allocation ratios require a larger sample size to maintain statistical power, which may result in a net increase in time required to complete recruitment and a net increase in total trial cost. To describe the potential net effect of alternative allocation ratios on recruitment time and trial cost. Models of recruitment time and trial cost were developed and used to compare trials with 1:1 allocation to trials with alternative allocation ratios under a range of per subject costs, per day costs, and enrollment rates. In regard to time required to complete recruitment, alternative allocation ratios are net beneficial if the recruitment rate improves by more than about 4% for trials with a 1.5:1 allocation ratio and 12% for trials with a 2:1 allocation ratio. More substantial improvements in recruitment rate, 13 and 47% respectively for scenarios we considered, are required for alternative allocation to be net beneficial in terms of tangible monetary cost. The cost models were developed expressly for trials comparing proportions or means across treatment groups. Using alternative allocation ratio designs to improve recruitment may or may not be time and cost-effective. Using alternative allocation for this purpose should only be considered for trial contexts where there is both clear evidence that the alternative design does improve recruitment rates and the attained time or cost efficiency justifies the added study subject burden implied by a larger sample size.
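
    The trade-off described here is easy to reproduce. Relative to a balanced design, an r:1 allocation inflates the required total sample size by (1 + r)^2 / (4r), about 4% at 1.5:1 and 12.5% at 2:1, which matches the break-even figures quoted above; the design only saves recruitment time if accrual speeds up by more than that. A sketch with hypothetical accrual numbers:

```python
import math

def total_n(n_balanced, r):
    # Total N with r:1 allocation, for comparing means or proportions:
    # variance-based inflation factor (1 + r)**2 / (4 * r) vs 1:1.
    return math.ceil(n_balanced * (1 + r) ** 2 / (4 * r))

def recruitment_days(n_total, base_rate, uplift):
    # Days to enrol n_total subjects if the unequal allocation lifts
    # the accrual rate (subjects/day) by fraction `uplift`.
    return n_total / (base_rate * (1 + uplift))

n11 = 400  # hypothetical balanced-trial size; 1 subject/day baseline
for r, uplift in [(1.5, 0.04), (1.5, 0.10), (2.0, 0.12), (2.0, 0.25)]:
    days = recruitment_days(total_n(n11, r), base_rate=1.0, uplift=uplift)
    print(f"{r}:1 with {uplift:.0%} faster accrual: "
          f"{days:.0f} days vs {n11} days at 1:1")
```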

  10. Effects of quartz particle size and water-to-solid ratio on hydrothermal synthesis of tobermorite studied by in-situ time-resolved X-ray diffraction

    International Nuclear Information System (INIS)

    Kikuma, J.; Tsunashima, M.; Ishikawa, T.; Matsuno, S.; Ogawa, A.; Matsui, K.; Sato, M.

    2011-01-01

    The hydrothermal synthesis process of tobermorite (5CaO.6SiO 2 .5H 2 O) has been investigated by in-situ X-ray diffraction using high-energy X-rays from a synchrotron radiation source in combination with a purpose-built autoclave cell. Dissolution rates of quartz were largely affected by its particle size distribution in the starting mixtures. However, the composition (Ca/Si) of non-crystalline C-S-H at the start of tobermorite formation was identical regardless of the quartz dissolution rate. The effect of the water-to-solid ratio (w/s) was investigated for samples using fine-particle quartz. Tobermorite did not occur with a w/s of 1.7 but occurred with w/s higher than 3.0. Surprisingly, however, the dissolution curves of quartz were nearly identical for all samples with w/s from 1.7 to 9, indicating that the dissolution rate is dominated by surface area. A possible reaction mechanism for tobermorite formation is discussed in terms of the Ca and/or silicate ion concentration in the liquid phase and the distribution of Ca/Si in non-crystalline C-S-H. - Graphical abstract: A time-resolved XRD data set was obtained at up to 190 deg. C under saturated steam pressure. The tobermorite (5CaO.6SiO 2 .5H 2 O) formation reaction was investigated in detail for several different starting materials. Highlights: → Hydrothermal formation of tobermorite was monitored by in-situ XRD. → Ca/Si of C-S-H at the start time of tobermorite formation was determined. → The Ca/Si value was identical regardless of the quartz particle size in the starting mixture.

  11. Effect of sample moisture content on XRD-estimated cellulose crystallinity index and crystallite size

    Science.gov (United States)

    Umesh P. Agarwal; Sally A. Ralph; Carlos Baez; Richard S. Reiner; Steve P. Verrill

    2017-01-01

    Although X-ray diffraction (XRD) has been the most widely used technique to investigate crystallinity index (CrI) and crystallite size (L200) of cellulose materials, there are not many studies that have taken into account the role of sample moisture on these measurements. The present investigation focuses on a variety of celluloses and cellulose...

  12. The effect of sand/cement ratio on radon exhalation from cement specimens containing 226Ra

    International Nuclear Information System (INIS)

    Takriti, S.; Shweikani, R.; Ali, A. F.; Rajaa, G.

    2002-09-01

    Portland cement was mixed with different kinds of sand (calcite and silica) in different ratios, together with radium chloride, to produce radioactive specimens. The release of radon from these samples was studied. The results showed that radon release from the calcite-cement samples increased as the sand mixing ratio increased up to a fixed value (about 20%), then decreased to below its initial level; the release also changed with the sand grain size. Radon release from the silica-cement samples followed the same pattern. Calcite-cement was found to reduce the radon exhalation more than the silica-cement samples did. The decrease in radon exhalation from the cement-sand mixtures may be due to the creation of free spaces in the samples, which give radon the possibility to decay within these spaces rather than exhale. The radon decay daughters 214 Bi and 214 Pb were detected by gamma measurements of the cement-sand samples. (author)

  13. Reproducibility of 5-HT2A receptor measurements and sample size estimations with [18F]altanserin PET using a bolus/infusion approach

    International Nuclear Information System (INIS)

    Haugboel, Steven; Pinborg, Lars H.; Arfan, Haroon M.; Froekjaer, Vibe M.; Svarer, Claus; Knudsen, Gitte M.; Madsen, Jacob; Dyrby, Tim B.

    2007-01-01

    To determine the reproducibility of measurements of brain 5-HT 2A receptors with an [ 18 F]altanserin PET bolus/infusion approach. Further, to estimate the sample size needed to detect regional differences between two groups and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. For assessment of the variability, six subjects were investigated with [ 18 F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [ 18 F]altanserin PET studies in 84 healthy subjects. Regions of interest were automatically delineated on co-registered MR and PET images. In cortical brain regions with a high density of 5-HT 2A receptors, the outcome parameter (binding potential, BP 1 ) showed high reproducibility, with a median difference between the two group measurements of 6% (range 5-12%), whereas in regions with a low receptor density, BP 1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high receptor density is approximately 27, whereas for low receptor binding regions the required sample size is substantially higher. This study demonstrates that [ 18 F]altanserin PET with a bolus/infusion design has very low variability, particularly in larger brain regions with high 5-HT 2A receptor density. Moreover, partial volume correction considerably reduces the sample size required to detect regional changes between groups. (orig.)
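
    For orientation, the quoted figure of roughly 27 subjects per group follows from the generic two-sample formula when the between-subject variability of the outcome is around 26% of the mean. The sketch below is that generic calculation, not the study's exact one; the 26% value is an assumption chosen to reproduce the quoted n.

```python
import math
from scipy.stats import norm

def pet_n_per_group(cv, pct_diff, alpha=0.05, power=0.80):
    # Generic two-sample sample size for detecting a pct_diff relative
    # difference in a measure whose between-subject SD is cv * mean.
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return math.ceil(2 * (z * cv / pct_diff) ** 2)

# Assumed 26% variability, 20% group difference -> about 27 per group:
print(pet_n_per_group(cv=0.26, pct_diff=0.20))
```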

  14. Effects of sample size on estimation of rainfall extremes at high temperatures

    Science.gov (United States)

    Boessenkool, Berry; Bürger, Gerd; Heistermann, Maik

    2017-09-01

    High precipitation quantiles tend to rise with temperature, following the so-called Clausius-Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.
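
    The methodological point, that a GPD fitted with L-moments keeps rising where empirical quantiles saturate near the sample maximum, can be sketched in a few lines. The estimators below use Hosking's GPD parameterization (shape k = l1/l2 - 2, scale sigma = l1(1 + k)); the data are synthetic, not the 142-station German records.

```python
import numpy as np

def gpd_lmom_quantile(excesses, p):
    """Quantile of threshold excesses from a generalized Pareto
    distribution fitted by L-moments (Hosking's k = -xi shape)."""
    x = np.sort(np.asarray(excesses, dtype=float))
    n = len(x)
    b0 = x.mean()
    b1 = np.sum(np.arange(n) * x) / (n * (n - 1))  # estimate of E[X F(X)]
    l1, l2 = b0, 2 * b1 - b0                       # first two L-moments
    k = l1 / l2 - 2                                # GPD shape (Hosking)
    sigma = l1 * (1 + k)                           # GPD scale
    return sigma * (1 - (1 - p) ** k) / k

rng = np.random.default_rng(1)
sample = rng.pareto(3.0, size=40)   # small, heavy-tailed sample
# The empirical 99th percentile is capped near the sample maximum,
# while the fitted GPD can extrapolate beyond it:
print(np.quantile(sample, 0.99), gpd_lmom_quantile(sample, 0.99))
```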

  15. Effects of sample size on estimation of rainfall extremes at high temperatures

    Directory of Open Access Journals (Sweden)

    B. Boessenkool

    2017-09-01

    Full Text Available High precipitation quantiles tend to rise with temperature, following the so-called Clausius–Clapeyron (CC) scaling. It is often reported that the CC-scaling relation breaks down and even reverts for very high temperatures. In our study, we investigate this reversal using observational climate data from 142 stations across Germany. One of the suggested meteorological explanations for the breakdown is limited moisture supply. Here we argue that, instead, it could simply originate from undersampling. As rainfall frequency generally decreases with higher temperatures, rainfall intensities as dictated by CC scaling are less likely to be recorded than for moderate temperatures. Empirical quantiles are conventionally estimated from order statistics via various forms of plotting position formulas. They have in common that their largest representable return period is given by the sample size. In small samples, high quantiles are underestimated accordingly. The small-sample effect is weaker, or disappears completely, when using parametric quantile estimates from a generalized Pareto distribution (GPD) fitted with L-moments. For those, we obtain quantiles of rainfall intensities that continue to rise with temperature.

  16. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Annemarie [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden)], E-mail: wagnera@chalmers.se; Boman, Johan [Department of Chemistry, Atmospheric Science, Goeteborg University, SE-412 96 Goeteborg (Sweden); Gatari, Michael J. [Institute of Nuclear Science and Technology, University of Nairobi, P.O. Box 30197-00100, Nairobi (Kenya)

    2008-12-15

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  17. Elemental analysis of size-fractionated particulate matter sampled in Goeteborg, Sweden

    International Nuclear Information System (INIS)

    Wagner, Annemarie; Boman, Johan; Gatari, Michael J.

    2008-01-01

    The aim of the study was to investigate the mass distribution of trace elements in aerosol samples collected in the urban area of Goeteborg, Sweden, with special focus on the impact of different air masses and anthropogenic activities. Three measurement campaigns were conducted during December 2006 and January 2007. A PIXE cascade impactor was used to collect particulate matter in 9 size fractions ranging from 16 to 0.06 μm aerodynamic diameter. Polished quartz carriers were chosen as collection substrates for the subsequent direct analysis by TXRF. To investigate the sources of the analyzed air masses, backward trajectories were calculated. Our results showed that diurnal sampling was sufficient to investigate the mass distribution for Br, Ca, Cl, Cu, Fe, K, Sr and Zn, whereas a 5-day sampling period resulted in additional information on mass distribution for Cr and S. Unimodal mass distributions were found in the study area for the elements Ca, Cl, Fe and Zn, whereas the distributions for Br, Cu, Cr, K, Ni and S were bimodal, indicating high temperature processes as source of the submicron particle components. The measurement period including the New Year firework activities showed both an extensive increase in concentrations as well as a shift to the submicron range for K and Sr, elements that are typically found in fireworks. Further research is required to validate the quantification of trace elements directly collected on sample carriers.

  18. Sampling and chemical analysis by TXRF of size-fractionated ambient aerosols and emissions

    International Nuclear Information System (INIS)

    John, A.C.; Kuhlbusch, T.A.J.; Fissan, H.; Schmidt, K.-G.; Schmidt, F.; Pfeffer, H.-U.; Gladtke, D.

    2000-01-01

    Results of recent epidemiological studies led to new European air quality standards which require the monitoring of particles with aerodynamic diameters ≤ 10 μm (PM 10) and ≤ 2.5 μm (PM 2.5) instead of TSP (total suspended particulate matter). As these ambient air limit values will most likely be exceeded at several locations in Europe, so-called 'action plans' have to be set up to reduce particle concentrations, which requires information about sources and processes of PMx aerosols. For chemical characterization of the aerosols, different samplers were used and total reflection x-ray fluorescence analysis (TXRF) was applied besides other methods (elemental and organic carbon analysis, ion chromatography, atomic absorption spectrometry). For TXRF analysis, a specially designed sampling unit was built in which the particle size classes 10-2.5 μm and 2.5-1.0 μm were directly impacted on TXRF sample carriers. An electrostatic precipitator (ESP) was used as a back-up filter to collect particles <1 μm directly on a TXRF sample carrier. The sampling unit was calibrated in the laboratory and then used for field measurements to determine the elemental composition of the mentioned particle size fractions. One of the field campaigns was carried out at a measurement site in Duesseldorf, Germany, in November 1999. As the composition of the ambient aerosols may have been influenced by a large construction site directly in the vicinity of the station during the field campaign, not only the aerosol particles but also construction material was sampled and analyzed by TXRF. As air quality is affected by natural and anthropogenic sources, the emissions of particles ≤ 10 μm and ≤ 2.5 μm, respectively, have to be determined to estimate their contributions to the so-called coarse and fine particle modes of ambient air. Therefore, an in-stack particle sampling system, a PM 10/PM 2.5 cascade impactor, was developed according to the new ambient air quality standards.

  19. Sample size planning for composite reliability coefficients: accuracy in parameter estimation via narrow confidence intervals.

    Science.gov (United States)

    Terry, Leann; Kelley, Ken

    2012-11-01

    Composite measures play an important role in psychology and related disciplines. Composite measures almost always have error. Correspondingly, it is important to understand the reliability of the scores from any particular composite measure. However, the point estimates of the reliability of composite measures are fallible and thus all such point estimates should be accompanied by a confidence interval. When confidence intervals are wide, there is much uncertainty in the population value of the reliability coefficient. Given the importance of reporting confidence intervals for estimates of reliability, coupled with the undesirability of wide confidence intervals, we develop methods that allow researchers to plan sample size in order to obtain narrow confidence intervals for population reliability coefficients. We first discuss composite reliability coefficients and then provide a discussion on confidence interval formation for the corresponding population value. Using the accuracy in parameter estimation approach, we develop two methods to obtain accurate estimates of reliability by planning sample size. The first method provides a way to plan sample size so that the expected confidence interval width for the population reliability coefficient is sufficiently narrow. The second method ensures that the confidence interval width will be sufficiently narrow with some desired degree of assurance (e.g., 99% assurance that the 95% confidence interval for the population reliability coefficient will be less than W units wide). The effectiveness of our methods was verified with Monte Carlo simulation studies. We demonstrate how to easily implement the methods with easy-to-use and freely available software. ©2011 The British Psychological Society.
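
    One way to sketch the planning logic: choose the smallest n whose expected confidence interval for the reliability coefficient is narrower than a target width. The code below uses a Feldt-type F interval for coefficient alpha as a stand-in for the authors' composite-reliability intervals, so treat it as an illustration of the approach rather than their procedure.

```python
from scipy.stats import f as f_dist

def expected_width(n, k, rel, conf=0.95):
    # Feldt-type F interval for coefficient alpha with n subjects and
    # k items, evaluated at an assumed population reliability `rel`.
    a = 1 - conf
    df1, df2 = n - 1, (n - 1) * (k - 1)
    lo = 1 - (1 - rel) * f_dist.ppf(1 - a / 2, df1, df2)
    hi = 1 - (1 - rel) * f_dist.ppf(a / 2, df1, df2)
    return hi - lo

def n_for_width(k, rel, target_w, conf=0.95):
    # Increase n until the expected interval is sufficiently narrow.
    n = 10
    while expected_width(n, k, rel, conf) > target_w:
        n += 1
    return n

# E.g. a 10-item scale, assumed reliability .85, CI no wider than .10:
print(n_for_width(k=10, rel=0.85, target_w=0.10))
```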

  20. Required sample size for monitoring stand dynamics in strict forest reserves: a case study

    Science.gov (United States)

    Diego Van Den Meersschaut; Bart De Cuyper; Kris Vandekerkhove; Noel Lust

    2000-01-01

    Stand dynamics in European strict forest reserves are commonly monitored using inventory densities of 5 to 15 percent of the total surface. The assumption that these densities guarantee a representative image of certain parameters is critically analyzed in a case study for the parameters basal area and stem number. The required sample sizes for different accuracy and...

  1. Effect of particle size ratio on the conducting percolation threshold of granular conductive-insulating composites

    International Nuclear Information System (INIS)

    He Da; Ekere, N N

    2004-01-01

    In this paper, we apply Monte Carlo simulation to investigate the conductive percolation threshold of a granular composite of conductive and insulating powders with an amorphous structure. We focus on the effect of the insulating-to-conductive particle size ratio λ = d i /d c on the conducting percolation threshold p c (the volume fraction of the conductive powder). Simulation results show that, for λ = 1, the percolation threshold p c lies between the simple cubic and body-centred cubic site percolation thresholds, and that as λ increases the percolation threshold decreases. We also use the structural information obtained by the simulation to study the nonlinear current-voltage characteristics of composites with a solid volume fraction of conductive powder below p c, in terms of electron tunnelling for nanoscale powders, dielectric breakdown for microscale or larger powders, and pressing-induced conduction for non-rigid insulating powders.
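
    The lattice benchmark quoted for λ = 1 is straightforward to reproduce: for simple cubic site percolation, the spanning probability crosses over near p ≈ 0.3116. A small Monte Carlo sketch using lattice sites rather than the paper's amorphous packings of finite-size particles:

```python
import numpy as np
from scipy import ndimage

def spans(p, L, rng):
    """One realization: does an occupied cluster connect the z = 0
    and z = L-1 faces of an L^3 simple cubic lattice?"""
    occ = rng.random((L, L, L)) < p
    labels, _ = ndimage.label(occ)          # 6-connectivity by default
    top, bottom = np.unique(labels[0]), np.unique(labels[-1])
    return bool(np.intersect1d(top[top > 0], bottom[bottom > 0]).size)

rng = np.random.default_rng(2)
L, trials = 24, 200
for p in (0.25, 0.30, 0.3116, 0.33, 0.40):
    frac = sum(spans(p, L, rng) for _ in range(trials)) / trials
    print(f"p = {p:.4f}: spanning fraction {frac:.2f}")
# The crossover sits near the simple cubic site threshold ~0.3116,
# the lower bound quoted for the equal-size (lambda = 1) composite.
```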

  2. Sex ratios in fetuses and liveborn infants with autosomal aneuploidy

    Energy Technology Data Exchange (ETDEWEB)

    Heuther, C.A.; Martin, R.L.M.; Stoppelman, S.M. [Univ. of Cincinnati, OH (United States)] [and others]

    1996-06-14

    Ten data sources were used to substantially increase the available data for estimating fetal and livebirth sex ratios for Patau (trisomy 13), Edwards (trisomy 18), and Down (trisomy 21) syndromes and controls. The fetal sex ratio estimate was 0.88 (N = 584) for trisomy 13, 0.90 (N = 1702) for trisomy 18, and 1.16 (N = 3154) for trisomy 21. All were significantly different from prenatal controls (1.07). The estimated ratios in prenatal controls were 1.28 (N = 1409) for CVSs and 1.06 (N = 49427) for amniocenteses, indicating a clear differential selection against males, mostly during the first half of fetal development. By contrast, there were no sex ratio differences for any of the trisomies when comparing gestational ages <16 and >16 weeks. The livebirth sex ratio estimate was 0.90 (N = 293) for trisomy 13, 0.63 (N = 497) for trisomy 18, and 1.15 (N = 6424) for trisomy 21, the latter two being statistically different from controls (1.05) (N = 3660707). These ratios for trisomies 13 and 18 were also statistically different from the ratio for trisomy 21. Only in trisomy 18 did the sex ratios in fetuses and livebirths differ, indicating a prenatal selection against males >16 weeks. No effects of maternal age or race were found on these estimates for any of the fetal or livebirth trisomies. Sex ratios for translocations and mosaics were also estimated for these aneuploids. Compared to previous estimates, these results are less extreme, most likely because of larger sample sizes and less sample bias. They support the hypothesis that these trisomy sex ratios are skewed at conception, or become so during embryonic development through differential intrauterine selection. The estimate for Down syndrome livebirths is also consistent with the hypothesis that its higher sex ratio is associated with paternal nondisjunction. 36 refs., 5 tabs.

  3. Influence of aggregate size, water cement ratio and age on the microstructure of the interfacial transition zone

    International Nuclear Information System (INIS)

    Elsharief, Amir; Cohen, Menashi D.; Olek, Jan

    2003-01-01

    This paper presents the results of an investigation of the effect of water-cement ratio (w/c), aggregate size, and age on the microstructure of the interfacial transition zone (ITZ) between normal weight aggregate and the bulk cement paste. Backscattered electron (BSE) images obtained with a scanning electron microscope were used to characterize the ITZ microstructure. The results suggest that the w/c plays an important role in controlling the microstructure of the ITZ and its thickness. Reducing the w/c from 0.55 to 0.40 resulted in an ITZ with characteristics that are not distinguishable from those of the bulk paste, as demonstrated by BSE images. Aggregate size appears to have an important influence on the ITZ characteristics: reducing the aggregate size tends to reduce the ITZ porosity. The evolution of the ITZ microstructure relative to that of the bulk paste appears to depend on the initial content of unhydrated cement grains (UH). The results suggest that the presence of a relatively low amount of UH in the ITZ at early age may cause the porosity of the ITZ, relative to that of the bulk paste, to increase with time, whereas the presence of a relatively large amount of UH in the ITZ at early ages may cause its porosity, relative to that of the bulk paste, to decrease with time.

  4. Reproducibility of R-fMRI metrics on the impact of different strategies for multiple comparison correction and sample sizes.

    Science.gov (United States)

    Chen, Xiao; Lu, Bin; Yan, Chao-Gan

    2018-01-01

    Concerns regarding reproducibility of resting-state functional magnetic resonance imaging (R-fMRI) findings have been raised. Little is known about how to operationally define R-fMRI reproducibility and to what extent it is affected by multiple comparison correction strategies and sample size. We comprehensively assessed two aspects of reproducibility, test-retest reliability and replicability, on widely used R-fMRI metrics in both between-subject contrasts of sex differences and within-subject comparisons of eyes-open and eyes-closed (EOEC) conditions. We noted that permutation testing with Threshold-Free Cluster Enhancement (TFCE), a strict multiple comparison correction strategy, reached the best balance between family-wise error rate (under 5%) and test-retest reliability/replicability (e.g., 0.68 for test-retest reliability and 0.25 for replicability of amplitude of low-frequency fluctuations (ALFF) for between-subject sex differences, 0.49 for replicability of ALFF for within-subject EOEC differences). Although R-fMRI indices attained moderate reliabilities, they replicated poorly in distinct datasets (replicability < 0.3 for between-subject sex differences, < 0.5 for within-subject EOEC differences). By randomly drawing different sample sizes from a single site, we found reliability, sensitivity and positive predictive value (PPV) rose as sample size increased. Small sample sizes (e.g., < 80 [40 per group]) not only minimized power (sensitivity < 2%), but also decreased the likelihood that significant results reflect "true" effects (PPV < 0.26) in sex differences. Our findings have implications for how to select multiple comparison correction strategies and highlight the importance of sufficiently large sample sizes in R-fMRI studies to enhance reproducibility. Hum Brain Mapp 39:300-318, 2018. © 2017 Wiley Periodicals, Inc.
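
    The point about small samples lowering the positive predictive value follows from a simple identity, PPV = R(1 - β) / (R(1 - β) + α), where R is the prior odds that the tested effect is real. A sketch with an assumed R (the paper's empirical PPVs come from resampling real data, not from this formula):

```python
def ppv(power, alpha=0.05, prior_odds=0.25):
    # Probability that a significant finding is true, given the test's
    # power, its alpha level, and the prior odds that the tested
    # effect is real; prior_odds here is an assumed value.
    return power * prior_odds / (power * prior_odds + alpha)

# Small samples -> low power -> most "significant" hits are noise:
for power in (0.02, 0.20, 0.50, 0.80):
    print(f"power {power:.0%}: PPV = {ppv(power):.2f}")
```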

  5. Discounting and Digit Ratio: Low 2D:4D Predicts Patience for a Sample of Females

    Directory of Open Access Journals (Sweden)

    Diego Aycinena

    2018-01-01

    Full Text Available Inter-temporal trade-offs are ubiquitous in human decision making. We study the relationship between preferences over such trade-offs and the ratio of the second digit to that of the fourth (2D:4D), a marker for pre-natal exposure to sex hormones. Specifically, we study whether 2D:4D affects discounting. Our sample consists of 419 female participants of a Guatemalan conditional cash transfer program who took part in an experiment. Their choices in the convex time budget (CTB) experimental task allow us to make inferences regarding their patience (discounting), while controlling for present-biasedness and preference for smoothing consumption (utility curvature). We find that women with lower digit ratios tend to be more patient.

  6. Establishing a sample-to cut-off ratio for lab-diagnosis of hepatitis C virus in Indian context.

    Science.gov (United States)

    Tiwari, Aseem K; Pandey, Prashant K; Negi, Avinash; Bagga, Ruchika; Shanker, Ajay; Baveja, Usha; Vimarsh, Raina; Bhargava, Richa; Dara, Ravi C; Rawat, Ganesh

    2015-01-01

    Lab-diagnosis of hepatitis C virus (HCV) is based on detecting specific antibodies by enzyme immuno-assay (EIA) or chemiluminescence immuno-assay (CIA). The Centers for Disease Control and Prevention reported that signal-to-cut-off (s/co) ratios in anti-HCV antibody tests like EIA/CIA can be used to predict the probable result of a supplemental test; above a certain s/co value the result is most likely a true HCV positive, and below it the result is most likely a false positive. A prospective study was undertaken in patients in a tertiary care setting to establish this s/co value. The study was carried out in consecutive patients requiring HCV testing for screening/diagnosis and medical management. These samples were tested for anti-HCV on CIA (VITROS® Anti-HCV assay, Ortho-Clinical Diagnostics, New Jersey) to calculate the s/co value. The supplemental nucleic acid test used was polymerase chain reaction (PCR) (Abbott). PCR test results were used to define true negatives, false negatives, true positives, and false positives. The performance of different putative s/co ratios versus PCR was measured using sensitivity, specificity, positive predictive value and negative predictive value, and the most appropriate s/co was chosen on the basis of the highest specificity at a sensitivity of at least 95%. An s/co ratio of ≥6 worked out to be over 95% sensitive and almost 92% specific in the 438 consecutive patient samples tested. An s/co ratio of six can thus be used for lab-diagnosis of HCV infection; patients with s/co higher than six can be diagnosed with HCV infection without any need for supplemental assays.
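
    The selection rule described here, the highest cut-off that keeps sensitivity at or above 95% and thereby maximizes specificity, can be sketched directly. The data below are simulated stand-ins for the 438 patient samples, with PCR as the reference standard:

```python
import numpy as np

def pick_cutoff(sco, pcr_pos, min_sens=0.95):
    """Largest s/co cut-off whose sensitivity stays >= min_sens,
    i.e. the most specific rule that still catches 95% of
    PCR-confirmed infections."""
    sco, pcr_pos = np.asarray(sco, float), np.asarray(pcr_pos, bool)
    best = None
    for c in np.unique(sco):                 # scanned in ascending order
        called = sco >= c
        sens = (called & pcr_pos).sum() / pcr_pos.sum()
        spec = (~called & ~pcr_pos).sum() / (~pcr_pos).sum()
        if sens >= min_sens:
            best = (c, sens, spec)           # keep the highest valid c
    return best

rng = np.random.default_rng(3)
truth = rng.random(438) < 0.5                # simulated PCR status
sco = np.where(truth, rng.normal(12, 4, 438), rng.normal(2, 2, 438))
print(pick_cutoff(sco.clip(min=0.1), truth))
```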

  7. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    International Nuclear Information System (INIS)

    Kathy Bennett; Sherri Sherwood; Rhonda Robinson

    2006-01-01

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample sizes were small for several species, limiting this comparison.

  8. Small Mammal Sampling in Mortandad and Los Alamos Canyons, 2005

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, Kathy; Sherwood, Sherri; Robinson, Rhonda

    2006-08-15

    As part of an ongoing ecological field investigation at Los Alamos National Laboratory, a study was conducted that compared measured contaminant concentrations in sediment to population parameters for small mammals in the Mortandad Canyon watershed. Mortandad Canyon and its tributary canyons have received contaminants from multiple solid waste management units and areas of concern since establishment of the Laboratory in the 1940s. The study included three reaches within Effluent and Mortandad canyons (E-1W, M-2W, and M-3) that had a spread in the concentrations of metals and radionuclides and included locations where polychlorinated biphenyls and perchlorate had been detected. A reference location, reach LA-BKG in upper Los Alamos Canyon, was also included in the study for comparison purposes. A small mammal study was initiated to assess whether potential adverse effects were evident in Mortandad Canyon due to the presence of contaminants, designated as contaminants of potential ecological concern, in the terrestrial media. Study sites, including the reference site, were sampled in late July/early August. Species diversity and the mean daily capture rate were the highest for E-1W reach and the lowest for the reference site. Species composition among the three reaches in Mortandad was similar with very little overlap with the reference canyon. Differences in species composition and diversity were most likely due to differences in habitat. Sex ratios, body weights, and reproductive status of small mammals were also evaluated. However, small sample sizes of some species within some sites affected the analysis. Ratios of males to females by species of each site (n = 5) were tested using a Chi-square analysis. No differences were detected. Where there was sufficient sample size, body weights of adult small mammals were compared between sites. No differences in body weights were found. Reproductive status of species appears to be similar across sites. However, sample sizes were small for several species, limiting this comparison.

  9. Effects of aspect ratio and specimen size on uniaxial failure stress of iron green bodies at high strain rates

    Directory of Open Access Journals (Sweden)

    Kuroyanagi Yuki

    2015-01-01

    Full Text Available Powder metallurgy is used for the production of a number of mechanical parts and is an essential production method. It offers great advantages such as product cost effectiveness and product uniqueness. In general, however, parts created by powder metallurgy have low strength because of low density. In order to increase strength as well as density, new techniques such as high-velocity compaction (HVC) have been developed, and further investigation has been conducted on the improvement of techniques and optimum conditions using computer simulation. In this study, the effects of aspect ratio and specimen size of iron green bodies on uniaxial compressive failure strength and failure behavior were examined using a split Hopkinson pressure bar. The diameters of the specimens were 12.5 mm and 25 mm, and the aspect ratios (thickness/diameter) were 0.8 and 1.2.

  10. Applications of Isotope Ratio Mass Spectrometry in Sports Drug Testing Accounting for Isotope Fractionation in Analysis of Biological Samples.

    Science.gov (United States)

    Piper, Thomas; Thevis, Mario

    2017-01-01

    The misuse of anabolic-androgenic steroids (AAS) in sports aiming at enhancing athletic performance has been a challenging matter for doping control laboratories for decades. While the presence of a xenobiotic AAS or its metabolite(s) in human urine immediately represents an antidoping rule violation, the detection of the misuse of endogenous steroids such as testosterone necessitates comparably complex procedures. Concentration thresholds and diagnostic analyte ratios computed from urinary steroid concentrations of, e.g., testosterone and epitestosterone have aided in identifying suspicious doping control samples in the past. These ratios can, however, also be affected by confounding factors and are therefore not sufficient to prove illicit steroid administrations. Here, carbon and, in rare cases, hydrogen isotope ratio mass spectrometry (IRMS) has become an indispensable tool. Importantly, the isotopic signatures of pharmaceutical steroid preparations commonly differ slightly but significantly from those found with endogenously produced steroids. By comparing the isotope ratios of endogenous reference compounds like pregnanediol to those of testosterone and its metabolites, the unambiguous identification of the urinary steroids' origin is accomplished. Due to the complex urinary matrix, several steps in sample preparation are inevitable, as pure analyte peaks are a prerequisite for valid IRMS determinations. The sample cleanup encompasses steps such as solid phase or liquid-liquid extraction that are presumably not accompanied by isotopic fractionation processes, as well as more critical steps like enzymatic hydrolysis, high-performance liquid chromatography fractionation, and derivatization of analytes. In order to exclude any bias of the analytical results, each step of the analytical procedure is optimized and validated to exclude, or at least result in constant, isotopic fractionation. These efforts are explained in detail. © 2017 Elsevier Inc. All rights reserved.

  11. Isotope analytics for the evaluation of the feeding influence on the isotope ratio in beef samples; Isotopenanalytik zur Bestimmung des Einflusses der Ernaehrung auf die Isotopenzusammensetzung in Rinderproben

    Energy Technology Data Exchange (ETDEWEB)

    Herwig, Nadine

    2010-11-17

    Information about the origin of food and the associated production systems has a high significance for food control. An extremely promising approach to obtain such information is the determination of isotope ratios of different elements. In this study the correlation between the isotope ratios C-13/C-12, N-15/N-14, Mg-25/Mg-24, and Sr-87/Sr-86 in bovine samples (milk and urine) and the corresponding isotope ratios in feed was investigated. It was shown that in the bovine samples all four isotope ratios correlate with the isotope composition of the feed. The isotope ratios of strontium and magnesium have the advantage that they directly reflect the isotope ratios of the ingested feed, since there is no isotope fractionation in the bovine organism, in contrast to the case of carbon and nitrogen isotope ratios. From the present feeding study it is evident that a feed change leads to a significant change in the delta C-13 values in milk and urine within as little as 10 days. For the delta N-15 values the feed change was only visible in the bovine urine after 49 days. Investigations of cows from two different regions (Berlin/Germany and Goestling/Austria) kept on different feeding regimes revealed no differences in the N-15/N-14 and Mg-26/Mg-24 isotope ratios. The strongest correlation between the isotope ratios of the bovine samples and the kind of ingested feed was observed for the carbon isotope ratio. With this ratio even the smallest differences in feed composition were traceable in the bovine samples. Since different regions usually coincide with different feeding regimes, carbon isotope ratios can be used to distinguish bovine samples from different regions if the delta C-13 values of the ingested feed are different. Furthermore, the determination of strontium isotope ratios revealed significant differences between bovine and feed samples from Berlin and Goestling due to the different geological settings. Hence the carbon and strontium isotope ratios allow the best differentiation of bovine samples from different regions.

  12. Mechanical stability of nanoporous metals with small ligament sizes

    International Nuclear Information System (INIS)

    Crowson, Douglas A.; Farkas, Diana; Corcoran, Sean G.

    2009-01-01

    Digital samples of nanoporous gold with small ligament sizes were studied by atomistic simulation using different interatomic potentials that represent varying surface stress values. We predict a surface relaxation driven mechanical instability for these materials. Plastic deformation is induced by the surface stress without external load, related to the combination of the surface stress value and the surface to volume ratio.

  13. Does Size Matter? The Impact of Student-Staff Ratios

    Science.gov (United States)

    McDonald, Gael

    2013-01-01

    Student-staff ratios (SSRs) in higher education have a significant impact on teaching and learning and critical financial implications for organisations. While SSRs are often used as a currency for quality both externally for political reasons and internally within universities for resource allocations, there is a considerable amount of ambiguity…

  14. Power and sample size calculations in the presence of phenotype errors for case/control genetic association studies

    Directory of Open Access Journals (Sweden)

    Finch Stephen J

    2005-04-01

    Full Text Available Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001, and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large, while the cost of misclassifying an affected individual as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
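
    Once the non-centrality parameter (ncp) is in hand, asymptotic power follows from the non-central chi-square distribution. A minimal sketch with placeholder ncp values; the paper derives ncp from genotype frequencies, prevalence, and misclassification probabilities:

    ```python
    # Hedged sketch: asymptotic power of the Pearson chi-square test from a
    # non-centrality parameter. The ncp values here are placeholders.
    from scipy.stats import chi2, ncx2

    alpha, df = 0.05, 1
    crit = chi2.ppf(1 - alpha, df)          # critical value under H0
    for ncp in (2.0, 5.0, 10.0):
        power = ncx2.sf(crit, df, ncp)      # P(reject H0 | non-central chi-square)
        print(f"ncp={ncp:4.1f}  power={power:.3f}")
    ```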

  15. Assessment of crown angulations, crown inclinations, and tooth size discrepancies in a South Indian population

    Directory of Open Access Journals (Sweden)

    Geeta Maruti Doodamani

    2011-01-01

    Full Text Available Aims and Objective: The aim of this study was to assess crown angulations, crown inclinations, and tooth size discrepancy in a sample population from Davangere, South India. Materials and Methods: One hundred adults (50 male and 50 female) of age 18-30 years, with Angle's class I ideal occlusion and balanced profiles, were selected for the study. Study models were prepared, and crown angulations and crown inclinations were measured using a customized protractor device. Bolton's analysis was used to measure the tooth size discrepancies. Results: Maxillary and mandibular teeth had less crown angulation. Maxillary and mandibular incisors and maxillary molars showed increased crown inclinations, whereas mandibular molars and premolars had less crown inclination than the original Andrews sample. The mean maxillary and mandibular tooth size ratios, overall and anterior, were similar to Bolton's ratios. Conclusions: The findings of this study indicate that there are possible racial and ethnic factors contributing to variations in crown angulations and crown inclinations.

  16. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    Science.gov (United States)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying sizes was investigated. Square samples, with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples in relation to their sizes and impact energy were analyzed. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, taking into account impact energy and sample size.
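
    For a plate-like sample exposed on both faces, the Fickian uptake curve referred to above has a standard series solution. A minimal sketch with assumed values for the diffusion coefficient D and thickness L (not the study's measured values):

    ```python
    # Hedged sketch: relative moisture uptake M(t)/M_inf of a plate of thickness L,
    # series solution of Fick's second law. D and L are illustrative assumptions.
    import math

    def fick_uptake(t_s: float, D: float, L: float, terms: int = 200) -> float:
        """M(t)/M_inf for a plate, 1-D Fickian diffusion."""
        s = 0.0
        for n in range(terms):
            k = 2 * n + 1
            s += math.exp(-(k * math.pi / L) ** 2 * D * t_s) / k**2
        return 1.0 - (8.0 / math.pi**2) * s

    D = 1e-13   # m^2/s, assumed diffusion coefficient
    L = 2e-3    # m, assumed plate thickness
    for days in (1, 10, 100):
        print(days, "days:", round(fick_uptake(days * 86400, D, L), 3))
    ```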

  17. Characterizing fallout material using Cs and Pu atom ratios in environmental samples from the FDNPP fallout zone

    Science.gov (United States)

    Richards, David; Dunne, James; Martin, Peter; Scott, Tom; Yamashiki, Yosuke; Coath, Chris; Chen, Hart

    2017-04-01

    Here we report the combined use of Cs and Pu isotope measurements to investigate the extensive plumes of radioactive fallout from the disaster at the Fukushima Daiichi nuclear power plant (FDNPP) in March 2011. Among the aims of our study is an improved assessment of the physico-chemical nature and changing distribution of land-based fallout. 135Cs/137Cs and 134Cs/137Cs atom ratios are indicative of conditions that relate to the nuclear fission reactions responsible for producing the respective radiocaesium isotopes, and offer much more in terms of forensic and chronological analysis than monitoring 137Cs alone. We briefly present methods to quantify the atom ratios of Cs and Pu isotopes in soil, lichen and moss samples from the FDNPP catchment using mass spectrometry (ThermoTRITON for Cs and ThermoNEPTUNE for Pu). High-precision data from Fukushima are presented (e.g. a decay-corrected 135Cs/137Cs atom ratio of 0.384 ± 0.001 (n = 5) for roadside dust from the Iitate region), and these are in agreement with preliminary estimates by others. We also confirm results for IAEA-330, a spinach sample collected from Polesskoe, Ukraine, and subjected to contamination from the Chernobyl accident. In addition to Cs isotopes, we adopt Pu isotopes to add a further dimension to the forensic analysis. We discuss the corrections required for background levels prior to the disaster, the possibility of multiple components of fallout, and complicating factors associated with remobilisation during the clean-up operation. In parallel with this work on digests and leaches from bulk environmental samples, we are refining methods for particle identification, isolation and characterisation using a complementary sequence of cutting-edge materials and manipulation techniques, including combined electron microscopy, focused ion beam techniques (Dualbeam), nano/micro manipulators and nano-scale imaging x-ray photoelectron spectroscopy (NanoESCA) and microCT.

  18. ELEMENTAL ABUNDANCE RATIOS IN STARS OF THE OUTER GALACTIC DISK. IV. A NEW SAMPLE OF OPEN CLUSTERS

    International Nuclear Information System (INIS)

    Yong, David; Carney, Bruce W.; Friel, Eileen D.

    2012-01-01

    We present radial velocities and chemical abundances for nine stars in the old, distant open clusters Be18, Be21, Be22, Be32, and PWM4. For Be18 and PWM4, these are the first chemical abundance measurements. Combining our data with literature results produces a compilation of some 68 chemical abundance measurements in 49 unique clusters. For this combined sample, we study the chemical abundances of open clusters as a function of distance, age, and metallicity. We confirm that the metallicity gradient in the outer disk is flatter than the gradient in the vicinity of the solar neighborhood. We also confirm that the open clusters in the outer disk are metal-poor with enhancements in the ratios [α/Fe] and perhaps [Eu/Fe]. All elements show negligible or small trends between [X/Fe] and distance, but for some elements, there is a hint that the local (RGC < 13 kpc) and more distant (RGC > 13 kpc) samples may have different trends with distance. There is no evidence for significant abundance trends versus age. We measure the linear relation between [X/Fe] and metallicity, [Fe/H], and find that the scatter about the mean trend is comparable to the measurement uncertainties. Comparison with solar neighborhood field giants shows that the open clusters share similar abundance ratios [X/Fe] at a given metallicity. While the flattening of the metallicity gradient and enhanced [α/Fe] ratios in the outer disk suggest a chemical enrichment history different from that of the solar neighborhood, we echo the sentiments expressed by Friel et al. that definitive conclusions await homogeneous analyses of larger samples of stars in larger numbers of clusters. Arguably, our understanding of the evolution of the outer disk from open clusters is currently limited by systematic abundance differences between various studies.

  19. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can arise when the sample size and the allocation rate to the treatment arms are modified in an interim analysis. Thereby it is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than that derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only an increase in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
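
    The flavor of this 'worst-case' calculation can be reproduced by simulation: under the null, pick at each interim outcome the second-stage sample size that maximizes the conditional rejection probability of the conventional fixed-sample test. The sketch below adapts only the total sample size (not the allocation rate, which the paper also considers) and uses illustrative parameters:

    ```python
    # Hedged sketch: worst-case type 1 error of a one-sided fixed-sample z-test
    # when the second-stage n is chosen adversarially after an unblinded interim
    # look. Known variance 1 per arm; parameter choices are illustrative.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    alpha, n1 = 0.025, 50                  # one-sided level; first-stage n per arm
    za = norm.ppf(1 - alpha)
    n2_grid = np.arange(10, 501, 10)       # candidate second-stage sizes per arm

    def cond_reject_prob(d1, n2):
        """P(final fixed-sample z-test rejects | interim mean difference d1), under H0."""
        n = n1 + n2
        # reject when pooled mean difference > za*sqrt(2/n); solve for stage-2 diff
        thresh = (za * np.sqrt(2.0 * n) - n1 * d1) / n2
        return norm.sf(thresh / np.sqrt(2.0 / n2))

    sims = 100_000
    d1 = rng.normal(0.0, np.sqrt(2.0 / n1), sims)     # interim difference under H0
    worst = np.max([cond_reject_prob(d1, n2) for n2 in n2_grid], axis=0)
    print("worst-case type 1 error ≈", worst.mean())  # clearly above the nominal 0.025
    ```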

  20. Particle size dependence of biogenic secondary organic aerosol molecular composition

    Science.gov (United States)

    Tu, Peijun; Johnston, Murray V.

    2017-06-01

    Formation of secondary organic aerosol (SOA) is initiated by the oxidation of volatile organic compounds (VOCs) in the gas phase whose products subsequently partition to the particle phase. Non-volatile molecules have a negligible evaporation rate and grow particles at their condensation rate. Semi-volatile molecules have a significant evaporation rate and grow particles at a much slower rate than their condensation rate. Particle phase chemistry may enhance particle growth if it transforms partitioned semi-volatile molecules into non-volatile products. In principle, changes in molecular composition as a function of particle size allow non-volatile molecules that have condensed from the gas phase (a surface-limited process) to be distinguished from those produced by particle phase reaction (a volume-limited process). In this work, SOA was produced by β-pinene ozonolysis in a flow tube reactor. Aerosol exiting the reactor was size-selected with a differential mobility analyzer, and individual particle sizes between 35 and 110 nm in diameter were characterized by on- and offline mass spectrometry. Both the average oxygen-to-carbon (O / C) ratio and carbon oxidation state (OSc) were found to decrease with increasing particle size, while the relative signal intensity of oligomers increased with increasing particle size. These results are consistent with oligomer formation primarily in the particle phase (accretion reactions, which become more favored as the volume-to-surface-area ratio of the particle increases). Analysis of a series of polydisperse SOA samples showed similar dependencies: as the mass loading increased (and average volume-to-surface-area ratio increased), the average O / C ratio and OSc decreased, while the relative intensity of oligomer ions increased. The results illustrate the potential impact that particle phase chemistry can have on biogenic SOA formation and the particle size range where this chemistry becomes important.

  1. Particle size dependence of biogenic secondary organic aerosol molecular composition

    Directory of Open Access Journals (Sweden)

    P. Tu

    2017-06-01

    Full Text Available Formation of secondary organic aerosol (SOA) is initiated by the oxidation of volatile organic compounds (VOCs) in the gas phase whose products subsequently partition to the particle phase. Non-volatile molecules have a negligible evaporation rate and grow particles at their condensation rate. Semi-volatile molecules have a significant evaporation rate and grow particles at a much slower rate than their condensation rate. Particle phase chemistry may enhance particle growth if it transforms partitioned semi-volatile molecules into non-volatile products. In principle, changes in molecular composition as a function of particle size allow non-volatile molecules that have condensed from the gas phase (a surface-limited process) to be distinguished from those produced by particle phase reaction (a volume-limited process). In this work, SOA was produced by β-pinene ozonolysis in a flow tube reactor. Aerosol exiting the reactor was size-selected with a differential mobility analyzer, and individual particle sizes between 35 and 110 nm in diameter were characterized by on- and offline mass spectrometry. Both the average oxygen-to-carbon (O ∕ C) ratio and carbon oxidation state (OSc) were found to decrease with increasing particle size, while the relative signal intensity of oligomers increased with increasing particle size. These results are consistent with oligomer formation primarily in the particle phase (accretion reactions, which become more favored as the volume-to-surface-area ratio of the particle increases). Analysis of a series of polydisperse SOA samples showed similar dependencies: as the mass loading increased (and the average volume-to-surface-area ratio increased), the average O ∕ C ratio and OSc decreased, while the relative intensity of oligomer ions increased. The results illustrate the potential impact that particle phase chemistry can have on biogenic SOA formation and the particle size range where this chemistry becomes important.

  2. Day and night variation in chemical composition and toxicological responses of size segregated urban air PM samples in a high air pollution situation

    Science.gov (United States)

    Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.

    2015-11-01

    Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctively different photochemical activity and different emission sources during the day and the night affect the chemical composition of the PM size ranges and, subsequently, how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to take images of the PM samples. The chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic, and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in the differing toxicity of the day and night PM samples.

  3. Size dependent magnetism of mass selected deposited transition metal clusters

    International Nuclear Information System (INIS)

    Lau, T.

    2002-05-01

    The size dependent magnetic properties of small iron clusters deposited on ultrathin Ni/Cu(100) films have been studied with circularly polarised synchrotron radiation. For X-ray magnetic circular dichroism studies, the magnetic moments of size selected clusters were aligned perpendicular to the sample surface. Exchange coupling of the clusters to the ultrathin Ni/Cu(100) film determines the orientation of their magnetic moments. All clusters are coupled ferromagnetically to the underlayer. With the use of sum rules, orbital and spin magnetic moments as well as their ratios have been extracted from X-ray magnetic circular dichroism spectra. The ratio of orbital to spin magnetic moments varies considerably as a function of cluster size, reflecting the dependence of magnetic properties on cluster size and geometry. These variations can be explained in terms of a strongly size dependent orbital moment. Both orbital and spin magnetic moments are significantly enhanced in small clusters as compared to bulk iron, although this effect is more pronounced for the spin moment. Magnetic properties of deposited clusters are governed by the interplay of cluster specific properties on the one hand and cluster-substrate interactions on the other hand. Size dependent variations of magnetic moments are modified upon contact with the substrate. (orig.)

  4. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of the within-cluster within-period and between-period correlations can have a substantial impact on the required sample size.
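
    One way to operationalize this is through a design effect relative to an individually randomised parallel-group trial; for the two-period cross-sectional CRXO design this is often written as DE = 1 + (m − 1)·WPC − m·BPC, with m subjects per cluster-period. The sketch below uses this form with assumed inputs and should be checked against the tutorial's exact formulae:

    ```python
    # Hedged sketch: CRXO sample size via a design effect applied to the usual
    # two-sample formula. DE form and all inputs are assumptions for illustration.
    import math
    from scipy.stats import norm

    def n_individual(delta, sd, alpha=0.05, power=0.8):
        """Per-group n for a two-sample comparison of means."""
        za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
        return 2 * ((za + zb) * sd / delta) ** 2

    def n_crxo(delta, sd, m, wpc, bpc, **kw):
        de = 1 + (m - 1) * wpc - m * bpc      # design effect, 2-period CRXO
        return n_individual(delta, sd, **kw) * de

    print("individually randomised, per group:", math.ceil(n_individual(0.25, 1.0)))
    print("CRXO (m=20, WPC=0.05, BPC=0.02):", math.ceil(n_crxo(0.25, 1.0, 20, 0.05, 0.02)))
    print("CRXO with BPC=0 (no gain over parallel CRT):", math.ceil(n_crxo(0.25, 1.0, 20, 0.05, 0.0)))
    ```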

  5. A regression-based differential expression detection algorithm for microarray studies with ultra-low sample size.

    Directory of Open Access Journals (Sweden)

    Daniel Vasiliu

    Full Text Available Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED. Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.

  6. Quantification of errors in ordinal outcome scales using Shannon entropy: effect on sample size calculations.

    Science.gov (United States)

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysis over a range of scores ("shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1: 6.8%±2.89; overall p < 0.001). The greater information transfer of the "shift" approach is thus accompanied by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
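
    The core computation, multiplying an outcome distribution by a misclassification (noise) matrix to get an error percentage, is compact. A minimal sketch with an illustrative mRS distribution and confusion matrix, not the published ones:

    ```python
    # Hedged sketch: expected misclassification rate for the full "shift" scale
    # versus a dichotomized endpoint. All probabilities below are illustrative.
    import numpy as np

    p_true = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])  # mRS 0..6
    # conf[i, j]: probability a true score i is recorded as j (rows sum to 1)
    conf = np.full((7, 7), 0.02)
    np.fill_diagonal(conf, 0.0)
    np.fill_diagonal(conf, 1.0 - conf.sum(axis=1))

    shift_err = sum(p_true[i] * conf[i, j]
                    for i in range(7) for j in range(7) if i != j)

    cut = 2   # dichotomize as mRS <= 1 (good) vs >= 2 (poor)
    dich_err = sum(p_true[i] * conf[i, j] for i in range(7) for j in range(7)
                   if (i < cut) != (j < cut))   # only errors crossing the cut-point

    print(f"shift error rate: {shift_err:.1%}, dichotomized: {dich_err:.1%}")
    ```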

  7. The Peroxidation of Leukocytes Index Ratio Reveals the Prooxidant Effect of Green Tea Extract

    Directory of Open Access Journals (Sweden)

    Ilaria Peluso

    2016-01-01

    Full Text Available Although tea increases plasma nonenzymatic antioxidant capacity, the European Food Safety Authority (EFSA) denied claims related to tea and its protection from oxidative damage. Furthermore, the Dietary Supplement Information Expert Committee (DSI EC) expressed some doubts on the safety of green tea extract (GTE). We performed a pilot study in order to evaluate the effect of a single dose of two capsules of a GTE supplement (200 mg × 2) on the peroxidation of leukocytes index ratio (PLIR) in relation to uric acid (UA) and ferric reducing antioxidant potential (FRAP), as well as the sample size needed to reach statistical significance. GTE induced a prooxidant effect on leukocytes, whereas FRAP did not change, in agreement with the EFSA and the DSI EC conclusions. Besides, our results confirm the primary role of UA in the antioxidant defences. The ratio-based calculation of the PLIR reduced the sample size needed to reach statistical significance, compared to the resistance to an exogenous oxidative stress and to the functional capacity of oxidative burst. Therefore, PLIR could be a sensitive marker of redox status.

  8. Fruit size and sampling sites affect dormancy, viability and germination of teak (Tectona grandis L.) seeds

    International Nuclear Information System (INIS)

    Akram, M.; Aftab, F.

    2016-01-01

    In the present study, fruits (drupes) were collected from Changa Manga Forest Plus Trees (CMF-PT), Changa Manga Forest Teak Stand (CMF-TS) and Punjab University Botanical Gardens (PUBG) and categorized into very large (≥17 mm dia.), large (12-16 mm dia.), medium (9-11 mm dia.) or small (6-8 mm dia.) fruit size grades. Fresh water as well as mechanical scarification and stratification were tested for breaking seed dormancy. The viability status of seeds was estimated by cutting test, X-rays and in vitro seed germination. Out of 2595 fruits from CMF-PT, 500 fruits were of the very large grade. This fruit category also had the highest individual fruit weight (0.58 g), with more 4-seeded fruits (5.29 percent) and fair germination potential (35.32 percent). Generally, most of the fruits were 1-seeded irrespective of size grades and sampling sites. Fresh water scarification had a strong effect on germination (44.30 percent) as compared to mechanical scarification and cold stratification after 40 days of sowing. Similarly, sampling sites and fruit size grades also had a significant influence on germination. The highest germination (82.33 percent) was obtained on MS (Murashige and Skoog) agar-solidified medium as compared to Woody Plant Medium (WPM) (69.22 percent). Seedlings from all the media were transferred to ex vitro conditions in the greenhouse, and the highest survival (28.6 percent) after 40 days was achieved by seedlings previously raised on MS agar-solidified medium. There was an association between the studied parameters of teak seeds and the sampling sites and fruit size. (author)

  9. Sample-size resonance, ferromagnetic resonance and magneto-permittivity resonance in multiferroic nano-BiFeO3/paraffin composites at room temperature

    International Nuclear Information System (INIS)

    Wang, Lei; Li, Zhenyu; Jiang, Jia; An, Taiyu; Qin, Hongwei; Hu, Jifan

    2017-01-01

    In the present work, we demonstrate that ferromagnetic resonance and magneto-permittivity resonance can be observed at appropriate microwave frequencies at room temperature for a multiferroic nano-BiFeO3/paraffin composite sample with an appropriate sample thickness (such as 2 mm). The ferromagnetic resonance originates from the room-temperature weak ferromagnetism of nano-BiFeO3. The observed magneto-permittivity resonance in multiferroic nano-BiFeO3 is connected with the dynamic magnetoelectric coupling through the Dzyaloshinskii–Moriya (DM) magnetoelectric interaction or the combination of magnetostriction and piezoelectric effects. In addition, we experimentally observed the resonance of negative imaginary permeability for nano-BiFeO3/paraffin toroidal samples with larger sample thicknesses of D=3.7 and 4.9 mm. Such resonance of negative imaginary permeability belongs to sample-size resonance. - Highlights: • Nano-BiFeO3/paraffin composite shows a ferromagnetic resonance. • Nano-BiFeO3/paraffin composite shows a magneto-permittivity resonance. • Resonance of negative imaginary permeability in BiFeO3 is a sample-size resonance. • Nano-BiFeO3/paraffin composite with large thickness shows a sample-size resonance.

  10. The Effect of Sterilization on Size and Shape of Fat Globules in Model Processed Cheese Samples

    Directory of Open Access Journals (Sweden)

    B. Tremlová

    2006-01-01

    Full Text Available Model cheese samples from 4 independent productions were heat sterilized (117 °C, 20 minutes) after the melting process and packing, with the aim of prolonging their durability. The objective of the study was to assess changes in the size and shape of fat globules due to heat sterilization by using image analysis methods. The study included the selection of suitable methods for preparing mounts, taking microphotographs and making overlays for automatic processing of photographs by the image analyser, ascertaining parameters to determine the size and shape of fat globules, and statistical analysis of the results obtained. The results of the experiment suggest that changes in the shape of fat globules due to heat sterilization are not unequivocal. We found that the size of fat globules was significantly increased (p < 0.01) due to heat sterilization (117 °C, 20 min), and the shares of small fat globules (up to 500 μm2, or 100 μm2) in the samples of heat-sterilized processed cheese were decreased. The results imply that the image analysis method is very useful when assessing the effect of the technological process on the quality of processed cheese.

  11. Sampling bee communities using pan traps: alternative methods increase sample size

    Science.gov (United States)

    Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation have encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color.

  12. Establishing a sample-to-cut-off ratio for lab-diagnosis of hepatitis C virus in Indian context

    Directory of Open Access Journals (Sweden)

    Aseem K Tiwari

    2015-01-01

    Full Text Available Introduction: Lab-diagnosis of hepatitis C virus (HCV) is based on detecting specific antibodies by enzyme immuno-assay (EIA) or chemiluminescence immuno-assay (CIA). The Centers for Disease Control and Prevention reported that signal-to-cut-off (s/co) ratios in anti-HCV antibody tests like EIA/CIA can be used to predict the probable result of a supplemental test; above a certain s/co value the result is most likely a true HCV-positive, and below that s/co it is most likely a false-positive. A prospective study was undertaken in patients in a tertiary care setting to establish this "certain" s/co value. Materials and Methods: The study was carried out in consecutive patients requiring HCV testing for screening/diagnosis and medical management. These samples were tested for anti-HCV on CIA (VITROS® Anti-HCV assay, Ortho-Clinical Diagnostics, New Jersey) for calculating the s/co value. The supplemental nucleic acid test used was polymerase chain reaction (PCR) (Abbott). PCR test results were used to define true negatives, false negatives, true positives, and false positives. The performance of different putative s/co ratios versus PCR was measured using sensitivity, specificity, positive predictive value and negative predictive value, and the most appropriate s/co was chosen on the basis of the highest specificity at a sensitivity of at least 95%. Results: An s/co ratio of ≥6 worked out to be over 95% sensitive and almost 92% specific in 438 consecutive patient samples tested. Conclusion: The s/co ratio of six can be used for lab-diagnosis of HCV infection; those with s/co higher than six can be diagnosed to have HCV infection without any need for supplemental assays.

  13. Tradeoffs in the evolution of caste and body size in the hyperdiverse ant genus Pheidole.

    Directory of Open Access Journals (Sweden)

    Terrence P McGlynn

    Full Text Available The efficient investment of resources is often the route to ecological success, and the adaptability of resource investment may play a critical role in promoting biodiversity. The ants of the "hyperdiverse" genus Pheidole produce two discrete sterile castes, soldiers and minor workers. Within Pheidole, there is tremendous interspecific variation in proportion of soldiers. The causes and correlates of caste ratio variation among species of Pheidole remain enigmatic. Here we test whether a body size threshold model accounts for interspecific variation in caste ratio in Pheidole, such that species with larger body sizes produce relatively fewer soldiers within their colonies. We evaluated the caste ratio of 26 species of Pheidole and found that the body size of workers accounts for interspecific variation in the production of soldiers as we predicted. Twelve species sampled from one forest in Costa Rica yielded the same relationship as found in previously published data from many localities. We conclude that production of soldiers in the most species-rich group of ants is regulated by a body size threshold mechanism, and that the great variation in body size and caste ratio in Pheidole plays a role in niche divergence in this rapidly evolving taxon.

  14. Measurements of Plutonium and Americium in Soil Samples from Project 57 using the Suspended Soil Particle Sizing System (SSPSS)

    International Nuclear Information System (INIS)

    John L. Bowen; Rowena Gonzalez; David S. Shafer

    2001-01-01

    As part of the preliminary site characterization conducted for Project 57, soil samples were collected for separation into several size fractions using the Suspended Soil Particle Sizing System (SSPSS). Soil samples were collected specifically for separation by the SSPSS at three general locations in the deposited Project 57 plume, the projected radioactivity of which ranged from 100 to 600 pCi/g. The primary purpose in focusing on samples with this level of activity is that it would represent anticipated residual soil contamination levels at the site after corrective actions are completed. Consequently, the results of the SSPSS analysis can contribute to dose calculation and corrective action-level determinations for future land-use scenarios at the site

  15. Influence of secular trends and sample size on reference equations for lung function tests.

    Science.gov (United States)

    Quanjer, P H; Stocks, J; Cole, T J; Hall, G L; Stanojevic, S

    2011-03-01

    The aim of our study was to determine the contribution of secular trends and sample size to lung function reference equations, and to establish the number of local subjects required to validate published reference values. 30 spirometry datasets collected between 1978 and 2009 provided data on healthy, white subjects: 19,291 males and 23,741 females aged 2.5-95 yrs. The best fits for forced expiratory volume in 1 s (FEV(1)), forced vital capacity (FVC) and FEV(1)/FVC as functions of age, height and sex were derived from the entire dataset using GAMLSS. Mean z-scores were calculated for individual datasets to determine inter-centre differences. This was repeated by subdividing one large dataset (3,683 males and 4,759 females) into 36 smaller subsets (comprising 18-227 individuals) to preclude differences due to population/technique. No secular trends were observed, and differences between datasets comprising >1,000 subjects were small (maximum difference in FEV(1) and FVC from the overall mean: 0.30 to -0.22 z-scores). Subdividing one large dataset into smaller subsets reproduced the above sample size-related differences and revealed that at least 150 males and 150 females would be necessary to validate reference values and avoid spurious differences due to sampling error. Use of local controls to validate reference equations will rarely be practical due to the numbers required. Reference equations derived from large or collated datasets are recommended.
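
    The 150-per-sex requirement is essentially a sampling-error argument: if z-scores have unit standard deviation in a healthy population, the mean z-score of a local validation sample of size n has standard error 1/√n. A minimal sketch:

    ```python
    # Hedged sketch: sampling error of a mean z-score as a function of n,
    # assuming z-scores with SD = 1 in the reference population.
    import math

    for n in (20, 50, 150, 500):
        se = 1 / math.sqrt(n)
        print(f"n={n:4d}  SE(mean z)={se:.3f}  95% CI half-width={1.96*se:.2f} z-scores")
    ```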

  16. EFFECT OF PARTICLE SIZE AND PACKING RATIO OF PID ON VIBRATION AMPLITUDE OF BEAM

    Directory of Open Access Journals (Sweden)

    P.S. Kachare

    2013-06-01

    Full Text Available Everything in the universe that has mass possesses stiffness and intrinsic damping. Owing to the stiffness property, a mass will vibrate when excited, and its intrinsic damping property will act to stop the vibration. The particle impact damper (PID) is a very interesting damper that exploits the impact and friction of particles as a means of energy dissipation. PID is a means for achieving high structural damping by using a particle-filled enclosure attached to a structure. The particles absorb the kinetic energy of the structure and convert it into heat through inelastic collisions between the particles themselves and between the particles and the walls of the enclosure. In this work, PID is measured for a cantilever mild steel beam with an enclosure attached to its free end; copper particles are used in this study. The PID is found to be highly nonlinear. The most useful observation is that for a very small weight penalty (about 7% to 8%), the maximum damped amplitude of vibration at resonance with a PID is about 9 to 10 times smaller than that without a PID. This is far more than can be achieved with the intrinsic material damping alone in a majority of structural metals. A satisfactory comparison of damping with and without particles through experimentation is observed. The effect of the size of the particles on the damping performance of the beam and the effective packing ratio can be identified. It is also shown that as the packing ratio changes, the contributions of the phenomena of impact and friction towards damping also change. It is encouraging that despite its deceptive simplicity, the model captures the essential physics of PID.

  17. Cocaine tolerance: acute versus chronic effects as dependent upon fixed-ratio size.

    OpenAIRE

    Hoffman, S H; Branch, M N; Sizemore, G M

    1987-01-01

    The effects of cocaine on operant behavior were studied by examining fixed-ratio value as a factor in the development of tolerance. Pigeons pecked a response key under a three-component multiple schedule, with each bird being exposed to fixed-ratio values that were categorized as small, medium, or large. Administered acutely, cocaine (1.0 to 10.0 mg/kg) produced dose-related decreases in overall rate of responding. Responding maintained by the largest ratio was decreased by lower doses than that maintained by the smaller ratios.

  18. Effects of LiDAR point density, sampling size and height threshold on estimation accuracy of crop biophysical parameters.

    Science.gov (United States)

    Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong

    2016-05-30

    Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.

  19. Two to five repeated measurements per patient reduced the required sample size considerably in a randomized clinical trial for patients with inflammatory rheumatic diseases

    Directory of Open Access Journals (Sweden)

    Smedslund Geir

    2013-02-01

    Full Text Available Abstract Background Patient-reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS) (pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, 96 to 71 (73) for fatigue, 57 to 51 (48) for disease activity, 59 to 44 (45) for self-care, and 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
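
    The savings can be rationalized by the variance of an average of k correlated measurements: under compound symmetry the variance, and hence the required n, scales by [1 + (k − 1)ρ]/k. A minimal sketch with an assumed within-patient correlation ρ (not estimated from the trial):

    ```python
    # Hedged sketch: sample-size scaling when averaging k equally correlated
    # measurements (compound symmetry). rho is an assumption chosen so the
    # reductions land near those reported above.
    rho = 0.66
    n_single = 56          # e.g., the GHQ-20 requirement with one measurement

    for k in (1, 2, 3, 4, 5):
        factor = (1 + (k - 1) * rho) / k
        print(f"k={k}  n required ≈ {round(n_single * factor)}  "
              f"(reduction {1 - factor:.0%})")
    ```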

  20. Fluidic sampling

    International Nuclear Information System (INIS)

    Houck, E.D.

    1992-01-01

    This paper covers the development of the fluidic sampler and its testing in a fluidic transfer system. The major findings of this paper are as follows. Fluidic jet samplers can dependably produce unbiased samples of acceptable volume. The fluidic transfer system with a fluidic sampler in-line will transfer water to a net lift of 37.2--39.9 feet at an average rate of 0.02--0.05 gpm (77--192 cc/min). The fluidic sample system circulation rate compares very favorably with the normal 0.016--0.026 gpm (60--100 cc/min) circulation rate that is commonly produced for this lift and solution with the jet-assisted airlift sample system that is normally used at ICPP. The volume of the sample taken with a fluidic sampler is dependent on the motive pressure to the fluidic sampler, the sample bottle size, and the fluidic sampler jet characteristics. The fluidic sampler should be supplied with fluid having a motive pressure of 140--150 percent of the peak vacuum-producing motive pressure for the jet in the sampler. Fluidic transfer systems should be operated by emptying a full pumping chamber to nearly empty or empty during the pumping cycle; this maximizes the solution transfer rate

  1. THE EFFECT OF CAPITAL STRUCTURE, DIVIDEND POLICY AND SIZE ON FIRM VALUE (A Study of Property Companies on the Indonesia Stock Exchange)

    Directory of Open Access Journals (Sweden)

    Zainal Abidin

    2016-04-01

    Full Text Available Companies have the normative goal of maximizing firm value, which in turn maximizes shareholder wealth. This study analyzes factors that affect firm value. The variables used in this study are the Debt to Equity Ratio (DER), Dividend Yield (DYD) and Size. The research was conducted on property companies listed on the Indonesia Stock Exchange over the period 2009 to 2011. The dependent variable, firm value, is measured by Price to Book Value (PBV). The independent variables are capital structure, measured by the Debt to Equity Ratio (DER); dividend policy, measured by Dividend Yield (DYD); and Size. Purposive sampling produced a sample of 17 companies from a population of 52. The method used in this research is linear regression analysis. The results show that, jointly, DER, DYD and Size affect PBV. Individually, DER has a positive and significant effect on PBV, Size has a positive but non-significant effect on PBV, and DYD has a significant negative effect on PBV. Keywords: Debt to Equity Ratio (DER), Dividend Yield (DYD), Size and Price to Book Value (PBV)

  2. The Determinants of Brazilian Football Clubs’ Debt Ratios

    Directory of Open Access Journals (Sweden)

    Marke Geisy da Silva Dantas

    2017-01-01

    Full Text Available This paper explores the relationship between the debt ratio of Brazilian football clubs and several potential determinants, both financial and sports-related. Our explanatory variables are Current Ratio, Return on Assets, Score Percentage, Size, 12 Biggest Clubs, Access (to specific championships, e.g., Libertadores da América), Division, Title (won at time t), and Relegated (at time t). Data was collected from several publicly available channels and our sample was mostly decided according to this availability. The time range adopted was 2010-2013. The model employed was the Generalized Estimating Equation. Our results suggest that debt ratios are associated more with clubs' popularity or their participation in the highest division of the main championship than with titles held, access to different competitions, or relegation to lower levels. We believe that our findings may be useful both for practitioners, who might learn the impact of their sports-related choices on their clubs' debts, and for policymakers, who could prepare differentiated policies for specific groups (e.g., divisions).
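
    As a rough illustration of the estimation approach, a generalized estimating equation with clubs as clusters can be fitted with statsmodels; the data file and column names below are hypothetical, not taken from the paper:

        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Hypothetical panel: one row per club-year, 2010-2013.
        df = pd.read_csv("clubs.csv")

        model = smf.gee(
            "debt_ratio ~ current_ratio + roa + score_pct + size + big12 + division",
            groups="club",                            # repeated observations per club
            data=df,
            family=sm.families.Gaussian(),
            cov_struct=sm.cov_struct.Exchangeable(),  # within-club correlation
        )
        print(model.fit().summary())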

  3. Droplet Size-Aware and Error-Correcting Sample Preparation Using Micro-Electrode-Dot-Array Digital Microfluidic Biochips.

    Science.gov (United States)

    Li, Zipeng; Lai, Kelvin Yi-Tse; Chakrabarty, Krishnendu; Ho, Tsung-Yi; Lee, Chen-Yi

    2017-12-01

    Sample preparation in digital microfluidics refers to the generation of droplets with target concentrations for on-chip biochemical applications. In recent years, digital microfluidic biochips (DMFBs) have been adopted as a platform for sample preparation. However, there remain two major problems associated with sample preparation on a conventional DMFB. First, only a (1:1) mixing/splitting model can be used, leading to an increase in the number of fluidic operations required for sample preparation. Second, only a limited number of sensors can be integrated on a conventional DMFB; as a result, the latency for error detection during sample preparation is significant. To overcome these drawbacks, we adopt a next generation DMFB platform, referred to as micro-electrode-dot-array (MEDA), for sample preparation. We propose the first sample-preparation method that exploits the MEDA-specific advantages of fine-grained control of droplet sizes and real-time droplet sensing. Experimental demonstration using a fabricated MEDA biochip and simulation results highlight the effectiveness of the proposed sample-preparation method.
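
    For contrast with MEDA's fine-grained droplet control, the conventional (1:1) model mentioned above can be sketched as follows: a target concentration is rounded to k-bit binary precision and reached with one (1:1) mix per bit, which is exactly why the operation count grows. A minimal, illustrative sketch (not the authors' algorithm):

        def one_to_one_plan(target, bits=8):
            """Mix plan for a target concentration in [0, 1] under the
            conventional (1:1) mix/split model (one mix per bit)."""
            n = round(target * 2 ** bits)
            lsb_first = [(n >> i) & 1 for i in range(bits)]
            conc, plan = 0.0, []
            for b in lsb_first:
                conc = (conc + b) / 2.0  # (1:1) mix with reagent (1) or buffer (0)
                plan.append(f"mix 1:1 with {'reagent' if b else 'buffer'} -> {conc:.6f}")
            return plan

        for step in one_to_one_plan(0.7):  # 8 mixes to reach 179/256 = 0.699...
            print(step)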

  4. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety.

    Science.gov (United States)

    Kikuchi, Takashi; Gittins, John

    2009-08-15

    The calculation of sample size must achieve the best balance between the cost of a clinical trial and the possible benefits from a new treatment. Gittins and Pezeshk developed an innovative (behavioral Bayes) approach, which assumes that the number of users is an increasing function of the difference in performance between the new treatment and the standard treatment: the better a new treatment, the greater the number of patients who will want to switch to it. The optimal sample size is calculated in this framework. This BeBay approach takes account of three decision-makers: a pharmaceutical company, the health authority and medical advisers. Kikuchi, Pezeshk and Gittins generalized this approach by introducing a logistic benefit function, by extending it to the more usual unpaired case, and by allowing unknown variance. The expected net benefit in this model is based on the efficacy of the new drug but does not take account of the incidence of adverse reactions. The present paper extends the model to include the costs of treating adverse reactions and focuses on societal cost-effectiveness as the criterion for determining sample size. The main application is likely to be to phase III clinical trials, for which the primary outcome is to compare the costs and benefits of a new drug with a standard drug in relation to national health care. Copyright 2009 John Wiley & Sons, Ltd.

  5. Elaboration of austenitic stainless steel samples with bimodal grain size distributions and investigation of their mechanical behavior

    Science.gov (United States)

    Flipon, B.; de la Cruz, L. Garcia; Hug, E.; Keller, C.; Barbe, F.

    2017-10-01

    Samples of 316L austenitic stainless steel with bimodal grain size distributions are elaborated using two distinct routes. The first one is based on powder metallurgy, using spark plasma sintering of two powders with different particle sizes. The second route applies the reverse-annealing method: it consists in inducing martensitic phase transformation by plastic strain and further annealing in order to obtain two austenitic grain populations with different sizes. Microstructural analyses reveal that both methods are suitable to generate significant grain size contrast and to control this contrast according to the elaboration conditions. Mechanical properties under tension are then characterized for different grain size distributions. Crystal plasticity finite element modelling is further applied in a configuration of bimodal distribution to analyse the role played by coarse grains within a matrix of fine grains, considering not only their volume fraction but also their spatial arrangement.

  6. The N-Pact Factor: Evaluating the Quality of Empirical Journals with Respect to Sample Size and Statistical Power

    Science.gov (United States)

    Fraley, R. Chris; Vazire, Simine

    2014-01-01

    The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF): the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings. PMID:25296159
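
    The ~50% power figure can be reproduced with the Fisher z approximation for a test of a correlation coefficient, taking r ≈ 0.20 as the typical effect size and N = 104 (a sketch under those assumptions, not the authors' code):

        from math import atanh, sqrt
        from scipy.stats import norm

        def corr_power(r, n, alpha=0.05):
            """Approximate two-sided power for H0: rho = 0 via Fisher's z."""
            z = atanh(r) * sqrt(n - 3)
            crit = norm.ppf(1 - alpha / 2)
            return norm.cdf(z - crit) + norm.cdf(-z - crit)

        print(round(corr_power(0.20, 104), 2))  # approximately 0.53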

  7. The Effects of Test Length and Sample Size on Item Parameters in Item Response Theory

    Science.gov (United States)

    Sahin, Alper; Anil, Duygu

    2017-01-01

    This study investigates the effects of sample size and test length on item-parameter estimation in test development utilizing three unidimensional dichotomous models of item response theory (IRT). For this purpose, a real language test comprised of 50 items was administered to 6,288 students. Data from this test was used to obtain data sets of…

  8. Influence of pH, Temperature and Sample Size on Natural and Enforced Syneresis of Precipitated Silica

    Directory of Open Access Journals (Sweden)

    Sebastian Wilhelm

    2015-12-01

    Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as “natural syneresis” and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). “Enforced syneresis” is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Hereby, two open key aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30% in comparison to measured values of 20% for natural syneresis.

  9. The Influence of the Size, Age and Sex on the Computed Tomographic Measured Size of the Pituitary Gland in Normal Horses.

    Science.gov (United States)

    Crijns, C P; Van Bree, H J; Broeckx, B J G; Schauvliege, S; Van Loon, G; Martens, A; Vanderperren, K; Dingemanse, W B; Gielen, I M

    2017-06-01

    The objective of this study was to examine the influence of the size, age and sex of the horse on the size of the pituitary gland, and to determine the possibility of using the pituitary gland height-to-brain area ratio (P:B ratio) to allow comparison of horses of different sizes and ages. Thirty-two horses without pituitary pars intermedia dysfunction that underwent a contrast-enhanced computed tomographic (CT) examination were included in a cross-sectional study. On the CT images, the pituitary gland height was measured and the P:B ratio was calculated. These measurements were correlated to the size, age and sex of the horses. The pituitary gland height was significantly associated with the size and the age of the horses. No significant association was found between the P:B ratio and the size (P = 0.25), the age (P = 0.06) or the sex (P = 0.25) of the horses. In conclusion, the pituitary gland size varies between horses of different sizes and ages. The use of the P:B ratio is a valuable metric for making comparisons between the pituitary glands of these horses. © 2017 Blackwell Verlag GmbH.

  10. Optimum sample length for estimating anchovy size distribution and the proportion of juveniles per fishing set for the Peruvian purse-seine fleet

    Directory of Open Access Journals (Sweden)

    Rocío Joo

    2017-04-01

    Full Text Available The length distribution of catches represents a fundamental source of information for estimating growth and the spatio-temporal dynamics of cohorts. The length distribution of the catch is estimated from samples of caught individuals. This work studies the optimum number of individuals to sample at each fishing set in order to obtain a representative sample of the length distribution and the proportion of juveniles in the fishing set. For that purpose, we use anchovy (Engraulis ringens) length data from different fishing sets recorded by at-sea observers from the On-board Observers Program of the Peruvian Marine Research Institute. Finally, we propose an optimum sample size for obtaining robust length and juvenile-proportion estimates. Though this work is applied to the anchovy fishery, the procedure can be applied to any fishery, whether for on-board or inland biometric measurements.
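
    The precision trade-off behind such an optimum can be explored with a simple binomial simulation of the estimated juvenile proportion per fishing set; the 25% juvenile share below is an illustrative assumption, not a value from the study:

        import numpy as np

        rng = np.random.default_rng(0)

        def estimate_spread(true_p, n_fish, n_sets=10_000):
            """Monte Carlo 95% spread of the juvenile-proportion estimate
            when n_fish individuals are measured per fishing set."""
            p_hat = rng.binomial(n_fish, true_p, size=n_sets) / n_fish
            return np.percentile(p_hat, [2.5, 97.5])

        for n in (30, 60, 120, 240):
            lo, hi = estimate_spread(0.25, n)
            print(f"n={n:3d}: 95% of estimates fall in [{lo:.3f}, {hi:.3f}]")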

  11. (I Can’t Get No) Saturation: A simulation and guidelines for sample sizes in qualitative research

    NARCIS (Netherlands)

    van Rijnsoever, Frank J.

    2017-01-01

    I explore the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample.

  12. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

    International Nuclear Information System (INIS)

    Lisboa-Filho, P N; Deimling, C V; Ortiz, W A

    2010-01-01

    In this contribution superconducting specimens of YBa2Cu3O7-δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.

  13. Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples

    Energy Technology Data Exchange (ETDEWEB)

    Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)

    2010-01-15

    In this contribution superconducting specimens of YBa2Cu3O7-δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.

  14. Sample-size resonance, ferromagnetic resonance and magneto-permittivity resonance in multiferroic nano-BiFeO3/paraffin composites at room temperature

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Lei; Li, Zhenyu; Jiang, Jia; An, Taiyu; Qin, Hongwei; Hu, Jifan, E-mail: hujf@sdu.edu.cn

    2017-01-01

    In the present work, we demonstrate that ferromagnetic resonance and magneto-permittivity resonance can be observed at appropriate microwave frequencies at room temperature for a multiferroic nano-BiFeO3/paraffin composite sample with an appropriate sample thickness (such as 2 mm). Ferromagnetic resonance originates from the room-temperature weak ferromagnetism of nano-BiFeO3. The observed magneto-permittivity resonance in multiferroic nano-BiFeO3 is connected with the dynamic magnetoelectric coupling through the Dzyaloshinskii–Moriya (DM) magnetoelectric interaction or the combination of magnetostriction and piezoelectric effects. In addition, we experimentally observed the resonance of negative imaginary permeability for nano-BiFeO3/paraffin toroidal samples with larger sample thicknesses D=3.7 and 4.9 mm. Such resonance of negative imaginary permeability belongs to sample-size resonance. - Highlights: • Nano-BiFeO3/paraffin composite shows a ferromagnetic resonance. • Nano-BiFeO3/paraffin composite shows a magneto-permittivity resonance. • Resonance of negative imaginary permeability in BiFeO3 is a sample-size resonance. • Nano-BiFeO3/paraffin composite with large thickness shows a sample-size resonance.

  15. Determination of carbon isotope ratios for honey samples by means of a liquid chromatography/isotope ratio mass spectrometry system coupled with a post-column pump.

    Science.gov (United States)

    Kawashima, Hiroto; Suto, Momoka; Suto, Nana

    2018-05-20

    Liquid chromatography/isotope ratio mass spectrometry (LC/IRMS) has been used to authenticate and trace products such as honey, wine, and lemon juice, and compounds such as caffeine and pesticides. However, LC/IRMS has several disadvantages, including the high cost of the CO2 membrane and blocking by solidified sodium persulfate. Here, we developed an improved system for determining carbon isotope ratios by LC/IRMS. The main improvement was the use of a post-column pump. Using the improved system, we determined δ13C values for glucose with high accuracy and precision (0.1‰ and 0.1‰, respectively; n = 3). The glucose, fructose, disaccharide, trisaccharide, and organic acid constituents of the honey samples were analyzed by LC/IRMS. The δ13C values for glucose, fructose, disaccharides, trisaccharides, and organic acids ranged from -27.0 to -24.2‰, -26.8 to -24.0‰, -28.8 to -24.0‰, -27.8 to -22.8‰, and -30.6 to -27.4‰, respectively. The analysis time was 1/3 to 1/2 of that required by previously reported methods. The column flow rate could be arbitrarily adjusted with the post-column pump. We applied the improved method to 26 commercial honey samples. Our results can be expected to be useful for other researchers who use LC/IRMS. This article is protected by copyright. All rights reserved.

  16. Reducing sample size by combining superiority and non-inferiority for two primary endpoints in the Social Fitness study.

    Science.gov (United States)

    Donkers, Hanneke; Graff, Maud; Vernooij-Dassen, Myrra; Nijhuis-van der Sanden, Maria; Teerenstra, Steven

    2017-01-01

    In randomized controlled trials, two endpoints may be necessary to capture the multidimensional concept of the intervention and the objectives of the study adequately. We show how to calculate sample size when defining success of a trial by combinations of superiority and/or non-inferiority aims for the endpoints. The randomized controlled trial design of the Social Fitness study uses two primary endpoints, which can be combined into five different scenarios for defining success of the trial. We show how to calculate power and sample size for each scenario and compare these for different settings of power of each endpoint and correlation between them. Compared to a single primary endpoint, using two primary endpoints often gives more power when success is defined as: improvement in one of the two endpoints and no deterioration in the other. This also gives better power than when success is defined as: improvement in one prespecified endpoint and no deterioration in the remaining endpoint. When two primary endpoints are equally important, but a positive effect in both simultaneously is not per se required, the objective of having one superior and the other (at least) non-inferior could make sense and reduce sample size. Copyright © 2016 Elsevier Inc. All rights reserved.
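
    The power gain from the combined definition of success can be checked by simulating two correlated test statistics; the effect sizes, margin, and correlation below are illustrative assumptions, not the Social Fitness values:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(1)

        def power_sup_plus_ni(n, d1, d2, margin, rho, alpha=0.025, reps=100_000):
            """Power for: endpoint 1 superior AND endpoint 2 non-inferior
            (one-sided tests), standardized effects d1 and d2, endpoint
            correlation rho, n subjects per arm, normal outcomes."""
            se = np.sqrt(2.0 / n)
            means = [d1 / se, (d2 + margin) / se]  # noncentral z statistics
            cov = [[1.0, rho], [rho, 1.0]]
            z = rng.multivariate_normal(means, cov, size=reps)
            crit = norm.ppf(1 - alpha)
            return np.mean((z[:, 0] > crit) & (z[:, 1] > crit))

        print(power_sup_plus_ni(n=64, d1=0.5, d2=0.3, margin=0.2, rho=0.5))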

  17. Determination of trace element concentrations and stable lead, uranium and thorium isotope ratios by quadrupole-ICP-MS in NORM and NORM-polluted sample leachates

    International Nuclear Information System (INIS)

    Mas, J.L.; Villa, M.; Hurtado, S.; García-Tenorio, R.

    2012-01-01

    Highlights: ► Polluted sediment and NORM samples. ► An efficient yet fast process allowing multi-parametric determination of 206Pb/207Pb/208Pb, 238U/234U and 232Th/230Th isotope ratios using a single sample aliquot and a single instrument (ICP-QMS). Eichrom UTEVA® extraction chromatography minicolumns were used to separate uranium and thorium in sample leachates. Independent ICP-MS determinations of uranium and thorium isotope ratios were carried out afterwards. Previously, a small aliquot of the leachate was used for the determination of trace element concentrations and lead isotope ratios. Several radiochemical arrangements were tested to obtain maximum performance and simplicity of the method. The performance of the method was studied in terms of the chemical yields of uranium and thorium and the removal of potentially interfering elements. The established method was applied to samples from a chemical industry and to sediments collected in a NORM-polluted scenario. The results obtained with our method allowed us to infer not only the extent, but also the sources, of the contamination in the area.

  18. Estimating population sizes for elusive animals: the forest elephants of Kakum National Park, Ghana.

    Science.gov (United States)

    Eggert, L S; Eggert, J A; Woodruff, D S

    2003-06-01

    African forest elephants are difficult to observe in the dense vegetation, and previous studies have relied upon indirect methods to estimate population sizes. Using multilocus genotyping of noninvasively collected samples, we performed a genetic survey of the forest elephant population at Kakum National Park, Ghana. We estimated population size, sex ratio and genetic variability from our data, then combined this information with field observations to divide the population into age groups. Our population size estimate was very close to that obtained using dung counts, the most commonly used indirect method of estimating the population sizes of forest elephant populations. As their habitat is fragmented by expanding human populations, management will be increasingly important to the persistence of forest elephant populations. The data that can be obtained from noninvasively collected samples will help managers plan for the conservation of this keystone species.
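
    The abstract does not give the estimator used; one common choice for genotype "recaptures" across two sampling sessions is the Chapman-corrected Lincoln-Petersen estimator, sketched here with hypothetical counts (not from the Kakum study):

        def lincoln_petersen(n1, n2, m2):
            """Chapman-corrected Lincoln-Petersen population estimate:
            n1 unique genotypes in session 1, n2 in session 2,
            m2 genotypes detected in both sessions."""
            return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

        print(round(lincoln_petersen(85, 90, 32)))  # hypothetical counts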

  19. Effect of the grain size of the soil on the measured activity and variation in activity in surface and subsurface soil samples

    International Nuclear Information System (INIS)

    Sulaiti, H.A.; Rega, P.H.; Bradley, D.; Dahan, N.A.; Mugren, K.A.; Dosari, M.A.

    2014-01-01

    Correlation between grain size and activity concentrations of soils, and the concentrations of various radionuclides in surface and subsurface soils, have been measured for samples taken in the State of Qatar by gamma-spectroscopy using a high purity germanium detector. From the obtained gamma-ray spectra, the activity concentrations of the 238U (226Ra) and 232Th (228Ac) natural decay series, the long-lived naturally occurring radionuclide 40K and the fission product radionuclide 137Cs have been determined. Gamma dose rate, radium equivalent, radiation hazard index and annual effective dose rates have also been estimated from these data. In order to observe the effect of grain size on the radioactivity of soil, three grain sizes were used, i.e., smaller than 0.5 mm; between 0.5 and 1 mm; and between 1 and 2 mm. The weighted activity concentrations of the 238U series nuclides in the 0.5-2 mm grain sizes of the samples were found to vary from 2.5±0.2 to 28.5±0.5 Bq/kg, whereas the weighted activity concentration of 40K varied from 21±4 to 188±10 Bq/kg. The weighted activity concentrations of the 238U series and 40K were found to be higher in the finest grain size. However, for the 232Th series, the activity concentrations in the 1-2 mm grain size of one sample were found to be higher than in the 0.5-1 mm grain size. In the study of surface and subsurface soil samples, the activity concentration levels of the 238U series were found to range from 15.9±0.3 to 24.1±0.9 Bq/kg in the surface soil samples (0-5 cm) and from 14.5±0.3 to 23.6±0.5 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of the 232Th series were found to lie in the range 5.7±0.2 to 13.7±0.5 Bq/kg in the surface soil samples (0-5 cm) and 4.1±0.2 to 15.6±0.3 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of 40K were in the range 150±8 to 290±17 Bq/kg in the surface soil samples

  20. Real-time photonic sampling with improved signal-to-noise and distortion ratio using polarization-dependent modulators

    Science.gov (United States)

    Liang, Dong; Zhang, Zhiyao; Liu, Yong; Li, Xiaojun; Jiang, Wei; Tan, Qinggui

    2018-04-01

    A real-time photonic sampling structure with effective nonlinearity suppression and excellent signal-to-noise ratio (SNR) performance is proposed. The key points of this scheme are the polarization-dependent modulators (P-DMZMs) and the Sagnac loop structure. Thanks to the polarization-sensitive characteristic of P-DMZMs, the differences between the transfer functions of the fundamental signal and the distortion become visible. Meanwhile, the selection of specific biases in P-DMZMs helps to achieve preferable linearized performance with a low noise level for real-time photonic sampling. Compared with the quadrature-biased scheme, the proposed scheme is capable of valid nonlinearity suppression and provides better SNR performance even over a large frequency range. The proposed scheme is proved to be effective and easily implemented for real-time photonic applications.

  1. 135Cs activity and 135Cs/137Cs atom ratio in environmental samples before and after the Fukushima Daiichi Nuclear Power Plant accident.

    Science.gov (United States)

    Yang, Guosheng; Tazoe, Hirofumi; Yamada, Masatoshi

    2016-04-07

    135Cs/137Cs is a potential tracer for radiocesium source identification. However, owing to the difficulty of measuring 135Cs, no 135Cs data were available for Japanese environmental samples before the Fukushima Daiichi Nuclear Power Plant (FDNPP) accident. It was only 3 years after the accident that limited 135Cs values could be measured in heavily contaminated environmental samples. In the present study, the activities of 134Cs, 135Cs, and 137Cs, along with their ratios, in 67 soil and plant samples heavily and lightly contaminated by the FDNPP accident were measured by combining γ spectrometry with ICP-MS/MS. The arithmetic means of the 134Cs/137Cs activity ratio (1.033 ± 0.006) and the 135Cs/137Cs atom ratio (0.334 ± 0.005) (decay-corrected to March 11, 2011), obtained from old leaves of plants collected immediately after the FDNPP accident, were confirmed to represent the FDNPP-derived radiocesium signature. Subsequently, for the first time, trace 135Cs amounts before the FDNPP accident were deduced according to the contributions of global and FDNPP accident-derived fallout. Apart from two soil samples with a tiny global fallout contribution, the contributions of global fallout radiocesium in the other soil samples were observed to be 0.338%-52.6%. The obtained 135Cs/137Cs database will be useful for its application as a geochemical tracer in the future.
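
    The source apportionment rests on a two-endmember unmixing: global fallout carries essentially no 134Cs, so a sample's decay-corrected 134Cs/137Cs ratio fixes the FDNPP share of its 137Cs. A minimal sketch of that logic (the sample ratio below is hypothetical):

        FDNPP_RATIO = 1.033  # 134Cs/137Cs of FDNPP fallout, decay-corrected
                             # to 2011-03-11 (value from the study)

        def fallout_fractions(sample_ratio):
            """Partition a sample's 137Cs between FDNPP and pre-accident
            global fallout, assuming global fallout contributes no 134Cs."""
            f_fdnpp = sample_ratio / FDNPP_RATIO
            return {"FDNPP": f_fdnpp, "global": 1.0 - f_fdnpp}

        print(fallout_fractions(0.80))  # hypothetical soil measurement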

  2. Effects of Sample Size and Dimensionality on the Performance of Four Algorithms for Inference of Association Networks in Metabonomics

    NARCIS (Netherlands)

    Suarez Diez, M.; Saccenti, E.

    2015-01-01

    We investigated the effect of sample size and dimensionality on the performance of four algorithms (ARACNE, CLR, CORR, and PCLRC) when they are used for the inference of metabolite association networks. We report that as many as 100-400 samples may be necessary to obtain stable network estimations,

  3. Lead isotope ratios in lichen samples evaluated by ICP-ToF-MS to assess possible atmospheric pollution sources in Havana, Cuba.

    Science.gov (United States)

    Alvarez, Alfredo Montero; Estévez Alvarez, Juan R; do Nascimento, Clístenes Williams Araújo; González, Iván Pupo; Rizo, Oscar Díaz; Carzola, Lázaro Lima; Torres, Roberto Ayllón; Pascual, Jorge Gómez

    2017-01-01

    Epiphytic lichens, collected from 119 sampling sites, growing on "Roystonea" royal palm trees, were used to assess the spatial distribution pattern of lead (Pb) and identify possible pollution sources in Havana (Cuba). Lead concentrations in lichens and topsoils were determined by flame atomic absorption spectrophotometry and inductively coupled plasma (ICP) atomic emission spectrometry, respectively, while Pb in crude oil and gasoline samples was measured by ICP time-of-flight mass spectrometry (ICP-ToF-MS). Lead isotopic ratio measurements for lichens, soils, and crude oils were obtained by ICP-ToF-MS. We found that enrichment factors (EF) reflected moderate contamination for 71% of the samples (EF > 10). The 206Pb/207Pb ratio values for lichens ranged from 1.17 to 1.20 and reflected a mixture of natural radiogenic sources and industrial activities (e.g., crude oils and power plants). The low concentration of Pb found in gasoline indicates that leaded gasoline is no longer used in Cuba.

  4. What about N? A methodological study of sample-size reporting in focus group studies.

    Science.gov (United States)

    Carlsen, Benedicte; Glenton, Claire

    2011-03-11

    Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals, and secondly, to assess whether and how researchers explain the number of focus groups they carry out. We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting. The often poor and inconsistent reporting seen in these studies

  5. What about N? A methodological study of sample-size reporting in focus group studies

    Directory of Open Access Journals (Sweden)

    Glenton Claire

    2011-03-01

    Full Text Available Abstract Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were firstly, to describe the current status of sample size in focus group studies reported in health journals. Secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method reporting.

  6. Multilayered samples reconstructed by measuring Kα/Kβ or Lα/Lβ X-ray intensity ratios by EDXRF

    Science.gov (United States)

    Cesareo, Roberto; de Assis, Joaquim T.; Roldán, Clodoaldo; Bustamante, Angel D.; Brunetti, Antonio; Schiavon, Nick

    2013-10-01

    In this paper a general method based on energy-dispersive X-ray fluorescence (EDXRF) analysis has been tested to assess its possible use as a tool to reconstruct the structure and determine the thickness of two- and/or multi-layered materials. The method utilizes the X-ray intensity ratios of Kα/Kβ or Lα/Lβ peaks (or the ratio of these peaks) for selected elements present in multi-layered objects of various materials (Au alloys, gilded Cu, gilded Ag, gilded Pb, Ag-Au Tumbaga, stone surfaces with protective treatments, Zn or Ni plating on metals). Results show that, in the case of multi-layered samples, a correct calculation of the peak ratio (Kα/Kβ and/or Lα/Lβ) of relevant elements from energy-dispersive X-ray fluorescence spectra can provide important information for assessing the exact location of each layer and for calculating its thickness. The methodological approach shown may have important applications not only in materials science but also in the conservation and restoration of multi-layered cultural heritage objects, where the use of a non-destructive technique to determine slight chemical and thickness variations in the layered structure is often of paramount importance to achieve the best results.
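
    The physics behind the ratio method is that the Kα and Kβ lines of a buried element are attenuated differently by the covering layer, so the measured ratio R decays from its bulk value R0 as exp[-(μα - μβ)ρd]. A sketch with illustrative attenuation values (assumed numbers, not taken from the paper):

        from math import log

        def cover_thickness_cm(r_meas, r_bulk, mu_a, mu_b, density, sin_theta=1.0):
            """Covering-layer thickness from the attenuated Kalpha/Kbeta ratio.
            mu_a, mu_b: mass attenuation coefficients (cm^2/g) of the cover
            material at the two line energies; density in g/cm^3; sin_theta
            accounts for the exit angle toward the detector."""
            return sin_theta * log(r_bulk / r_meas) / ((mu_a - mu_b) * density)

        # Illustrative numbers (Cu lines seen through a thin gold layer):
        d = cover_thickness_cm(r_meas=6.0, r_bulk=7.0, mu_a=220.0, mu_b=160.0,
                               density=19.3)
        print(f"{d * 1e4:.1f} micrometres")  # ~1.3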

  7. The effects of parameter estimation on minimizing the in-control average sample size for the double sampling X bar chart

    Directory of Open Access Journals (Sweden)

    Michael B.C. Khoo

    2013-11-01

    Full Text Available The double sampling (DS) X bar chart, one of the most widely used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.
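
    The mean-greater-than-median point, and the impact of Phase-I estimation, can be seen with a simplified single-sampling X bar chart simulation (a sketch, not the full DS scheme):

        import numpy as np

        rng = np.random.default_rng(2)

        def in_control_run_lengths(n=5, L=3.0, m_phase1=80, reps=2_000, cap=20_000):
            """Run lengths when limits come from m_phase1 Phase-I samples,
            so parameter estimation error is included."""
            out = np.empty(reps)
            for r in range(reps):
                phase1 = rng.normal(0.0, 1.0, size=(m_phase1, n))
                mu_hat, sigma_hat = phase1.mean(), phase1.std(ddof=1)  # crude estimates
                lim = L * sigma_hat / np.sqrt(n)
                t = 0
                while t < cap:
                    t += 1
                    if abs(rng.normal(0.0, 1.0, size=n).mean() - mu_hat) > lim:
                        break
                out[r] = t
            return out

        rl = in_control_run_lengths()
        print("ARL:", rl.mean(), "MRL:", np.median(rl))  # mean exceeds median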

  8. Sizing of intergranular stress corrosion cracking using low frequency ultrasound

    International Nuclear Information System (INIS)

    Fuller, M.D.; Avioli, M.J.; Rose, J.L.

    1985-01-01

    Based upon the work thus far accomplished on low frequency sizing, the following conclusions can be drawn: the potential of low frequency ultrasound for the sizing of IGSCC seems encouraging, as demonstrated in this work; if minimal walking is expected, larger values of crack height/wavelength ratios should not affect the reliability of estimates; notch data point out the validity of signal amplitude for sizing, and with care in frequency consideration the technique can be extended to cracks; when the wavelength is greater than the flaw size, the importance of orientation and reflector shape diminishes, although less so for deeper cracks; when the beam profile is larger than the defect size, echo amplitude is proportional to defect area when using shear wave probes and corner reflectors; other factors, in addition to crack size, affect signal amplitude, so reference data to compensate for depth and material (HAZ) is a must; additional crack samples should be studied in order to further develop and characterize the use of low frequency ultrasonics

  9. Particle shape accounts for instrumental discrepancy in ice core dust size distributions

    Science.gov (United States)

    Folden Simonsen, Marius; Cremonesi, Llorenç; Baccolo, Giovanni; Bosch, Samuel; Delmonte, Barbara; Erhardt, Tobias; Kjær, Helle Astrid; Potenza, Marco; Svensson, Anders; Vallelonga, Paul

    2018-05-01

    The Klotz Abakus laser sensor and the Coulter counter are both used for measuring the size distribution of insoluble mineral dust particles in ice cores. While the Coulter counter measures particle volume accurately, the equivalent Abakus instrument measurement deviates substantially from the Coulter counter. We show that the difference between the Abakus and the Coulter counter measurements is mainly caused by the irregular shape of dust particles in ice core samples. The irregular shape means that a new calibration routine based on standard spheres is necessary for obtaining fully comparable data. This new calibration routine gives an increased accuracy to Abakus measurements, which may improve future ice core record intercomparisons. We derived an analytical model for extracting the aspect ratio of dust particles from the difference between Abakus and Coulter counter data. For verification, we measured the aspect ratio of the same samples directly using a single-particle extinction and scattering instrument. The results demonstrate that the model is accurate enough to discern between samples of aspect ratio 0.3 and 0.4 using only the comparison of Abakus and Coulter counter data.

  10. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Science.gov (United States)

    Aghamohammadi, Cina; Loomis, Samuel P.; Mahoney, John R.; Crutchfield, James P.

    2018-02-01

    We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  11. Extreme Quantum Memory Advantage for Rare-Event Sampling

    Directory of Open Access Journals (Sweden)

    Cina Aghamohammadi

    2018-02-01

    Full Text Available We introduce a quantum algorithm for memory-efficient biased sampling of rare events generated by classical memoryful stochastic processes. Two efficiency metrics are used to compare quantum and classical resources for rare-event sampling. For a fixed stochastic process, the first is the classical-to-quantum ratio of required memory. We show for two example processes that there exists an infinite number of rare-event classes for which the memory ratio for sampling is larger than r, for any large real number r. Then, for a sequence of processes each labeled by an integer size N, we compare how the classical and quantum required memories scale with N. In this setting, since both memories can diverge as N→∞, the efficiency metric tracks how fast they diverge. An extreme quantum memory advantage exists when the classical memory diverges in the limit N→∞, but the quantum memory has a finite bound. We then show that finite-state Markov processes and spin chains exhibit memory advantage for sampling of almost all of their rare-event classes.

  12. Uniform Sampling Table Method and its Applications II--Evaluating the Uniform Sampling by Experiment.

    Science.gov (United States)

    Chen, Yibin; Chen, Jiaxi; Chen, Xuan; Wang, Min; Wang, Wei

    2015-01-01

    A new method of uniform sampling is evaluated in this paper. Items and indexes were adopted to evaluate the rationality of the uniform sampling. The evaluation items included convenience of operation, uniformity of sampling site distribution, and accuracy and precision of measured results. The evaluation indexes included operational complexity, occupation rate of sampling site in a row and column, relative accuracy of pill weight, and relative deviation of pill weight. They were obtained from three kinds of drugs with different shapes and sizes by four kinds of sampling methods. Grey correlation analysis was adopted to make the comprehensive evaluation by comparing it with the standard method. The experimental results showed that the convenience of the uniform sampling method was 1 (100%), the odds ratio of the occupation rate in a row and column was infinity, relative accuracy was 99.50-99.89%, reproducibility RSD was 0.45-0.89%, and the weighted incidence degree exceeded the standard method. Hence, the uniform sampling method is easy to operate, and the selected samples are distributed uniformly. The experimental results demonstrated that the uniform sampling method has good accuracy and reproducibility, and can be put into use in drug analysis.

  13. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    Directory of Open Access Journals (Sweden)

    Pitchaiah Mandava

    Full Text Available OBJECTIVE: Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), a range of scores ("Shift") is proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "Shift" compared to dichotomized outcomes using published distributions of mRS uncertainties and applied this model to clinical trials. METHODS: We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "Shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, considered positive by "Shift" mRS while the larger follow-up SAINT II trial was negative, we recalculated the sample size required if classification uncertainty was taken into account. RESULTS: Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g., mRS 1: 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. CONCLUSION: We show that when uncertainty in assessments is considered, the lowest error rates are with dichotomization. While using the full range of mRS is conceptually appealing, a gain of information is counter-balanced by a decrease in reliability. The resultant errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment.
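
    The error computation reduces to propagating the trial's mRS distribution through an inter-rater confusion matrix. A sketch with a hypothetical confusion matrix (a real one would be derived from the published inter-rater variability data):

        import numpy as np

        def shift_error(p_true, confusion):
            """P(observed != true) over the full ordinal range ("Shift")."""
            joint = np.asarray(p_true)[:, None] * confusion  # P(true i, obs j)
            return 1.0 - np.trace(joint)

        def dichotomy_error(p_true, confusion, cut):
            """P(observed and true fall on opposite sides of mRS <= cut)."""
            joint = np.asarray(p_true)[:, None] * confusion
            below = np.arange(len(p_true)) <= cut
            return joint[below][:, ~below].sum() + joint[~below][:, below].sum()

        K = 7                                   # mRS levels 0..6
        conf = np.full((K, K), 0.25 / (K - 1))  # hypothetical: 25% of ratings stray
        np.fill_diagonal(conf, 0.75)
        p = np.ones(K) / K                      # hypothetical trial distribution
        print(shift_error(p, conf), dichotomy_error(p, conf, cut=1))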

  14. Estimating negative likelihood ratio confidence when test sensitivity is 100%: A bootstrapping approach.

    Science.gov (United States)

    Marill, Keith A; Chang, Yuchiao; Wong, Kim F; Friedman, Ari B

    2017-08-01

    Objectives: Assessing high-sensitivity tests for mortal illness is crucial in emergency and critical care medicine. Estimating the 95% confidence interval (CI) of the likelihood ratio (LR) can be challenging when sample sensitivity is 100%. We aimed to develop, compare, and automate a bootstrapping method to estimate the negative LR CI when sample sensitivity is 100%. Methods: The lowest population sensitivity that is most likely to yield sample sensitivity 100% is located using the binomial distribution. Random binomial samples generated using this population sensitivity are then used in the LR bootstrap. A free R program, "bootLR," automates the process. Extensive simulations were performed to determine how often the LR bootstrap and comparator method 95% CIs cover the true population negative LR value. Finally, the 95% CI was compared for theoretical sample sizes and sensitivities approaching and including 100% using: (1) a technique of individual extremes, (2) SAS software based on the technique of Gart and Nam, (3) the Score CI (as implemented in the StatXact, SAS, and R PropCI packages), and (4) the bootstrapping technique. Results: The bootstrapping approach demonstrates appropriate coverage of the nominal 95% CI over a spectrum of populations and sample sizes. Considering a study of sample size 200 with 100 patients with disease, and specificity 60%, the lowest population sensitivity with median sample sensitivity 100% is 99.31%. When all 100 patients with disease test positive, the negative LR 95% CIs are: individual extremes technique (0, 0.073), StatXact (0, 0.064), SAS Score method (0, 0.057), R PropCI (0, 0.062), and bootstrap (0, 0.048). Similar trends were observed for other sample sizes. Conclusions: When study samples demonstrate 100% sensitivity, available methods may yield inappropriately wide negative LR CIs. An alternative bootstrapping approach and an accompanying free open-source R package were developed to yield realistic estimates easily.
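
    The described procedure is easy to re-derive: the lowest population sensitivity whose median sample sensitivity is 100% satisfies p^n = 0.5, and a parametric bootstrap then propagates both proportions into the negative LR. A rough sketch of that approach (an approximation, not the bootLR package itself):

        import numpy as np

        rng = np.random.default_rng(3)

        def nlr_ci_all_positive(n_dis, n_nondis, spec_hat, reps=100_000):
            """95% CI for (1 - sens) / spec when all n_dis diseased test positive."""
            p_sens = 0.5 ** (1.0 / n_dis)  # median sample sensitivity is then 100%
            sens = rng.binomial(n_dis, p_sens, reps) / n_dis
            spec = rng.binomial(n_nondis, spec_hat, reps) / n_nondis
            spec = np.clip(spec, 1.0 / (2 * n_nondis), None)  # guard divide-by-zero
            nlr = (1.0 - sens) / spec
            return np.percentile(nlr, [2.5, 97.5])

        print(0.5 ** (1.0 / 100))                   # 0.9931 -> the 99.31% above
        print(nlr_ci_all_positive(100, 100, 0.60))  # roughly (0, 0.05)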

  15. Dependence of fracture mechanical and fluid flow properties on fracture roughness and sample size

    International Nuclear Information System (INIS)

    Tsang, Y.W.; Witherspoon, P.A.

    1983-01-01

    A parameter study has been carried out to investigate the interdependence of the mechanical and fluid flow properties of fractures with fracture roughness and sample size. A rough fracture can be defined mathematically in terms of its aperture density distribution. Correlations were found between the shapes of the aperture density distribution function and specific features of the stress-strain behavior and fluid flow characteristics. Well-matched fractures had peaked aperture distributions that resulted in very nonlinear stress-strain behavior. With an increasing degree of mismatch between the top and bottom of a fracture, the aperture density distribution broadened and the nonlinearity of the stress-strain behavior became less accentuated. The different aperture density distributions also gave rise to qualitatively different fluid flow behavior. Findings from this investigation make it possible to estimate the stress-strain and fluid flow behavior when the roughness characteristics of the fracture are known and, conversely, to estimate the fracture roughness from an examination of the hydraulic and mechanical data. Results from this study showed that both the mechanical and hydraulic properties of the fracture are controlled by the large-scale roughness of the joint surface. This suggests that when the stress-flow behavior of a fracture is being investigated, the size of the rock sample should be larger than the typical wavelength of the roughness undulations

  16. Sample preparation techniques for the determination of natural 15N/14N variations in amino acids by gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS).

    Science.gov (United States)

    Hofmann, D; Gehre, M; Jung, K

    2003-09-01

    In order to identify natural nitrogen isotope variations of biologically important amino acids, four derivatization reactions (t-butylmethylsilylation, esterification with subsequent trifluoroacetylation, acetylation and pivaloylation) were tested with standard mixtures of 17 proteinogenic amino acids and plant (moss) samples using GC-C-IRMS. The possible fractionation of the nitrogen isotopes, caused for instance by the formation of multiple reaction products, was investigated. For biological samples, esterification of the amino acids with subsequent trifluoroacetylation is recommended for nitrogen isotope ratio analysis. A sample preparation technique is described for the isotope ratio mass spectrometric analysis of amino acids from the non-protein nitrogen (NPN) fraction of terrestrial moss. 14N/15N ratios from moss (Scleropodium spec.) samples from different anthropogenically polluted areas were studied with respect to ecotoxicological bioindication.

  17. Grain size effect on Sr and Nd isotopic compositions in eolian dust. Implications for tracing dust provenance and Nd model age

    International Nuclear Information System (INIS)

    Feng Jinliang; Zhu Liping; Zhen Xiaolin; Hu Zhaoguo

    2009-01-01

    Strontium (Sr) and neodymium (Nd) isotopic compositions enable identification of dust sources and reconstruction of atmospheric dispersal pathways. The Sr and Nd isotopic compositions in eolian dust change systematically with grain size in ways not yet fully understood. This study demonstrates the grain size effect on the Sr and Nd isotopic compositions in loess and a 2006 dust fall, based on analyses of seven separated grain size fractions. The analytical results indicate that Sr isotopic ratios strongly depend on the grain size fractions in samples from all types of eolian dust. In contrast, the Nd isotopic ratios exhibit little variation in loess, although they vary significantly with grain size in samples from the 2006 dust fall. Furthermore, Nd model ages tend to increase with increasing grain size in samples from all types of eolian dust. Comparatively, Sr isotopic compositions are highly sensitive to wind sorting, while Nd isotopic compositions show greater sensitivity to dust origin. The principal cause for the different patterns of Sr and Nd isotopic composition variability with grain size appears related to the different geochemical behaviors of rubidium (Rb) and Sr, and the similar geochemical behaviors of samarium (Sm) and Nd. The Nd isotope data indicate that the various grain size fractions in loess have similar origins for each sample. In contrast, different provenance components may separate into different grain size fractions in the studied 2006 dust fall. The Sr and Nd isotope compositions further confirm that the 2006 dust fall and the Pleistocene loess in Beijing have different sources. The loess deposits found in Beijing and those found on the Chinese Loess Plateau also derive from different sources. Variations of Sr and Nd isotopic compositions and Nd model ages with grain size need to be considered when directly comparing analyses of eolian dust of different grain sizes. (author)

  18. Thickness measurement of multilayered samples by Kα/Kβ or Lα/Lβ X-ray ratios

    Energy Technology Data Exchange (ETDEWEB)

    Cesareo, Roberto; Brunetti, Antonio, E-mail: roberto.cesareo@gmail.com, E-mail: brunetti@uniss.it [Universita di Sassari (UNISS), Sassari, (Italy); Assis, Joaquim T. de, E-mail: rcbarros@pq.cnpq.br [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil)

    2013-07-01

    Objects composed of two or more layers are relatively common among industrial and electronic materials, works of art and everyday tools. For example, plated objects (with zinc, nickel, silver or gold) are composed of two or three layers, a painting is generally composed of several layers, and a decorated vase is composed of two or three layers, as is a stone, marble or bronze object covered with a protective layer. In this paper a general method and some results are described for reconstructing the structure and determining the thicknesses of multilayered materials when energy-dispersive X-ray fluorescence is employed to analyze the material: the X-ray ratios of Kα/Kβ and Lα/Lβ for elements present in the multilayered samples are employed. (author)

  19. Thickness measurement of multilayered samples by Kα/Kβ or Lα/Lβ X-ray ratios

    International Nuclear Information System (INIS)

    Cesareo, Roberto; Brunetti, Antonio; Assis, Joaquim T. de

    2013-01-01

    Objects composed of two or more layers are relatively common among industrial and electronic materials, works of art and everyday tools. For example, plated objects (with zinc, nickel, silver or gold) are composed of two or three layers, a painting is generally composed of several layers, and a decorated vase is composed of two or three layers, as is a stone, marble or bronze object covered with a protective layer. In this paper a general method and some results are described for reconstructing the structure and determining the thicknesses of multilayered materials when energy-dispersive X-ray fluorescence is employed to analyze the material: the X-ray ratios of Kα/Kβ and Lα/Lβ for elements present in the multilayered samples are employed. (author)

  20. Estimated ventricle size using Evans index: reference values from a population-based sample.

    Science.gov (United States)

    Jaraj, D; Rabiei, K; Marlow, T; Jensen, C; Skoog, I; Wikkelsø, C

    2017-03-01

    Evans index is an estimate of ventricular size used in the diagnosis of idiopathic normal-pressure hydrocephalus (iNPH). Values >0.3 are considered pathological and are required by guidelines for the diagnosis of iNPH. However, there are no previous epidemiological studies on Evans index, and normal values in adults are thus not precisely known. We examined a representative sample to obtain reference values and descriptive data on Evans index. A population-based sample (n = 1235) of men and women aged ≥70 years was examined. The sample comprised people living in private households and residential care, systematically selected from the Swedish population register. Neuropsychiatric examinations, including head computed tomography, were performed between 1986 and 2000. Evans index ranged from 0.11 to 0.46. The mean value in the total sample was 0.28 (SD, 0.04) and 20.6% (n = 255) had values >0.3. Among men aged ≥80 years, the mean value of Evans index was 0.3 (SD, 0.03). Individuals with dementia had a mean value of Evans index of 0.31 (SD, 0.05) and those with radiological signs of iNPH had a mean value of 0.36 (SD, 0.04). A substantial number of subjects had ventricular enlargement according to current criteria. Clinicians and researchers need to be aware of the range of values among older individuals. © 2017 EAN.

  1. Relationship between sleep characteristics and measures of body size and composition in a nationally-representative sample.

    Science.gov (United States)

    Xiao, Qian; Gu, Fangyi; Caporaso, Neil; Matthews, Charles E

    2016-01-01

    Short sleep has been linked to obesity. However, sleep is a multidimensional behavior that cannot be characterized solely by sleep duration. There are limited studies that comprehensively examine different sleep characteristics in relation to obesity. We examined various aspects of sleep in relation to adiposity in 2005-2006 NHANES participants who were 18 or older and free of cardiovascular disease, cancer, emphysema, chronic bronchitis and depression (N = 3995). Sleep characteristics were self-reported, and included duration, overall quality, onset latency, fragmentation, daytime sleepiness, snoring, and sleep disorders. Body measurements included weight, height, waist circumference, and dual-energy X-ray absorptiometry measured fat mass. Snoring was associated with higher BMI (adjusted difference in kg/m2 comparing snoring for 5+ nights/week with no snoring (95% confidence interval), 1.85 (0.88, 2.83)), larger waist circumference (cm, 4.52 (2.29, 6.75)), higher percentage of body fat (%, 1.61 (0.84, 2.38)), and higher android/gynoid ratio (0.03 (0.01, 0.06)). The associations were independent of sleep duration and sleep quality, and cannot be explained by the existence of sleep disorders such as sleep apnea. Poor sleep quality (two or more problematic sleep conditions) and short sleep duration were also associated with body size and fat composition, although the effects were attenuated after snoring was adjusted. In a nationally representative sample of healthy US adults, snoring, short sleep, and poor sleep quality were associated with higher adiposity.

  2. Brain size and brain/intracranial volume ratio in major mental illness

    Directory of Open Access Journals (Sweden)

    Teale Peter

    2010-10-01

    Full Text Available Abstract Background This paper summarizes the findings of a long-term study addressing the question of how several brain volume measures are related to three major mental illnesses in a Colorado subject group. It reports results obtained from a large N, collected and analyzed by the same laboratory over a multiyear period, with visually guided MRI segmentation being the primary initial analytic tool. Methods Intracerebral volume (ICV), total brain volume (TBV), ventricular volume (VV), ventricular/brain ratio (VBR), and TBV/ICV ratios were calculated from a total of 224 subject MRIs collected over a period of 13 years. Subject groups included controls (C, N = 89) and patients with schizophrenia (SZ, N = 58), bipolar disorder (BD, N = 51), and schizoaffective disorder (SAD, N = 26). Results ICV, TBV, and VV measures compared favorably with values obtained by other research groups, but in this study did not differ significantly between groups. TBV/ICV ratios were significantly decreased, and VBR increased, in the SZ and BD groups compared to the C group. The SAD group did not differ from C on any measure. Conclusions In this study the TBV/ICV and VBR ratios separated SZ and BD patients from controls. Of interest, however, SAD patients did not differ from controls on these measures. The findings suggest that the gross measure of TBV may not reliably differ in the major mental illnesses to a degree useful in diagnosis, likely due to the intrinsic variability of the measures in question; the differences in VBR appear more robust across studies. Differences between some of these findings and earlier reports from several laboratories finding significant group differences in VV and TBV may relate to phenomenological drift, differences in analytic techniques, and possibly the "file drawer problem".

  3. THE INFLUENCE OF CONDITIONS OF THE STOCK MARKET AND MONETARY POLICY ON THE BEHAVIOR OF RISK INDICATORS SIZE, BOOK-TO-MARKET RATIO AND MOMENTUM, ON THE BRAZILIAN STOCK MARKET

    Directory of Open Access Journals (Sweden)

    Adriano Mussa

    2011-04-01

    Full Text Available In recent years, empirical tests of APT (Arbitrage Pricing Theory) models have intensified in the national and international literature, mainly using firm characteristics to construct risk factors in addition to the market beta. The Fama-French 3-factor model and the Carhart 4-factor model are two intensively tested examples of this type of model, with evidence of relative success. In this scenario, it is important to deepen the study of the behavior of the factors that compose these models. Following international research, the purpose of this article is to investigate the behavior of the factors size, book-to-market ratio, and momentum on the Brazilian stock market under conditions of (i) up and down markets and (ii) expansive and restrictive monetary policy. The sample comprised all stocks listed on BOVESPA from June 1995 to June 2007. The methodology was the same used by Fama & French (1993) to construct the portfolios and risk factors. The results indicated that the stock market and monetary environment influence regularities in the factors' behavior on the Brazilian stock market.

  4. SU-E-I-46: Sample-Size Dependence of Model Observers for Estimating Low-Contrast Detection Performance From CT Images

    International Nuclear Information System (INIS)

    Reiser, I; Lu, Z

    2014-01-01

    Purpose: Recently, task-based assessment of diagnostic CT systems has attracted much attention. Detection task performance can be estimated using human observers or mathematical observer models. While most models are well established, considerable bias can be introduced when performance is estimated from a limited number of image samples. Thus, the purpose of this work was to assess the effect of sample size on the bias and uncertainty of two channelized Hotelling observers and a template-matching observer. Methods: The image data used for this study consisted of 100 signal-present and 100 signal-absent regions of interest, which were extracted from CT slices. The experimental conditions included two signal sizes and five different x-ray beam current settings (mAs). Human observer performance for these images was determined in 2-alternative forced choice experiments. These data were provided by the Mayo Clinic in Rochester, MN. Detection performance was estimated from three observer models, including channelized Hotelling observers (CHO) with Gabor or Laguerre-Gauss (LG) channels, and a template-matching (TM) observer. Different sample sizes were generated by randomly selecting subsets of image pairs (N = 20, 40, 60, 80). Observer performance was quantified as the proportion of correct responses (PC). Bias was quantified as the relative difference in PC between 20 and 80 image pairs. Results: For N = 100, all observer models predicted human performance across mAs and signal sizes. Bias was 23% for the CHO (Gabor), 7% for the CHO (LG), and 3% for the TM observer. The relative standard deviation, σ(PC)/PC, at N = 20 was highest for the TM observer (11%) and lowest for the CHO (Gabor) observer (5%). Conclusion: In order to make image quality assessment feasible in clinical practice, a statistically efficient observer model that can predict performance from few samples is needed. Our results identified two observer models that may be suited for this task.
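    A template-matching observer and its small-sample behavior are straightforward to reproduce on synthetic data. A minimal sketch assuming Gaussian ROI vectors and resubstitution scoring (the dimensions, signal amplitude, and data are illustrative assumptions, not the study's images):

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_template_matching(sp, sa):
    """2AFC proportion correct of a template-matching observer.
    Template = mean(signal-present) - mean(signal-absent), estimated from
    the same ROIs it is scored on (resubstitution), one source of the
    small-sample bias discussed in the abstract."""
    template = sp.mean(axis=0) - sa.mean(axis=0)
    scores_sp, scores_sa = sp @ template, sa @ template
    # Fraction of signal-present/absent pairings ranked correctly:
    return (scores_sp[:, None] > scores_sa[None, :]).mean()

# Hypothetical ROIs: 100 signal-present and 100 signal-absent vectors.
sp = rng.normal(0.3, 1.0, size=(100, 64))
sa = rng.normal(0.0, 1.0, size=(100, 64))

for n in (20, 40, 60, 80, 100):
    idx = rng.choice(100, size=n, replace=False)
    print(n, round(pc_template_matching(sp[idx], sa[idx]), 3))
```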

  5. (I Can’t Get No) Saturation: A Simulation and Guidelines for Minimum Sample Sizes in Qualitative Research

    NARCIS (Netherlands)

    van Rijnsoever, F.J.

    2015-01-01

    This paper explores the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the
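    The saturation logic the abstract describes can be made concrete with a small Monte Carlo sketch: a population holds a fixed set of codes, each information source reveals codes with some probability, and sampling stops once several consecutive sources add nothing new. The code probabilities and stopping rule below are hypothetical, not the paper's parameters:

```python
import random

def sample_size_to_saturation(code_probs, stop_after=3, rng=random):
    """Draw information sources until theoretical saturation: stop once
    `stop_after` consecutive sources yield no new codes. code_probs[k] is
    the probability that a single source mentions code k."""
    seen, idle, n = set(), 0, 0
    while idle < stop_after:
        n += 1
        revealed = {k for k, p in enumerate(code_probs) if rng.random() < p}
        if revealed - seen:
            seen |= revealed
            idle = 0
        else:
            idle += 1
    return n

# 5 common codes and 10 rare ones; distribution of required sample sizes:
sizes = sorted(sample_size_to_saturation([0.5] * 5 + [0.1] * 10) for _ in range(1000))
print("median:", sizes[500], " 95th percentile:", sizes[950])
```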

  6. Point Counts of Birds in Bottomland Hardwood Forests of the Mississippi Alluvial Valley: Duration, Minimum Sample Size, and Points Versus Visits

    Science.gov (United States)

    Winston Paul Smith; Daniel J. Twedt; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford; Robert J. Cooper

    1993-01-01

    To compare the efficacy of point count sampling in bottomland hardwood forests, the duration of point counts, the number of point counts, the number of visits to each point during a breeding season, and the minimum sample size are examined.

  7. Sample Size of One: Operational Qualitative Analysis in the Classroom

    Directory of Open Access Journals (Sweden)

    John Hoven

    2015-10-01

    Full Text Available Qualitative analysis has two extraordinary capabilities: first, finding answers to questions we are too clueless to ask; and second, causal inference – hypothesis testing and assessment – within a single unique context (sample size of one. These capabilities are broadly useful, and they are critically important in village-level civil-military operations. Company commanders need to learn quickly, "What are the problems and possibilities here and now, in this specific village? What happens if we do A, B, and C?" – and that is an ill-defined, one-of-a-kind problem. The U.S. Army's Eighty-Third Civil Affairs Battalion is our "first user" innovation partner in a new project to adapt qualitative research methods to an operational tempo and purpose. Our aim is to develop a simple, low-cost methodology and training program for local civil-military operations conducted by non-specialist conventional forces. Complementary to that, this paper focuses on some essential basics that can be implemented by college professors without significant cost, effort, or disruption.

  8. Impact of electrocardiogram-gated multi-slice computed tomography-based aortic annular measurement in the evaluation of paravalvular leakage following transcatheter aortic valve replacement: the efficacy of the OverSized AortiC Annular ratio (OSACA ratio) in TAVR.

    Science.gov (United States)

    Maeda, Koichi; Kuratani, Toru; Torikai, Kei; Shimamura, Kazuo; Mizote, Isamu; Ichibori, Yasuhiro; Takeda, Yasuharu; Daimon, Takashi; Nakatani, Satoshi; Nanto, Shinsuke; Sawa, Yoshiki

    2013-07-01

    Even mild paravalvular leakage (PVL) after transcatheter aortic valve replacement (TAVR) is associated with increased late mortality. Electrocardiogram-gated multi-slice computed tomography (MSCT) enables detailed assessment of the aortic annulus. We describe the impact of MSCT on PVL following TAVR. Congruence between the prosthesis and annulus diameters affects PVL; therefore, we calculated the OverSized AortiC Annular ratio (OSACA ratio) and the OSACA (transesophageal echocardiography, TEE) ratio as prosthesis diameter/annulus diameter on MSCT or TEE, respectively, and compared their relationship with PVL ≤ trace following TAVR. In 36 consecutive patients undergoing TAVR (Group A), the occurrence of PVL ≤ trace (33.3%) was significantly related to the OSACA ratio (p = 0.00020). In receiver operating characteristic analysis, a cutoff value of 1.03 for the OSACA ratio had the highest sum of sensitivity (75.0%) and specificity (91.7%; AUC = 0.87), with significantly higher discriminatory performance for PVL than the OSACA (TEE) ratio (AUC = 0.69, p = 0.028). In nine consecutive patients (Group B) undergoing TAVR based on guidelines formulated from our experience with Group A, PVL ≤ trace was significantly more frequent (88.9%) than in Group A (p = 0.0060). The OSACA ratio has significantly higher discriminatory performance for PVL ≤ trace than the OSACA (TEE) ratio, and aortic annular measurement by MSCT is more accurate than that by TEE.
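    The ratio itself and the cutoff search are simple to express in code. A minimal sketch assuming per-patient diameters and a Youden-index scan over candidate cutoffs; the example values are illustrative, not patient data from the study:

```python
import numpy as np

def osaca_ratio(prosthesis_diameter_mm, annulus_diameter_mm):
    """OverSized AortiC Annular ratio: prosthesis diameter divided by the
    (MSCT- or TEE-derived) aortic annulus diameter."""
    return prosthesis_diameter_mm / annulus_diameter_mm

def youden_cutoff(ratios, pvl_le_trace):
    """Cutoff maximizing sensitivity + specificity - 1 for predicting
    PVL <= trace (higher ratios expected in the PVL <= trace group)."""
    r = np.asarray(ratios, dtype=float)
    y = np.asarray(pvl_le_trace, dtype=bool)
    j, cut = max(((r[y] >= c).mean() + (r[~y] < c).mean() - 1.0, c)
                 for c in np.unique(r))
    return cut

# Illustrative four-patient cohort:
ratios = [osaca_ratio(26, 25.8), osaca_ratio(26, 24.2),
          osaca_ratio(23, 23.5), osaca_ratio(29, 27.0)]
print(youden_cutoff(ratios, [False, True, False, True]))
```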

  9. The effect of Fe, Mn, Ni and Pb Load on Soil and its enrichment factor ratios in different soil grain size fractions as an Indicator for soil pollution

    International Nuclear Information System (INIS)

    Rabie, F.H.; Abdel-Sabour, M.F.

    2000-01-01

    An industrial area north of Greater Cairo was selected to investigate the impact of intensive industrial activities on soil characteristics and on total Fe, Mn, Ni, and Pb contents. The studied area was divided into six sectors according to the source of irrigation water and/or the probability of pollution. Sixteen soil profiles were dug; soil samples were taken, air-dried, and fractionated into different grain-size fractions, and the total heavy metals (Fe, Mn, Ni, and Pb) were determined using the ICP technique. The enrichment factor (EF) for each metal in each soil fraction and soil layer was estimated and discussed. The highest EF ratios in the clay fraction were found mainly for Pb, which indicates an industrial impact on the soil. In the sand fraction, Mn was consistently the highest compared to the other studied metals. In the silt fraction, the accumulation of Fe, Mn, and Pb varied with soil depth and among soil profiles.
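    The enrichment factor is conventionally computed by normalizing a metal to a conservative reference element and comparing against an uncontaminated background. A minimal sketch of that conventional formula; the reference element (Fe here) and the background values are assumptions for illustration, since the abstract does not state them:

```python
def enrichment_factor(c_metal, c_ref, bg_metal, bg_ref):
    """Conventional enrichment factor:
    EF = (C_metal / C_ref)_sample / (C_metal / C_ref)_background.
    Values well above 1 point to anthropogenic (e.g. industrial) input."""
    return (c_metal / c_ref) / (bg_metal / bg_ref)

# Hypothetical Pb in a clay fraction (mg/kg), Fe as the reference element:
print(enrichment_factor(c_metal=85.0, c_ref=32000.0,
                        bg_metal=20.0, bg_ref=47200.0))
```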

  10. The Effect of Debt to Equity Ratio, Current Ratio, and Net Profit Margin on Stock Prices with Price Earnings Ratio as a Moderating Variable in Manufacturing Companies Listed on the Indonesia Stock Exchange (BEI), 2012-2014

    OpenAIRE

    Theresia, Paskah Lia

    2017-01-01

    This study was conducted to analyze the effect of the variables Debt to Equity Ratio (DER), Current Ratio (CR), and Net Profit Margin (NPM) on stock prices, with Price Earnings Ratio (PER) as a moderating variable, in companies listed on the Indonesia Stock Exchange from 2012 to 2014. The sampling technique used was purposive sampling, with a sample of 23 companies. The analysis techniques used were Descriptive Statistic Analysis, Classical Assumption Test, Hypothesis T...

  11. Ratio of 210Po and 210Pb in fresh, brackish and saline water in Kuala Selangor river

    International Nuclear Information System (INIS)

    Tan Chin Siang; Che Abdul Rahim Mohamed; Zaharuddin Ahmad

    2007-01-01

    Sediment cores were collected along the Kuala Selangor river out to marine waters, passing through coastal and brackish-water environments. The size fraction below 125 μm was spiked with a 209Po tracer, leached with a mixture of concentrated nitric acid, perchloric acid, hydrogen peroxide, and hydrochloric acid, and mineralized with 50 ml of 0.5 M HCl. From the resulting solution, polonium was spontaneously deposited onto a silver disk at 80-85 °C and measured by alpha spectrometry. The distributions of the two radionuclides, 210Po and 210Pb, and especially the 210Po/210Pb ratio were useful in identifying the origin of 210Po. The 210Po/210Pb ratios in the freshwater, brackish water, and saline water were 3.3459, 5.8385, and 2.9831, respectively. Given these high 210Po/210Pb ratios, the widespread occurrence of excess 210Po in Kuala Selangor river water may stem from atmospheric sources such as stratospheric aerosols, sea spray from the surface microlayer, and biologically volatilized 210Po from productive species. (author)

  12. Self-navigation of a scanning tunneling microscope tip toward a micron-sized graphene sample.

    Science.gov (United States)

    Li, Guohong; Luican, Adina; Andrei, Eva Y

    2011-07-01

    We demonstrate a simple capacitance-based method to quickly and efficiently locate micron-sized conductive samples, such as graphene flakes, on insulating substrates in a scanning tunneling microscope (STM). By using edge recognition, the method is designed to locate and to identify small features when the STM tip is far above the surface, allowing for crash-free search and navigation. The method can be implemented in any STM environment, even at low temperatures and in strong magnetic field, with minimal or no hardware modifications.

  13. Application of Conventional and K0-Based Internal Monostandard NAA Using Reactor Neutrons for Compositional Analysis of Large Samples

    International Nuclear Information System (INIS)

    Reddy, A.V.R.; Acharya, R.; Swain, K. K.; Pujari, P.K.

    2018-01-01

    Large sample neutron activation analysis (LSNAA) work was carried out on samples of coal, uranium ore, stainless steel, ancient and new clay potteries, dross, and a clay pottery replica from Peru, using low-flux, highly thermalized irradiation sites. Large as well as non-standard geometry samples (1 g - 0.5 kg) were irradiated using the thermal column (TC) facility of the Apsara reactor and the graphite reflector position of the critical facility (CF) at Bhabha Atomic Research Centre, Mumbai. Small samples (10 - 500 mg) were also irradiated at the core position of the Apsara reactor, the pneumatic carrier facility (PCF) of the Dhruva reactor, and the pneumatic fast transfer facility (PFTS) of the KAMINI reactor. Irradiation positions were characterized using an indium flux monitor for TC and CF, whereas multiple monitors were used at the other positions. Radioactive assay was carried out using high-resolution gamma-ray spectrometry. The k0-based internal monostandard NAA (IM-NAA) method was used to determine elemental concentration ratios with respect to Na in the coal and uranium ore samples, Sc in the pottery samples, and Fe in stainless steel. The in situ relative detection efficiency for each irradiated sample was obtained using γ rays of the activation products in the required energy range. Representative sample sizes were arrived at for coal and uranium ore from plots of La/Na ratios as a function of sample mass. For the SS 304L stainless steel sample, absolute concentrations were calculated from the concentration ratios by a mass-balance approach, since all the major elements (Fe, Cr, Ni, and Mn) were amenable to NAA. Concentration ratios obtained by IM-NAA were used for a provenance study of 30 clay potteries obtained from excavated Buddhist sites of AP, India. The La/Ce concentration ratios were used for preliminary grouping, and concentration ratios of 15 elements with respect to Sc were used in statistical cluster analysis to confirm the grouping. Concentrations of Au and Ag were determined in not so
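    The mass-balance step is easy to illustrate: when every major element is measured as a ratio to one internal monostandard, scaling the ratios so the elements sum to 100% yields absolute concentrations. A minimal sketch of that idea, with hypothetical ratios relative to Fe rather than the paper's measured values:

```python
def absolute_from_ratios(ratios_to_ref):
    """Mass-balance normalization used in IM-NAA when all major elements
    are measured: concentration ratios r_i = c_i / c_ref are rescaled so
    that the major elements sum to 100 wt%."""
    total = sum(ratios_to_ref.values())  # includes the reference itself, r = 1.0
    return {element: 100.0 * r / total for element, r in ratios_to_ref.items()}

# Hypothetical ratios relative to Fe for an SS 304L-like steel:
print(absolute_from_ratios({"Fe": 1.0, "Cr": 0.27, "Ni": 0.14, "Mn": 0.025}))
```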

  14. Development of a versatile sample preparation method and its application for rare-earth pattern and Nd isotope ratio analysis in nuclear forensics

    International Nuclear Information System (INIS)

    Krajko, J.

    2015-01-01

    An improved sample preparation procedure for trace levels of lanthanides in uranium-bearing samples was developed. The method involves a simple co-precipitation using a Fe(III) carrier in ammonium carbonate medium to remove the uranium matrix. The procedure is an effective initial pre-concentration step for the subsequent extraction chromatographic separations. The applicability of the method was demonstrated by measuring the REE pattern and the 143Nd/144Nd isotope ratio in uranium ore concentrate samples. (author)

  15. A Systematic Review of Surgical Randomized Controlled Trials: Part 2. Funding Source, Conflict of Interest, and Sample Size in Plastic Surgery.

    Science.gov (United States)

    Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit

    2016-02-01

    The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and that randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides a great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.

  16. Sample size effect on the determination of the irreversibility line of high-Tc superconductors

    International Nuclear Information System (INIS)

    Li, Q.; Suenaga, M.; Li, Q.; Freltoft, T.

    1994-01-01

    The irreversibility lines of a high-Jc superconducting Bi2Sr2Ca2Cu3Ox/Ag tape were systematically measured upon a sequence of subdivisions of the sample. The irreversibility field Hr(T) (parallel to the c axis) was found to change approximately as L^0.13, where L is the effective dimension of the superconducting tape. Furthermore, it was found that the irreversibility line for a grain-aligned Bi2Sr2Ca2Cu3Ox specimen can be approximately reproduced by extrapolating this relation down to a grain size of a few tens of micrometers. The observed size effect could significantly obscure the real physical meaning of irreversibility lines. In addition, this finding surprisingly indicates that the Bi2Sr2Ca2Cu3Ox/Ag tape and the grain-aligned specimen may have similar flux-line pinning strength.
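    The reported scaling lends itself to a one-line extrapolation. A minimal sketch assuming Hr ∝ L^0.13 holds across the whole size range; the reference field and lengths are illustrative, not the paper's data:

```python
def irreversibility_field(h_ref, l_ref, l, exponent=0.13):
    """Scale the irreversibility field with the effective sample dimension L,
    assuming the power law Hr ~ L**0.13 reported for the Bi-2223/Ag tape."""
    return h_ref * (l / l_ref) ** exponent

# Illustrative extrapolation from a 5 mm tape down to a 30 um grain
# (fields in units of the reference field):
print(irreversibility_field(h_ref=1.0, l_ref=5e-3, l=30e-6))
```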

  17. Multiple sensitive estimation and optimal sample size allocation in the item sum technique.

    Science.gov (United States)

    Perri, Pier Francesco; Rueda García, María Del Mar; Cobo Rodríguez, Beatriz

    2018-01-01

    For surveys of sensitive issues in the life sciences, statistical procedures can be used to reduce nonresponse and social desirability response bias. Both of these phenomena provoke nonsampling errors that are difficult to deal with and can seriously flaw the validity of the analyses. The item sum technique (IST) is a very recent indirect questioning method, derived from the item count technique, that seeks to procure more reliable responses on quantitative items than direct questioning while preserving respondents' anonymity. This article addresses two important questions concerning the IST: (i) its implementation when two or more sensitive variables are investigated and efficient estimates of their unknown population means are required; (ii) the determination of the optimal sample size to achieve minimum-variance estimates. These aspects are of great relevance for survey practitioners engaged in sensitive research and, to the best of our knowledge, had not been studied before. In this article, theoretical results for multiple estimation and optimal allocation are obtained under a generic sampling design and then particularized to simple random sampling and stratified sampling designs. Theoretical considerations are integrated with a number of simulation studies based on data from two real surveys, conducted to ascertain the efficiency gain derived from optimal allocation in different situations. One of the surveys concerns cannabis consumption among university students. Our findings highlight some methodological advances that can be obtained in life sciences IST surveys when optimal allocation is achieved.
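    For the stratified case, the backbone of any optimal-allocation result is the classical Neyman rule n_h ∝ N_h·S_h. A minimal sketch of that rule only; the paper's IST-specific variance expressions are not reproduced, and the stratum sizes and standard deviations below are hypothetical:

```python
def neyman_allocation(n_total, stratum_sizes, stratum_sds):
    """Optimal (Neyman) allocation of a fixed total sample across strata:
    n_h proportional to N_h * S_h, which minimizes the variance of the
    stratified mean estimator for a fixed total sample size."""
    weights = [n_h * s_h for n_h, s_h in zip(stratum_sizes, stratum_sds)]
    total = sum(weights)
    return [round(n_total * w / total) for w in weights]

# Hypothetical strata (e.g. faculties in a student survey):
print(neyman_allocation(400, stratum_sizes=[5000, 3000, 2000],
                        stratum_sds=[1.2, 0.8, 0.5]))
```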

  18. Acceptance Sampling Plans Based on Truncated Life Tests for Sushila Distribution

    Directory of Open Access Journals (Sweden)

    Amer Ibrahim Al-Omari

    2018-03-01

    Full Text Available An acceptance sampling plan problem based on truncated life tests, when the lifetime follows a Sushila distribution, is considered in this paper. For various acceptance numbers, confidence levels, and values of the ratio between the fixed experiment time and the specified mean lifetime, the minimum sample sizes required to ascertain a specified mean life were found. The operating characteristic function values of the suggested sampling plans and the producer's risk are presented. Some tables are provided, and the results are illustrated with an example based on a real data set.
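    Minimum sample sizes in such plans typically come from a binomial search: accept the lot if at most c items fail before the truncation time, and find the smallest n for which the acceptance probability under the specified mean life is at most 1 - P*. A minimal sketch of that generic recipe; the failure probability p would come from the Sushila CDF evaluated at the test time, which is not reproduced here:

```python
from math import comb

def min_sample_size(c, p_star, p_fail):
    """Smallest n such that a lot with item failure probability p_fail
    (by the truncation time) is accepted with at most c failures only
    with probability <= 1 - p_star:
        sum_{i=0..c} C(n,i) p^i (1-p)^(n-i) <= 1 - p_star."""
    n = c + 1
    while sum(comb(n, i) * p_fail**i * (1 - p_fail)**(n - i)
              for i in range(c + 1)) > 1 - p_star:
        n += 1
    return n

# e.g. acceptance number 2, 95% confidence, assumed p_fail = 0.25:
print(min_sample_size(c=2, p_star=0.95, p_fail=0.25))
```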

  19. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    Science.gov (United States)

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
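    A common simplification of such formulae inflates the clustered arm's variance by a design effect. A minimal sketch under strong assumptions (normal approximation, equal per-arm sizes, a two-level experimental arm, unclustered controls); this illustrates the idea rather than reproducing the authors' derived formulae:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm_partially_nested(delta, sigma, m, rho, alpha=0.05, power=0.8):
    """Per-arm sample size for a partially nested trial: intervention
    subjects clustered in groups of size m with intraclass correlation rho,
    controls treated as independent individuals. Uses the design effect
    1 + (m - 1) * rho on the intervention arm only."""
    z = NormalDist().inv_cdf
    z_alpha, z_beta = z(1 - alpha / 2), z(power)
    de_intervention = 1 + (m - 1) * rho  # variance inflation in clustered arm
    # Var(difference in means) = sigma^2 * (de_intervention + 1) / n
    n = (z_alpha + z_beta) ** 2 * sigma ** 2 * (de_intervention + 1) / delta ** 2
    return ceil(n)

# Standardized effect 0.5, groups of 10, ICC 0.05:
print(n_per_arm_partially_nested(delta=0.5, sigma=1.0, m=10, rho=0.05))
```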

  20. Gridsampler – A Simulation Tool to Determine the Required Sample Size for Repertory Grid Studies

    OpenAIRE

    Heckmann, Mark; Burk, Lukas

    2017-01-01

    The repertory grid is a psychological data collection technique that is used to elicit qualitative data in the form of attributes as well as quantitative ratings. A common approach for evaluating multiple repertory grid data is sorting the elicited bipolar attributes (so called constructs) into mutually exclusive categories by means of content analysis. An important question when planning this type of study is determining the sample size needed to a) discover all attribute categories relevant...