Sample size calculation in metabolic phenotyping studies.
Billoir, Elise; Navratil, Vincent; Blaise, Benjamin J
2015-09-01
The number of samples needed to identify significant effects is a key question in biomedical studies, with consequences for experimental design, cost and potential discoveries. In metabolic phenotyping studies, sample size determination remains a complex step. This is due in particular to the multiple hypothesis-testing framework and the top-down, hypothesis-free approach, with no a priori known metabolic target. Until now, no standard procedure has been available for this purpose. In this review, we discuss sample size estimation procedures for metabolic phenotyping studies. We release an automated implementation of the Data-driven Sample size Determination (DSD) algorithm for MATLAB and GNU Octave. Original research concerning DSD was published elsewhere. DSD allows the determination of an optimized sample size in metabolic phenotyping studies. The procedure uses analytical data only from a small pilot cohort to generate an expanded data set. The statistical recoupling of variables procedure is used to identify metabolic variables, and their intensity distributions are estimated by kernel smoothing or log-normal density fitting. Statistically significant metabolic variations are evaluated using the Benjamini-Yekutieli correction and processed for data sets of various sizes. Optimal sample size determination is achieved in a context of biomarker discovery (at least one statistically significant variation) or metabolic exploration (a maximum of statistically significant variations). The DSD toolbox is encoded in MATLAB R2008A (MathWorks, Natick, MA) for kernel and log-normal estimates, and in GNU Octave for log-normal estimates (kernel density estimates are not robust enough in GNU Octave). It is available at http://www.prabi.fr/redmine/projects/dsd/repository, with a tutorial at http://www.prabi.fr/redmine/projects/dsd/wiki. PMID:25600654
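The Benjamini-Yekutieli correction mentioned in this abstract can be sketched in a few lines; this is a generic implementation of the BY step-up procedure (the example p-values are invented for illustration), not code from the DSD toolbox:

```python
def benjamini_yekutieli(pvalues, alpha=0.05):
    """Number of rejections under the Benjamini-Yekutieli step-up procedure,
    which controls the false discovery rate under arbitrary dependence."""
    m = len(pvalues)
    c_m = sum(1.0 / i for i in range(1, m + 1))  # harmonic correction factor c(m)
    k = 0
    for i, p in enumerate(sorted(pvalues), start=1):
        # Largest i with p_(i) <= i * alpha / (m * c(m)); reject the first i hypotheses.
        if p <= i * alpha / (m * c_m):
            k = i
    return k

# Illustrative p-values (invented, not from the DSD paper):
print(benjamini_yekutieli([0.001, 0.008, 0.039, 0.041, 0.27]))  # -> 2
```

The harmonic factor c(m) is what makes BY valid under arbitrary dependence between tests, at the price of being more conservative than the Benjamini-Hochberg procedure.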
How to calculate sample size in animal studies?
Jaykaran Charan; N D Kantharia
2013-01-01
Calculation of sample size is one of the important components of the design of any research, including animal studies. If a researcher selects too few animals, a significant difference may be missed even if it exists in the population; if too many animals are selected, resources may be wasted unnecessarily and ethical issues may arise. In this article, on the basis of our review of the literature, we suggest a few methods of sample size calculation for animal studies...
Simple nomograms to calculate sample size in diagnostic studies
Carley, S; Dosman, S; Jones, S; Harrison, M
2005-01-01
Objectives: To produce an easily understood and accessible tool for use by researchers in diagnostic studies. Diagnostic studies should have sample size calculations performed, but in practice, they are performed infrequently. This may be due to a reluctance on the part of researchers to use mathematical formulae.
Sample size calculation for meta-epidemiological studies.
Giraudeau, Bruno; Higgins, Julian P T; Tavernier, Elsa; Trinquart, Ludovic
2016-01-30
Meta-epidemiological studies are used to compare treatment effect estimates between randomized clinical trials with and without a characteristic of interest. To our knowledge, no guidance is available to help researchers specify a priori the required number of meta-analyses to be included in a meta-epidemiological study. We derived a theoretical power function and sample size formula in the framework of a hierarchical model that allows for variation in the impact of the characteristic between trials within a meta-analysis and between meta-analyses. A simulation study revealed that the theoretical function overestimated power (because of the assumption of equal weights for each trial within and between meta-analyses). We also propose a simulation approach that relaxes the constraints used in the theoretical approach and is more accurate. We illustrate that the two variables that most influence power are the number of trials per meta-analysis and the proportion of trials with the characteristic of interest. We derived a closed-form power function and sample size formula for estimating the impact of trial characteristics in meta-epidemiological studies. Our analytical results can be used as a 'rule of thumb' for sample size calculation for a meta-epidemiological study; a more accurate sample size can be derived with a simulation study.
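The general idea of estimating power by simulation, as the abstract proposes, can be illustrated with a small Monte Carlo sketch: several equal-size two-arm trials are pooled with an equal-weight fixed-effect analysis and the empirical power is recorded. The effect size, variance and trial count below are invented for illustration; this is a simplification, not the authors' simulation model:

```python
import math
import random

def meta_experiment_power(delta=0.2, sigma=1.0, n_per_arm=100,
                          n_trials=3, reps=20000, seed=1):
    """Empirical power of pooling `n_trials` equal-size two-arm trials
    with an equal-weight fixed-effect analysis (known variance)."""
    rng = random.Random(seed)
    se_trial = sigma * math.sqrt(2.0 / n_per_arm)   # SE of one trial's mean difference
    se_pooled = se_trial / math.sqrt(n_trials)      # SE of the pooled estimate
    hits = 0
    for _ in range(reps):
        diffs = [rng.gauss(delta, se_trial) for _ in range(n_trials)]
        z = (sum(diffs) / n_trials) / se_pooled
        if abs(z) > 1.96:                           # two-sided 5% test
            hits += 1
    return hits / reps

power = meta_experiment_power()
print(round(power, 2))  # roughly 0.69 for these invented parameters
```

Varying `n_trials` and the between-trial heterogeneity in such a sketch is how a simulation-based design exercise would explore the design space.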
Sample Size Calculation for Controlling False Discovery Proportion
Directory of Open Access Journals (Sweden)
Shulian Shang
2012-01-01
The false discovery proportion (FDP), the proportion of incorrect rejections among all rejections, is a direct measure of the abundance of false positive findings in multiple testing. Many methods have been proposed to control the FDP, but they are too conservative to be useful for power analysis. Study designs controlling the mean of the FDP, which is the false discovery rate, have been commonly used. However, there has been little attempt to design studies with direct FDP control to achieve a certain level of efficiency. We provide a sample size calculation method using the variance formula of the FDP under weak-dependence assumptions to achieve the desired overall power. The relationship between design parameters and sample size is explored. The adequacy of the procedure is assessed by simulation. We illustrate the method using estimated correlations from a prostate cancer dataset.
patients. Calculated sample size (target population): 1000 patients
DEFF Research Database (Denmark)
Jensen, Jens-Ulrik; Lundgren, Bettina; Hein, Lars;
2008-01-01
and signs may present atypically. The established biological markers of inflammation (leucocytes, C-reactive protein) may often be influenced by other parameters than infection, and may be unacceptably slowly released after progression of an infection. At the same time, lack of a relevant [...] hypertriglyceridaemia, 2) Likely that safety is compromised by blood sampling, 3) Pregnant or breast feeding. Computerized Randomisation: Two arms (1:1), n = 500 per arm: Arm 1: standard of care. Arm 2: standard of care and procalcitonin-guided diagnostics and treatment of infection. Primary Trial Objective: To address [...] -guided strategy compared to the best standard of care, is conducted in an intensive care setting. Results will, with high statistical power, answer the question: can the survival of critically ill patients be improved by actively using the biomarker procalcitonin in the treatment of infections? 700 critically ill [...]
Weeks, Scott; Atlas, Alvin
2015-01-01
A priori sample size calculations are used to determine the adequate sample size to estimate the prevalence of the target population with good precision. However, published audits rarely report a priori calculations for their sample size. This article discusses a process in health services delivery mapping to generate a comprehensive sampling frame, which was used to calculate an a priori sample size for a targeted clinical record audit. We describe how we approached methodological and definitional issues in the following steps: (1) target population definition, (2) sampling frame construction, and (3) a priori sample size calculation. We recommend this process for clinicians, researchers, or policy makers when detailed information on a reference population is unavailable. PMID:26122044
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
[Sample size calculation in clinical post-marketing evaluation of traditional Chinese medicine].
Fu, Yingkun; Xie, Yanming
2011-10-01
In recent years, as the Chinese government and public pay more attention to post-marketing research on Chinese medicine, some traditional Chinese medicine products have begun, or are about to begin, post-marketing evaluation studies. In post-marketing evaluation design, sample size calculation plays a decisive role. It not only ensures the accuracy and reliability of the post-marketing evaluation, but also ensures that the intended trials will have the desired power for correctly detecting a clinically meaningful difference between the medicines under study if such a difference truly exists. Up to now, there has been no systematic method of sample size calculation for traditional Chinese medicine. In this paper, according to the basic methods of sample size calculation and the characteristics of traditional Chinese medicine clinical evaluation, sample size calculation methods for the efficacy and safety of Chinese medicine are discussed respectively. We hope the paper will be beneficial to medical researchers and pharmaceutical scientists who are engaged in the area of Chinese medicine research. PMID:22292397
Exact Power and Sample Size Calculations for the Two One-Sided Tests of Equivalence.
Shieh, Gwowen
2016-01-01
Equivalence testing has been strongly recommended for demonstrating the comparability of treatment effects in a wide variety of research fields, including medical studies. Although the essential properties of the favored two one-sided tests (TOST) of equivalence have been addressed in the literature, the associated power and sample size calculations have been illustrated mainly for selecting the most appropriate approximate method. Moreover, conventional power analysis does not consider the allocation restrictions and cost issues of different sample size choices. To extend the practical usefulness of the two one-sided tests procedure, this article describes exact approaches to sample size determination under various allocation and cost considerations. Because the presented features are not generally available in common software packages, both R and SAS computer codes are presented to implement the suggested power and sample size computations for planning equivalence studies. The exact power function of the TOST procedure is employed to compute optimal sample sizes under four design schemes allowing for different allocation and cost concerns. The proposed power and sample size methodology should be useful for the medical sciences in planning equivalence studies. PMID:27598468
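As a rough illustration of the kind of calculation involved, the smallest balanced per-group size for a TOST equivalence test can be found by direct search. This is a normal-approximation sketch with known variance, not the exact noncentral-t approach of the article, and the margin and variance below are invented:

```python
from math import sqrt
from statistics import NormalDist

def tost_n_per_group(margin, sigma, alpha=0.05, power=0.80, true_diff=0.0):
    """Smallest per-group n so that the two one-sided tests (TOST) procedure
    at level alpha reaches the target power, using a normal approximation
    with known sigma (balanced two-sample design)."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    phi = NormalDist().cdf
    n = 2
    while True:
        se = sigma * sqrt(2.0 / n)
        # P(both one-sided tests reject) when the true difference is `true_diff`
        p = phi((margin - true_diff) / se - z_a) + phi((margin + true_diff) / se - z_a) - 1
        if p >= power:
            return n
        n += 1

print(tost_n_per_group(margin=0.5, sigma=1.0))  # -> 69
```

The exact methods of the article replace the normal quantities above with noncentral-t probabilities and then optimize across unbalanced allocations and cost constraints.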
Tavernier, Elsa; Trinquart, Ludovic; Giraudeau, Bruno
2016-01-01
Sample sizes for randomized controlled trials are typically based on power calculations. They require us to specify values for parameters such as the treatment effect, which is often difficult because we lack sufficient prior information. The objective of this paper is to provide an alternative design which circumvents the need for sample size calculation. In a simulation study, we compared a meta-experiment approach to the classical approach to assess treatment efficacy. The meta-experiment approach involves use of meta-analyzed results from 3 randomized trials of fixed sample size, 100 subjects. The classical approach involves a single randomized trial with the sample size calculated on the basis of an a priori-formulated hypothesis. For the sample size calculation in the classical approach, we used observed articles to characterize errors made on the formulated hypothesis. A prospective meta-analysis of data from trials of fixed sample size provided the same precision, power and type I error rate, on average, as the classical approach. The meta-experiment approach may provide an alternative design which does not require a sample size calculation and addresses the essential need for study replication; results may have greater external validity. PMID:27362939
Shao, Quanxi; Wang, You-Gan
2009-09-01
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing the mean values may become statistically inappropriate and even invalid when substantial proportions of the response values are below the detection limits or censored because strong distributional assumptions have to be made on the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need of imputing the censored values. As a demonstration, we applied the methods to a nutrient monitoring project, which is a part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is, in fact, smaller than that by the traditional t-test, illustrating the merit of our method.
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators defined with reference to the classical Fréchet inequalities permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best possible way. Using statistical confidence intervals as inputs, however, unravels the closure properties of this approach, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive and often level-wise best possible bounds on probabilities for logical functions of events.
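The interval AND/OR/NOT operators described can be sketched in a few lines. This is a minimal illustration of the classical Fréchet bounds on probability intervals under unknown dependence, not the full confidence-structure calculus of the paper:

```python
def p_and(a, b):
    """Frechet bounds for P(A and B) given intervals a=(lo,hi), b=(lo,hi),
    with no assumption about the dependence between A and B."""
    return (max(0.0, a[0] + b[0] - 1.0), min(a[1], b[1]))

def p_or(a, b):
    """Frechet bounds for P(A or B) under unknown dependence."""
    return (max(a[0], b[0]), min(1.0, a[1] + b[1]))

def p_not(a):
    """Interval complement: P(not A)."""
    return (1.0 - a[1], 1.0 - a[0])

a, b = (0.2, 0.4), (0.5, 0.7)
print(p_and(a, b))  # -> (0.0, 0.4)
print(p_or(a, b))   # -> (0.5, 1.0)
print(p_not(a))     # -> (0.6, 0.8)
```

These bounds are best possible when nothing is known about dependence; the level-wise iteration in the paper applies the same operators at every confidence level of a nested stack of such intervals.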
DEFF Research Database (Denmark)
Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.;
2008-01-01
in publications, sample size calculations and statistical methods were often explicitly discrepant with the protocol or not pre-specified. Such amendments were rarely acknowledged in the trial publication. The reliability of trial reports cannot be assessed without having access to the full protocols...
Divine, George; Norton, H James; Hunt, Ronald; Dienemann, Jacqueline
2013-09-01
When a study uses an ordinal outcome measure with unknown differences in the anchors and a small range such as 4 or 7, use of the Wilcoxon rank sum test or the Wilcoxon signed rank test may be most appropriate. However, because nonparametric methods are at best indirect functions of standard measures of location such as means or medians, the choice of the most appropriate summary measure can be difficult. The issues underlying use of these tests are discussed. The Wilcoxon-Mann-Whitney odds directly reflects the quantity that the rank sum procedure actually tests, and thus it can be a superior summary measure. Unlike the means and medians, its value will have a one-to-one correspondence with the Wilcoxon rank sum test result. The companion article appearing in this issue of Anesthesia & Analgesia ("Aromatherapy as Treatment for Postoperative Nausea: A Randomized Trial") illustrates these issues and provides an example of a situation for which the medians imply no difference between 2 groups, even though the groups are, in fact, quite different. The trial cited also provides an example of a single sample that has a median of zero, yet there is a substantial shift for much of the nonzero data, and the Wilcoxon signed rank test is quite significant. These examples highlight the potential discordance between medians and Wilcoxon test results. Along with the issues surrounding the choice of a summary measure, there are considerations for the computation of sample size and power, confidence intervals, and multiple comparison adjustment. In addition, despite the increased robustness of the Wilcoxon procedures relative to parametric tests, some circumstances in which the Wilcoxon tests may perform poorly are noted, along with alternative versions of the procedures that correct for such limitations. PMID:23456667
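The Wilcoxon-Mann-Whitney odds discussed in this abstract can be computed directly from two samples; a minimal sketch (the ordinal scores below are invented, not the aromatherapy trial data):

```python
def wmw_odds(x, y):
    """Wilcoxon-Mann-Whitney odds: p / (1 - p), where
    p = P(X > Y) + 0.5 * P(X = Y) for X drawn from x and Y from y."""
    wins = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
               for xi in x for yj in y)
    p = wins / (len(x) * len(y))
    return p / (1.0 - p)

# Invented scores on a short ordinal scale for two groups:
print(wmw_odds([1, 2, 3, 4], [0, 1, 2, 3]))  # -> about 2.56 (= 23/9)
```

Unlike a difference of medians, this summary has a one-to-one correspondence with the quantity the rank sum test actually evaluates, which is the point the abstract makes.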
The impact of metrology study sample size on uncertainty in IAEA safeguards calculations
Directory of Open Access Journals (Sweden)
Burr Tom
2016-01-01
Quantitative conclusions by the International Atomic Energy Agency (IAEA) regarding States' nuclear material inventories and flows are provided in the form of material balance evaluations (MBEs). MBEs use facility estimates of the material unaccounted for together with verification data to monitor for possible nuclear material diversion. Verification data consist of paired measurements (usually operators' declarations and inspectors' verification results) that are analysed one item at a time to detect significant differences. Also, to check for patterns, an overall difference of the operator-inspector values using a “D” (difference) statistic is used. The estimated detection probability (DP) and false alarm probability (FAP) depend on the assumed measurement error model and its random and systematic error variances, which are estimated using data from previous inspections (which are used for metrology studies to characterize measurement error variance components). Therefore, the sample sizes in both the previous and current inspections will impact the estimated DP and FAP, as is illustrated by simulated numerical examples. The examples include application of a new expression for the variance of the D statistic assuming the measurement error model is multiplicative, and new application of both random and systematic error variances in one-item-at-a-time testing.
A discussion of different sample size calculation methods
Institute of Scientific and Technical Information of China (English)
喻宁芳
2014-01-01
Objective: To introduce and compare different methods of sample size estimation in medical experimental design. Methods: Using an experimental study of the effect of a PI3K inhibitor on airway inflammation in a murine model of asthma as an example, the sample size was calculated with different methods. Results: The formula method required 12 animals; the Simple procedure in the PASS software, 10; and Stata software, 8. The power of each was verified: 1-β > 0.9. Conclusion: The sample sizes estimated by the three methods are all reasonable and valid. Researchers can use multiple calculation results as a basis, analyse the nature of the study, and weigh the influence of research costs, feasibility and ethical requirements on sample size to determine the most appropriate number of samples.
Directory of Open Access Journals (Sweden)
Finch Stephen J
2005-04-01
Abstract Background Phenotype error causes reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors and equivalent formulas for misclassification cost (the percentage increase in minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results Our major findings are that: (i) the median absolute difference between analytic power with our method and simulation power was 0.001 and the absolute difference was no larger than 0.011; (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large, while the cost of misclassifying an affected individual as a control approaches 0. Conclusion Our work enables researchers to specifically quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
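The power loss from phenotype misclassification can be illustrated with a small simulation. This is a generic sketch with invented genotype frequencies and error rates, using a mixture model for the observed groups, not the analytic non-centrality calculation of the paper:

```python
import random

def chi2_2x3(case_counts, control_counts):
    """Pearson chi-square statistic for a 2x3 (group x genotype) table."""
    n_case, n_ctrl = sum(case_counts), sum(control_counts)
    total = n_case + n_ctrl
    stat = 0.0
    for j in range(3):
        col = case_counts[j] + control_counts[j]
        for obs, n in ((case_counts[j], n_case), (control_counts[j], n_ctrl)):
            exp = n * col / total
            if exp > 0:
                stat += (obs - exp) ** 2 / exp
    return stat

def power(case_probs, ctrl_probs, n=200, reps=2000, seed=7):
    """Monte Carlo power of the df=2 chi-square test at alpha = 0.05."""
    rng = random.Random(seed)
    crit = 5.991  # chi-square 95th percentile for 2 degrees of freedom
    hits = 0
    for _ in range(reps):
        cases, ctrls = [0, 0, 0], [0, 0, 0]
        for g in rng.choices([0, 1, 2], weights=case_probs, k=n):
            cases[g] += 1
        for g in rng.choices([0, 1, 2], weights=ctrl_probs, k=n):
            ctrls[g] += 1
        if chi2_2x3(cases, ctrls) > crit:
            hits += 1
    return hits / reps

# Genotype frequencies under HWE for allele frequencies 0.3 (cases) vs 0.2 (controls):
case_p, ctrl_p = [0.49, 0.42, 0.09], [0.64, 0.32, 0.04]
clean = power(case_p, ctrl_p)
# 15% bidirectional phenotype error: each observed group is a mixture of the two.
mix = 0.15
case_err = [(1 - mix) * c + mix * d for c, d in zip(case_p, ctrl_p)]
ctrl_err = [(1 - mix) * d + mix * c for c, d in zip(case_p, ctrl_p)]
noisy = power(case_err, ctrl_err)
print(clean > noisy)  # misclassification reduces power
```

The paper obtains the same effect analytically by shrinking the non-centrality parameter of the chi-square distribution as the misclassification probabilities grow.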
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Directory of Open Access Journals (Sweden)
Hayashi Naoyuki
2009-09-01
Abstract Background Many patients with diabetes mellitus (DM) require a combination of antidiabetic drugs with complementary mechanisms of action to lower their hemoglobin A1c levels to achieve therapeutic targets and reduce the risk of cardiovascular complications. Linagliptin is a novel member of the dipeptidyl peptidase-4 (DPP-4) inhibitor class of antidiabetic drugs. DPP-4 inhibitors increase incretin (glucagon-like peptide-1 and gastric inhibitory polypeptide) levels, inhibit glucagon release and, more importantly, increase insulin secretion and inhibit gastric emptying. Currently, phase III clinical studies with linagliptin are underway to evaluate its clinical efficacy and safety. Linagliptin is expected to be one of the most appropriate therapies for Japanese patients with DM, as deficient insulin secretion is a greater concern than insulin resistance in this population. The number of patients with DM in Japan is increasing and this trend is predicted to continue. Several antidiabetic drugs are currently marketed in Japan; however, there is no information describing the effective dose of linagliptin for Japanese patients with DM. Methods This prospective, randomized, double-blind study will compare linagliptin with placebo over a 12-week period. The study has also been designed to evaluate the safety and efficacy of linagliptin by comparing it with another antidiabetic, voglibose, over a 26-week treatment period. Four treatment groups have been established for these comparisons. A phase IIb/III combined study design has been utilized for this purpose and the approach for calculating sample size is described. Discussion This is the first phase IIb/III study to examine the long-term safety and efficacy of linagliptin in diabetes patients in the Japanese population. Trial registration ClinicalTrials.gov (NCT00654381).
Calculating Optimal Inventory Size
Directory of Open Access Journals (Sweden)
Ruby Perez
2010-01-01
The purpose of the project is to find the optimal value for the Economic Order Quantity (EOQ) model and then use a lean-manufacturing Kanban equation to find a numeric value that minimizes the total cost and the inventory size.
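The economic order quantity itself has a closed form, Q* = sqrt(2DS/H); a minimal sketch with invented demand and cost figures:

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: the order size minimizing the sum of
    annual ordering and holding costs, Q* = sqrt(2 * D * S / H)."""
    return sqrt(2.0 * annual_demand * order_cost / holding_cost)

# Invented figures: D = 1000 units/year, S = $50 per order, H = $4/unit/year
q = eoq(1000, 50, 4)
print(round(q, 1))  # -> 158.1
```

The Kanban sizing step mentioned in the abstract would then translate a quantity like this into a number of containers given container capacity and lead-time demand.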
Sample size: from formulae to concepts - II
Directory of Open Access Journals (Sweden)
Rakesh R. Pathak
2013-02-01
Sample size formulae require certain input data, in other words parameters, to calculate sample size. This second part of the formula explanation gives an idea of Z, population size, precision (margin of error), standard deviation, contingency, etc., which influence sample size. [Int J Basic Clin Pharmacol 2013; 2(1): 94-95]
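The quantities this abstract lists (Z, population size, precision, contingency) combine in the standard single-proportion formula n = Z^2 p(1-p) / d^2; a minimal sketch with illustrative inputs:

```python
from math import ceil
from statistics import NormalDist

def sample_size_proportion(p=0.5, margin=0.05, conf=0.95,
                           population=None, contingency=0.0):
    """n = Z^2 * p * (1 - p) / d^2, optionally with a finite-population
    correction and inflation for anticipated non-response."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    n = z * z * p * (1 - p) / (margin * margin)
    if population is not None:
        n = n / (1 + (n - 1) / population)  # finite-population correction
    n = n / (1 - contingency)               # e.g. 0.10 for 10% expected dropout
    return ceil(n)

print(sample_size_proportion())                                   # -> 385
print(sample_size_proportion(population=1000))                    # -> 278
print(sample_size_proportion(population=1000, contingency=0.10))  # -> 309
```

With p = 0.5 (the most conservative choice) and a 5% margin at 95% confidence, the familiar figure of about 385 appears; a known finite population and a contingency allowance then adjust it in opposite directions.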
Basic Statistical Concepts for Sample Size Estimation
Directory of Open Access Journals (Sweden)
Vithal K Dhulkhed
2008-01-01
For grant proposals, the investigator has to include an estimation of sample size. The size of the sample should be large enough that there is sufficient data to reliably answer the research question being addressed by the study. At the very planning stage of the study the investigator has to involve the statistician, and to have a meaningful dialogue with the statistician every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation; concepts are explained based on logic rather than rigorous mathematical calculation, to help the reader assimilate the fundamentals.
Calculating body frame size (image)
... boned category. Determining frame size: To determine the body frame size, measure the wrist with a tape measure and use the following chart to determine whether the person is small, medium, or large boned. Women: Height under 5'2" Small = wrist size less ...
Improved sample size determination for attributes and variables sampling
International Nuclear Information System (INIS)
Earlier INMM papers have addressed the attributes/variables problem and, under conservative/limiting approximations, have reported analytical solutions for the attributes and variables sample sizes. Through computer simulation of this problem, the authors have calculated attributes and variables sample sizes as a function of falsification, measurement uncertainties, and required detection probability, without using approximations. Using realistic assumptions for the uncertainty parameters of measurement, the simulation results support two conclusions: (1) the previously used conservative approximations can be expensive because they lead to larger sample sizes than needed, and (2) the optimal verification strategy, as well as the falsification strategy, is highly dependent on the underlying uncertainty parameters of the measurement instruments
Directory of Open Access Journals (Sweden)
Carlos Fabián Flórez Valero
2010-04-01
In transport engineering, it is common practice to survey a percentage of a city's households to establish the inhabitants' journey patterns. The procedure theoretically consists of calculating the sample based on the statistical parameters of the population variable one wishes to measure. This requires carrying out a pilot survey, which often cannot be done in countries having few resources because of the costs involved in knowing the value of such population parameters; resources are sometimes exclusively destined to making an estimated sample according to a pre-established percentage. Percentages between 3% and 6% are usually used in Colombian cities, depending on population size. The city of Manizales (located 300 km west of Colombia's capital) carried out two household surveys in less than four years; when the second survey was carried out, the values of the estimator parameters were thus already known. The Manizales mayor's office made an agreement with the Universidad Nacional de Colombia for drawing up the new origin-destination matrix, where it was possible to calculate the sample based on the pertinent statistical variables. The article makes a comparative analysis of both methodologies, concluding that when statistically estimating the sample it is possible to greatly reduce the number of surveys to be carried out while obtaining practically equal results.
Sample Size Dependent Species Models
Zhou, Mingyuan; Walker, Stephen G.
2014-01-01
Motivated by the fundamental problem of measuring species diversity, this paper introduces the concept of a cluster structure to define an exchangeable cluster probability function that governs the joint distribution of a random count and its exchangeable random partitions. A cluster structure, naturally arising from a completely random measure mixed Poisson process, allows the probability distribution of the random partitions of a subset of a sample to be dependent on the sample size, a dist...
Biostatistics Series Module 5: Determining Sample Size.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever be its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggests a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that difference does not exist when, in reality, a difference does exist, and it is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of power of the study or (1 - β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. Smaller α or larger power will increase sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the
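The determinants listed in this abstract (α, power, variance, effect size) combine in the standard two-sample formula for comparing means; a minimal sketch with an illustrative standardized effect size of 0.5:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided two-sample comparison of means:
    n = 2 * ((z_{1-alpha/2} + z_{1-beta}) * sigma / delta)^2."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return ceil(2.0 * (z * sigma / delta) ** 2)

# Detecting a difference of half a standard deviation with 80% power:
print(n_per_group(delta=0.5, sigma=1.0))  # -> 63
```

This also recovers the familiar "16 per group per unit standardized difference squared" rule of thumb: with delta = sigma the formula gives 16.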
Biostatistics Series Module 5: Determining Sample Size.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probability of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, it does, and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 - β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase the sample size. Conventional acceptable values when calculating sample size are 80% or above for power and 5% or below for α. Greater variance in the sample increases the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. Although the
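The determinants above (α, power, variance, effect size) combine into the standard normal-approximation formula for a two-group comparison of means. A minimal sketch in Python; the function name is ours, and real designs would add a small-sample (t-distribution) correction:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sample comparison of means,
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / delta)**2
    (normal approximation only)."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # controls Type 1 error (two-sided)
    z_beta = nd.inv_cdf(power)           # controls Type 2 error
    return math.ceil(2 * ((z_alpha + z_beta) * sd / delta) ** 2)

# Smallest clinically important difference 10, SD 20, alpha 0.05, power 80%:
print(n_per_group(10, 20))  # 63 per group
```

As the abstract notes, raising power or shrinking α inflates n: the same inputs at 90% power give 85 per group.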
Sample sizes for confidence limits for reliability.
Energy Technology Data Exchange (ETDEWEB)
Darby, John L.
2010-02-01
We recently performed an evaluation of the implications of a reduced stockpile of nuclear weapons for surveillance to support estimates of reliability. We found that one technique developed at Sandia National Laboratories (SNL) under-estimates the required sample size for systems-level testing. For a large population the discrepancy is not important, but for a small population it is important. We found that another technique used by SNL provides the correct required sample size. For systems-level testing of nuclear weapons, samples are selected without replacement, and the hypergeometric probability distribution applies. Both of the SNL techniques focus on samples without defects from sampling without replacement. We generalized the second SNL technique to cases with defects in the sample. We created a computer program in Mathematica to automate the calculation of confidence for reliability. We also evaluated sampling with replacement where the binomial probability distribution applies.
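For the sampling-without-replacement case the abstract describes, a confidence statement on reliability from a defect-free sample follows from the hypergeometric distribution. The sketch below is our own illustrative reconstruction, not the SNL technique or the Mathematica program:

```python
from math import comb

def max_defects(N, n, confidence=0.90):
    """Largest defect count D in a population of N still consistent with
    observing zero defects in a sample of n drawn without replacement:
    increase D while P(zero defects) = C(N-D, n) / C(N, n) > 1 - confidence.
    Assumes n >= 1 (comb() returns 0 once N - D < n, ending the loop)."""
    d = 0
    while comb(N - (d + 1), n) / comb(N, n) > 1 - confidence:
        d += 1
    return d

def reliability_lower_bound(N, n, confidence=0.90):
    """Confidence lower bound on population reliability, (N - D) / N."""
    return (N - max_defects(N, n, confidence)) / N

print(reliability_lower_bound(20, 10))  # 0.85 at 90% confidence
```

The small-population effect the authors report is visible here: for fixed n, shrinking N tightens the hypergeometric bound relative to the binomial one.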
Sample size for morphological traits of pigeonpea
Directory of Open Access Journals (Sweden)
Giovani Facco
2015-12-01
Full Text Available The objectives of this study were to determine the sample size (i.e., number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatment) were conducted for two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed through the Kolmogorov-Smirnov test. Randomness was confirmed using the Run Test, and the descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation error) equal to 2, 4, 6, ..., 20% of the estimated mean with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to assess with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle to determine the sample size for the traits (number of nodes, plant height and stem diameter) in the different evaluation periods and between seasons.
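The semiamplitude criterion used above (estimation error as a percentage of the mean) reduces to the normal-approximation rule n = (z · CV / e)², where CV is the coefficient of variation from pilot data. A hedged sketch, not the authors' procedure:

```python
from math import ceil
from statistics import NormalDist, mean, stdev

def n_for_relative_error(values, error_pct, conf=0.95):
    """Sample size so the confidence-interval semiamplitude equals
    error_pct % of the mean: n = (z * CV / e)**2, with CV the
    coefficient of variation in percent estimated from pilot `values`."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    cv = 100 * stdev(values) / mean(values)
    return ceil((z * cv / error_pct) ** 2)

# Pilot data with CV = 10%; a 6% estimation error at 95% confidence needs:
print(n_for_relative_error([9, 10, 11], 6))  # 11 plants
```

More variable traits (larger CV) or tighter error targets drive n up quadratically, which is why the paper's required n changes across traits and evaluation periods.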
Sample size and power analysis in medical research
Directory of Open Access Journals (Sweden)
Zodpey Sanjay
2004-03-01
Full Text Available Among the questions that a researcher should ask when planning a study is "How large a sample do I need?" If the sample size is too small, even a well conducted study may fail to answer its research question, may fail to detect important effects or associations, or may estimate those effects or associations too imprecisely. Similarly, if the sample size is too large, the study will be more difficult and costly, and may even lead to a loss in accuracy. Hence, an optimum sample size is an essential component of any research. When the estimated sample size cannot be included in a study, post-hoc power analysis should be carried out. Approaches for estimating sample size and performing power analysis depend primarily on the study design and the main outcome measure of the study. There are distinct approaches for calculating sample size for different study designs and different outcome measures. Additionally, there are also different procedures for calculating sample size for the two approaches of drawing statistical inference from the study results, i.e. the confidence interval approach and the test of significance approach. This article describes some commonly used terms, which need to be specified for a formal sample size calculation. Examples of the four procedures (use of formulae, ready-made tables, nomograms, and computer software) that are conventionally used for calculating sample size are also given.
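As an instance of the confidence-interval approach mentioned above, the single-proportion formula n = z² p(1-p)/d² can be computed directly (a sketch; the function and parameter names are ours):

```python
from math import ceil
from statistics import NormalDist

def n_for_proportion(p, d, conf=0.95):
    """Sample size to estimate a proportion p to within absolute
    precision d: n = z**2 * p * (1 - p) / d**2."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p * (1 - p) / d ** 2)

# Worst case p = 0.5 with 5% absolute precision at 95% confidence:
print(n_for_proportion(0.5, 0.05))  # 385
```

The p = 0.5 case maximizes p(1-p), which is why 385 (often rounded to ~400) appears as a conservative default in survey planning.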
Institute of Scientific and Technical Information of China (English)
林洁; 孙志明
2015-01-01
Objective To analyze the differences between SAS, PASS and Stata for sample size calculation in tests of two means (rates) and to recommend appropriate software for sample size calculation. Methods With different parameter settings, sample sizes were calculated using the three software packages and compared with the formula results. Results In the two-sample means test, Stata and PASS gave the most accurate results, while the SAS results were affected by the parameter settings. In the two-sample rates test, SAS was the most accurate of the three, the accuracy of PASS depended on the sample size, and the Stata results were larger than the others and affected by the parameter settings. Conclusion The results from different software packages are not consistent; on balance, SAS is recommended for sample size calculation in comparisons of two sample means (rates).
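The two-sample rates calculation these packages implement is, in its textbook form, the pooled/unpooled normal-approximation formula below. This is a sketch of that common formula, not any package's exact algorithm (their refinements, e.g. continuity corrections, are where the reported discrepancies arise):

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sample test of rates:
    n = (z_a*sqrt(2*pbar*qbar) + z_b*sqrt(p1*q1 + p2*q2))**2 / (p1 - p2)**2
    with pbar the pooled proportion (no continuity correction)."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (za * sqrt(2 * pbar * (1 - pbar))
           + zb * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detect 60% vs 40% response rates, alpha 0.05, power 80%:
print(n_two_proportions(0.6, 0.4))  # 97 per group
```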
Sample size determination in clinical trials with multiple endpoints
Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R
2015-01-01
This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...
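When every co-primary endpoint must reach significance, the joint power is lower than each marginal power, so n grows with the number of endpoints. Under the simplifying assumptions of independent, standardized-normal endpoints, the required n can be found by search; this is an illustrative sketch, not the book's methodology (which also handles correlated endpoints):

```python
from math import sqrt
from statistics import NormalDist

def n_coprimary(deltas, alpha=0.05, power=0.80):
    """Smallest per-group n whose joint power reaches the target when all
    co-primary endpoints (standardized effects `deltas`, assumed
    independent) must each be significant at two-sided alpha."""
    nd = NormalDist()
    za = nd.inv_cdf(1 - alpha / 2)
    n = 2
    while True:
        joint = 1.0
        for d in deltas:
            joint *= nd.cdf(d * sqrt(n / 2) - za)  # marginal power at effect d
        if joint >= power:
            return n
        n += 1

print(n_coprimary([0.5]))       # 63: the single-endpoint case
print(n_coprimary([0.5, 0.5]))  # 83: two co-primary endpoints cost ~30% more
```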
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who could not obtain large enough effect sizes would use larger samples to obtain significant results.
Experimental determination of size distributions: analyzing proper sample sizes
Buffo, A.; Alopaeus, V.
2016-04-01
The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used.
Predicting sample size required for classification performance
Directory of Open Access Journals (Sweden)
Figueroa Rosa L
2012-02-01
Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness of fit measures. As control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 to 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method outperformed the baseline un-weighted method (p Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
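The inverse power law fit can be sketched with a weighted least-squares line on the log-log scale, assuming a two-parameter decay error(n) ≈ b·n^(-c). This is a simplification of the paper's nonlinear weighted fit of full learning curves; all names are ours:

```python
import math

def fit_inverse_power(ns, errors, weights=None):
    """Fit error(n) ~ b * n**(-c) by (optionally weighted) least squares
    on log-transformed data; returns (b, c)."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(e) for e in errors]
    ws = weights or [1.0] * len(ns)
    W = sum(ws)
    xbar = sum(w * x for w, x in zip(ws, xs)) / W
    ybar = sum(w * y for w, y in zip(ws, ys)) / W
    slope = (sum(w * (x - xbar) * (y - ybar) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xbar) ** 2 for w, x in zip(ws, xs)))
    return math.exp(ybar - slope * xbar), -slope  # b, c

def n_for_target(b, c, target_error):
    """Invert the fitted curve to predict the annotation budget needed."""
    return math.ceil((b / target_error) ** (1 / c))
```

Usage mirrors the paper's workflow: fit on a small annotated pilot, then extrapolate the sample size needed to hit a performance target.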
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
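For the survival-analysis case mentioned above, Schoenfeld's approximation gives the number of events a two-arm log-rank test needs to detect a given hazard ratio. This is the standard textbook formula, shown here as a sketch rather than any of the article's worked examples:

```python
from math import ceil, log
from statistics import NormalDist

def events_logrank(hr, alpha=0.05, power=0.80, allocation=0.5):
    """Schoenfeld's approximation for the required number of events:
    E = (z_{1-alpha/2} + z_{power})**2 / (q * (1 - q) * ln(hr)**2),
    where q is the allocation fraction to one arm."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    return ceil((za + zb) ** 2 / (allocation * (1 - allocation) * log(hr) ** 2))

print(events_logrank(0.5))  # 66 events to detect HR = 0.5
print(events_logrank(0.7))  # 247 events for the subtler HR = 0.7
```

Note the formula sizes the trial in events, not patients; the enrollment needed then depends on the anticipated event rate and follow-up.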
Sample size re-estimation in a breast cancer trial
Hade, Erinn; Jarjoura, David; Wei, Lai
2016-01-01
Background During the recruitment phase of a randomized breast cancer trial investigating the time to recurrence, we found evidence that the failure probabilities used at the design stage were too high. Since most of the methodological research involving sample size re-estimation has focused on normal or binary outcomes, we developed a method which preserves blinding to re-estimate sample size in our time-to-event trial. Purpose A mistakenly high estimate of the failure rate at the design stage may reduce the power unacceptably for a clinically important hazard ratio. We describe an ongoing trial and an application of a sample size re-estimation method that combines current trial data with prior trial data, or assumes a parametric model, to re-estimate failure probabilities in a blinded fashion. Methods Using our current blinded trial data and additional information from prior studies, we re-estimate the failure probabilities to be used in sample size re-calculation. We employ bootstrap resampling to quantify uncertainty in the re-estimated sample sizes. Results At the time of re-estimation, data from 278 patients were available, averaging 1.2 years of follow-up. Using either method, we estimated an increase in sample size of 0 for the hazard ratio proposed at the design stage. We show that our method of blinded sample size re-estimation preserves the Type I error rate. We show that when the initial guesses of the failure probabilities are correct, the median increase in sample size is zero. Limitations Either some prior knowledge of an appropriate survival distribution shape or prior data is needed for re-estimation. Conclusions In trials with a lengthy accrual period, blinded sample size re-estimation near the end of the planned accrual period should be considered. In our examples, when assumptions about failure probabilities and HRs are correct, the methods usually do not increase the sample size, or increase it by very little. PMID:20392786
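The bootstrap step can be sketched as a percentile interval on the blinded (pooled) failure proportion. This toy stands in for the paper's method, which resamples the actual time-to-event data; the 0/1 event list and all parameters are invented:

```python
import random

def bootstrap_ci(events, n_boot=2000, alpha=0.05, seed=1):
    """Percentile bootstrap CI for a pooled failure proportion computed
    from blinded data (`events` is a 0/1 list with arm labels unknown)."""
    rng = random.Random(seed)
    stats = []
    for _ in range(n_boot):
        resample = rng.choices(events, k=len(events))  # sample with replacement
        stats.append(sum(resample) / len(resample))
    stats.sort()
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# 30 failures among 100 blinded patients:
print(bootstrap_ci([1] * 30 + [0] * 70))
```

The width of this interval is what propagates into uncertainty about the re-estimated sample size.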
Defining sample size and sampling strategy for dendrogeomorphic rockfall reconstructions
Morel, Pauline; Trappmann, Daniel; Corona, Christophe; Stoffel, Markus
2015-05-01
Optimized sampling strategies have been recently proposed for dendrogeomorphic reconstructions of mass movements with a large spatial footprint, such as landslides, snow avalanches, and debris flows. Such guidelines have, by contrast, been largely missing for rockfalls and cannot be transposed owing to the sporadic nature of this process and the occurrence of individual rocks and boulders. Based on a data set of 314 European larch (Larix decidua Mill.) trees (i.e., 64 trees/ha), growing on an active rockfall slope, this study bridges this gap and proposes an optimized sampling strategy for the spatial and temporal reconstruction of rockfall activity. Using random extractions of trees, iterative mapping, and a stratified sampling strategy based on an arbitrary selection of trees, we investigate subsets of the full tree-ring data set to define optimal sample size and sampling design for the development of frequency maps of rockfall activity. Spatially, our results demonstrate that the sampling of only 6 representative trees per ha can be sufficient to yield a reasonable mapping of the spatial distribution of rockfall frequencies on a slope, especially if the oldest and most heavily affected individuals are included in the analysis. At the same time, however, sampling such a low number of trees risks causing significant errors especially if nonrepresentative trees are chosen for analysis. An increased number of samples therefore improves the quality of the frequency maps in this case. Temporally, we demonstrate that at least 40 trees/ha are needed to obtain reliable rockfall chronologies. These results will facilitate the design of future studies, decrease the cost-benefit ratio of dendrogeomorphic studies and thus will permit production of reliable reconstructions with reasonable temporal efforts.
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, confidence level, expected proportion of the outcome variable (for categorical variables) or standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) from the study. The more the precision required, the greater is the required sample size. Sampling Techniques: The probability sampling techniques applied for health related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are more recommended than the nonprobability sampling techniques, because the results of the study can be generalized to the target population.
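Two of the probability sampling techniques listed can be sketched with the standard library; these helpers (names ours) are illustrative, not from the article:

```python
import random

def systematic_sample(population, n, seed=0):
    """Systematic random sampling: every k-th element after a random start,
    with k = N // n."""
    k = len(population) // n
    start = random.Random(seed).randrange(k)
    return [population[start + i * k] for i in range(n)]

def stratified_sample(strata, n, seed=0):
    """Stratified random sampling with proportional allocation: draw from
    each stratum in proportion to its share of the population."""
    rng = random.Random(seed)
    total = sum(len(s) for s in strata)
    out = []
    for s in strata:
        out.extend(rng.sample(s, round(n * len(s) / total)))
    return out

# A 60/40 split of 100 units sampled with n = 10 yields 6 + 4 draws:
print(stratified_sample([list(range(60)), list(range(60, 100))], 10))
```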
Hand calculations for transport of radioactive aerosols through sampling systems.
Hogue, Mark; Thompson, Martha; Farfan, Eduardo; Hadlock, Dennis
2014-05-01
Workplace air monitoring programs for sampling radioactive aerosols in nuclear facilities sometimes must rely on sampling systems to move the air to a sample filter in a safe and convenient location. These systems may consist of probes, straight tubing, bends, contractions and other components. Evaluation of these systems for potential loss of radioactive aerosols is important because significant losses can occur. However, it can be very difficult to find fully described equations to model a system manually for a single particle size and even more difficult to evaluate total system efficiency for a polydispersed particle distribution. Some software methods are available, but they may not be directly applicable to the components being evaluated and they may not be completely documented or validated per current software quality assurance requirements. This paper offers a method to model radioactive aerosol transport in sampling systems that is transparent and easily updated with the most applicable models. Calculations are shown with the R Programming Language, but the method is adaptable to other scripting languages. The method has the advantage of transparency and easy verifiability. This paper shows how a set of equations from published aerosol science models may be applied to aspiration and transport efficiency of aerosols in common air sampling system components. An example application using R calculation scripts is demonstrated. The R scripts are provided as electronic attachments. PMID:24667389
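The system-level bookkeeping the paper automates in R can be sketched as a mass-weighted product of per-component efficiencies over a discretized particle-size distribution. The component functions below are invented placeholders, not validated aspiration or deposition models:

```python
import math

def total_efficiency(component_effs, diameters_um, mass_fracs):
    """Overall transport efficiency for a polydispersed aerosol: in each
    size bin, chain the component efficiencies (probe, bends, tubing, ...)
    multiplicatively, then mass-weight across bins."""
    total = 0.0
    for d, f in zip(diameters_um, mass_fracs):
        eff = 1.0
        for component in component_effs:
            eff *= component(d)  # losses compound through the sampling line
        total += f * eff
    return total

# Placeholder component models (NOT real physics), 3-bin distribution:
tube = lambda d: math.exp(-0.01 * d)   # toy exponential tube loss
bend = lambda d: 1 - 0.005 * d         # toy linear bend loss
print(total_efficiency([tube, bend], [1, 5, 10], [0.5, 0.3, 0.2]))
```

In a real evaluation, each placeholder would be replaced by a published model (e.g., for aspiration, gravitational settling, or inertial deposition in bends) as the paper describes.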
Directory of Open Access Journals (Sweden)
Samuel Gustavo Ceballos Pérez
2011-08-01
Full Text Available In order to perform the monitoring or nutritional audit of Hartón plantain cultivation, two alternatives for sampling its leaf tissue were evaluated: (a) a preliminary completely random sample, and (b) a stratified random sample, with the objective of determining the sampling frame that yields the smallest sample size, in 2008. The analysis or experimental unit consisted of two plants: the "mother" plant at the time of inflorescence emergence, and its lateral bud or "son" in full development. The leaf sample was collected according to the conditions laid down in the international reference sampling method (IRS). Each unit was tagged with bright paint so that it could be identified 10 to 14 weeks later, when its bunch was harvested and weighed. The results showed that the sampling frame generated for stratified random sampling makes it possible to determine the smallest leaf sample size in the South of Lake Maracaibo area.
Effect size estimates: current use, calculations, and interpretation.
Fritz, Catherine O; Morris, Peter E; Richler, Jennifer J
2012-02-01
The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis. PMID:21823805
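Cohen's d, the estimate the survey found most often reported for t tests, is straightforward to compute from raw data using a pooled standard deviation (a sketch):

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(x, y):
    """Cohen's d for two independent samples, standardizing the mean
    difference by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled = sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2)
                  / (nx + ny - 2))
    return (mean(x) - mean(y)) / pooled

# Group means 3 and 5, common SD ~1.58 -> a large effect of |d| ~ 1.26:
print(cohens_d([1, 2, 3, 4, 5], [3, 4, 5, 6, 7]))
```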
Sample size for detecting differentially expressed genes in microarray experiments
Directory of Open Access Journals (Sweden)
Li Jiangning
2004-11-01
Full Text Available Abstract Background Microarray experiments are often performed with a small number of biological replicates, resulting in low statistical power for detecting differentially expressed genes and concomitant high false positive rates. While increasing sample size can increase statistical power and decrease error rates, with too many samples, valuable resources are not used efficiently. The issue of how many replicates are required in a typical experimental system needs to be addressed. Of particular interest is the difference in required sample sizes for similar experiments in inbred vs. outbred populations (e.g. mouse and rat vs. human). Results We hypothesize that if all other factors (assay protocol, microarray platform, data pre-processing) were equal, fewer individuals would be needed for the same statistical power using inbred animals as opposed to unrelated human subjects, as genetic effects on gene expression will be removed in the inbred populations. We apply the same normalization algorithm and estimate the variance of gene expression for a variety of cDNA data sets (humans, inbred mice and rats) comparing two conditions. Using one sample, paired sample or two independent sample t-tests, we calculate the sample sizes required to detect 1.5-, 2-, and 4-fold changes in expression level as a function of false positive rate, power and percentage of genes that have a standard deviation below a given percentile. Conclusions Factors that affect power and sample size calculations include variability of the population, the desired detectable differences, the power to detect the differences, and an acceptable error rate. In addition, experimental design, technical variability and data pre-processing play a role in the power of the statistical tests in microarrays. We show that the number of samples required for detecting a 2-fold change with 90% probability and a p-value of 0.01 in humans is much larger than the number of samples commonly used in
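On the log2 scale, detecting a fold change reduces to the familiar two-sample normal-approximation formula, one gene at a time. A hedged sketch of that kind of calculation under an assumed per-gene SD (not the paper's exact procedure, which also accounts for the multiple-testing burden across genes):

```python
from math import ceil, log2
from statistics import NormalDist

def n_per_group_foldchange(fold_change, sd_log2, alpha=0.01, power=0.90):
    """Arrays per condition to detect `fold_change` via a two-sample
    comparison on the log2 scale:
    n = 2 * ((z_{1-alpha/2} + z_{power}) * sd / log2(FC))**2."""
    nd = NormalDist()
    za, zb = nd.inv_cdf(1 - alpha / 2), nd.inv_cdf(power)
    return ceil(2 * ((za + zb) * sd_log2 / log2(fold_change)) ** 2)

# Per-gene SD of 0.7 on the log2 scale, alpha 0.01, power 90%:
print(n_per_group_foldchange(2, 0.7))    # 15 arrays per group for a 2-fold change
print(n_per_group_foldchange(1.5, 0.7))  # 43 for the subtler 1.5-fold change
```

The inbred-vs-outbred point in the abstract enters through sd_log2: lower biological variance in inbred animals shrinks n quadratically.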
CALCULATION OF PARTICLE SIZE OF TITANIUM DIOXIDE HYDROSOL
Directory of Open Access Journals (Sweden)
L. M. Sliapniova
2014-01-01
Full Text Available One of the problems facing chemists who obtain disperse systems with micro- and nanoscale particles of the disperse phase is evaluating the size of the obtained particles. Formation of a hydrosol is one of the stages in obtaining nanopowders by the sol-gel method. We obtained titanium dioxide hydrosol by hydrolysis of titanium tetrachloride in the presence of an organic solvent, with the purpose of producing titanium dioxide powder. It was necessary to evaluate the size of the titanium dioxide hydrosol particles because the particle dimensions of the disperse hydrosol phase are directly related to the dispersiveness of the obtained powder. The size of the disperse-phase particles of the titanium dioxide hydrosol was calculated according to the Rayleigh equation, and the results were shown to correspond to the experimental data from atomic force microscopy and X-ray crystal analysis of the powder obtained from the hydrosol. To calculate particle size in a disperse system, the Rayleigh equation can be used if the particle size is not more than 1/10 of the wavelength of the incident light, or the Heller equation for systems containing particles with diameter less than the wavelength of the incident light but more than 1/10 of it. Titanium dioxide hydrosol was obtained, and the exponent of the wavelength ratio in the Heller equation was calculated. The obtained value testified to the high dispersiveness of the system and the possibility of using the Rayleigh equation to calculate the size of the disperse-phase particles. The calculated disperse-phase particle size of the titanium dioxide hydrosol corresponded to the experimental data from atomic force microscopy and X-ray crystal analysis of the powder obtained from the system.
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz;
2007-01-01
In occupational health studies, the study groups most often comprise healthy subjects performing their work. Sampling is often planned in the most practical way, e.g., sampling of blood in the morning at the work site just after the work starts. Optimal use of reference intervals requires...... that the population, on which the reference interval is based, is representative for the study group in question. The International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) recommends estimating reference interval on at least 120 subjects. It may be costly and difficult to gain group sizes...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...
Simultaneous calculation of aircraft design loads and structural member sizes
Giles, G. L.; McCullers, L. A.
1975-01-01
A design process which accounts for the interaction between aerodynamic loads and changes in member sizes during sizing of aircraft structures is described. A simultaneous iteration procedure is used wherein both design loads and member sizes are updated during each cycle yielding converged, compatible loads and member sizes. A description is also given of a system of programs which incorporates this process using lifting surface theory to calculate aerodynamic pressure distributions, using a finite-element method for structural analysis, and using a fully stressed design technique to size structural members. This system is tailored to perform the entire process with computational efficiency in a single computer run so that it can be used effectively during preliminary design. Selected results, considering maneuver, taxi, and fatigue design conditions, are presented to illustrate convergence characteristics of this iterative procedure.
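A toy single-member analogue of the simultaneous loads/sizing iteration described above. All constants are invented for illustration; the actual system uses lifting-surface aerodynamics and a finite-element model, not this scalar fixed-point loop:

```python
# The design load includes a term proportional to structural weight, which
# itself depends on member size, so loads and sizes must converge together.
ALLOWABLE_STRESS = 200.0e6   # Pa, assumed design allowable
WEIGHT_PER_AREA = 5.0e6      # N of extra load per m^2 of member area (assumed)
BASE_LOAD = 1.0e6            # N, load component independent of sizing

area = 1.0e-3                # m^2, initial guess
for cycle in range(100):
    load = BASE_LOAD + WEIGHT_PER_AREA * area   # update design load
    new_area = load / ALLOWABLE_STRESS          # fully stressed design resize
    if abs(new_area - area) / area < 1e-10:
        break
    area = new_area

print(f"converged in {cycle} cycles: area = {area*1e4:.3f} cm^2")
```

Because the weight-induced load is a small fraction of the allowable-stress capacity, the iteration is a contraction and converges in a handful of cycles, mirroring the "converged, compatible loads and member sizes" of the abstract.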
Optimal flexible sample size design with robust power.
Zhang, Lanju; Cui, Lu; Yang, Bo
2016-08-30
It is well recognized that sample size determination is challenging because of the uncertainty on the treatment effect size. Several remedies are available in the literature. Group sequential designs start with a sample size based on a conservative (smaller) effect size and allow early stop at interim looks. Sample size re-estimation designs start with a sample size based on an optimistic (larger) effect size and allow sample size increase if the observed effect size is smaller than planned. Different opinions favoring one type over the other exist. We propose an optimal approach using an appropriate optimality criterion to select the best design among all the candidate designs. Our results show that (1) for the same type of designs, for example, group sequential designs, there is room for significant improvement through our optimization approach; (2) optimal promising zone designs appear to have no advantages over optimal group sequential designs; and (3) optimal designs with sample size re-estimation deliver the best adaptive performance. We conclude that to deal with the challenge of sample size determination due to effect size uncertainty, an optimal approach can help to select the best design that provides most robust power across the effect size range of interest. Copyright © 2016 John Wiley & Sons, Ltd. PMID:26999385
Sample size and power for comparing two or more treatment groups in clinical trials.
Day, S. J.; Graham, D F
1989-01-01
Methods for determining sample size and power when comparing two groups in clinical trials are widely available. Studies comparing three or more treatments are not uncommon but are more difficult to analyse. A linear nomogram was devised to help calculate the sample size required when comparing up to five parallel groups. It may also be used retrospectively to determine the power of a study of given sample size. In two worked examples the nomogram was efficient. Although the nomogram offers o...
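The calculation the nomogram encodes (power for comparing up to five parallel groups) can be sketched with the noncentral F distribution. This is a generic one-way ANOVA power sketch, not the authors' nomogram; the effect size and power target are assumed example values:

```python
from scipy.stats import f as f_dist, ncf

def anova_power(n_per_group, k, effect_f, alpha=0.05):
    """Power of a one-way ANOVA with k groups, n per group, Cohen's f."""
    df1, df2 = k - 1, k * (n_per_group - 1)
    ncp = effect_f**2 * k * n_per_group        # noncentrality parameter
    f_crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(f_crit, df1, df2, ncp)

# Smallest n per group giving 80% power for a medium effect (f = 0.25)
# across 2 to 5 parallel groups -- the range the nomogram covers.
for k in range(2, 6):
    n = 2
    while anova_power(n, k, 0.25) < 0.80:
        n += 1
    print(f"{k} groups: n = {n} per group")
```

Run retrospectively, `anova_power` also gives the power of a completed study of given size, matching the nomogram's second use.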
On bootstrap sample size in extreme value theory
J.L. Geluk (Jaap); L.F.M. de Haan (Laurens)
2002-01-01
It has been known for a long time that for bootstrapping the probability distribution of the maximum of a sample consistently, the bootstrap sample size needs to be of smaller order than the original sample size. See Jun Shao and Dongsheng Tu (1995), Ex. 3.9, p. 123. We show that the same
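A quick numerical illustration of the m-out-of-n requirement: with resample size equal to n, the bootstrap maximum coincides with the sample maximum with non-vanishing probability (about 1 − 1/e ≈ 0.632), while a smaller-order resample size avoids this degeneracy. The data and sizes below are made up for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.exponential(size=n)    # synthetic heavy-ish tailed sample
B = 2000                       # number of bootstrap resamples

def prob_resample_max_equals_sample_max(m):
    """Fraction of bootstrap resamples of size m whose max is the sample max."""
    hits = 0
    for _ in range(B):
        if rng.choice(x, size=m, replace=True).max() == x.max():
            hits += 1
    return hits / B

print("m = n      :", prob_resample_max_equals_sample_max(n))
print("m = sqrt(n):", prob_resample_max_equals_sample_max(int(np.sqrt(n))))
```

The first probability stays near 0.632 no matter how large n is, so the naive bootstrap cannot reproduce the continuous limit law of the maximum; with m of smaller order the probability tends to zero.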
Parasite prevalence and sample size: misconceptions and solutions
Jovani, Roger; Tella, José Luis
2006-01-01
Parasite prevalence (the proportion of infected hosts) is a common measure used to describe parasitaemias and to unravel ecological and evolutionary factors that influence host–parasite relationships. Prevalence estimates are often based on small sample sizes because of either low abundance of the hosts or logistical problems associated with their capture or laboratory analysis. Because the accuracy of prevalence estimates is lower with small sample sizes, addressing sample size h...
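One standard way to make the small-sample uncertainty explicit is an exact (Clopper-Pearson) confidence interval for the prevalence. A sketch with hypothetical counts (this is a generic method, not necessarily the authors' proposed solution):

```python
from scipy.stats import beta

def clopper_pearson(k, n, conf=0.95):
    """Exact (Clopper-Pearson) confidence interval for a prevalence of k/n."""
    a = (1 - conf) / 2
    lo = beta.ppf(a, k, n - k + 1) if k > 0 else 0.0
    hi = beta.ppf(1 - a, k + 1, n - k) if k < n else 1.0
    return lo, hi

# Same observed prevalence (20%), very different certainty: the interval
# shrinks markedly as the number of sampled hosts grows.
for k, n in [(1, 5), (4, 20), (20, 100)]:
    lo, hi = clopper_pearson(k, n)
    print(f"{k}/{n} infected: 95% CI [{lo:.3f}, {hi:.3f}]")
```

With 1 of 5 hosts infected, the 95% interval spans most of the unit interval, which is precisely the misconception-prone situation the abstract addresses.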
A computer program for sample size computations for banding studies
Wilson, K.R.; Nichols, J.D.; Hines, J.E.
1989-01-01
Sample sizes necessary for estimating survival rates of banded birds, adults and young, are derived based on specified levels of precision. The banding study can be new or ongoing. The desired coefficient of variation (CV) for annual survival estimates, the CV for mean annual survival estimates, and the length of the study must be specified to compute sample sizes. A computer program is available for computation of the sample sizes, and a description of the input and output is provided.
A review of software for sample size determination.
Dattalo, Patrick
2009-09-01
The size of a sample is an important element in determining the statistical precision with which population values can be estimated. This article identifies and describes free and commercial programs for sample size determination. Programs are categorized as follows: (a) multiple procedure for sample size determination; (b) single procedure for sample size determination; and (c) Web-based. Programs are described in terms of (a) cost; (b) ease of use, including interface, operating system and hardware requirements, and availability of documentation and technical support; (c) file management, including input and output formats; and (d) analytical and graphical capabilities. PMID:19696082
Dose Rate Calculations for Rotary Mode Core Sampling Exhauster
Foust, D J
2000-01-01
This document provides the calculated estimated dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.
7 CFR 52.775 - Sample unit size.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sample unit size. 52.775 Section 52.775 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... United States Standards for Grades of Canned Red Tart Pitted Cherries 1 Sample Unit Size § 52.775...
An adaptive sampling scheme for deep-penetration calculation
International Nuclear Information System (INIS)
As is well known, the deep-penetration problem has been one of the important and difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, an adaptive Monte Carlo method for shielding calculations, using the emission point as a sampling station, is investigated. The numerical results show that the adaptive method may improve the efficiency of shielding calculations and may, to some degree, overcome the underestimation problem that easily arises in deep-penetration calculations.
Power Analysis and Sample Size Determination in Metabolic Phenotyping.
Blaise, Benjamin J; Correia, Gonçalo; Tin, Adrienne; Young, J Hunter; Vergnaud, Anne-Claire; Lewis, Matthew; Pearce, Jake T M; Elliott, Paul; Nicholson, Jeremy K; Holmes, Elaine; Ebbels, Timothy M D
2016-05-17
Estimation of statistical power and sample size is a key aspect of experimental design. However, in metabolic phenotyping, there is currently no accepted approach for these tasks, in large part due to the unknown nature of the expected effect. In such hypothesis-free science, neither the number nor the class of important analytes nor the effect size are known a priori. We introduce a new approach, based on multivariate simulation, which deals effectively with the highly correlated structure and high dimensionality of metabolic phenotyping data. First, a large data set is simulated based on the characteristics of a pilot study investigating a given biomedical issue. An effect of a given size, corresponding either to a discrete (classification) or continuous (regression) outcome, is then added. Different sample sizes are modeled by randomly selecting data sets of various sizes from the simulated data. We investigate different methods for effect detection, including univariate and multivariate techniques. Our framework allows us to investigate the complex relationship between sample size, power, and effect size for real multivariate data sets. For instance, we demonstrate for an example pilot data set that certain features achieve a power of 0.8 for a sample size of 20 samples or that a cross-validated predictivity Q²Y of 0.8 is reached with an effect size of 0.2 and 200 samples. We exemplify the approach for both nuclear magnetic resonance and liquid chromatography-mass spectrometry data from humans and the model organism C. elegans.
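The loop below mimics, in a much simplified univariate form, the simulate-data-then-add-effect idea this abstract describes. The dimensions, effect size, uncorrelated features, and the use of Benjamini-Hochberg correction are all assumptions for the sketch, not the authors' pipeline:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(1)
p_feats, n_affected, effect = 200, 10, 1.0   # assumed pilot-like dimensions

def bh_reject(pvals, q=0.05):
    """Benjamini-Hochberg step-up: True where the null is rejected at FDR q."""
    order = np.argsort(pvals)
    thresh = q * np.arange(1, len(pvals) + 1) / len(pvals)
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    rej = np.zeros(len(pvals), bool)
    rej[order[:k]] = True
    return rej

def sim_power(n, n_sim=200):
    """Average fraction of truly affected features detected at FDR 5%."""
    hits = 0.0
    for _ in range(n_sim):
        a = rng.standard_normal((n, p_feats))
        b = rng.standard_normal((n, p_feats))
        b[:, :n_affected] += effect            # add the known effect
        p = ttest_ind(a, b, axis=0).pvalue
        hits += bh_reject(p)[:n_affected].mean()
    return hits / n_sim

for n in (10, 20, 40):
    print(f"n = {n:3d} per group: average power = {sim_power(n):.2f}")
```

Repeating this over a grid of sample sizes traces out the power curve the framework uses to choose n; the real method additionally preserves the correlation structure of the pilot data.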
Hickson, Kevin J; O'Keefe, Graeme J
2014-09-01
The scalable XCAT voxelised phantom was used with the GATE Monte Carlo toolkit to investigate the effect of voxel size on dosimetry estimates for internally distributed radionuclides calculated using direct Monte Carlo simulation. A uniformly distributed fluorine-18 source was simulated in the kidneys of the XCAT phantom, with the organ self dose (kidney ← kidney) and organ cross dose (liver ← kidney) being calculated for a number of organ and voxel sizes. Patient-specific dose factors (DF) from a clinically acquired FDG PET/CT study have also been calculated for kidney self dose and liver ← kidney cross dose. Using the XCAT phantom it was found that very small voxel sizes are required to achieve accurate calculation of organ self dose. It was also shown that a voxel size of 2 mm or less is suitable for accurate calculation of organ cross dose. To compensate for insufficient voxel sampling a correction factor is proposed. This correction factor is applied to the patient-specific dose factors calculated with the native voxel size of the PET/CT study.
SNS Sample Activation Calculator Flux Recommendations and Validation
Energy Technology Data Exchange (ETDEWEB)
McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)
2015-02-01
The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.
Sample size in qualitative interview studies: guided by information power
DEFF Research Database (Denmark)
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane
2016-01-01
the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the lower amount of participants is needed. We suggest that the size of a sample with sufficient information power...... depends on (a) the aim of the study, (b) sample specificity, (c) use of established theory, (d) quality of dialogue, and (e) analysis strategy. We present a model where these elements of information and their relevant dimensions are related to information power. Application of this model in the planning...
40 CFR 600.208-77 - Sample calculation.
2010-07-01
... 600.208-77 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel Economy Values § 600.208-77 Sample...
Hellman-Feynman operator sampling in diffusion Monte Carlo calculations.
Gaudoin, R; Pitarke, J M
2007-09-21
Diffusion Monte Carlo (DMC) calculations typically yield highly accurate results in solid-state and quantum-chemical calculations. However, operators that do not commute with the Hamiltonian are at best sampled correctly up to second order in the error of the underlying trial wave function once simple corrections have been applied. This error is of the same order as that for the energy in variational calculations. Operators that suffer from these problems include potential energies and the density. This Letter presents a new method, based on the Hellman-Feynman theorem, for the correct DMC sampling of all operators diagonal in real space. Our method is easy to implement in any standard DMC code.
Size variation in samples of fossil and recent murid teeth
Freudenthal, M.; Martín Suárez, E.
1990-01-01
The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth, is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed, and compared with those obtained for the Cricetidae.
Aircraft studies of size-dependent aerosol sampling through inlets
Porter, J. N.; Clarke, A. D.; Ferry, G.; Pueschel, R. F.
1992-01-01
Representative measurement of aerosol from aircraft-aspirated systems requires special efforts in order to maintain near isokinetic sampling conditions, estimate aerosol losses in the sample system, and obtain a measurement of sufficient duration to be statistically significant for all sizes of interest. This last point is especially critical for aircraft measurements which typically require fast response times while sampling in clean remote regions. This paper presents size-resolved tests, intercomparisons, and analysis of aerosol inlet performance as determined by a custom laser optical particle counter. Measurements discussed here took place during the Global Backscatter Experiment (1988-1989) and the Central Pacific Atmospheric Chemistry Experiment (1988). System configurations are discussed including (1) nozzle design and performance, (2) system transmission efficiency, (3) nonadiabatic effects in the sample line and its effect on the sample-line relative humidity, and (4) the use and calibration of a virtual impactor.
Sample size considerations for livestock movement network data.
Pfeiffer, Caitlin N; Firestone, Simon M; Campbell, Angus J D; Larsen, John W A; Stevenson, Mark A
2015-12-01
The movement of animals between farms contributes to infectious disease spread in production animal populations, and is increasingly investigated with social network analysis methods. Tangible outcomes of this work include the identification of high-risk premises for targeting surveillance or control programs. However, knowledge of the effect of sampling or incomplete network enumeration on these studies is limited. In this study, a simulation algorithm is presented that provides an estimate of required sampling proportions based on predicted network size, density and degree value distribution. The algorithm may be applied a priori to ensure network analyses based on sampled or incomplete data provide population estimates of known precision. Results demonstrate that, for network degree measures, sample size requirements vary with sampling method. The repeatability of the algorithm output under constant network and sampling criteria was found to be consistent for networks with at least 1000 nodes (in this case, farms). Where simulated networks can be constructed to closely mimic the true network in a target population, this algorithm provides a straightforward approach to determining sample size under a given sampling procedure for a network measure of interest. It can be used to tailor study designs of known precision, for investigating specific livestock movement networks and their impact on disease dissemination within populations. PMID:26276397
Surprise Calculator: Estimating relative entropy and Surprise between samples
Seehars, Sebastian
2016-05-01
The Surprise is a measure for consistency between posterior distributions and operates in parameter space. It can be used to analyze either the compatibility of separately analyzed posteriors from two datasets, or the posteriors from a Bayesian update. The Surprise Calculator estimates relative entropy and Surprise between two samples, assuming they are Gaussian. The software requires the R package CompQuadForm to estimate the significance of the Surprise, and rpy2 to interface R with Python.
Current sample size conventions: Flaws, harms, and alternatives
Directory of Open Access Journals (Sweden)
Bacchetti Peter
2010-03-01
Full Text Available Abstract Background The belief remains widespread that medical research studies must have statistical power of at least 80% in order to be scientifically sound, and peer reviewers often question whether power is high enough. Discussion This requirement and the methods for meeting it have severe flaws. Notably, the true nature of how sample size influences a study's projected scientific or practical value precludes any meaningful blanket designation of less than 80% power as "inadequate". Alternatives include value of information methods, simple choices based on cost or feasibility that have recently been justified, sensitivity analyses that examine a meaningful array of possible findings, and following previous analogous studies. To promote more rational approaches, research training should cover the issues presented here, peer reviewers should be extremely careful before raising issues of "inadequate" sample size, and reports of completed studies should not discuss power. Summary Common conventions and expectations concerning sample size are deeply flawed, cause serious harm to the research process, and should be replaced by more rational alternatives.
On an Approach to Bayesian Sample Sizing in Clinical Trials
Muirhead, Robb J
2012-01-01
This paper explores an approach to Bayesian sample size determination in clinical trials. The approach falls into the category of what is often called "proper Bayesian", in that it does not mix frequentist concepts with Bayesian ones. A criterion for a "successful trial" is defined in terms of a posterior probability, its probability is assessed using the marginal distribution of the data, and this probability forms the basis for choosing sample sizes. We illustrate with a standard problem in clinical trials, that of establishing superiority of a new drug over a control.
Calculating Confidence Intervals for Effect Sizes Using Noncentral Distributions.
Norris, Deborah
This paper provides a brief review of the concepts of confidence intervals, effect sizes, and central and noncentral distributions. The use of confidence intervals around effect sizes is discussed. A demonstration of the Exploratory Software for Confidence Intervals (G. Cumming and S. Finch, 2001; ESCI) is given to illustrate effect size confidence…
International Nuclear Information System (INIS)
For all the physical components that comprise a nuclear system there is an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. The propagation of uncertainty through a Monte Carlo code simulation by sampling the input parameters is recent because of the huge computational effort required. In this work a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. Uncertainties in input parameters of the reactor included geometry dimensions and densities. The capability of the sampling-based method for burnup calculations was shown when the sample size is optimized and many parameter uncertainties are investigated together in the same input. In particular, it was shown that during burnup the variance when considering all parameter uncertainties together is equivalent to the sum of the variances when the parameter uncertainties are sampled separately.
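The Wilks sample sizes referred to above follow directly from the order-statistics formulas (this is the standard textbook calculation, not the authors' code):

```python
def wilks_n_one_sided(gamma=0.95, beta=0.95):
    """Smallest n such that the sample maximum exceeds the gamma-quantile
    with confidence beta: require gamma**n <= 1 - beta."""
    n = 1
    while gamma**n > 1 - beta:
        n += 1
    return n

def wilks_n_two_sided(gamma=0.95, beta=0.95):
    """Smallest n for a two-sided gamma-content tolerance interval
    (sample min, sample max) at confidence beta."""
    n = 2
    while gamma**n + n * (1 - gamma) * gamma**(n - 1) > 1 - beta:
        n += 1
    return n

print(wilks_n_one_sided())   # 59: the classic one-sided 95/95 value
print(wilks_n_two_sided())   # 93: two-sided 95/95, as used for the MCNPX sample
```

So the "optimized" sample size for the two-sided 95%/95% criterion in the abstract is 93 code runs, regardless of how many input parameters are perturbed simultaneously, which is what makes the approach tractable.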
Detecting Neuroimaging Biomarkers for Psychiatric Disorders: Sample Size Matters.
Schnack, Hugo G; Kahn, René S
2016-01-01
In a recent review, it was suggested that much larger cohorts are needed to prove the diagnostic value of neuroimaging biomarkers in psychiatry. While within a sample, an increase of diagnostic accuracy of schizophrenia (SZ) with number of subjects (N) has been shown, the relationship between N and accuracy is completely different between studies. Using data from a recent meta-analysis of machine learning (ML) in imaging SZ, we found that while low-N studies can reach 90% and higher accuracy, above N/2 = 50 the maximum accuracy achieved steadily drops to below 70% for N/2 > 150. We investigate the role N plays in the wide variability in accuracy results in SZ studies (63-97%). We hypothesize that the underlying cause of the decrease in accuracy with increasing N is sample heterogeneity. While smaller studies more easily include a homogeneous group of subjects (strict inclusion criteria are easily met; subjects live close to study site), larger studies inevitably need to relax the criteria/recruit from large geographic areas. A SZ prediction model based on a heterogeneous group of patients with presumably a heterogeneous pattern of structural or functional brain changes will not be able to capture the whole variety of changes, thus being limited to patterns shared by most patients. In addition to heterogeneity (sample size), we investigate other factors influencing accuracy and introduce a ML effect size. We derive a simple model of how the different factors, such as sample heterogeneity and study setup determine this ML effect size, and explain the variation in prediction accuracies found from the literature, both in cross-validation and independent sample testing. From this, we argue that smaller-N studies may reach high prediction accuracy at the cost of lower generalizability to other samples. Higher-N studies, on the other hand, will have more generalization power, but at the cost of lower accuracy. In conclusion, when comparing results from different
Sample size cognizant detection of signals in white noise
Rao, N Raj
2007-01-01
The detection and estimation of signals in noisy, limited data is a problem of interest to many scientific and engineering communities. We present a computationally simple, sample eigenvalue based procedure for estimating the number of high-dimensional signals in white noise when there are relatively few samples. We highlight a fundamental asymptotic limit of sample eigenvalue based detection of weak high-dimensional signals from a limited sample size and discuss its implication for the detection of two closely spaced signals. This motivates our heuristic definition of the 'effective number of identifiable signals.' Numerical simulations are used to demonstrate the consistency of the algorithm with respect to the effective number of signals and the superior performance of the algorithm with respect to Wax and Kailath's "asymptotically consistent" MDL based estimator.
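A minimal sketch of sample-eigenvalue detection in the spirit of this abstract: count sample covariance eigenvalues above the Marchenko-Pastur upper edge for white noise. The data, the mixing model, and the threshold buffer are all assumptions for illustration, not the authors' estimator:

```python
import numpy as np

rng = np.random.default_rng(2)
p, n, k = 100, 300, 3          # dimension, sample size, true number of signals

# Data matrix: k strong signals mixed into p channels, plus unit-variance
# white noise (all values synthetic).
signals = rng.standard_normal((k, n))
mixing = 2.0 * rng.standard_normal((p, k))
x = mixing @ signals + rng.standard_normal((p, n))

# Noise-only eigenvalues of the sample covariance concentrate below the
# Marchenko-Pastur upper edge (1 + sqrt(p/n))^2 for unit noise variance;
# eigenvalues above it are attributed to signals. The 5% buffer is a
# heuristic guard against finite-sample fluctuation at the edge.
eigvals = np.linalg.eigvalsh(x @ x.T / n)
mp_edge = (1 + np.sqrt(p / n)) ** 2
k_hat = int(np.sum(eigvals > 1.05 * mp_edge))
print(f"estimated number of signals: {k_hat}")
```

As the abstract notes, when signals are weak or p/n is large, signal eigenvalues sink below this edge and become undetectable in principle, motivating the "effective number of identifiable signals".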
Hydrophobicity of soil samples and soil size fractions
Energy Technology Data Exchange (ETDEWEB)
Lowen, H.A.; Dudas, M.J. [Alberta Univ., Edmonton, AB (Canada). Dept. of Renewable Resources; Roy, J.L. [Imperial Oil Resources Canada, Calgary, AB (Canada); Johnson, R.L. [Alberta Research Council, Vegreville, AB (Canada); McGill, W.B. [Alberta Univ., Edmonton, AB (Canada). Dept. of Renewable Resources
2001-07-01
The inability of dry soil to absorb water droplets within 10 seconds or less is defined as soil hydrophobicity. Its severity, persistence and the circumstances causing it vary greatly. There is a possibility that hydrophobicity in Alberta is a symptom of crude oil spills. In this study, the authors investigated the severity of soil hydrophobicity, as determined by the molarity of ethanol droplet test (MED), and the dichloromethane extractable organic (DEO) concentration. The soil samples were collected from pedons within 12 hydrophobic soil sites, located northeast from Calgary to Cold Lake, Alberta. All the sites were located at elevations ranging from 450 to 990 metres above sea level. The samples represented the Chernozemic, Gleysolic, Luvisolic, and Solonetzic soil orders. The results indicated that MED and DEO were positively correlated in whole soil samples. No relationship was found between MED and DEO in samples divided into size fractions. More severe hydrophobicity and lower DEO concentrations were exhibited by the clay- and silt-sized particles in the less-than-53-micrometre fraction than in the 53 to 2000 micrometre fraction. It was concluded that hydrophobicity was not restricted to a particular soil particle size class. 5 refs., 4 figs.
Saturated hydraulic conductivity measured on differently sized soil core samples
Danaa, Batjargal
2012-01-01
ABSTRACT This study aims to assess the effect of soil core sample size on measured soil physical properties and saturated hydraulic conductivity. It concerns the quality of manually collected field experimental data at an experimental site of the Czech University of Life Sciences in Prague, equipped with a constant head infiltrometer. A newly constructed constant head infiltrometer of the department's design was used. It consists of a system of two Mariotte's bottles ...
PLOT SIZE AND APPROPRIATE SAMPLE SIZE TO STUDY NATURAL REGENERATION IN AMAZONIAN FLOODPLAIN FOREST
Directory of Open Access Journals (Sweden)
João Ricardo Vasconcellos Gama
2001-01-01
Full Text Available ABSTRACT: The aim of this study was to determine the optimum plot size as well as the appropriate sample size in order to provide an accurate sampling of natural regeneration surveys in high floodplain forests, low floodplain forests and in floodplain forests without stratification in the Amazonian estuary. Data were obtained at Exportadora de Madeira do Pará Ltda. – EMAPA forestlands, located in Afuá County, State of Pará. Based on the results, the following plot sizes were recommended: 70m2 - SC1 (0,3m ≤ h < 1,5m), 80m2 - SC2 (h ≥ 1,50m to DAP < 5,0cm), 90m2 - SC3 (5,0cm ≤ DAP < 15,0cm) and 70m2 - ASP (h ≥ 0,3m to DAP < 15,0cm). Considering these optimum plot sizes, it is possible to obtain a representative sampling of the floristic composition when using 19 sub-plots in high floodplain, 14 sub-plots in low floodplain, and 19 sub-plots in the forest without stratification to survey the species of SC1 and the species of all sampled population (ASP), while 39 sub-plots are needed for sampling the natural regeneration species in SC2 and SC3.
Directory of Open Access Journals (Sweden)
Ismet DOGAN
2015-10-01
Full Text Available Objective: Choosing the most efficient statistical test is one of the essential problems of statistics. Asymptotic relative efficiency is a notion which enables the quantitative comparison, in large samples, of two different tests used for testing the same statistical hypothesis. The notion of the asymptotic efficiency of tests is more complicated than that of the asymptotic efficiency of estimates. This paper discusses the effect of sample size on the expected values and variances of non-parametric tests for two independent samples and determines the most efficient test for different sample sizes using the Fraser efficiency value. Material and Methods: Since calculating the power value when comparing tests is often impractical, using the asymptotic relative efficiency value is favorable. Asymptotic relative efficiency is an indispensable technique for comparing and ordering statistical tests in large samples. It is especially useful in nonparametric statistics, where there exist numerous heuristic tests such as the linear rank tests. In this study, the sample size is determined as 2 ≤ n ≤ 50. Results: In both balanced and unbalanced cases, it is found that, as the sample size increases, the expected values and variances of all the tests discussed in this paper increase as well. Additionally, considering the Fraser efficiency, the Mann-Whitney U test is found to be the most efficient test among the non-parametric tests used for comparing two independent samples, regardless of their sizes. Conclusion: According to Fraser efficiency, the Mann-Whitney U test is the most efficient test.
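The near-equivalence of the Mann-Whitney U test and the t-test under normality (asymptotic relative efficiency 3/π ≈ 0.955) can be checked by simulation. Sample size, shift, and simulation count below are assumed example values:

```python
import numpy as np
from scipy.stats import mannwhitneyu, ttest_ind

rng = np.random.default_rng(3)

def power(test, n=30, shift=0.8, n_sim=500):
    """Monte Carlo power of a two-sample test under a normal location shift."""
    hits = 0
    for _ in range(n_sim):
        a = rng.standard_normal(n)
        b = rng.standard_normal(n) + shift
        if test(a, b).pvalue < 0.05:
            hits += 1
    return hits / n_sim

print("t-test power      :", power(ttest_ind))
print("Mann-Whitney power:", power(mannwhitneyu))
```

Under normal data the two powers are nearly identical; under heavy-tailed alternatives the Mann-Whitney test typically pulls ahead, which is consistent with its strong Fraser efficiency ranking in the abstract.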
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Directory of Open Access Journals (Sweden)
Malhotra Rajeev
2010-01-01
Full Text Available Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from the previous literature. We have devised a simple nomogram that yields a statistically valid sample size for an anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram, using varying absolute precision, known prevalence of disease, and a 95% confidence level, with the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can easily be used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision and 95% confidence level. Sample sizes at the 90% and 99% confidence levels, respectively, can also be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75. A nomogram instantly provides the required number of subjects by just moving the ruler and can be used repeatedly without redoing the calculations. It can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard results are dichotomous.
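The formula the nomogram encodes can be evaluated directly. This is the standard Buderer-type calculation the abstract says is "already available in the literature"; the example inputs are hypothetical:

```python
import math
from scipy.stats import norm

def n_for_sensitivity(sens, precision, prevalence, conf=0.95):
    """Total subjects needed to estimate sensitivity within +/- precision.

    n_cases = z^2 * sens*(1-sens) / precision^2, inflated by 1/prevalence
    because only a fraction `prevalence` of subjects are diseased.
    """
    z = norm.ppf(1 - (1 - conf) / 2)
    n_cases = z**2 * sens * (1 - sens) / precision**2
    return math.ceil(n_cases / prevalence)

# Anticipated sensitivity 0.90, absolute precision 0.05, disease prevalence 0.10
print(n_for_sensitivity(0.90, 0.05, 0.10))
```

The abstract's shortcut multipliers follow from the same formula: n scales with z², and (1.645/1.96)² ≈ 0.70 while (2.576/1.96)² ≈ 1.73, hence the quoted factors of 0.70 and 1.75 for the 90% and 99% confidence levels.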
Directory of Open Access Journals (Sweden)
David Normando
2011-12-01
Full Text Available INTRODUCTION: Adequate sizing of the studied sample and appropriate analysis of the method error are important steps in validating the data obtained in a scientific study, in addition to the ethical and economic issues. OBJECTIVE: To evaluate, quantitatively, how often researchers in orthodontic science have employed sample size calculation and method error analysis in research published in Brazil and in the United States. METHODS: Two important journals, according to Capes (the Brazilian agency for the improvement of higher-education personnel), were analyzed: the Revista Dental Press de Ortodontia e Ortopedia Facial (Dental Press) and the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO). Only articles published between 2005 and 2008 were analyzed. RESULTS: Most of the research published in both journals employs some form of method error analysis when this methodology can be applied. However, only a very small number of the articles published in these journals present any description of how the studied samples were sized. This proportion, already small (21.1%) in the journal edited in the United States (AJO-DO), is significantly smaller (p = 0.008) in the journal edited in Brazil (Dental Press) (3.9%). CONCLUSION: Researchers and the editorial boards of both journals should devote greater attention to the errors inherent in the absence of such analyses in scientific research, particularly those inherent in inadequate sample sizing.
Study for particulate sampling, sizing and analysis for composition
Energy Technology Data Exchange (ETDEWEB)
King, A.M.; Jones, A.M. [IMC Technical Services Ltd., Burton-on-Trent (United Kingdom); Dorling, S.R. [University of East Anglia (United Kingdom); Merefield, J.R.; Stone, I.M. [Exeter Univ. (United Kingdom); Hall, K.; Garner, G.V.; Hall, P.A. [Hall Analytical Labs., Ltd. (United Kingdom); Stokes, B. [CRE Group Ltd. (United Kingdom)
1999-07-01
This report summarises the findings of a study investigating the origin of particulate matter by analysis of the size distribution and composition of particulates in rural, semi-rural and urban areas of the UK. Details are given of the sampling locations; the sampling; monitoring, and inorganic and organic analyses; the review of archive material. The analysis carried out at St Margaret's/Stoke Ferry, comparisons of data with other locations, and the composition of ambient airborne matter are discussed, and recommendations are given. Results of PM2.5/PM10 samples collected at St Margaret's and Stoke Ferry in 1998, and back trajectories for five sites are considered in appendices.
Sample size for monitoring sirex populations and their natural enemies
Directory of Open Access Journals (Sweden)
Susete do Rocio Chiarello Penteado
2016-09-01
Full Text Available The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest in pine plantations. It has spread to about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neotylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, and found it to be fully adequate.
Allocating Sample Sizes to Reduce Budget for Fixed-Effect 2×2 Heterogeneous Analysis of Variance
Luh, Wei-Ming; Guo, Jiin-Huarng
2016-01-01
This article discusses the sample size requirements for the interaction, row, and column effects, respectively, by forming a linear contrast for a 2×2 factorial design for fixed-effects heterogeneous analysis of variance. The proposed method uses the Welch t test and its corresponding degrees of freedom to calculate the final sample size in a…
Load calculations of radiant cooling systems for sizing the plant
DEFF Research Database (Denmark)
Bourdakis, Eleftherios; Kazanci, Ongun Berk; Olesen, Bjarne W.
2015-01-01
The aim of this study was, by using building simulation software, to show that a radiant cooling system should not be sized based on the maximum cooling load but on a lower value. For that reason, six radiant cooling models were simulated with two control principles using 100%, 70% and 50% of the maximum cooling load. It was concluded that all tested systems were able to provide an acceptable thermal environment even when 50% of the maximum cooling load was used. Of all the simulated systems, the one that performed best under both control principles was the ESCS ceiling system. Finally, it was shown that ventilation systems should be sized based on the maximum cooling load.
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
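The method-of-moments (Matheron) estimator evaluated in this study has a simple form: for each lag bin, halve the average squared difference between all pairs of observations separated by roughly that distance. A minimal stdlib-only sketch (binning scheme and function name are illustrative):

```python
import math
from collections import defaultdict


def empirical_variogram(coords, values, lag_width):
    """Matheron semivariogram estimator:
    gamma(h) = (1 / (2 |N(h)|)) * sum over pairs at distance ~h of (z_i - z_j)^2.
    Pairs are grouped into bins [b*lag_width, (b+1)*lag_width); keys are
    bin midpoints."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            d = math.dist(coords[i], coords[j])
            b = int(d / lag_width)
            sums[b] += (values[i] - values[j]) ** 2
            counts[b] += 1
    return {(b + 0.5) * lag_width: sums[b] / (2 * counts[b]) for b in sorted(counts)}
```

On a short transect with values 1, 2, 3, 4 at unit spacing, the lag-1 semivariance is 0.5 and the lag-2 semivariance is 2.0. Note that this non-robust estimator is exactly the one the study found to need the largest sample sizes under outlier-prone throughfall data.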
Detecting Neuroimaging Biomarkers for Psychiatric Disorders: Sample Size Matters
Directory of Open Access Journals (Sweden)
Hugo eSchnack
2016-03-01
Full Text Available Recently it was suggested that much larger cohorts are needed to prove the diagnostic value of neuroimaging biomarkers in psychiatry. While an increase of diagnostic accuracy of schizophrenia with the number of subjects (N) has been shown within samples, the relationship between N and accuracy is completely different between studies. Using data from a meta-analysis of machine learning in imaging schizophrenia, we found that while low-N studies can reach 90% and higher accuracy, above N/2=50 the maximum accuracy achieved steadily drops to below 70% for N/2>150. We investigate the role N plays in the wide variability in accuracy results (63-97%). We hypothesize that the underlying cause of the decrease in accuracy with increasing N is sample heterogeneity. While smaller studies can more easily include a homogeneous group of subjects (strict inclusion criteria are easily met; subjects live close to the study site), larger studies inevitably need to relax the criteria and recruit from large geographic areas. A schizophrenia prediction model based on a heterogeneous group of patients, with presumably a heterogeneous pattern of structural or functional brain changes, will not be able to capture the whole variety of changes, and is thus limited to patterns shared by most patients. In addition to heterogeneity, we investigate other factors influencing accuracy and introduce a machine learning effect size. We derive a simple model of how the different factors, such as sample heterogeneity, determine this effect size, and explain the variation in prediction accuracies found in the literature, both in cross-validation and independent sample testing. From this we argue that smaller-N studies may reach high prediction accuracy at the cost of lower generalizability to other samples. Higher-N studies, on the other hand, will have more generalization power, but at the cost of lower accuracy. In conclusion, when comparing results from different machine learning studies, the sample
Institute of Scientific and Technical Information of China (English)
HE Gui-chun; NI Wen
2006-01-01
Based on various ultrasonic loss mechanisms, a formula for the cumulative mass percentage of minerals of different particle sizes was derived, with which the particle size distribution was integrated into an ultrasonic attenuation model. The correlations between ultrasonic attenuation, pulp density, and particle size were then obtained. The derived model was combined with experiments and analysis of the experimental data to determine the inverse model relating the ultrasonic attenuation coefficient to the size distribution. Finally, an optimization method for the inverse problem, a genetic algorithm, was applied to recover the particle size distribution. The results of the inverse calculation show that the measurement precision was high.
GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS
Energy Technology Data Exchange (ETDEWEB)
Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.
2013-11-12
This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1g cm-3). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than based on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.
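The deposition-loss hand calculations in such reports start from the particle's terminal settling velocity in the Stokes regime. A minimal sketch for the unit-density-sphere convention described above (this is a generic textbook formula, not the report's R script, and it neglects the Cunningham slip correction, which matters below about 1 µm):

```python
def stokes_settling_velocity(d_ae, rho_p=1000.0, mu=1.81e-5, g=9.81):
    """Terminal settling velocity (m/s) of a sphere in the Stokes regime.

    d_ae  : aerodynamic diameter in m
    rho_p : particle density in kg/m^3 (1000 for the unit-density convention)
    mu    : dynamic viscosity of air in Pa*s (default: air at ~20 C)
    """
    return rho_p * d_ae ** 2 * g / (18.0 * mu)
```

A 10 µm unit-density sphere settles at roughly 3 mm/s, the familiar reference value used when estimating gravitational losses in horizontal sampling lines.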
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
Sample size for estimating average trunk diameter and plant height in eucalyptus hybrids
Directory of Open Access Journals (Sweden)
Alberto Cargnelutti Filho
2016-01-01
Full Text Available ABSTRACT: In eucalyptus crops, it is important to determine the number of plants that need to be evaluated for a reliable inference of growth. The aim of this study was to determine the sample size needed to estimate the average trunk diameter at breast height and plant height of inter-specific eucalyptus hybrids. In 6,694 plants of twelve inter-specific hybrids, trunk diameter at breast height at three (DBH3) and seven years (DBH7) and tree height at seven years (H7) of age were evaluated. The statistics minimum, maximum, mean, variance, standard deviation, standard error, and coefficient of variation were calculated, and the hypothesis of variance homogeneity was tested. The sample size was determined by resampling with replacement, using 10,000 resamples. The required sample size increased from DBH3 to H7 and DBH7. A sample size of 16, 59 and 31 plants is adequate to estimate the means of DBH3, DBH7 and H7, respectively, for inter-specific eucalyptus hybrids, with a 95% confidence interval width equal to 20% of the estimated mean.
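The resampling criterion above (smallest n whose bootstrap 95% interval for the mean is no wider than 20% of the estimated mean) can be sketched with the stdlib only; the function name, the default of 500 resamples, and the percentile-interval choice are illustrative assumptions, not the study's exact implementation:

```python
import random
import statistics


def sample_size_for_interval(data, target_frac=0.20, n_boot=500, seed=1):
    """Return the smallest n whose bootstrap 95% percentile interval for the
    mean has a width no larger than target_frac of the full-data mean.
    Falls back to len(data) if no candidate n meets the criterion."""
    rng = random.Random(seed)
    mean = statistics.fmean(data)
    for n in range(2, len(data) + 1):
        means = sorted(
            statistics.fmean(rng.choices(data, k=n)) for _ in range(n_boot)
        )
        lo = means[int(0.025 * n_boot)]
        hi = means[int(0.975 * n_boot) - 1]
        if hi - lo <= target_frac * mean:
            return n
    return len(data)
```

Low-variability pilot data yields a small required n, while highly variable data pushes the answer toward (or beyond) the pilot size, mirroring the DBH3-versus-DBH7 contrast reported above.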
Calculated Grain Size-Dependent Vacancy Supersaturation and its Effect on Void Formation
DEFF Research Database (Denmark)
Singh, Bachu Narain; Foreman, A. J. E.
1974-01-01
In order to study the effect of grain size on void formation during high-energy electron irradiations, the steady-state point defect concentration and vacancy supersaturation profiles have been calculated for three-dimensional spherical grains up to three microns in size. In the calculations of vacancy supersaturation as a function of grain size, the effects of internal sink density and the dislocation preference for interstitial attraction have been included. The computations show that the level of vacancy supersaturation achieved in a grain decreases with decreasing grain size. The grain size…
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David
2013-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4-8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of management thresholds and type-I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of 8 fish could detect an increase of ∼ 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased sample size requirements to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of ∼ 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2 this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of ∼ 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated by increased precision of composites for estimating mean
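The parametric-bootstrap power calculation described above can be sketched compactly. This is a simplified stand-in, not the study's code: it uses a one-sided z-test on the sample mean and takes the gamma shape parameter as a direct input rather than fitting the study's empirical mean-to-variance relationship:

```python
import random
import statistics
from statistics import NormalDist


def detection_power(n, true_mean, threshold, shape, alpha=0.05,
                    n_sim=2000, seed=7):
    """Probability that a one-sided z-test on the mean of n fish declares
    the site mean above `threshold`, when individual Se concentrations
    follow a gamma distribution with the given shape (scale = mean/shape)."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha)
    scale = true_mean / shape
    hits = 0
    for _ in range(n_sim):
        sample = [rng.gammavariate(shape, scale) for _ in range(n)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if (m - threshold) / se > z_crit:
            hits += 1
    return hits / n_sim
```

As in the study, power rises with the gap between the true mean and the threshold, and falls as population heterogeneity (lower shape, higher variance) increases.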
A simulation study of sample size for multilevel logistic regression models
Directory of Open Access Journals (Sweden)
Moineddin Rahim
2007-07-01
Full Text Available Abstract Background Many studies conducted in health and social sciences collect individual level data as outcome measures. Usually, such data have a hierarchical structure, with patients clustered within physicians, and physicians clustered within practices. Large survey data, including national surveys, have a hierarchical or clustered structure; respondents are naturally clustered in geographical units (e.g., health regions) and may be grouped into smaller units. Outcomes of interest in many fields not only reflect continuous measures, but also binary outcomes such as depression, presence or absence of a disease, and self-reported general health. In the framework of multilevel studies an important problem is calculating an adequate sample size that generates unbiased and accurate estimates. Methods In this paper simulation studies are used to assess the effect of varying sample size at both the individual and group level on the accuracy of the estimates of the parameters and variance components of multilevel logistic regression models. In addition, the influence of the prevalence of the outcome and the intra-class correlation coefficient (ICC) is examined. Results The results show that the estimates of the fixed effect parameters are unbiased for 100 groups with group size of 50 or higher. The estimates of the variance covariance components are slightly biased even with 100 groups and group size of 50. The biases for both fixed and random effects are severe for group size of 5. The standard errors for fixed effect parameters are unbiased, while those for variance covariance components are underestimated. Results suggest that low-prevalence events require larger sample sizes with at least a minimum of 100 groups and 50 individuals per group. Conclusion We recommend using a minimum group size of 50 with at least 50 groups to produce valid estimates for multi-level logistic regression models. Group size should be adjusted under conditions where the prevalence
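The data-generating step of such a simulation study can be sketched as a random-intercept logistic model, with the intercept variance recovered from the ICC on the latent scale (icc = s² / (s² + π²/3)). A minimal sketch of the design only; the paper's parameter grid and model fitting are not reproduced here:

```python
import math
import random


def simulate_multilevel_binary(n_groups, group_size, prevalence_logit,
                               icc, seed=11):
    """Generate (group_id, outcome) pairs from a random-intercept logistic
    model. Each group gets an intercept shift u_j ~ N(0, s2), where s2 is
    chosen so the latent-scale ICC equals `icc`."""
    rng = random.Random(seed)
    s2 = icc * (math.pi ** 2 / 3) / (1 - icc)
    data = []
    for g in range(n_groups):
        u = rng.gauss(0.0, math.sqrt(s2))
        for _ in range(group_size):
            p = 1 / (1 + math.exp(-(prevalence_logit + u)))
            data.append((g, 1 if rng.random() < p else 0))
    return data
```

With 100 groups of 50 (the configuration the paper recommends as a minimum for fixed effects) and a zero logit, the simulated prevalence lands near 0.5.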
40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations
2010-07-01
... 40 Protection of Environment 29 2010-07-01 Sample Fuel Economy Calculations II... FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. II Appendix II to Part 600—Sample Fuel Economy Calculations (a) This sample fuel economy calculation is applicable...
Variational Approach to Enhanced Sampling and Free Energy Calculations
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
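In the notation of the published work (β the inverse temperature, F(s) the free energy as a function of the collective variables s, and p(s) a chosen target distribution), the variational functional reads, up to an additive constant:

```latex
\Omega[V] \;=\; \frac{1}{\beta}\,
\log\frac{\int \mathrm{d}s\; e^{-\beta\left[F(s)+V(s)\right]}}
         {\int \mathrm{d}s\; e^{-\beta F(s)}}
\;+\; \int \mathrm{d}s\; p(s)\,V(s),
\qquad
V_{\min}(s) \;=\; -F(s) \;-\; \frac{1}{\beta}\,\log p(s).
```

The functional is convex, and the second relation makes the "simple way" mentioned above explicit: once the minimizing bias is found, the free energy surface follows directly from it and the chosen target distribution.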
A Variational Approach to Enhanced Sampling and Free Energy Calculations
Parrinello, Michele
2015-03-01
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
Space resection model calculation based on Random Sample Consensus algorithm
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has long been one of the most important topics in photogrammetry. It aims to recover the position and attitude of the camera at the shooting point. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with a DLT model to avoid the difficulty of determining initial values when using the collinearity equations. The results also show that our strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
Hauschke, D; Steinijans, W V; Diletti, E; Schall, R; Luus, H G; Elze, M; Blume, H
1994-07-01
Bioequivalence studies are generally performed as crossover studies and, therefore, information on the intrasubject coefficient of variation is needed for sample size planning. Unfortunately, this information is usually not presented in publications on bioequivalence studies, and only the pooled inter- and intrasubject coefficient of variation for either test or reference formulation is reported. Thus, the essential information for sample size planning of future studies is not made available to other researchers. In order to overcome such shortcomings, the presentation of results from bioequivalence studies should routinely include the intrasubject coefficient of variation. For the relevant coefficients of variation, theoretical background together with modes of calculation and presentation are given in this communication with particular emphasis on the multiplicative model.
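Under the multiplicative (log-normal) model emphasized here, the intrasubject coefficient of variation follows directly from the residual mean-square error of the ANOVA on log-transformed data. A minimal sketch of that standard conversion:

```python
import math


def intra_subject_cv(mse_log):
    """Intrasubject CV under the multiplicative model:
    CV = sqrt(exp(MSE) - 1), where MSE is the residual mean-square error
    of the crossover ANOVA on log-transformed concentrations."""
    return math.sqrt(math.exp(mse_log) - 1.0)
```

For example, a residual MSE of 0.04 on the log scale corresponds to an intrasubject CV of about 20.2%; for small MSE the CV is approximately sqrt(MSE), which is why the two are often used interchangeably in sample-size tables.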
Institute of Scientific and Technical Information of China (English)
Hong Tang; Xiaogang Sun; Guibin Yuan
2007-01-01
In the total light scattering particle sizing technique, the relationship among the Sauter mean diameter D32, the mean extinction efficiency Q, and the particle size distribution function is studied in order to invert the mean diameter and particle size distribution simply. We propose a method that utilizes the mean extinction efficiency ratio at only two selected wavelengths to solve for D32 and then to invert the particle size distribution associated with Q and D32. Numerical simulation results show that the particle size distribution is inverted accurately with this method, and the number of wavelengths used is reduced to the greatest extent within the measurement range. The calculation method has the advantages of simplicity and rapidness.
Shieh, Gwowen
2013-01-01
The a priori determination of a proper sample size necessary to achieve some specified power is an important problem encountered frequently in practical studies. To establish the needed sample size for a two-sample "t" test, researchers may conduct the power analysis by specifying scientifically important values as the underlying population means…
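The classic closed-form answer to this planning problem, under the usual normal approximation, is n per group = 2(z_{1-α/2} + z_{power})²(σ/δ)². A minimal sketch (the exact t-based answer is slightly larger for small n, which is the gap the article's method addresses):

```python
import math
from statistics import NormalDist


def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate per-group sample size for a two-sided two-sample t test
    detecting a mean difference `delta` with common SD `sigma`."""
    nd = NormalDist()
    z_a = nd.inv_cdf(1 - alpha / 2)
    z_b = nd.inv_cdf(power)
    return math.ceil(2 * (z_a + z_b) ** 2 * (sigma / delta) ** 2)
```

For a medium standardized effect (δ/σ = 0.5) at α = 0.05 and 80% power, this gives 63 per group; the exact t-test calculation gives about 64.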
IMAGE PROFILE AREA CALCULATION BASED ON CIRCULAR SAMPLE MEASUREMENT CALIBRATION
Institute of Scientific and Technical Information of China (English)
(no author listed)
2005-01-01
A practical approach to measurement calibration is presented for obtaining the true area of photographed objects projected in the 2-D image scene. The calibration is performed using three circular samples with given diameters. The process first obtains the mm/pixel ratio in two orthogonal directions, and then uses the obtained ratios, together with the total number of pixels scanned within the projected area of the object of interest, to compute the desired area. Comparing the optically measured areas with their corresponding true areas shows that the proposed method is quite encouraging, and the relevant application also proves the approach to be adequately accurate.
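The core arithmetic of the approach is small enough to sketch directly (function and parameter names are illustrative): calibrate mm-per-pixel in each orthogonal direction from a circular reference of known diameter, then scale the pixel count of the segmented object.

```python
def area_from_pixels(n_pixels, known_diameter_mm, diam_px_x, diam_px_y):
    """Physical area (mm^2) of a segmented object, given the pixel count
    within its projected area and the pixel extents of a circular
    calibration sample of known diameter along the two image axes."""
    rx = known_diameter_mm / diam_px_x  # mm per pixel, horizontal
    ry = known_diameter_mm / diam_px_y  # mm per pixel, vertical
    return n_pixels * rx * ry
```

As a sanity check, a 10 mm diameter circle imaged as 100 pixels across in both directions covers about 7854 pixels, which this function maps back to about 78.54 mm², the true circle area.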
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Energy Technology Data Exchange (ETDEWEB)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used. However, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom, outside of the commodity component constraints, provides room for the manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify the server efficiency for three different brands. The efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the others. The test results show that the power consumption variability caused by the key components as a
Institute of Scientific and Technical Information of China (English)
R. A. KUHNLE; D. G. WREN; J. P. CHAMBERS
2007-01-01
Collection of samples of suspended sediment transported by streams and rivers is difficult and expensive. Emerging technologies, such as acoustic backscatter, have promise to decrease costs and allow more thorough sampling of transported sediment in streams and rivers. Acoustic backscatter information may be used to calculate the concentration of suspended sand-sized sediment given the vertical distribution of sediment size. Therefore, procedures to accurately compute suspended sediment size distributions from easily obtained river data are badly needed. In this study, techniques to predict the size of suspended sand are examined and their application to measuring concentrations using acoustic backscatter data are explored. Three methods to predict the size of sediment in suspension using bed sediment, flow criteria, and a modified form of the Rouse equation yielded mean suspended sediment sizes that differed from means of measured data by 7 to 50 percent. When one sample near the bed was used as a reference, mean error was reduced to about 5 percent. These errors in size determination translate into errors of 7 to 156 percent in the prediction of sediment concentration using backscatter data from 1 MHz single frequency acoustics.
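The Rouse equation referenced above gives the vertical concentration profile relative to a reference concentration at a near-bed height a, which is exactly why anchoring the prediction to one near-bed sample reduced the error so sharply. A sketch of the standard (unmodified) form:

```python
def rouse_concentration_ratio(z, h, a, w_s, u_star, kappa=0.41):
    """Rouse profile: C(z)/C(a) = [ ((h - z)/z) * (a/(h - a)) ]**Z_R,
    with Rouse number Z_R = w_s / (kappa * u_star).

    z, h, a : height of interest, flow depth, reference height (same units)
    w_s     : particle settling velocity
    u_star  : shear velocity
    """
    z_r = w_s / (kappa * u_star)
    return (((h - z) / z) * (a / (h - a))) ** z_r
```

The ratio is 1 at the reference height by construction and decreases monotonically upward for settling particles, with coarser (faster-settling) sizes concentrated nearer the bed.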
Communication: Finite size correction in periodic coupled cluster theory calculations of solids
Liao, Ke; Grüneis, Andreas
2016-10-01
We present a method to correct for finite size errors in coupled cluster theory calculations of solids. The outlined technique shares similarities with electronic structure factor interpolation methods used in quantum Monte Carlo calculations. However, our approach does not require the calculation of density matrices. Furthermore we show that the proposed finite size corrections achieve chemical accuracy in the convergence of second-order Møller-Plesset perturbation and coupled cluster singles and doubles correlation energies per atom for insulating solids with two-atom unit cells using 2 × 2 × 2 and 3 × 3 × 3 k-point meshes only.
Determining Sample Size for Accurate Estimation of the Squared Multiple Correlation Coefficient.
Algina, James; Olejnik, Stephen
2000-01-01
Discusses determining sample size for estimation of the squared multiple correlation coefficient and presents regression equations that permit determination of the sample size for estimating this parameter for up to 20 predictor variables. (SLD)
10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Sample Petroleum-Equivalent Fuel Economy Calculations..., DEVELOPMENT, AND DEMONSTRATION PROGRAM; PETROLEUM-EQUIVALENT FUEL ECONOMY CALCULATION Pt. 474, App. Appendix to Part 474—Sample Petroleum-Equivalent Fuel Economy Calculations Example 1: An electric vehicle...
A contemporary decennial global sample of changing agricultural field sizes
White, E.; Roy, D. P.
2011-12-01
In the last several hundred years agriculture has caused significant human-induced Land Cover Land Use Change (LCLUC), with dramatic cropland expansion and a marked increase in agricultural productivity. The size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLUC. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, diffusion of disease pathogens and pests, and loss or degradation in buffers to nutrient, herbicide and pesticide flows. In this study, globally distributed locations with significant contemporary field size change were selected, guided by a global map of agricultural yield and a literature review, to be representative of different driving forces of field size change (associated with technological innovation, socio-economic conditions, government policy, historic patterns of land cover land use, and environmental setting). Seasonal Landsat data acquired on a decadal basis (for 1980, 1990, 2000 and 2010) were used to extract field boundaries, and the temporal changes in field size were quantified and their causes discussed.
Limitations of mRNA amplification from small-size cell samples
Directory of Open Access Journals (Sweden)
Myklebost Ola
2005-10-01
Full Text Available Abstract Background Global mRNA amplification has become a widely used approach to obtain gene expression profiles from limited material. An important concern is the reliable reflection of the starting material in the results obtained. This is especially important with extremely low quantities of input RNA where stochastic effects due to template dilution may be present. This aspect remains under-documented in the literature, as quantitative measures of data reliability are most often lacking. To address this issue, we examined the sensitivity levels of each transcript in 3 different cell sample sizes. ANOVA analysis was used to estimate the overall effects of reduced input RNA in our experimental design. In order to estimate the validity of decreasing sample sizes, we examined the sensitivity levels of each transcript by applying a novel model-based method, TransCount. Results From expression data, TransCount provided estimates of absolute transcript concentrations in each examined sample. The results from TransCount were used to calculate the Pearson correlation coefficient between transcript concentrations for different sample sizes. The correlations were clearly transcript copy number dependent. A critical level was observed where stochastic fluctuations became significant. The analysis allowed us to pinpoint the gene specific number of transcript templates that defined the limit of reliability with respect to number of cells from that particular source. In the sample amplifying from 1000 cells, transcripts expressed with at least 121 transcripts/cell were statistically reliable and for 250 cells, the limit was 1806 transcripts/cell. Above these thresholds, correlation between our data sets was at acceptable values for reliable interpretation. Conclusion These results imply that the reliability of any amplification experiment must be validated empirically to justify that any gene exists in sufficient quantity in the input material. This
Efficiency of whole-body counter for various body size calculated by MCNP5 software
International Nuclear Information System (INIS)
The efficiency of a whole-body counter for 137Cs and 40K was calculated using the MCNP5 code. The ORNL phantoms of a human body of different body sizes were applied in a sitting position in front of a detector. The aim was to investigate the dependence of efficiency on the body size (age) and the detector position with respect to the body, and to estimate the accuracy of real measurements. The calculation work presented here is related to the NaI detector, which is available in the Serbian Whole-body Counter facility in the Vinca Institute. (authors)
Detecting neuroimaging biomarkers for psychiatric disorders : Sample size matters
Schnack, Hugo G.; Kahn, René S.
2016-01-01
In a recent review, it was suggested that much larger cohorts are needed to prove the diagnostic value of neuroimaging biomarkers in psychiatry. While within a sample, an increase of diagnostic accuracy of schizophrenia (SZ) with number of subjects (N) has been shown, the relationship between N and
Sample size reduction in groundwater surveys via sparse data assimilation
Hussain, Z.
2013-04-01
In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements needed to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.
Theory of Finite Size Effects for Electronic Quantum Monte Carlo Calculations of Liquids and Solids
Holzmann, Markus; Morales, Miguel A; Tubmann, Norm M; Ceperley, David M; Pierleoni, Carlo
2016-01-01
Concentrating on zero temperature Quantum Monte Carlo calculations of electronic systems, we give a general description of the theory of finite size extrapolations of energies to the thermodynamic limit based on one and two-body correlation functions. We introduce new effective procedures, such as using the potential and wavefunction split-up into long and short range functions to simplify the method and we discuss how to treat backflow wavefunctions. Then we explicitly test the accuracy of our method to correct finite size errors on example hydrogen and helium many-body systems and show that the finite size bias can be drastically reduced for even small systems.
Systematic study of finite-size effects in quantum Monte Carlo calculations of real metallic systems
Energy Technology Data Exchange (ETDEWEB)
Azadi, Sam, E-mail: s.azadi@imperial.ac.uk; Foulkes, W. M. C. [Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)
2015-09-14
We present a systematic and comprehensive study of finite-size effects in diffusion quantum Monte Carlo calculations of metals. Several previously introduced schemes for correcting finite-size errors are compared for accuracy and efficiency, and practical improvements are introduced. In particular, we test a simple but efficient method of finite-size correction based on an accurate combination of twist averaging and density functional theory. Our diffusion quantum Monte Carlo results for lithium and aluminum, as examples of metallic systems, demonstrate excellent agreement between all of the approaches considered.
Enhanced Z-LDA for Small Sample Size Training in Brain-Computer Interface Systems
Directory of Open Access Journals (Sweden)
Dongrui Gao
2015-01-01
Full Text Available Background. The training set of an online brain-computer interface (BCI) experiment is usually small. A small training set lacks sufficient information to train the classifier thoroughly, resulting in poor classification performance during online testing. Methods. In this paper, building on Z-LDA, we further calculate the classification probability of Z-LDA and then use it to select reliable samples from the testing set to enlarge the training set, aiming to mine additional information from the testing set to adjust the biased classification boundary obtained from the small training set. The proposed approach is an extension of the previous Z-LDA and is named enhanced Z-LDA (EZ-LDA). Results. We evaluated the classification performance of LDA, Z-LDA, and EZ-LDA on simulated and real BCI datasets with different training sample sizes, and the classification results showed that EZ-LDA achieved the best classification performance. Conclusions. EZ-LDA is promising for dealing with the small sample size training problem that commonly exists in online BCI systems.
40 CFR Appendix III to Part 600 - Sample Fuel Economy Label Calculation
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample Fuel Economy Label Calculation...) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Pt. 600, App. III Appendix III to Part 600—Sample Fuel Economy Label Calculation Suppose that a manufacturer called...
Williams Test Required Sample Size For Determining The Minimum Effective Dose
Directory of Open Access Journals (Sweden)
Mustafa Agah TEKINDAL
2016-04-01
of groups has quite a big influence on sample size. Researchers may calculate the test’s power based on the recommended sample sizes prior to experimental designs.
Geoscience Education Research Methods: Thinking About Sample Size
Slater, S. J.; Slater, T. F.; CenterAstronomy; Physics Education Research
2011-12-01
Geoscience education research is at a critical point in which conditions are sufficient to propel our field forward toward meaningful improvements in geosciences education practices. Our field has now reached a point where the outcomes of our research are deemed important to end-users and funding agencies, and where we now have a large number of scientists who are either formally trained in geosciences education research, or who have dedicated themselves to excellence in this domain. At this point we must collectively work through our epistemology, our rules of what methodologies will be considered sufficiently rigorous, and what data and analysis techniques will be acceptable for constructing evidence. In particular, we have to work out our answer to that most difficult of research questions: "How big should my 'N' be?" This paper presents a very brief answer to that question, addressing both quantitative and qualitative methodologies. Research question/methodology alignment, effect size and statistical power will be discussed, in addition to a defense of the notion that bigger is not always better.
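As a concrete instance of the effect-size/power alignment mentioned above, the textbook normal-approximation sample size for comparing two group means can be sketched (illustrative defaults: 5% two-sided significance and 80% power, with their z-values hard-coded):

```python
import math

def n_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * ((z_alpha + z_beta) / d)^2,
    where d is the standardized effect size (Cohen's d)."""
    return math.ceil(2.0 * ((z_alpha + z_beta) / effect_size) ** 2)

# A "medium" effect (d = 0.5) needs ~63 subjects per group,
# a "large" effect (d = 0.8) only ~25:
print(n_per_group(0.5), n_per_group(0.8))  # -> 63 25
```

The quadratic dependence on 1/d is the quantitative core of "how big should my N be": halving the expected effect size quadruples the required sample.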
Progressive prediction method for failure data with small sample size
Institute of Scientific and Technical Information of China (English)
WANG Zhi-hua; FU Hui-min; LIU Cheng-rui
2011-01-01
The small sample prediction problem which commonly exists in reliability analysis was discussed with the progressive prediction method in this paper. The modeling and estimation procedure, as well as the forecast and confidence limits formula of the progressive auto-regressive (PAR) method, were discussed in detail. The PAR model not only inherits the simple linear features of the auto-regressive (AR) model, but is also applicable to nonlinear systems. An application is illustrated for predicting future fatigue failures of tantalum electrolytic capacitors. Forecasting results of the PAR model were compared with an auto-regressive moving average (ARMA) model, and the PAR method performed well, showing promise for future applications.
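The AR building block that PAR extends can be sketched as a least-squares AR(1) fit with a one-step-ahead forecast. This is a zero-mean simplification for illustration, not the progressive scheme of the paper:

```python
def ar1_fit_forecast(series):
    """Least-squares fit of x_t = phi * x_{t-1} (zero-mean series
    assumed) and a one-step-ahead forecast. A sketch of the AR core,
    not the progressive (PAR) method itself."""
    num = sum(series[t - 1] * series[t] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    phi = num / den
    return phi, phi * series[-1]

# An exactly geometric decay is recovered exactly:
phi, forecast = ar1_fit_forecast([16.0, 8.0, 4.0, 2.0, 1.0])
print(phi, forecast)  # -> 0.5 0.5
```

The progressive idea is then to refit as each new failure observation arrives, so the forecast and its confidence limits are updated sequentially rather than from a fixed small sample.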
Sample Size in Differential Item Functioning: An Application of Hierarchical Linear Modeling
Acar, Tulin
2011-01-01
The purpose of this study is to examine the number of DIF items detected by HGLM at different sample sizes. Eight data files of different sizes were composed. The population of the study is 798307 students who took the 2006 OKS Examination; 10727 of these students were chosen by random sampling as the study sample. Turkish,…
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use depending on the length of the local record. If fewer than 30 years of local data are available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments (L-moments), maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
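As an illustration of the kind of estimator being cross-validated, a method-of-moments Gumbel (EV1) fit with a return-period quantile can be sketched. The Gumbel choice and the discharge data below are assumptions for illustration; the guidelines discussed compare several distributions and estimation methods:

```python
import math

def gumbel_design_flood(annual_maxima, return_period):
    """Method-of-moments Gumbel (EV1) fit and the T-year design flood
    (illustrative; one of several distribution/method choices)."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((q - mean) ** 2 for q in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta               # location (Euler-Mascheroni const.)
    p = 1.0 - 1.0 / return_period           # non-exceedance probability
    return mu - beta * math.log(-math.log(p))

# Hypothetical annual maximum discharges (m^3/s):
peaks = [310.0, 255.0, 480.0, 290.0, 350.0, 420.0, 265.0, 330.0, 300.0, 385.0]
q10 = gumbel_design_flood(peaks, 10)
q100 = gumbel_design_flood(peaks, 100)
```

With only 10 years of record, as here, the Q100 estimate is a long extrapolation from the data, which is exactly the stability concern the study's cross-validation criteria address.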
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Malhotra Rajeev; Indrayan A
2010-01-01
Sensitivity and specificity measure inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce the cost, risk, invasiveness, and time. Adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide about the sample size arbitrarily either at their convenience, or from the previous literature. We have devised a simple nomogram that yields statistically valid sample size ...
Crack lengths calculation by the unloading compliance technique for Charpy size specimens
International Nuclear Information System (INIS)
The problems with crack length determination by the unloading compliance method are well known for Charpy size specimens. The final crack lengths calculated for bent specimens do not fulfil the ASTM 1820 accuracy requirements. Therefore some investigations have been performed to resolve this problem. In those studies it was considered that the measured compliance should be corrected for various factors, but satisfying results were not obtained. In the presented work the problem was attacked from the other side: the measured specimen compliance was taken as the correct value, and it was the calculation procedure that had to be adjusted. The investigation was carried out on the basis of experimentally obtained compliances of bent specimens and optically measured crack lengths. Finally, a calculation procedure enabling accurate crack length calculation up to 5 mm of plastic deflection was developed. Applying the new procedure, more than 80% of the 238 measured crack lengths investigated fulfilled the ASTM 1820 accuracy requirements, while the presently used procedure provided only about 30% valid results. The newly proposed procedure can also prospectively be used, in modified form, for specimens of a size different from Charpy size. (orig.)
Sample size considerations for one-to-one animal transmission studies of the influenza A viruses.
Directory of Open Access Journals (Sweden)
Hiroshi Nishiura
Full Text Available BACKGROUND: Animal transmission studies can provide important insights into host, viral and environmental factors affecting transmission of viruses including influenza A. The basic unit of analysis in typical animal transmission experiments is the presence or absence of transmission from an infectious animal to a susceptible animal. In studies comparing two groups (e.g. two host genetic variants, two virus strains, or two arrangements of animal cages, differences between groups are evaluated by comparing the proportion of pairs with successful transmission in each group. The present study aimed to discuss the significance and power to estimate transmissibility and identify differences in the transmissibility based on one-to-one trials. The analyses are illustrated on transmission studies of influenza A viruses in the ferret model. METHODOLOGY/PRINCIPAL FINDINGS: Employing the stochastic general epidemic model, the basic reproduction number, R₀, is derived from the final state of an epidemic and is related to the probability of successful transmission during each one-to-one trial. In studies to estimate transmissibility, we show that 3 pairs of infectious/susceptible animals cannot demonstrate a significantly higher transmissibility than R₀= 1, even if infection occurs in all three pairs. In comparisons between two groups, at least 4 pairs of infectious/susceptible animals are required in each group to ensure high power to identify significant differences in transmissibility between the groups. CONCLUSIONS: These results inform the appropriate sample sizes for animal transmission experiments, while relating the observed proportion of infected pairs to R₀, an interpretable epidemiological measure of transmissibility. In addition to the hypothesis testing results, the wide confidence intervals of R₀ with small sample sizes also imply that the objective demonstration of difference or similarity should rest on firmly calculated sample size.
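The arithmetic behind the "three pairs are not enough" claim can be checked with an exact binomial tail. Under R₀ = 1, the stochastic general epidemic with exponentially distributed infectious periods gives a per-pair transmission probability of R₀/(1+R₀) = 0.5 (a modeling assumption consistent with the abstract, simplified here to a single one-sided test):

```python
from math import comb

def binom_tail(k, n, p):
    """One-sided p-value: P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1.0 - p) ** (n - i)
               for i in range(k, n + 1))

# Under R0 = 1, each one-to-one pair transmits with probability
# p = R0 / (1 + R0) = 0.5.
print(binom_tail(3, 3, 0.5))  # 3/3 transmissions: p = 0.125, not significant
print(binom_tail(5, 5, 0.5))  # 5/5 transmissions: p = 0.03125 < 0.05
```

Even perfect transmission in three pairs cannot reject R₀ = 1 at the 5% level, which is the core of the paper's sample size message.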
XAFSmass: a program for calculating the optimal mass of XAFS samples
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows based program XAFSmass: 1) it is truly platform independent, as provided by Python language, 2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
Impact of Sample Size on the Performance of Multiple-Model Pharmacokinetic Simulations▿
Tam, Vincent H.; Kabbara, Samer; Yeh, Rosa F.; Leary, Robert H.
2006-01-01
Monte Carlo simulations are increasingly used to predict pharmacokinetic variability of antimicrobials in a population. We investigated the sample size necessary to provide robust pharmacokinetic predictions. To obtain reasonably robust predictions, a nonparametric model derived from a sample population size of ≥50 appears to be necessary as the input information.
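The instability that motivates the ≥50 recommendation can be illustrated with a toy Monte Carlo sketch. The one-compartment-like dose/CL model and its lognormal parameters below are invented for illustration, not the authors' nonparametric model:

```python
import random

def p95_concentration(n_subjects, dose=100.0, seed=0):
    """Toy population model: clearance CL ~ lognormal(mu=1.0, sigma=0.3),
    'concentration' = dose / CL; returns the simulated 95th percentile.
    All parameters are hypothetical."""
    rng = random.Random(seed)
    conc = sorted(dose / rng.lognormvariate(1.0, 0.3)
                  for _ in range(n_subjects))
    return conc[min(n_subjects - 1, int(0.95 * n_subjects))]

# Re-running with different seeds shows that the percentile estimate
# from 20 subjects scatters far more than the estimate from 500:
small = [p95_concentration(20, seed=s) for s in range(30)]
large = [p95_concentration(500, seed=s) for s in range(30)]
spread_small = max(small) - min(small)
spread_large = max(large) - min(large)
```

The tail percentiles that drive dosing decisions are exactly the quantities that converge slowest, which is why a pilot model built from too few subjects gives unstable simulation output.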
Sample Size for Confidence Interval of Covariate-Adjusted Mean Difference
Liu, Xiaofeng Steven
2010-01-01
This article provides a way to determine adequate sample size for the confidence interval of covariate-adjusted mean difference in randomized experiments. The standard error of adjusted mean difference depends on covariate variance and balance, which are two unknown quantities at the stage of planning sample size. If covariate observations are…
Analysis of AC loss in superconducting power devices calculated from short sample data
Rabbers, J.J.; Haken, ten, Bennie; Kate, ten, F.J.W.
2003-01-01
A method to calculate the AC loss of superconducting power devices from the measured AC loss of a short sample is developed. In coils and cables the magnetic field varies spatially. The position dependent field vector is calculated assuming a homogeneous current distribution. From this field profile and the transport current, the local AC loss is calculated. Integration over the conductor length yields the AC loss of the device. The total AC loss of the device is split up into different components ...
Thermomagnetic behavior of magnetic susceptibility – heating rate and sample size effects
Directory of Open Access Journals (Sweden)
Diana eJordanova
2016-01-01
Full Text Available Thermomagnetic analysis of magnetic susceptibility k(T) was carried out for a number of natural powder materials from soils, baked clay and anthropogenic dust samples using fast (11 °C/min) and slow (6.5 °C/min) heating rates available in the furnace of the Kappabridge KLY2 (Agico). Based on additional data for the mineralogy, grain size and magnetic properties of the studied samples, the behaviour of the k(T) cycles and the observed differences between the curves for fast and slow heating rates are interpreted in terms of mineralogical transformations and Curie temperatures (Tc). The effect of different sample size is also explored, using large and small volumes of powder material. It is found that soil samples show enhanced information on mineralogical transformations and the appearance of new strongly magnetic phases when using a fast heating rate and a large sample size. This approach moves the transformation to higher temperature, but enhances the amplitude of the signal of the newly created phase. A large sample size gives prevalence to the local micro-environment created by evolving gases released during transformations. The example from an archeological brick reveals the effect of different sample sizes on the observed Curie temperatures on heating and cooling curves, when the magnetic carrier is substituted magnetite (Mn0.2Fe2.7O4). A large sample size leads to bigger differences in Tcs on heating and cooling, while a small sample size results in similar Tcs for both heating rates.
40 CFR 600.211-08 - Sample calculation of fuel economy values for labeling.
2010-07-01
... 40 Protection of Environment 29 2010-07-01 2010-07-01 false Sample calculation of fuel economy... AGENCY (CONTINUED) ENERGY POLICY FUEL ECONOMY AND CARBON-RELATED EXHAUST EMISSIONS OF MOTOR VEHICLES Fuel Economy Regulations for 1977 and Later Model Year Automobiles-Procedures for Calculating Fuel...
Dose variations with varying calculation grid size in head and neck IMRT
Energy Technology Data Exchange (ETDEWEB)
Chung, Heeteak [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Fl 32611-8300 (United States); Jin, Hosang [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, Fl 32611-8300 (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, Fl 32610-0385 (United States); Suh, Tae-Suk [Department of Biomedical Engineering, Catholic University of Korea (Korea, Republic of); Kim, Siyong [Department of Radiation Oncology, University of Florida, Gainesville, Fl 32610-0385 (United States)
2006-10-07
Ever since the advent and development of treatment planning systems, the uncertainty associated with calculation grid size has been an issue. Even to this day, with highly sophisticated 3D conformal and intensity-modulated radiation therapy (IMRT) treatment planning systems (TPS), dose uncertainty due to grid size is still a concern. A phantom simulating head and neck treatment was prepared from two semi-cylindrical solid water slabs and a radiochromic film was inserted between the two slabs for measurement. Plans were generated for a 5400 cGy prescribed dose using the Philips Pinnacle³ TPS for two targets, one shallow (~0.5 cm depth) and one deep (~6 cm depth). Calculation grid sizes of 1.5, 2, 3 and 4 mm were considered. Three clinical cases were also evaluated. The dose differences for the varying grid sizes (2 mm, 3 mm and 4 mm from 1.5 mm) in the phantom study were 126 cGy (2.3% of the 5400 cGy dose prescription), 248.2 cGy (4.6% of the 5400 cGy dose prescription) and 301.8 cGy (5.6% of the 5400 cGy dose prescription), respectively, for the shallow target case. It was found that the dose could vary by about 100 cGy (1.9% of the 5400 cGy dose prescription), 148.9 cGy (2.8% of the 5400 cGy dose prescription) and 202.9 cGy (3.8% of the 5400 cGy dose prescription) for 2 mm, 3 mm and 4 mm grid sizes, respectively, simply by shifting the calculation grid origin. The dose difference over different ranges of the relative dose gradient was evaluated, and we found that the relative dose difference increased with an increase in the range of the relative dose gradient. When comparing varying calculation grid sizes and measurements, the variation of the dose difference histogram was insignificant, but a local effect was observed in the dose difference map. Similar results were observed in the case of the deep target, and the three clinical cases also showed results comparable to those from the phantom study.
Energy Technology Data Exchange (ETDEWEB)
Nasrabadi, M.N. [Department of Nuclear Engineering, Faculty of Modern Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of)], E-mail: mnnasrabadi@ast.ui.ac.ir; Mohammadi, A. [Department of Physics, Payame Noor University (PNU), Kohandej, Isfahan (Iran, Islamic Republic of); Jalali, M. [Isfahan Nuclear Science and Technology Research Institute (NSTRT), Reactor and Accelerators Research and Development School, Atomic Energy Organization of Iran (Iran, Islamic Republic of)
2009-07-15
In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required.
Heo, Yongju; Park, Jiyeon; Lim, Sung-Il; Hur, Hor-Gil; Kim, Daesung; Park, Kihong
2010-08-01
Size-resolved bacterial concentrations in atmospheric aerosols sampled using a six-stage viable impactor at rice field, sanitary landfill, and waste incinerator sites were determined. Culture-based and Polymerase Chain Reaction (PCR) methods were used to identify the airborne bacteria. The culturable bacteria concentration in total suspended particles (TSP) was found to be the highest (848 Colony Forming Units (CFU)/m³) at the sanitary landfill sampling site, while the rice field sampling site had the lowest (125 CFU/m³). The closed landfill would be the main source of the observed bacteria concentration at the sanitary landfill. The rice field sampling site was fully covered by rice grain under wetted conditions before harvest and made no significant contribution to the airborne bacteria concentration; this might be because dry conditions, which favor suspension of soil particles, were absent, and because this area had limited personnel and vehicle flow. The respirable fraction, calculated from particles less than 3.3 μm, was highest (26%) at the sanitary landfill sampling site, followed by the waste incinerator (19%) and rice field (10%), a lower level of respirable fraction compared to previous literature values. We identified 58 species in 23 genera of culturable bacteria; Microbacterium, Staphylococcus, and Micrococcus were the most abundant genera at the sanitary landfill, waste incinerator, and rice field sites, respectively. An antibiotic resistance test for the above bacteria (Micrococcus sp., Microbacterium sp., and Staphylococcus sp.) showed that Staphylococcus sp. had the strongest resistance to both antibiotics (25.0% resistance to 32 μg/ml of chloramphenicol and 62.5% resistance to 4 μg/ml of gentamicin).
Calculation of the mean circle size does not circumvent the bottleneck of crowding.
Banno, Hayaki; Saiki, Jun
2012-10-22
Visually, we can extract a statistical summary of sets of elements efficiently. However, our visual system has a severe limitation: the ability to recognize an object is remarkably impaired when it is surrounded by other objects. The goal of this study was to investigate whether this crowding effect obstructs the calculation of the mean size of objects. First, we verified that the crowding effect occurs when comparing the sizes of circles (Experiment 1). Next, we manipulated the distances between circles and measured sensitivity when the circles were within or beyond the spatial limit of crowding (Experiment 2). Participants were asked to compare the mean sizes of the circles in the left and right visual fields and to judge which was larger. Participants' sensitivity to the mean size difference was lower when the circles were located at the nearer distance. Finally, we confirmed that crowding is responsible for the observed results by showing that displays without a crowded object eliminated the effects (Experiment 3). Our results indicate that the statistical information of size does not circumvent the bottleneck of crowding.
The validity of the transport approximation in critical-size and reactivity calculations
International Nuclear Information System (INIS)
The validity of the transport approximation in critical-size and reactivity calculations. Elastically scattered neutrons are, in general, not distributed isotropically in the laboratory system, and a convenient way of taking this into account in neutron- transport calculations is to use the transport approximation. In this, the elastic cross-section is replaced by an elastic transport cross-section with an isotropic angular distribution. This leads to a considerable simplification in the neutron-transport calculation. In the present paper, the theoretical bases of the transport approximation in both one-group and many-group formalisms are given. The accuracy of the approximation is then studied in the multi-group case for a number of typical systems by means of the Sn method using the isotropic and anisotropic versions of the method, which exist as alternative options of the machine code SAINT written at Aldermaston for use on IBM-709/7090 machines. The dependence of the results of the anisotropic calculations on the number of moments used to represent the angular distributions is also examined. The results of the various calculations are discussed, and an indication is given of the types of system for which the transport approximation is adequate and of those for which it is inadequate. (author)
Purity calculation method for event samples with two identical particles
Kuzmin, Valentin
2016-01-01
We present a method of two-dimensional background calculation for the analysis of events containing two identical particles observed by a high-energy-physics detector. The usual two-dimensional integration is replaced by an approximation based on a specially constructed one-dimensional function. The number of signal events is found by subtracting the background from the total number of selected events, which allows the purity of the selected event sample to be calculated. The procedure does not require a hypothesis about the background and signal shapes. The good performance of the purity calculation method is shown on Monte Carlo examples of double-J/psi samples.
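The purity itself is the simple ratio that remains once the background has been subtracted; a trivial sketch with hypothetical counts (the paper's contribution is the background estimate, not this ratio):

```python
def purity(n_selected, n_background):
    """Purity of a selected sample: signal fraction after background subtraction."""
    n_signal = n_selected - n_background
    return n_signal / n_selected

# Hypothetical counts for a double-J/psi-like selection:
print(purity(10000, 2500))  # 0.75
```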
International Nuclear Information System (INIS)
A new methodology for trace elemental analysis in plutonium metal samples was developed by interfacing the novel micro-FAST sample introduction system with an ICP-OES instrument. This integrated system, especially when coupled with a low flow rate nebulization technique, reduced the sample volume requirement significantly. Improvements to instrument sensitivity and measurement precision, as well as long term stability, were also achieved by this modified ICP-OES system. The sample size reduction, together with other instrument performance merits, is of great significance, especially to nuclear material analysis. (author)
Teoh, Wei Lin; Khoo, Michael B C; Teh, Sin Yin
2013-01-01
Designs of the double sampling (DS) X chart are traditionally based on the average run length (ARL) criterion. However, the shape of the run length distribution changes with the process mean shifts, ranging from highly skewed when the process is in-control to almost symmetric when the mean shift is large. Therefore, we show that the ARL is a complicated performance measure and that the median run length (MRL) is a more meaningful measure to depend on. This is because the MRL provides an intuitive and a fair representation of the central tendency, especially for the rightly skewed run length distribution. Since the DS X chart can effectively reduce the sample size without reducing the statistical efficiency, this paper proposes two optimal designs of the MRL-based DS X chart, for minimizing (i) the in-control average sample size (ASS) and (ii) both the in-control and out-of-control ASSs. Comparisons with the optimal MRL-based EWMA X and Shewhart X charts demonstrate the superiority of the proposed optimal MRL-based DS X chart, as the latter requires a smaller sample size on the average while maintaining the same detection speed as the two former charts. An example involving the added potassium sorbate in a yoghurt manufacturing process is used to illustrate the effectiveness of the proposed MRL-based DS X chart in reducing the sample size needed. PMID:23935873
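The ARL/MRL distinction above is easy to see in the simplest case: for a chart whose run length is geometric with per-sample signal probability p (exact for a Shewhart-type chart, an assumption here rather than the DS chart itself), the MRL is the smallest n with P(RL ≤ n) ≥ 0.5:

```python
import math

def arl(p):
    """Average run length of a geometric run-length distribution."""
    return 1.0 / p

def mrl(p):
    """Median run length: smallest n with P(RL <= n) >= 0.5."""
    return math.ceil(math.log(0.5) / math.log(1.0 - p))

p = 0.0027          # in-control signal probability of a 3-sigma Shewhart chart
print(arl(p))       # ~370
print(mrl(p))       # ~257, well below the ARL for this right-skewed distribution
```

The gap between 370 and 257 illustrates the paper's point: for skewed run-length distributions the mean is a poor summary of typical behaviour.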
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
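The core of an acceptance-sampling-by-variables plan is a one-sided test on the sample mean and standard deviation against a specification limit. A minimal sketch (the acceptability constant k below is illustrative, not taken from a standard table or from the assessment):

```python
from statistics import mean, stdev

def variables_accept(xs, usl, k):
    """Variables acceptance test against an upper specification limit:
    accept the lot if (USL - xbar) / s >= k."""
    return (usl - mean(xs)) / stdev(xs) >= k

# Hypothetical measurements from a lot, with an illustrative plan constant k:
lot = [9.1, 9.4, 9.0, 9.3, 9.2, 9.5, 9.1, 9.2]
print(variables_accept(lot, usl=10.0, k=1.9))
```

Compared with attributes sampling, the same protection is reached with far fewer units because each measurement carries more information than a pass/fail result, which is the motivation cited in the abstract.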
Aerosol composition at Chacaltaya, Bolivia, as determined by size-fractionated sampling
Adams, F.; van Espen, P.; Maenhaut, W.
Thirty-four cascade-impactor samples were collected between September 1977 and November 1978 at Chacaltaya, Bolivia. The concentrations of 25 elements were measured for the six impaction stages of each sample by means of energy-dispersive X-ray fluorescence and proton-induced X-ray emission analysis. The results indicated that most elements are predominantly associated with a unimodal coarse-particle soil-dust dispersion component. Chlorine and the alkali and alkaline earth elements also belong to this group. The anomalously enriched elements (S, Br and the heavy metals Cu, Zn, Ga, As, Se, Pb and Bi) showed a bimodal size distribution. Correlation coefficient calculations and principal component analysis indicated the presence in the submicrometer aerosol mode of an important component, containing S, K, Zn, As and Br, which may originate from biomass burning. For certain enriched elements (i.e. Zn and perhaps Cu) the coarse-particle enrichments observed may be the result of true crust-air fractionation during soil-dust dispersion.
Aleksandrov, V. D.; Pokyntelytsia, O. A.
2016-09-01
An alternative approach to calculating critical sizes l k of nucleation centers and work A k of their formation upon crystallization from a supercooled melt by analyzing the variation in the Gibbs energy during the phase transformation is considered. Unlike the classical variant, it is proposed that the transformation entropy be associated not with melting temperature T L but with temperature T < T L at which the nucleation of crystals occurs. New equations for l k and A k are derived. Based on the results from calculating these quantities for a series of compounds, it is shown that this approach is unbiased and it is possible to eliminate known conflicts in analyzing these parameters in the classical interpretation.
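For orientation, classical nucleation theory gives the critical size and formation work of a spherical nucleus in the standard textbook forms below; the paper's modification replaces the melting temperature T_L in the entropy term with the actual nucleation temperature T < T_L:

```latex
l_k \sim \frac{2\,\sigma\, T_L}{\Delta H_v\,\Delta T}, \qquad
A_k \sim \frac{16\,\pi\,\sigma^{3}\, T_L^{2}}{3\,\Delta H_v^{2}\,\Delta T^{2}}, \qquad
\Delta T = T_L - T,
```

with σ the crystal-melt interfacial energy and ΔH_v the volumetric enthalpy of melting. Because ΔT appears in the denominator, both the critical size and the formation work diverge as the supercooling vanishes, which is why the choice of reference temperature in the entropy term matters.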
A NONPARAMETRIC PROCEDURE OF THE SAMPLE SIZE DETERMINATION FOR SURVIVAL RATE TEST
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Objective: This paper proposes a nonparametric procedure of sample size determination for a survival rate test. Methods: Using the classical asymptotic normal procedure yields the required homogenetic effective sample size, and using the inverse operation with the prespecified value of the survival function of censoring times yields the required sample size. Results: The procedure is matched with the rate test for censored data, does not involve survival distributions, and reduces to its classical counterpart when there is no censoring. The observed power of the test coincides with the prescribed power under usual clinical conditions. Conclusion: It can be used for planning survival studies of chronic diseases.
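A sketch of the two-step logic described above, under stated assumptions: the effective size comes from the standard asymptotic two-proportion formula (my substitution for the unspecified "classical" procedure), and the final size inflates it by the survival function of the censoring times, G(t):

```python
from math import ceil, sqrt
from statistics import NormalDist

def effective_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Per-group size for comparing two survival rates p1, p2
    (standard asymptotic-normal two-proportion formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return num / (p1 - p2) ** 2

def required_sample_size(p1, p2, censor_surv, alpha=0.05, power=0.80):
    """Inflate the effective size by the survival function of the
    censoring times, as in the procedure described above."""
    return ceil(effective_sample_size(p1, p2, alpha, power) / censor_surv)

# Hypothetical 5-year survival rates 0.60 vs 0.75, with 80% of subjects
# still under observation (censoring survival G(t) = 0.8) at that time:
print(required_sample_size(0.60, 0.75, 0.8))
```

The division by G(t) simply compensates for subjects expected to be censored before the evaluation time.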
Bice, K.; Clement, S. C.
1981-01-01
X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.
Size constrained unequal probability sampling with a non-integer sum of inclusion probabilities
Grafström, Anton; Qualité, Lionel; Tillé, Yves; Matei, Alina
2012-01-01
More than 50 methods have been developed to draw unequal probability samples with fixed sample size. All these methods require the sum of the inclusion probabilities to be an integer number. There are cases, however, where the sum of desired inclusion probabilities is not an integer. Then, classical algorithms for drawing samples cannot be directly applied. We present two methods to overcome the problem of sample selection with unequal inclusion probabilities when their sum is not an integer ...
Operational risk models and maximum likelihood estimation error for small sample-sizes
Paul Larsen
2015-01-01
Operational risk models commonly employ maximum likelihood estimation (MLE) to fit loss data to heavy-tailed distributions. Yet several desirable properties of MLE (e.g. asymptotic normality) are generally valid only for large sample-sizes, a situation rarely encountered in operational risk. We study MLE in operational risk models for small sample-sizes across a range of loss severity distributions. We apply these results to assess (1) the approximation of parameter confidence intervals by as...
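The small-sample problem is easy to reproduce: the sampling variability of the MLE shrinks roughly like 1/√n, so shape estimates from the sample sizes typical of operational risk remain very noisy. A self-contained sketch using a lognormal severity (a common heavy-tailed choice; illustrative parameters, not from the paper):

```python
import math
import random

def lognormal_mle(xs):
    """Closed-form MLE of (mu, sigma) for a lognormal sample."""
    logs = [math.log(x) for x in xs]
    mu = sum(logs) / len(logs)
    var = sum((l - mu) ** 2 for l in logs) / len(logs)
    return mu, math.sqrt(var)

def sigma_spread(n, reps=2000, seed=1):
    """Std. dev. of the MLE of sigma across repeated samples of size n."""
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        xs = [rng.lognormvariate(0.0, 2.0) for _ in range(n)]
        ests.append(lognormal_mle(xs)[1])
    m = sum(ests) / reps
    return math.sqrt(sum((e - m) ** 2 for e in ests) / reps)

# Estimation error shrinks roughly like 1/sqrt(n); small operational-risk
# samples (n ~ 20) leave sigma -- and hence tail quantiles -- very uncertain.
print(sigma_spread(20) > 2 * sigma_spread(100))  # True
```

Because capital estimates depend on far-tail quantiles, even modest uncertainty in σ translates into large uncertainty in the quantities that matter.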
Sample size choices for XRCT scanning of highly unsaturated soil mixtures
Directory of Open Access Journals (Sweden)
Smith Jonathan C.
2016-01-01
Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion (centred around a new set of XRCT scans) on sample sizing for scanning of samples comprising soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to sample sizes used for other geotechnical testing. Void size distributions for the samples are presented and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.
A margin based approach to determining sample sizes via tolerance bounds.
Energy Technology Data Exchange (ETDEWEB)
Newcomer, Justin T.; Freeland, Katherine Elizabeth
2013-09-01
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
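The sample-size dependence described above can be seen in the one-sided normal tolerance-limit factor k (the bound is x̄ + k·s): k blows up as n shrinks, inflating the uncertainty term relative to margin. The sketch below uses Natrella's large-sample approximation for k, an assumption on my part rather than the paper's exact method:

```python
from math import sqrt
from statistics import NormalDist

def k_factor(n, coverage=0.90, confidence=0.95):
    """Approximate one-sided normal tolerance-limit factor k, so that
    xbar + k*s bounds the `coverage` quantile with the given confidence
    (Natrella's approximation; exact values use the noncentral t)."""
    z_p = NormalDist().inv_cdf(coverage)
    z_c = NormalDist().inv_cdf(confidence)
    a = 1 - z_c ** 2 / (2 * (n - 1))
    b = z_p ** 2 - z_c ** 2 / n
    return (z_p + sqrt(z_p ** 2 - a * b)) / a

# The k-factor, and hence the QMU uncertainty term, grows sharply
# as the number of test units decreases:
for n in (5, 10, 30, 100):
    print(n, round(k_factor(n), 3))
```

Choosing n so that k·s stays below the estimated margin is exactly the sizing rule the abstract proposes.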
Shrinkage anisotropy characteristics from soil structure and initial sample/layer size
Chertkov, V Y
2014-01-01
The objective of this work is a physical prediction of such soil shrinkage anisotropy characteristics as variation with drying of (i) different sample/layer sizes and (ii) the shrinkage geometry factor. With that, a new presentation of the shrinkage anisotropy concept is suggested through the sample/layer size ratios. The work objective is reached in two steps. First, the relations are derived between the indicated soil shrinkage anisotropy characteristics and three different shrinkage curves of a soil relating to: small samples (without cracking at shrinkage), sufficiently large samples (with internal cracking), and layers of similar thickness. Then, the results of a recent work with respect to the physical prediction of the three shrinkage curves are used. These results connect the shrinkage curves with the initial sample size/layer thickness as well as characteristics of soil texture and structure (both inter- and intra-aggregate) as physical parameters. The parameters determining the reference shrinkage c...
Bolton tooth size ratio among Sudanese Population sample: A preliminary study
Abdalla Hashim, Ala’a Hayder; Eldin, AL-Hadi Mohi; Hashim, Hayder Abdalla
2015-01-01
Background: The study of the mesiodistal size, the morphology of teeth and dental arch may play an important role in clinical dentistry, as well as other sciences such as Forensic Dentistry and Anthropology. Aims: The aims of the present study were to establish tooth-size ratio in Sudanese sample with Class I normal occlusion, to compare the tooth-size ratio between the present study and Bolton's study and between genders. Materials and Methods: The sample consisted of dental casts of 60 subj...
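Bolton's ratio is the summed mandibular mesiodistal width expressed as a percentage of the summed maxillary width; Bolton's published anterior norm is about 77.2. A minimal sketch (the tooth widths below are hypothetical, not data from this study):

```python
def bolton_ratio(mandibular_widths, maxillary_widths):
    """Bolton ratio: summed mandibular mesiodistal widths as a
    percentage of the summed maxillary widths."""
    return 100.0 * sum(mandibular_widths) / sum(maxillary_widths)

# Hypothetical mesiodistal widths (mm) of the six anterior teeth:
mandibular = [5.4, 5.9, 6.9, 6.9, 5.9, 5.4]
maxillary  = [8.6, 6.6, 7.9, 7.9, 6.6, 8.6]
anterior = bolton_ratio(mandibular, maxillary)
print(round(anterior, 1))  # compare against Bolton's anterior norm of ~77.2
```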
Finch, W. Holmes; Finch, Maria E. Hernandez
2016-01-01
Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e. high dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…
Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests
Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.
2015-01-01
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…
Is there an alternative to increasing the sample size in microarray studies?
Klebanov, Lev; Yakovlev, Andrei
2007-01-01
Our answer to the question posed in the title is negative. This intentionally provocative note discusses the issue of sample size in microarray studies from several angles. We suggest that the current view of microarrays as no more than a screening tool be changed and small sample studies no longer be considered appropriate.
Du, Yunfei
This paper discusses the impact of sampling error on the construction of confidence intervals around effect sizes. Sampling error affects the location and precision of confidence intervals. Meta-analytic resampling demonstrates that confidence intervals can haphazardly bounce around the true population parameter. Special software with graphical…
Relative power and sample size analysis on gene expression profiling data
M. van Iterson; P.A.C. 't Hoen (Peter); P. Pedotti; G.J.E.J. Hooiveld; J.T. den Dunnen (Johan); G.J.B. van Ommen; J.M. Boer (Judith); R.X. de Menezes (Renee)
2009-01-01
Background: With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample t
Fienen, Michael N.; Selbig, William R.
2012-01-01
A new sample collection system was developed to improve the representation of sediment entrained in urban storm water by integrating water quality samples from the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of suspended-sediment concentration and particle size distribution at three independent study locations. Use of the DISA decreased variability, which improved statistical regression to predict particle size distribution using surrogate environmental parameters, such as precipitation depth and intensity. The performance of this statistical modeling technique was compared to results using traditional fixed-point sampling methods and was found to perform better. When environmental parameters can be used to predict particle size distributions, environmental managers have more options when characterizing concentrations, loads, and particle size distributions in urban runoff.
Sample Size and Saturation in PhD Studies Using Qualitative Interviews
Mason, Mark
2010-01-01
A number of issues can affect sample size in qualitative research; however, the guiding principle should be the concept of saturation. This has been explored in detail by a number of authors but is still hotly debated, and some say little understood. A sample of PhD studies using qualitative approaches, and qualitative interviews as the method of data collection was taken from theses.com and contents analysed for their sample sizes. Five hundred and sixty studies were identified that fitted t...
Monte Carlo calculations for gamma-ray mass attenuation coefficients of some soil samples
International Nuclear Information System (INIS)
Highlights: • Gamma-ray mass attenuation coefficients of soils. • Radiation shielding properties of soil. • Comparison of calculated results with the theoretical and experimental ones. • The method can be applied to various media. - Abstract: We developed a simple Monte Carlo code to determine the mass attenuation coefficients of some soil samples at nine different gamma-ray energies (59.5, 80.9, 122.1, 159.0, 356.5, 511.0, 661.6, 1173.2 and 1332.5 keV). Results of the Monte Carlo calculations have been compared with tabulations based upon the results of photon cross section database (XCOM) and with experimental results by other researchers for the same samples. The calculated mass attenuation coefficients were found to be very close to the theoretical values and the experimental results
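The principle behind such a code is simple: sample photon free paths from the exponential distribution and count the fraction traversing a slab without interacting, so that μ̂ = −ln(T)/x recovers the attenuation coefficient. A minimal narrow-beam sketch (not the authors' code; illustrative μ and geometry):

```python
import math
import random

def simulate_transmission(mu, thickness, n_photons, rng):
    """Count photons traversing a slab without interacting; free paths
    are sampled from the exponential distribution with mean 1/mu."""
    passed = 0
    for _ in range(n_photons):
        path = -math.log(1.0 - rng.random()) / mu   # sampled free path (cm)
        if path > thickness:
            passed += 1
    return passed / n_photons

rng = random.Random(42)
mu_true = 0.2            # hypothetical linear attenuation coefficient (1/cm)
t = simulate_transmission(mu_true, 3.0, 200_000, rng)
mu_est = -math.log(t) / 3.0
print(abs(mu_est - mu_true) < 0.01)  # True: estimate close to the input
```

A full code additionally tracks scattered photons and material composition; this sketch only shows why the Monte Carlo estimate converges to the XCOM-style tabulated value.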
Scheuerell, Mark D
2016-01-01
Stock-recruitment models have been used for decades in fisheries management as a means of formalizing the expected number of offspring that recruit to a fishery based on the number of parents. In particular, Ricker's stock recruitment model is widely used due to its flexibility and ease with which the parameters can be estimated. After model fitting, the spawning stock size that produces the maximum sustainable yield (S MSY) to a fishery, and the harvest corresponding to it (U MSY), are two of the most common biological reference points of interest to fisheries managers. However, to date there has been no explicit solution for either reference point because of the transcendental nature of the equation needed to solve for them. Therefore, numerical or statistical approximations have been used for more than 30 years. Here I provide explicit formulae for calculating both S MSY and U MSY in terms of the productivity and density-dependent parameters of Ricker's model.
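The published explicit solution is written in terms of the Lambert W function: for the Ricker model R = S·exp(a − bS), U_MSY = 1 − W(e^(1−a)) and S_MSY = U_MSY/b. A self-contained sketch (W implemented by Newton iteration to avoid external dependencies; parameter values are hypothetical):

```python
import math

def lambert_w0(z, iters=50):
    """Principal branch of the Lambert W function (Newton's method, z > 0)."""
    w = math.log1p(z)
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - z) / (ew * (w + 1.0))
    return w

def ricker_reference_points(a, b):
    """Explicit S_MSY and U_MSY for the Ricker model R = S*exp(a - b*S),
    following the Lambert-W form of the published solution."""
    w = lambert_w0(math.exp(1.0 - a))
    u_msy = 1.0 - w
    s_msy = u_msy / b
    return s_msy, u_msy

# Hypothetical productivity (a) and density-dependence (b) parameters:
s_msy, u_msy = ricker_reference_points(a=1.5, b=0.001)
print(round(s_msy, 1), round(u_msy, 3))
```

As a sanity check, S_MSY satisfies the first-order condition exp(a − bS)·(1 − bS) = 1, which is the transcendental equation that previously forced numerical approximation.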
Constrained statistical inference: sample-size tables for ANOVA and regression
Directory of Open Access Journals (Sweden)
Leonard eVanbrabant
2015-01-01
Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient beta1 is larger than beta2 and beta3. The corresponding hypothesis is H: beta1 > {beta2, beta3}, and this is known as an order-constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a prespecified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30% to 50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., beta1 > beta2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., beta1 > 0).
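The simplest instance of this sample-size gain is the one-constraint case: imposing a direction turns a two-sided test into a one-sided one, which already lowers the required n. A sketch for a two-group z-test (standard power formula, not the paper's simulations):

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf

def n_needed(delta, alpha, power, one_sided):
    """Per-group n for a z-test on a standardized mean difference delta."""
    za = z(1 - alpha) if one_sided else z(1 - alpha / 2)
    return ceil(2 * ((za + z(power)) / delta) ** 2)

print(n_needed(0.5, 0.05, 0.80, one_sided=False))  # unconstrained (two-sided)
print(n_needed(0.5, 0.05, 0.80, one_sided=True))   # direction constrained
```

Adding further order constraints among several parameters continues this trend, which is what the paper's tables quantify.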
Sample size for collecting germplasms – a polyploid model with mixed mating system
Indian Academy of Sciences (India)
R L Sapra; Prem Narain; S V S Chauhan; S K Lal; B B Singh
2003-03-01
The present paper discusses a general expression for determining the minimum sample size (plants) for a given number of seeds, or vice versa, for capturing multiple allelic diversity. The model considers sampling from a large 2k-ploid population under a broad range of mating systems. Numerous expressions/results developed for germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values to the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate combination of number of plants and seeds per plant. When genotypic multiplicity of seeds is taken into consideration, a sample size of even less than 172 plants can conserve diversity of 20 alleles from 50,000 polymorphic loci with a very large probability of conservation (0.9999) in most of the cases.
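A much-simplified version of the underlying calculation (random union of gametes, one seed per plant, independent loci; this is my simplification, not the paper's full mixed-mating polyploid expression) shows how a few hundred plants suffice even across tens of thousands of loci:

```python
import math

def plants_needed(p_min, n_loci, prob=0.9999, ploidy=2):
    """Plants needed so that, at every one of n_loci independent loci,
    an allele at frequency >= p_min is captured with overall probability
    `prob` (simplified model: each plant contributes `ploidy` independent
    gene copies, each missing the allele with probability 1 - p_min)."""
    per_locus = prob ** (1.0 / n_loci)
    return math.ceil(math.log(1.0 - per_locus)
                     / (ploidy * math.log(1.0 - p_min)))

# Rarest of 20 roughly equifrequent alleles, at frequency p = 0.05:
print(plants_needed(p_min=0.05, n_loci=50_000))
```

The answer lands in the same range as the paper's 172-plant figure; the paper's tighter number comes from exploiting the genotypic multiplicity of seeds per plant.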
Malm, William C.; Pitchford, Marc L.
Size distributions and resulting optical properties of sulfur aerosols were investigated at three national parks by a Davis Rotating-drum Universal-size-cut Monitoring (DRUM) impactor. Sulfur size distribution measurements for 88, 177, and 315 consecutive time periods were made at Grand Canyon National Park during January and February 1988, at Meadview, AZ during July, August, and September 1992, and at Shenandoah National Park during summer, 1990, respectively. The DRUM impactor is designed to collect aerosols with an aerodynamic diameter between 0.07 and 15.0 μm in eight size ranges. Focused beam particle-induced X-ray emission (PIXE) analysis of the aerosol deposits produces a time history of size-resolved elemental composition of varied temporal resolution. As part of the quality assurance protocol, an interagency monitoring of protected visual environments (IMPROVE) channel A sampler collecting 0-2.5 μm diameter particles was operated simultaneously alongside the DRUM sampler. During these sampling periods, the average sulfur mass, interpreted as ammonium sulfate, is 0.49, 2.30, and 10.36 μg m⁻³ at Grand Canyon, Meadview, and Shenandoah, respectively. The five DRUM stages were "inverted" using the Twomey (1975) scheme to give 486 size distributions, each made up of 72 discrete pairs of dC/dlog(D) and diameter (D). From these distributions, mass mean diameters (Dg), geometric standard deviations (σg), and mass scattering efficiencies (em) were calculated. The geometric mass mean diameters in ascending order were 0.21 μm at Meadview, 0.32 μm at Grand Canyon, and 0.42 μm at Shenandoah; corresponding σg were 2.1, 2.3, and 1.9. Mie theory mass scattering efficiencies calculated from dC/dlog(D) distributions for the three locations were 2.05, 2.59, and 3.81 m² g⁻¹, respectively. At Shenandoah, mass scattering efficiencies approached five, but only when the mass median diameters were approximately 0.4 μm and σg were about 1.5.
Experimental and calculational analyses of actinide samples irradiated in EBR-II
International Nuclear Information System (INIS)
Higher actinides influence the characteristics of spent and recycled fuel and dominate the long-term hazards of the reactor waste. Reactor irradiation experiments provide useful benchmarks for testing the evaluated nuclear data for these actinides. During 1967 to 1970, several actinide samples were irradiated in the Idaho EBR-II fast reactor. These samples have now been analyzed, employing mass and alpha spectrometry, to determine the heavy element products. A simple spherical model for the EBR-II core and a recent version of the ORIGEN code with ENDF/B-V data were employed to calculate the exposure products. A detailed comparison between the experimental and calculated results has been made. For samples irradiated at locations near the core center, agreement within 10% was obtained for the major isotopes and their first daughters, and within 20% for the nuclides up the chain. A sensitivity analysis showed that the assumed flux should be increased by 10%
Confalonieri, Roberto; Perego, Alessia; CHIODINI Marcello Ermido; SCAGLIA Barbara; ROSENMUND Alexandra; Acutis, Marco
2009-01-01
Pre-samplings for sample size determination are strongly recommended to assure the reliability of collected data. However, there is a certain dearth of references about sample size determination in field experiments. Seldom if ever, differences in sample size were identified under different management conditions, plant traits, varieties grown and crop age. In order to analyze any differences in sample size for some of the variables measurable in rice field experiments, the visual jackknife me...
Reyer, Dorothea; Philipp, Sonja
2014-05-01
It is desirable to enlarge the profit margin of geothermal projects by reducing the total drilling costs considerably. Substantiated assumptions on uniaxial compressive strengths and failure criteria are important to avoid borehole instabilities and adapt the drilling plan to rock mechanical conditions to minimise non-productive time. Because core material is rare, we aim at predicting in situ rock properties from outcrop analogue samples, which are easy and cheap to provide. The comparability of properties determined from analogue samples with samples from depth is analysed by performing physical characterisation (P-wave velocities, densities), conventional triaxial tests, and uniaxial compressive strength tests of both quarry and equivalent core samples. "Equivalent" means that the quarry sample is of the same stratigraphic age and of comparable sedimentary facies and composition as the correspondent core sample. We determined the parameters uniaxial compressive strength (UCS) and Young's modulus for 35 rock samples from quarries and 14 equivalent core samples from the North German Basin. A subgroup of these samples was used for triaxial tests. For UCS versus Young's modulus, density and P-wave velocity, linear and non-linear regression analyses were performed. We repeated regression separately for clastic rock samples or carbonate rock samples only, as well as for quarry samples or core samples only. Empirical relations were used to calculate UCS values from existing logs of the sampled wellbore. Calculated UCS values were then compared with measured UCS of core samples of the same wellbore. With triaxial tests we determined linearized Mohr-Coulomb failure criteria, expressed in both principal stresses and shear and normal stresses, for quarry samples. Comparison with samples from larger depths shows that it is possible to apply the obtained principal stress failure criteria to clastic and volcanic rocks, but less so for carbonates. Carbonate core samples have higher
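The workflow's statistical core is an empirical regression of UCS on log-derived quantities such as P-wave velocity, applied to wellbore logs. A minimal sketch with entirely hypothetical analogue-sample data (the paper's actual regressions and coefficients differ by lithology):

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit y = a + b*x; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Hypothetical analogue-sample data: P-wave velocity (km/s) vs UCS (MPa):
vp  = [2.1, 2.8, 3.5, 4.0, 4.6]
ucs = [38.0, 55.0, 80.0, 95.0, 118.0]
a, b = linear_fit(vp, ucs)
ucs_from_log = a + b * 3.2   # predict UCS at a logged velocity of 3.2 km/s
print(round(ucs_from_log, 1))
```

Comparing such predictions against measured UCS of core samples from the same wellbore is exactly the validation step the abstract describes.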
The impact of different sampling rates and calculation time intervals on ROTI values
Directory of Open Access Journals (Sweden)
Jacobsen Knut Stanley
2014-01-01
The ROTI (Rate of TEC index) is a commonly used measure of the ionospheric irregularity level. The algorithm to calculate ROTI is easily implemented and is the same from paper to paper. However, the sample rate of the GNSS data used, and the time interval over which a value of ROTI is calculated, vary from paper to paper. When comparing ROTI values from different studies, this must be taken into account. This paper aims to show what these differences are, to increase awareness of this issue. We have investigated the effect of different parameters for the calculation of ROTI values, using one year of data from 8 receivers at latitudes ranging from 59° N to 79° N. We have found that the ROTI values calculated using different parameter choices are strongly positively correlated. However, the ROTI values are quite different. The effect of a lower sample rate is to lower the ROTI value, due to the loss of high-frequency parts of the ROT spectrum, while the effect of a longer calculation time interval is to remove or reduce short-lived peaks due to the inherent smoothing effect. The ratio of ROTI values based on data of different sampling rates is examined in relation to the ROT power spectrum. Of relevance to statistical studies, we find that the median level of ROTI depends strongly on sample rate, strongly on latitude at auroral latitudes, and weakly on time interval. Thus, a baseline "quiet" or "noisy" level for one location or choice of parameters may not be valid for another location or choice of parameters.
Information-based sample size re-estimation in group sequential design for longitudinal trials.
Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven
2014-09-28
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation.
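The sample-size update at an interim analysis follows directly from the information view: the target information depends only on α, power, and the effect size, while the required n scales with the (re-estimated) variance. A sketch for a two-arm comparison of means (illustrative numbers, not the paper's examples):

```python
from math import ceil
from statistics import NormalDist

def target_information(delta, alpha=0.05, power=0.90):
    """Statistical information needed to detect effect size `delta`."""
    z = NormalDist().inv_cdf
    return (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2

def per_arm_n(delta, sigma, alpha=0.05, power=0.90):
    """n per arm so that the information n/(2*sigma^2) for a two-arm
    mean difference reaches the target."""
    return ceil(2 * sigma ** 2 * target_information(delta, alpha, power))

# The design assumed sigma = 1.0; interim data suggest sigma = 1.3,
# so the sample size is re-estimated to preserve the target power:
n_planned = per_arm_n(delta=0.5, sigma=1.0)
n_updated = per_arm_n(delta=0.5, sigma=1.3)
print(n_planned, n_updated)
```

Because the stopping rule is expressed in information fractions rather than patient counts, this update leaves the type I error control of the group sequential boundaries intact.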
Consideration of sample size for estimating contaminant load reductions using load duration curves
Babbar-Sebens, Meghna; Karthikeyan, R.
2009-06-01
In Total Maximum Daily Load (TMDL) programs, load duration curves are often used to estimate reductions of contaminant loads in a watershed. A popular method for calculating these load reductions involves estimating the 90th percentile of monitored contaminant concentrations during different hydrologic conditions. However, water quality monitoring is expensive, which can severely limit the amount of data collected. Scarcity of water quality data can, therefore, degrade the precision of the estimated 90th percentiles, which, in turn, affects the accuracy of the estimated load reductions. This paper proposes an adaptive sampling strategy that data collection agencies can use not only to optimize their collection of new samples across different hydrologic conditions, but also to ensure that newly collected samples provide the best possible improvement in the precision of the estimated 90th percentile at minimum sampling cost. The strategy was used to propose sampling plans for Escherichia coli monitoring in an actual stream, and its different sampling procedures were tested on hypothetical stream data. Results showed that, for the same sampling cost, the proposed distributed sampling procedure improves precision much more, and faster, than the lumped sampling procedure. Hence, it is recommended that when agencies have a fixed sampling budget, they collect samples in consecutive monitoring cycles as proposed by the distributed sampling procedure, rather than investing all their resources in a single monitoring cycle.
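The 90th-percentile load-reduction calculation mentioned above can be illustrated as follows. This is a simplified sketch: real TMDL work stratifies concentrations by hydrologic condition and works with loads, not just raw concentrations:

```python
from statistics import quantiles

def percentile90(concs):
    """90th percentile of monitored concentrations (linear interpolation)."""
    return quantiles(concs, n=10, method="inclusive")[-1]

def load_reduction_pct(concs, criterion):
    """Percent reduction needed for the 90th percentile to meet the criterion."""
    p90 = percentile90(concs)
    return max(0.0, 100.0 * (p90 - criterion) / p90)
```

With few samples, the 90th percentile is estimated imprecisely, and that imprecision propagates directly into the computed reduction, which is the motivation for the adaptive sampling strategy.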
Minimum sample size for detection of Gutenberg-Richter's b-value
Kamer, Yavor
2014-01-01
In this study we address the question of the minimum sample size needed for distinguishing between Gutenberg-Richter distributions with varying b-values at different resolutions. In order to account for both the complete and incomplete parts of a catalog we use the recently introduced angular frequency magnitude distribution (FMD). Unlike the gradually curved FMD, the angular FMD is fully compatible with Aki's maximum likelihood method for b-value estimation. To obtain generic results we conduct our analysis on synthetic catalogs with Monte Carlo methods. Our results indicate that the minimum sample size used in many studies is strictly below the value required for detecting significant variations.
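Aki's maximum-likelihood b-value estimator and the synthetic Gutenberg-Richter catalogs used in such Monte Carlo studies are both short to sketch. The dm/2 term is Utsu's correction for binned magnitudes; the parameter values below are illustrative, not taken from the paper:

```python
import math
import random

def aki_b(mags, mc, dm=0.1):
    """Aki's maximum-likelihood b-value for magnitudes >= mc,
    with Utsu's dm/2 correction for binned data (dm=0 for continuous)."""
    return math.log10(math.e) / (sum(mags) / len(mags) - (mc - dm / 2))

def sample_gr(n, b, mc, rng):
    """Draw n magnitudes >= mc from a Gutenberg-Richter (exponential) law."""
    beta = b * math.log(10)
    return [mc + rng.expovariate(beta) for _ in range(n)]
```

Repeating the draw-and-estimate loop at various n and checking how often two catalogs with different true b-values are distinguished is the essence of the minimum-sample-size experiment described in the abstract.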
Morgera, S. D.; Cooper, D. B.
1976-01-01
The experimental observation that a surprisingly small sample size vis-a-vis dimension is needed to achieve good signal-to-interference ratio (SIR) performance with an adaptive predetection filter is explained. The adaptive filter requires estimates as obtained by a recursive stochastic algorithm of the inverse of the filter input data covariance matrix. The SIR performance with sample size is compared for the situations where the covariance matrix estimates are of unstructured (generalized) form and of structured (finite Toeplitz) form; the latter case is consistent with weak stationarity of the input data stochastic process.
Dunst, Carl J.; Hamby, Deborah W.
2012-01-01
This paper includes a nontechnical description of methods for calculating effect sizes in intellectual and developmental disability studies. Different hypothetical studies are used to illustrate how null hypothesis significance testing (NHST) and effect size findings can result in quite different outcomes and therefore conflicting results. Whereas…
The effect of molecular dynamics sampling on the calculated observable gas-phase structures.
Tikhonov, Denis S; Otlyotov, Arseniy A; Rybkin, Vladimir V
2016-07-21
In this study, we compare the performance of various ab initio molecular dynamics (MD) sampling methods for calculating the observable vibrationally-averaged gas-phase structures of benzene, naphthalene and anthracene molecules. Nosé-Hoover (NH), canonical and quantum generalized-Langevin-equation (GLE) thermostats, as well as the a posteriori quantum correction to classical trajectories, have been tested and compared to accurate path-integral molecular dynamics (PIMD), to static anharmonic vibrational calculations and to experimental gas electron diffraction data. Classical sampling methods that neglect quantum effects (NH and canonical GLE thermostats) dramatically underestimate vibrational amplitudes for the bonded atom pairs, both C-H and C-C; the resulting radial distribution functions exhibit unphysically narrow peaks. This deficiency is almost completely removed by taking the quantum effects on the nuclei into account. The quantum GLE thermostat and the a posteriori correction to the canonical GLE and NH thermostatted trajectories capture most vibrational quantum effects and closely reproduce the computationally expensive PIMD and experimental radial distribution functions. These methods are both computationally feasible and accurate, and are therefore recommended for calculations of observable gas-phase structures. The good performance of the quantum GLE thermostat for gas-phase calculations is encouraging, since its parameters were originally fitted for condensed-phase calculations. Very accurate molecular structures can be predicted by combining the equilibrium geometry obtained at a high level of electronic structure theory with vibrational amplitudes and corrections calculated using MD driven by a lower level of electronic structure theory. PMID:27331660
Stevens, June; Bryant, Maria; Wang, Chin-Hua; Cai, Jianwen; Bentley, Margaret E.
2012-01-01
Measurement of the home food environment is of interest to researchers because it affects food intake and is a feasible target for nutrition interventions. The objective of this study was to provide estimates to aid the calculation of sample size and number of repeated measures needed in studies of nutrients and foods in the home. We inventoried all foods in the homes of 80 African-American first-time mothers and determined 6 nutrient-related attributes. Sixty-three households were measured 3...
Calculation of HPGe efficiency for environmental samples: comparison of EFFTRAN and GEANT4
Energy Technology Data Exchange (ETDEWEB)
Nikolic, Jelena, E-mail: jnikolic@vinca.rs [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia); Vidmar, Tim [SCK.CEN, Belgian Nuclear Research Centre, Boeretang 200, BE-2400 Mol (Belgium); Jokovic, Dejan [University of Belgrade, Institute for Physics, Pregrevica 18, Belgrade (Serbia); Rajacic, Milica; Todorovic, Dragana [University of Belgrade Institut for Nuclear Sciences Vinča, Mike Petrovica Alasa 12-16, 11001 Belgrade (Serbia)
2014-11-01
Determination of the full-energy peak efficiency is one of the most important tasks to be performed before gamma spectrometry of environmental samples. Many methods, including measurement of specific reference materials, Monte Carlo simulations, efficiency transfer and semi-empirical calculations, have been developed for this task. Monte Carlo simulation based on the GEANT4 simulation package and the EFFTRAN efficiency transfer software were applied to the efficiency calibration of three detectors routinely used in the Environment and Radiation Protection Laboratory of the Institute for Nuclear Sciences Vinca for measurement of environmental samples. Efficiencies were calculated for water, soil and aerosol samples. The aim of this paper is to perform efficiency calculations for HPGe detectors using both GEANT4 simulation and the EFFTRAN efficiency transfer software and to compare the results with experiment. This comparison should show how well the two methods agree with the experimentally obtained efficiencies of our measurement system and in which parts of the spectrum discrepancies appear. Detailed knowledge of the accuracy and precision of both methods should enable an appropriate method to be chosen for each situation encountered in our and other laboratories on a daily basis.
ED-XRF set-up for size-segregated aerosol samples analysis
Bernardoni, V.; E. Cuccia; G. Calzolai; Chiari, M.; Lucarelli, F.; D. Massabo; Nava, S.; Prati, P.; Valli, G; Vecchi, R.
2011-01-01
The knowledge of size-segregated elemental concentrations in atmospheric particulate matter (PM) gives a useful contribution to the complete chemical characterisation; this information can be obtained by sampling with multi-stage cascade impactors. In this work, samples were collected using a low-pressure 12-stage Small Deposit Impactor and a 13-stage rotating Micro Orifice Uniform Deposit Impactor™. Both impactors collect the aerosol in an inhomogeneous geometry, which needs a special set-up...
Directory of Open Access Journals (Sweden)
Fjeldborg Paul
2008-07-01
Abstract Background Sepsis and its complications are major causes of mortality in critically ill patients, and rapid treatment of sepsis is of crucial importance for survival. The infectious status of the critically ill patient is often difficult to assess, because symptoms cannot be expressed and signs may present atypically. The established biological markers of inflammation (leucocytes, C-reactive protein) may often be influenced by parameters other than infection, and may be released unacceptably slowly as an infection progresses. At the same time, lack of relevant antimicrobial therapy early in the course of an infection may be fatal for the patient. Specific and rapid markers of bacterial infection have therefore been sought for use in these patients. Methods Multi-centre randomized controlled interventional trial, powered for superiority and non-inferiority on all measured end points. Complies with the Good Clinical Practice (ICH-GCP) guideline (CPMP/ICH/135/95) and Directive 2001/20/EC. Inclusion: (1) age ≥ 18 years, (2) admitted to the participating intensive care units, (3) signed written informed consent. Exclusion: (1) known hyperbilirubinaemia or hypertriglyceridaemia, (2) safety likely to be compromised by blood sampling, (3) pregnant or breast feeding. Computerized randomisation: two arms (1:1, n = 500 per arm). Arm 1: standard of care. Arm 2: standard of care plus procalcitonin-guided diagnostics and treatment of infection. Primary trial objective: to address whether daily procalcitonin measurements, with an immediate diagnostic and therapeutic response to day-to-day changes in procalcitonin, can reduce the mortality of critically ill patients. Discussion For the first time, a large-scale randomized controlled trial with a mortality endpoint, comparing a biomarker-guided strategy to the best standard of care, is being conducted in an intensive care setting. The results will, with high statistical power, answer the question: Can the survival
Eberl, D.D.; Drits, V.A.; Srodon, Jan; Nuesch, R.
1996-01-01
Particle size may strongly influence the physical and chemical properties of a substance (e.g. its rheology, surface area, cation exchange capacity, solubility, etc.), and its measurement in rocks may yield geological information about ancient environments (sediment provenance, degree of metamorphism, degree of weathering, current directions, distance to shore, etc.). Therefore mineralogists, geologists, chemists, soil scientists, and others who deal with clay-size material would like to have a convenient method for measuring particle size distributions. Nano-size crystals generally are too fine to be measured by light microscopy. Laser scattering methods give only average particle sizes; therefore particle size cannot be measured in a particular crystallographic direction. Also, the particles measured by laser techniques may be composed of several different minerals, and may be agglomerations of individual crystals. Measurement by electron and atomic force microscopy is tedious, expensive, and time consuming, and it is difficult to measure more than a few hundred particles per sample by these methods. This many measurements, often taking several days of intensive effort, may yield an accurate mean size for a sample, but may be too few to determine an accurate distribution of sizes. Measurement of size distributions by X-ray diffraction (XRD) overcomes these shortcomings. An X-ray scan of a sample runs automatically, taking a few minutes to a few hours, and the resulting XRD peaks average diffraction effects from billions of individual nano-size crystals. The size measured by XRD may be related to the size of the individual crystals of the mineral in the sample, rather than to the size of particles formed from the agglomeration of these crystals. Therefore one can determine the size of a particular mineral in a mixture of minerals, and the sizes in a particular crystallographic direction of that mineral.
Sample-Size Effects on the Compression Behavior of a Ni-BASED Amorphous Alloy
Liang, Weizhong; Zhao, Guogang; Wu, Linzhi; Yu, Hongjun; Li, Ming; Zhang, Lin
Ni42Cu5Ti20Zr21.5Al8Si3.5 bulk metallic glass rods with diameters of 1 mm and 3 mm were prepared by arc melting of the constituent elements in a Ti-gettered argon atmosphere. The compressive deformation and fracture behavior of amorphous alloy samples of different sizes were investigated with a testing machine and a scanning electron microscope. The compressive stress-strain curves of the 1 mm and 3 mm samples exhibited 4.5% and 0% plastic strain, while the compressive fracture strengths of the 1 mm and 3 mm rods were 4691 MPa and 2631 MPa, respectively. The compressive fracture surface of each sample size consisted of a shear zone and a non-shear zone. Typical vein patterns with some melted droplets can be seen in the shear region of the 1 mm rod, while fish-bone patterns are observed on the 3 mm specimen surface. Periodic ripples with different spacings exist in the non-shear zones of the 1 mm and 3 mm rods. On the side surface of the 1 mm sample, a high density of shear bands was observed, along with the skipping of shear bands. The mechanisms of the effect of sample size on the fracture strength and plasticity of the Ni-based amorphous alloy are discussed.
Effect of mesh grid size on the accuracy of deterministic VVER-1000 core calculations
International Nuclear Information System (INIS)
Research highlights: → The accuracy of changing the mesh grid size in deterministic core calculations was investigated. → The WIMS and CITATION codes were used in the investigation. → The best results belong to higher numbers of mesh points in the radial and axial directions of the core. - Abstract: Numerical solutions based on the finite-difference method require the domain of the problem to be divided into a number of nodes in the form of triangles, rectangles, and so on. To apply the finite-difference method in reactor physics for solving the diffusion equation with satisfactory accuracy, the distance between adjacent mesh points should be small in comparison with a neutron mean free path. In this regard, the effect of the number of mesh points on the accuracy and computation time has been investigated using the VVER-1000 reactor of Bushehr NPP as an example, utilizing the WIMS and CITATION codes. The best results obtained in this study belong to meshing models with higher numbers of mesh points in both the radial and axial directions of the reactor core.
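The accuracy-versus-mesh-spacing trade-off can be illustrated on a one-dimensional model problem with a known solution (a Poisson equation, not a VVER-1000 core): halving the central-difference mesh spacing reduces the error roughly fourfold, at the cost of a larger linear system.

```python
import math

def solve_poisson(n):
    """Solve -u'' = pi^2*sin(pi*x) on (0,1), u(0)=u(1)=0 (exact u = sin(pi*x)),
    with n interior mesh points, central differences and the Thomas algorithm.
    Returns the computed u at x = 0.5 (use odd n so 0.5 is a mesh point)."""
    h = 1.0 / (n + 1)
    a = [-1.0] * n                                   # sub-diagonal
    b = [2.0] * n                                    # main diagonal
    c = [-1.0] * n                                   # super-diagonal
    d = [h * h * math.pi ** 2 * math.sin(math.pi * (i + 1) * h) for i in range(n)]
    for i in range(1, n):                            # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        d[i] -= w * d[i - 1]
    u = [0.0] * n                                    # back substitution
    u[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / b[i]
    return u[n // 2]
```

The exact midpoint value is 1; comparing solve_poisson(9) and solve_poisson(19) exhibits the second-order error decay that motivates fine meshes relative to the neutron mean free path in the core calculations above.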
Speckle-suppression in hologram calculation using ray-sampling plane.
Utsugi, Takeru; Yamaguchi, Masahiro
2014-07-14
Speckle noise is an important issue in electro-holographic displays. We propose a new method for suppressing speckle noise in a computer-generated hologram (CGH) for 3D display. In our previous research, we proposed a method for CGH calculation using a ray-sampling plane (RS-plane), which enables advanced ray-based rendering techniques to be applied to the calculation of holograms that can reconstruct a deep 3D scene at high resolution. Conventional techniques for effective speckle suppression, which utilize time-multiplexing of sparse object points, can suppress speckle noise at high resolution, but they cannot be applied to CGH calculation using an RS-plane, because a CGH calculated with an RS-plane does not utilize point sources on the object surface. We therefore propose a method that defines point sources from the light-ray information and applies the sparse-point-source speckle suppression technique to CGH calculation using an RS-plane. The validity of the proposed method was verified by numerical simulations.
Grenev, I. V.; Gavrilov, V. Yu.
2014-01-01
Adsorption isotherms of molecular hydrogen are measured at 77 K in a series of AlPO alumophosphate zeolites with different microchannel sizes. The potential of the intermolecular interaction of H2 is calculated within the model of a cylindrical channel of variable size. Henry constants are calculated for this model for arbitrary orientations of the adsorbate molecules in microchannels. The experimental and calculated values of the Henry adsorption constant of H2 are compared at 77 K on AlPO zeolites. The constants of intermolecular interaction are determined for the H2-AlPO system.
Sample-size effects in fast-neutron gamma-ray production measurements: solid-cylinder samples
International Nuclear Information System (INIS)
The effects of geometry, absorption and multiple scattering in (n,Xγ) reaction measurements with solid-cylinder samples are investigated. Both analytical and Monte-Carlo methods are employed in the analysis. Geometric effects are shown to be relatively insignificant except in definition of the scattering angles. However, absorption and multiple-scattering effects are quite important; accurate microscopic differential cross sections can be extracted from experimental data only after a careful determination of corrections for these processes. The results of measurements performed using several natural iron samples (covering a wide range of sizes) confirm validity of the correction procedures described herein. It is concluded that these procedures are reliable whenever sufficiently accurate neutron and photon cross section and angular distribution information is available for the analysis. (13 figures, 5 tables) (auth)
DECISION OF SAMPLE SIZES IN EXPERIMENTS OF TURBULENT FLOW BASED ON WAVELET ANALYSIS
Institute of Scientific and Technical Information of China (English)
DAI Zheng-yuan; GU Chuan-gang; WANG Tong; YANG Bo
2006-01-01
By combining wavelet analysis in both the time and frequency domains with statistical analysis, a method is proposed for deciding sample sizes in experiments on turbulent flows. Using the analysis of the turbulent kinetic energy in a turbulent boundary layer as an example, the method is shown to be practicable.
Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research
Cook, David A.; Hatala, Rose
2015-01-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…
Introduction to Sample Size Choice for Confidence Intervals Based on "t" Statistics
Liu, Xiaofeng Steven; Loudermilk, Brandon; Simpson, Thomas
2014-01-01
Sample size can be chosen to achieve a specified width in a confidence interval. The probability of obtaining a narrow width given that the confidence interval includes the population parameter is defined as the power of the confidence interval, a concept unfamiliar to many practitioners. This article shows how to utilize the Statistical Analysis…
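Choosing n for a specified confidence-interval width requires iterating, since the t quantile itself depends on n. The sketch below is a stdlib-only illustration; the numeric t quantile (Simpson integration plus bisection) is an assumption of this sketch, not a method from the article, which uses the Statistical Analysis System:

```python
import math
from statistics import NormalDist

def t_quantile(p, df):
    """Student-t quantile via numeric integration (Simpson) and bisection.
    Stdlib-only; adequate for sample-size planning, not for extreme tails."""
    k = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    pdf = lambda x: k * (1.0 + x * x / df) ** (-(df + 1) / 2)
    def cdf(x, steps=1000):                     # composite Simpson on [0, x]
        h = x / steps
        s = (pdf(0.0) + pdf(x)
             + 4 * sum(pdf((2 * i - 1) * h) for i in range(1, steps // 2 + 1))
             + 2 * sum(pdf(2 * i * h) for i in range(1, steps // 2)))
        return 0.5 + s * h / 3.0
    lo, hi = 0.0, 50.0
    for _ in range(50):                         # bisection on the monotone CDF
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def n_for_ci_width(sigma, width, conf=0.95):
    """Smallest n whose two-sided t interval for the mean has expected
    full width 2*t*sigma/sqrt(n) <= width."""
    p = 1 - (1 - conf) / 2
    z = NormalDist().inv_cdf(p)
    n = max(2, math.ceil((2 * z * sigma / width) ** 2))   # normal-theory start
    while 2 * t_quantile(p, n - 1) * sigma / math.sqrt(n) > width:
        n += 1
    return n
```

Starting from the normal-theory value and incrementing typically adds only a few subjects, reflecting how the t quantile exceeds the normal quantile at moderate n.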
Gao, Ka; Li, Shuangming; Xu, Lei; Fu, Hengzhi
2014-05-01
Al-40% Cu hypereutectic alloy samples were successfully directionally solidified at a growth rate of 10 μm/s in different sizes (4 mm, 1.8 mm, and 0.45 mm thickness in transverse section). Using the serial sectioning technique, the three-dimensional (3D) microstructure of the primary intermetallic Al2Cu phase of the alloy can be observed, with various growth patterns: L-shaped, E-shaped, and regular rectangular with respect to growth orientations of the (110) and (310) planes. The L-shaped and regular rectangular Al2Cu phases are bounded by {110} facets. When the sample size was reduced from 4 mm to 0.45 mm, the solidified microstructure changed from multi-layer dendrites to a single-layer dendrite along the growth direction, with an orientation texture on the (310) plane. The growth mechanism of the regular faceted intermetallic Al2Cu at different sample sizes is interpreted by the oriented attachment (OA) mechanism. The experimental results showed that directionally solidified Al-40% Cu alloy samples of much smaller size can achieve a well-aligned morphology with a specific growth texture.
The Influence of Virtual Sample Size on Confidence and Causal-Strength Judgments
Liljeholm, Mimi; Cheng, Patricia W.
2009-01-01
The authors investigated whether confidence in causal judgments varies with virtual sample size--the frequency of cases in which the outcome is (a) absent before the introduction of a generative cause or (b) present before the introduction of a preventive cause. Participants were asked to evaluate the influence of various candidate causes on an…
Sample Size Requirements for Assessing Statistical Moments of Simulated Crop Yield Distributions
Lehmann, N.; Finger, R.; Klein, T.; Calanca, P.
2013-01-01
Mechanistic crop growth models are becoming increasingly important in agricultural research and are extensively used in climate change impact assessments. In such studies, statistics of crop yields are usually evaluated without the explicit consideration of sample size requirements. The purpose of t
Kelley, Ken
2007-11-01
The accuracy in parameter estimation approach to sample size planning is developed for the coefficient of variation, where the goal of the method is to obtain an accurate parameter estimate by achieving a sufficiently narrow confidence interval. The first method allows researchers to plan sample size so that the expected width of the confidence interval for the population coefficient of variation is sufficiently narrow. A modification allows a desired degree of assurance to be incorporated into the method, so that the obtained confidence interval will be sufficiently narrow with some specified probability (e.g., 85% assurance that the width of the 95% confidence interval will be no wider than the specified number of units). Tables of necessary sample size are provided for a variety of scenarios, to help researchers planning a study where the coefficient of variation is of interest choose an appropriate sample size in order to have a sufficiently narrow confidence interval, optionally with some specified assurance that the confidence interval is sufficiently narrow. Freely available computer routines have been developed that allow researchers to easily implement all of the methods discussed in the article.
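A rough normal-theory version of this planning problem can be sketched from the large-sample variance of the sample coefficient of variation, cv^2*(0.5 + cv^2)/n. This is an approximation for illustration only, not Kelley's exact method:

```python
from math import ceil
from statistics import NormalDist

def n_for_cv_width(cv, omega, conf=0.95):
    """Approximate n so the CI for the coefficient of variation has expected
    full width <= omega, using the large-sample (normal-data) variance
    cv^2 * (0.5 + cv^2) / n of the sample CV. Illustrative sketch only."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil((2 * z / omega) ** 2 * cv ** 2 * (0.5 + cv ** 2))
```

The required n grows quickly with the CV itself, which mirrors the pattern in the article's tables: noisier measures need substantially larger samples for the same interval width.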
Support vector regression to predict porosity and permeability: Effect of sample size
Al-Anazi, A. F.; Gates, I. D.
2012-02-01
Porosity and permeability are key petrophysical parameters obtained from laboratory core analysis. Cores, obtained from drilled wells, are often few in number for most oil and gas fields. Porosity and permeability correlations based on conventional techniques such as linear regression or neural networks trained with core and geophysical logs suffer poor generalization to wells with only geophysical logs. The generalization problem of correlation models often becomes pronounced when the training sample size is small. This is attributed to the underlying assumption that conventional techniques employing the empirical risk minimization (ERM) inductive principle converge asymptotically to the true risk values as the number of samples increases. In small sample size estimation problems, the available training samples must span the complexity of the parameter space so that the model is able both to match the available training samples reasonably well and to generalize to new data. This is achieved using the structural risk minimization (SRM) inductive principle by matching the capability of the model to the available training data. One method that uses SRM is the support vector regression (SVR) network. In this research, the capability of SVR to predict porosity and permeability in a heterogeneous sandstone reservoir under the effect of small sample size is evaluated. Particularly, the impact of Vapnik's ɛ-insensitivity loss function and least-modulus loss function on generalization performance was empirically investigated. The results are compared to the multilayer perceptron (MLP) neural network, a widely used regression method, which operates under the ERM principle. The mean square error and correlation coefficients were used to measure the quality of predictions. The results demonstrate that SVR yields consistently better predictions of the porosity and permeability with small sample size than the MLP method. Also, the performance of SVR depends on both kernel function
Nagaya, Yasunobu
2014-06-01
The methods to calculate the kinetics parameters of βeff and Λ with the differential operator sampling have been reviewed. The comparison of the results obtained with the differential operator sampling and iterated fission probability approaches has been performed. It is shown that the differential operator sampling approach gives the same results as the iterated fission probability approach within the statistical uncertainty. In addition, the prediction accuracy of the evaluated nuclear data library JENDL-4.0 for the measured βeff/Λ and βeff values is also examined. It is shown that JENDL-4.0 gives a good prediction except for the uranium-233 systems. The present results imply the need for revisiting the uranium-233 nuclear data evaluation and performing the detailed sensitivity analysis.
Forestry inventory based on multistage sampling with probability proportional to size
Lee, D. C. L.; Hernandez, P., Jr.; Shimabukuro, Y. E.
1983-01-01
A multistage sampling technique, with probability proportional to size, is developed for a forest volume inventory using remote sensing data. The LANDSAT data, Panchromatic aerial photographs, and field data are collected. Based on age and homogeneity, pine and eucalyptus classes are identified. Selection of tertiary sampling units is made through aerial photographs to minimize field work. The sampling errors for eucalyptus and pine ranged from 8.34 to 21.89 percent and from 7.18 to 8.60 percent, respectively.
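Probability-proportional-to-size selection and a companion unbiased estimator can be sketched as follows (the cumulative-total method with replacement; the Hansen-Hurwitz estimator shown is a standard partner of PPS sampling, not necessarily the exact estimator used in this inventory):

```python
import random
from itertools import accumulate
from bisect import bisect_left

def pps_sample(units, sizes, n, rng):
    """Draw n units with probability proportional to size, with replacement
    (cumulative-total method): a uniform draw on [0, total] is mapped to the
    unit whose cumulative-size interval contains it."""
    cum = list(accumulate(sizes))
    return [units[bisect_left(cum, rng.uniform(0, cum[-1]))] for _ in range(n)]

def hansen_hurwitz_total(y_drawn, sizes_drawn, total_size):
    """Unbiased (Hansen-Hurwitz) estimate of the population total:
    mean of y_i / p_i, with p_i = size_i / total_size."""
    return sum(y * total_size / s for y, s in zip(y_drawn, sizes_drawn)) / len(y_drawn)
```

When the survey variable is nearly proportional to the size measure, as forest volume is to stand area, each draw's y/p term is close to the true total, which is why PPS designs achieve the low sampling errors reported above.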
Directory of Open Access Journals (Sweden)
John M Lachin
Preservation of β-cell function, as measured by stimulated C-peptide, has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to determine more accurately the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions of commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects, allowing sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference between treatment and control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessing sample size for mixtures of subjects among the age categories. Statistical expressions are presented for reporting analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to
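Reporting a log(x+1)-scale analysis in original units, as the abstract describes, amounts to back-transforming the mean and confidence limits with exp(u) - 1. A minimal sketch (the normal-quantile interval is an illustrative simplification of the article's expressions):

```python
import math
from statistics import mean, stdev

def log1p_summary(values, z=1.96):
    """Point estimate and CI computed on the log(x+1) scale, reported back
    in original units (e.g. pmol/ml) via the inverse transform exp(u) - 1."""
    t = [math.log(v + 1.0) for v in values]
    m, s = mean(t), stdev(t)
    half = z * s / math.sqrt(len(t))
    back = lambda u: math.exp(u) - 1.0
    return back(m), (back(m - half), back(m + half))
```

Note that the back-transformed center is a geometric-type mean, not the arithmetic mean of the raw values, and the interval is asymmetric in original units, which is exactly why explicit reporting expressions are needed.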
Evaluating the performance of species richness estimators: sensitivity to sample grain size
DEFF Research Database (Denmark)
Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara
2006-01-01
Data obtained with standardized sampling of 78 transects in natural forest remnants of five islands were aggregated in seven different grains (i.e. ways of defining a single sample): islands, natural areas, transects, pairs of traps, traps, database records and individuals, to assess the effect of using different sampling units on species richness estimations. 2. Estimated species richness scores depended both on the estimator considered and on the grain size used to aggregate data. However, several estimators (ACE, Chao1, Jackknife1 and 2 and Bootstrap) were precise in spite of grain variations. Weibull and several recent estimators [proposed by Rosenzweig et al. (Conservation Biology, 2003, 17, 864-874) and Ugland et al. (Journal of Animal Ecology, 2003, 72, 888-897)] performed poorly. 3. Estimations developed using the smaller grain sizes (pair of traps, traps, records and individuals) presented similar…
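Two of the estimators the study found precise, Chao1 and the first-order jackknife, are short formulas over abundance and incidence data respectively. A stdlib sketch (how the abundance counts and sample units are built from a given grain is precisely the aggregation choice the study examines):

```python
from collections import Counter

def chao1(counts):
    """Chao1 richness: S_obs + f1^2/(2*f2), with the bias-corrected form
    when f2 == 0. 'counts' holds individuals per observed species."""
    f = Counter(counts)
    s_obs, f1, f2 = len(counts), f[1], f[2]
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

def jackknife1(sample_units):
    """First-order jackknife from incidence data (one species set per sample
    unit): S_obs + q1*(m-1)/m, q1 = species occurring in exactly one unit."""
    m = len(sample_units)
    species = set().union(*sample_units)
    q1 = sum(1 for sp in species if sum(sp in u for u in sample_units) == 1)
    return len(species) + q1 * (m - 1) / m
```

Re-running such estimators with the same records aggregated at different grains (traps, transects, islands) reproduces the kind of grain-sensitivity comparison reported above.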
Size selective isocyanate aerosols personal air sampling using porous plastic foams
Khanh Huynh, Cong; Duc, Trinh Vu
2009-02-01
As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of a European personal sampler for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane plastic foam (PUF) whose porosity provides both the sampling substrate and the size separation of the particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.
Enzymatic Kinetic Isotope Effects from First-Principles Path Sampling Calculations.
Varga, Matthew J; Schwartz, Steven D
2016-04-12
In this study, we develop and test a method to determine the rate of particle transfer and kinetic isotope effects in enzymatic reactions, specifically yeast alcohol dehydrogenase (YADH), from first-principles. Transition path sampling (TPS) and normal mode centroid dynamics (CMD) are used to simulate these enzymatic reactions without knowledge of their reaction coordinates and with the inclusion of quantum effects, such as zero-point energy and tunneling, on the transferring particle. Though previous studies have used TPS to calculate reaction rate constants in various model and real systems, it has not been applied to a system as large as YADH. The calculated primary H/D kinetic isotope effect agrees with previously reported experimental results, within experimental error. The kinetic isotope effects calculated with this method correspond to the kinetic isotope effect of the transfer event itself. The results reported here show that the kinetic isotope effects calculated from first-principles, purely for barrier passage, can be used to predict experimental kinetic isotope effects in enzymatic systems.
Calculation of coincidence summing corrections for a specific small soil sample geometry
Energy Technology Data Exchange (ETDEWEB)
Helmer, R.G.; Gehrke, R.J.
1996-10-01
Previously, a system was developed at the INEL for measuring the γ-ray emitting nuclides in small soil samples for the purpose of environmental monitoring. These samples were counted close to a ~20% Ge detector and, therefore, it was necessary to take into account the coincidence summing that occurs for some nuclides. In order to improve the technical basis for the coincidence summing corrections, the authors have carried out a study of the variation in the coincidence summing probability with position within the sample volume. A Monte Carlo electron and photon transport code (CYLTRAN) was used to compute peak and total efficiencies for various photon energies from 30 to 2,000 keV at 30 points throughout the sample volume. The geometry for these calculations included the various components of the detector and source along with the shielding. The associated coincidence summing corrections were computed at these 30 positions in the sample volume and then averaged for the whole source. The influence of the soil and the detector shielding on the efficiencies was investigated.
Energy Technology Data Exchange (ETDEWEB)
Nagy, Tibor; Vikár, Anna; Lendvay, György, E-mail: lendvay.gyorgy@ttk.mta.hu [Institute of Materials and Environmental Chemistry, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok körútja 2., H-1117 Budapest (Hungary)
2016-01-07
The quasiclassical trajectory (QCT) method is an efficient and important tool for studying the dynamics of bimolecular reactions. In this method, the motion of the atoms is simulated classically, and the only quantum effect considered is that the initial vibrational states of reactant molecules are semiclassically quantized. A sensible expectation is that the initial ensemble of classical molecular states generated this way should be stationary, similarly to the quantum state it is supposed to represent. The most widely used method for sampling the vibrational phase space of polyatomic molecules is based on the normal mode approximation. In the present work, it is demonstrated that normal mode sampling provides a nonstationary ensemble even for a simple molecule like methane, because real potential energy surfaces are anharmonic in the reactant domain. The consequences were investigated for the reaction CH₄ + H → CH₃ + H₂ and its various isotopologs and were found to be dramatic. Reaction probabilities and cross sections obtained from QCT calculations oscillate periodically as a function of the initial distance of the colliding partners and the excitation functions are erratic. The reason is that in the nonstationary ensemble of initial states, the mean bond length of the breaking C–H bond oscillates in time with the frequency of the symmetric stretch mode. We propose a simple method, one-period averaging, in which reactivity parameters are calculated by averaging over an entire period of the mean C–H bond length oscillation, which removes the observed artifacts and provides the physically most reasonable reaction probabilities and cross sections when the initial conditions for QCT calculations are generated by normal mode sampling.
Effect of sample size on the fluid flow through a single fractured granitoid
Institute of Scientific and Technical Information of China (English)
Kunal Kumar Singh; Devendra Narain Singh; Ranjith Pathegama Gamage
2016-01-01
Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, and multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures, along with the behavior of the fluid flow through them at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress–permeability relationship, and it has been demonstrated that permeability of the fractured rock mass decreases with an increase in sample size. However, most researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameter ranging between 38 mm and 45 cm and a diameter-to-length ratio of 1:2, using different experimental methods. Also, the confining stress, σ3, has been considered to be less than 30 MPa, and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on “laboratory simulation of flow through single fractured granite” was conducted, in which consistent fluid flow experiments were performed on cylindrical samples of granitoids of two different sizes (38 mm and 54 mm in diameters), containing a “rough walled single fracture”. These experiments were performed under varied confining pressure (σ3 = 5–40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff., and Q decreases with an increase in σeff.. Also, the effects of sample size and fracture roughness do not persist when σeff. ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory scale investigations to in-situ scale and
Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence
Energy Technology Data Exchange (ETDEWEB)
Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2013-07-01
Analytical portions used in chemical analyses are usually less than 1 g. Errors arising from sampling are rarely evaluated, since such studies are time-consuming and the chemical analysis of a large number of samples is costly. Energy dispersion X-ray fluorescence (EDXRF) is a fast, non-destructive analytical technique capable of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg of each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV and 50 kV for chemical elements of atomic number lower than 22 and for the others, respectively. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)
A Complete Sample of Megaparsec Size Double Radio Sources from SUMSS
Saripalli, L; Subramanian, R; Boyce, E
2005-01-01
We present a complete sample of megaparsec-size double radio sources compiled from the Sydney University Molonglo Sky Survey (SUMSS). Almost complete redshift information has been obtained for the sample. The sample has the following defining criteria: Galactic latitude |b| > 12.5 deg, declination 5 arcmin. All the sources have projected linear size larger than 0.7 Mpc (assuming H_o = 71 km/s/Mpc). The sample is chosen from a region of the sky covering 2100 square degrees. In this paper, we present 843-MHz radio images of the extended radio morphologies made using the Molonglo Observatory Synthesis Telescope (MOST), higher resolution radio observations of any compact radio structures using the Australia Telescope Compact Array (ATCA), and low resolution optical spectra of the host galaxies from the 2.3-m Australian National University (ANU) telescope at Siding Spring Observatory. The sample presented here is the first in the southern hemisphere and significantly enhances the database of known giant radio sou...
Sample size for estimating the mean concentration of organisms in ballast water.
Costa, Eliardo G; Lopes, Rubens M; Singer, Julio M
2016-09-15
We consider the computation of sample sizes for estimating the mean concentration of organisms in ballast water. Given the possible heterogeneity of their distribution in the tank, we adopt a negative binomial model to obtain confidence intervals for the mean concentration. We show that the results obtained by Chen and Chen (2012) in a different set-up hold for the proposed model and use them to develop algorithms to compute sample sizes both in cases where the mean concentration is known to lie in some bounded interval or where there is no information about its range. We also construct simple diagrams that may be easily employed to decide for compliance with the D-2 regulation of the International Maritime Organization (IMO). PMID:27266648
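The sample-size idea behind such compliance decisions can be sketched with a normal-approximation confidence interval under a negative binomial model (a simplification of the authors' algorithms; the function name, dispersion parametrization and numbers below are illustrative assumptions, not the paper's procedure):

```python
import math
from statistics import NormalDist

def nb_sample_size(mean, k, half_width, conf=0.95):
    # Under a negative binomial model with dispersion k, Var(X) = m + m^2/k.
    # Choose n so a normal-approximation CI for the mean concentration has
    # the requested half-width.
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    var = mean + mean ** 2 / k
    return math.ceil(z ** 2 * var / half_width ** 2)

# e.g. mean 10 organisms/mL, dispersion k = 2, CI half-width of 2 organisms/mL
n = nb_sample_size(10.0, 2.0, 2.0)
```

Tighter intervals or stronger overdispersion (smaller k) drive the required number of aliquots up, which is the qualitative behaviour the diagrams in the paper summarize.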
DEFF Research Database (Denmark)
Andreasen, Jo Bønding; Pistor-Riebold, Thea Unger; Knudsen, Ingrid Hell;
2014-01-01
and compared three blood sampling tubes of different size: 1.8, 2.7, and 3.6 mL. All tubes were made of plastic and contained 3.2% sodium-citrate as anticoagulant. Platelet aggregation was investigated in 12 healthy individuals employing the Multiplate® Analyser comparing tubes of 3.6 mL and 1.8 mL. Platelet...... be preferred for RoTEM® analyses in order to minimise the volume of blood drawn. With regard to platelet aggregation analysed by impedance aggregometry tubes of different size cannot be used interchangeably. If platelet count is determined later than 10 min after blood sampling using tubes containing citrate...
International Nuclear Information System (INIS)
The purpose of this study is to determine the concentration of depleted uranium in dental filling samples obtained from hospitals and dental material suppliers in Iraq. Eight samples of two different filling types, amalgam (lead-based) and composite (plastic), were examined. Concentrations of depleted uranium were determined in these samples using the CR-39 nuclear track detector, by recording the tracks left by fission fragments from the ²³⁸U(n,f) reaction. The samples were bombarded by neutrons emitted from an (²⁴¹Am–Be) neutron source with a flux of 10⁵ n·cm⁻²·s⁻¹. Etching to reveal the fission-fragment tracks was carried out for 5 hours in 6.25 N NaOH solution at 60 °C. Concentrations of depleted uranium were calculated by comparison with standard samples. The results showed that the weighted average uranium concentration was (5.54 ± 1.05) ppm for the amalgam fillings and (5.33 ± 0.6) ppm for the composite (plastic) fillings. The hazard index, the absorbed dose and the effective dose for these concentrations were determined. The effective dose to the bone surface and skin (the areas most affected by these fillings) was 0.56 mSv/y for the amalgam fillings and 0.54 mSv/y for the composite (plastic) fillings. The highest effective dose, 0.68 mSv/y for an amalgam filling specimen, is below the limit of 1 mSv/y set by the World Health Organization (WHO) for exposure of the general public. (Author)
Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size
Zhihua Wang; Yongbo Zhang; Huimin Fu
2014-01-01
Reasonable prediction makes significant practical sense to stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with rolling mechanism is proposed. In the modeling procedure, a new developed AR equation, which can be used to model nonstationary time series, is constructed in each predictio...
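The rolling mechanism for 1-step-ahead prediction can be sketched with an AR(1) refit on a sliding window (the paper develops a more general AR equation for nonstationary series; this toy version and its function names are only illustrative):

```python
def ar1_fit(series):
    # Closed-form OLS fit of x_t = c + phi * x_{t-1} + e_t.
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    phi = sxy / sxx
    return my - phi * mx, phi  # intercept, slope

def rolling_ar1_forecast(series, window):
    # 1-step-ahead forecasts: refit on the most recent `window` observations
    # at each step (the rolling mechanism), then predict the next value.
    preds = []
    for t in range(window, len(series)):
        c, phi = ar1_fit(series[t - window:t])
        preds.append(c + phi * series[t - 1])
    return preds
```

Because the model is refit as the window rolls forward, old observations drop out and the forecaster tracks slow drift, which is the point of combining the rolling idea with small samples.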
Breaking Free of Sample Size Dogma to Perform Innovative Translational Research
Bacchetti, Peter; Steven G Deeks; McCune, Joseph M.
2011-01-01
Innovative clinical and translational research is often delayed or prevented by reviewers’ expectations that any study performed in humans must be shown in advance to have high statistical power. This supposed requirement is not justifiable and is contradicted by the reality that increasing sample size produces diminishing marginal returns. Studies of new ideas often must start small (sometimes even with an N of 1) because of cost and feasibility concerns, and recent statistical work shows th...
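The diminishing-marginal-returns claim is easy to verify numerically with a normal-approximation power curve for a two-sample comparison (a generic illustration of the statistical point, not the authors' analysis; names and numbers are assumptions):

```python
from statistics import NormalDist

def approx_power(n_per_group, effect_size, alpha=0.05):
    # Normal-approximation power of a two-sided two-sample test for a
    # standardized effect size, with n subjects per group.
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    noncentrality = effect_size * (n_per_group / 2) ** 0.5
    return nd.cdf(noncentrality - z_alpha)

# power gained from 10 extra subjects per group, at increasing n
gains = [approx_power(n + 10, 0.5) - approx_power(n, 0.5)
         for n in (20, 40, 80, 160)]
```

Each additional block of 10 subjects buys less power than the previous one, which is the "diminishing marginal returns" the abstract refers to.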
Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations
Guillermo Macbeth; Eugenia Razumiejczyk; Rubén Daniel Ledesma
2011-01-01
The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-values interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calcul...
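Cliff's delta itself is straightforward to compute; a brute-force sketch over all pairs (an illustration of the statistic's definition, not the program's own code):

```python
def cliffs_delta(xs, ys):
    # Cliff's delta: P(X > Y) - P(X < Y), estimated over all pairs;
    # ranges from -1 (all xs below all ys) to +1 (all xs above all ys).
    gt = sum(1 for x in xs for y in ys if x > y)
    lt = sum(1 for x in xs for y in ys if x < y)
    return (gt - lt) / (len(xs) * len(ys))
```

Values near 0 indicate heavy overlap between the two groups; the measure is invariant under monotone transformations of the data, which is why it suits non-parametric settings.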
Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.
2011-01-01
Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework. Dosimetric evaluations against Monte Carlo dose calculations are conducted on 10 IMRT treatment plans (5 head-and-neck cases and 5 lung cases). For all cases, there i...
A contemporary decennial global Landsat sample of changing agricultural field sizes
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite based agricultural applications are less reliable when the sensor spatial resolution is small relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and impacts on the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provide the longest record of global land observations, with 30m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by
A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies
Directory of Open Access Journals (Sweden)
Hojin Moon
2002-12-01
Full Text Available A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors for which the time to tumor onset is not directly observable. It is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying sample size per group, number of sacrifices, number of sacrificed animals at each interval, if any, and scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. It takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background. It communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
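The core of such a Monte Carlo power simulator can be sketched with binomial tumor counts per dose group and a Cochran–Armitage trend statistic (a deliberate simplification: the real tool models occult-tumor onset times, competing risks and sacrifice schedules, and the group probabilities below are illustrative assumptions):

```python
import math
import random

def trend_z(events, totals, doses):
    # Cochran-Armitage statistic for a dose-related trend in proportions.
    N, R = sum(totals), sum(events)
    pbar = R / N
    num = sum(d * (r - n * pbar) for d, r, n in zip(doses, events, totals))
    var = pbar * (1 - pbar) * (
        sum(n * d * d for n, d in zip(totals, doses))
        - sum(n * d for n, d in zip(totals, doses)) ** 2 / N)
    return num / math.sqrt(var)

def simulated_power(n_per_group, tumor_probs, doses,
                    n_sim=2000, z_crit=1.645, seed=1):
    # Monte Carlo power: draw tumor counts for each simulated experiment and
    # count the fraction with a significant one-sided trend test.
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sim):
        events = [sum(rng.random() < p for _ in range(n_per_group))
                  for p in tumor_probs]
        if 0 < sum(events) < n_per_group * len(tumor_probs):  # test defined
            if trend_z(events, [n_per_group] * len(doses), doses) > z_crit:
                hits += 1
    return hits / n_sim
```

Varying `n_per_group` until the simulated power reaches the target (e.g. 0.8) is the same search the Web tool automates for the richer occult-tumor model.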
Jiang, Shengyu; Wang, Chun; Weiss, David J
2016-01-01
Likert types of rating scales in which a respondent chooses a response from an ordered set of response options are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square-error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items when 1000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1000 did not increase the accuracy of MGRM parameter estimates. PMID:26903916
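The recovery criteria used here, correlation, bias and root-mean-square-error between generating and estimated parameters, can be sketched directly (generic formulas; this is not flexMIRT output):

```python
def recovery_stats(true_vals, est_vals):
    # Correlation, bias, and RMSE between generating ("true") parameters and
    # their estimates, the three summaries used to judge parameter recovery.
    n = len(true_vals)
    mt = sum(true_vals) / n
    me = sum(est_vals) / n
    cov = sum((t - mt) * (e - me) for t, e in zip(true_vals, est_vals)) / n
    st = (sum((t - mt) ** 2 for t in true_vals) / n) ** 0.5
    se = (sum((e - me) ** 2 for e in est_vals) / n) ** 0.5
    corr = cov / (st * se)
    bias = sum(e - t for t, e in zip(true_vals, est_vals)) / n
    rmse = (sum((e - t) ** 2 for t, e in zip(true_vals, est_vals)) / n) ** 0.5
    return corr, bias, rmse
```

Note that a high correlation alone can mask a systematic offset, which is why bias and RMSE are reported alongside it.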
Directory of Open Access Journals (Sweden)
Shengyu eJiang
2016-02-01
Full Text Available Likert types of rating scales in which a respondent chooses a response from an ordered set of response options are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as bias and root-mean-square-error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items when 1,000 examinees were necessary to obtain accurate parameter estimates. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
Estimating the Size of a Large Network and its Communities from a Random Sample
Chen, Lin; Crawford, Forrest W
2016-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V;E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...
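The idea underlying such estimators can be illustrated with a naive moment estimator (explicitly not the paper's PULSE algorithm, which additionally exploits the block structure; the function name and reasoning are an assumed simplification): for a uniformly sampled subset W, each neighbour of a sampled vertex lies in W with probability roughly (|W| − 1)/(N − 1), so the ratio of induced to total degree estimates that fraction.

```python
def estimate_n(n_sampled, total_degrees, induced_degrees):
    # Naive moment estimator of the number of vertices N:
    # sum(induced) / sum(total) ~ (|W| - 1) / (N - 1) for a uniform sample W.
    ratio = sum(induced_degrees) / sum(total_degrees)
    return 1 + (n_sampled - 1) / ratio
```

On a complete graph the estimator is exact: sampling 10 vertices of K_100 gives total degree 99 and induced degree 9 for every sampled vertex, and the formula returns 100.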
Ge, Yunfei; Zhang, Yuan; Booth, Jamie A.; Weaver, Jonathan M. R.; Dobson, Phillip S.
2016-08-01
We report a method for quantifying scanning thermal microscopy (SThM) probe–sample thermal interactions in air using a novel temperature calibration device. This new device has been designed, fabricated and characterised using SThM to provide an accurate and spatially variable temperature distribution that can be used as a temperature reference due to its unique design. The device was characterised by means of a microfabricated SThM probe operating in passive mode. This data was interpreted using a heat transfer model, built to describe the thermal interactions during a SThM thermal scan. This permitted the thermal contact resistance between the SThM tip and the device to be determined as 8.33 × 10⁵ K W⁻¹. It also permitted the probe–sample contact radius to be clarified as being the same size as the probe’s tip radius of curvature. Finally, the data were used in the construction of a lumped-system steady state model for the SThM probe and its potential applications were addressed.
Energy Technology Data Exchange (ETDEWEB)
Kim, Chung Ho; O, Joo Hyun; Chung, Yong An; Yoo, Le Ryung; Sohn, Hyung Sun; Kim, Sung Hoon; Chung, Soo Kyo; Lee, Hyoung Koo [Catholic University of Korea, Seoul (Korea, Republic of)
2006-02-15
To determine the appropriate sampling frequency and timing of the multiple blood sampling dual exponential method with ⁹⁹ᵐTc-DTPA for calculating glomerular filtration rate (GFR), thirty-four patients were included in this study. Three mCi of ⁹⁹ᵐTc-DTPA was injected intravenously and blood samples, 5 mL each, were drawn at 9 different time points. Using the serum radioactivity measured by a gamma counter, the GFR was calculated by the dual exponential method and corrected for body surface area. Using various combinations of 2 serum radioactivity data points, 15 sets of 2-sample GFR values were calculated; 10 sets of 3-sample GFR and 12 sets of 4-sample GFR values were also calculated. Using the 9-sample GFR as the reference value, the degree of agreement was analyzed with Kendall's τ correlation coefficients, mean difference and standard deviation. Although some of the 2-sample GFR values showed high correlation coefficients, over- or underestimation evolved as renal function changed. The 10-120-240 min 3-sample GFR showed a high correlation coefficient (τ = 0.93), minimal difference (mean ± SD = −1.784 ± 3.972), and no over- or underestimation as renal function changed. The 4-sample GFR showed no better accuracy than the 3-sample GFR. Over the wide spectrum of renal function, the 10-120-240 min 3-sample GFR could be the best choice for estimating the patients' renal function.
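The dual exponential fit can be sketched with classical curve stripping: fit the slow phase to the late samples by log-linear regression, strip it from the early samples, fit the fast phase to the residuals, then take clearance = dose / AUC with AUC = A/a + B/b (the sample times, the early/late split and all names below are illustrative assumptions; the study's software and body-surface-area correction are not reproduced):

```python
import math

def loglinear_fit(ts, cs):
    # Fit c(t) = C0 * exp(-k t) by least squares on log(c).
    n = len(ts)
    ys = [math.log(c) for c in cs]
    mt, my = sum(ts) / n, sum(ys) / n
    k = -sum((t - mt) * (y - my) for t, y in zip(ts, ys)) \
        / sum((t - mt) ** 2 for t in ts)
    return math.exp(my + k * mt), k  # C0, k

def clearance_dual_exponential(times, concs, dose, n_late=4):
    # Slow component from the late samples...
    B, b = loglinear_fit(times[-n_late:], concs[-n_late:])
    # ...then fast component from the slow-stripped early residuals.
    early_t = times[:-n_late]
    resid = [c - B * math.exp(-b * t) for t, c in zip(early_t, concs)]
    A, a = loglinear_fit(early_t, resid)
    return dose / (A / a + B / b)  # dose over area under the curve

# synthetic check: c(t) = 50 e^(-0.2 t) + 10 e^(-0.01 t), so AUC = 1250
t = [5, 10, 15, 20, 120, 180, 240, 300]
c = [50 * math.exp(-0.2 * ti) + 10 * math.exp(-0.01 * ti) for ti in t]
gfr = clearance_dual_exponential(t, c, dose=125000.0)
```

With fewer sampling points, as in the 2-, 3- and 4-sample schemes studied, the two exponentials are no longer fit but approximated, which is where the choice of sampling times becomes critical.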
SAMPLE SIZE DETERMINATION IN NON-RANDOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA
Directory of Open Access Journals (Sweden)
S FAGHIHZADEH
2003-06-01
Full Text Available Introduction: In survival analysis, determination of a sufficient sample size to achieve suitable statistical power is important. In both parametric and non-parametric methods of classic statistics, random selection of samples is a basic condition. In practice, random allocation is impossible in most clinical trials and health surveys. Fixed-effect multiple linear regression analysis covers this need, and this feature can be extended to survival regression analysis. This paper presents sample size determination in non-randomized survival analysis with censored and non-censored data. Methods: In non-randomized survival studies, linear regression with a fixed-effect variable can be used. Such a regression is in fact the conditional expectation of the dependent variable, conditioned on the independent variable. A likelihood function with exponential hazard was constructed by considering a binary variable for allocation of each subject to one of the two comparison groups. By stating the variance of the coefficient of the fixed-effect independent variable in terms of the determination coefficient, sample size determination formulas were obtained for both censored and non-censored data. Thus the estimation of sample size is not based on the relation of a single independent variable, but can attain the required power for a test adjusted for the effects of the other explanatory covariates. Since the asymptotic distribution of the likelihood estimator of the parameter is normal, we obtained the formula for the variance of the regression coefficient estimator; then, by stating the variance of the regression coefficient of the fixed-effect variable in terms of the determination coefficient, we derived formulas for determining sample size with both censored and non-censored data. Results: In non-randomized survival analysis, to compare the hazard rates of two groups without censored data, we obtained an estimate of the determination coefficient, the risk ratio, the proportion of membership in each group and their variances from
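A standard closed-form analogue of this covariate adjustment is the Hsieh–Lavori-style inflation of the required number of events by 1/(1 − R²), where R² is the determination coefficient of the variable of interest on the other covariates (a related textbook formula offered for illustration, not the paper's own derivation; names and defaults are assumptions):

```python
import math
from statistics import NormalDist

def events_required(log_hr, sd_x, r2, alpha=0.05, power=0.80):
    # Events needed to detect log hazard ratio `log_hr` per unit of a
    # covariate with standard deviation `sd_x`, inflated by 1/(1 - R^2)
    # for its correlation with the other covariates in the model.
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2) + nd.inv_cdf(power)
    return math.ceil(z ** 2 / (log_hr ** 2 * sd_x ** 2 * (1 - r2)))

def subjects_required(log_hr, sd_x, r2, event_prob, **kw):
    # Total sample size = required events / probability of an uncensored event.
    return math.ceil(events_required(log_hr, sd_x, r2, **kw) / event_prob)
```

The 1/(1 − R²) factor is what makes the non-randomized (correlated-covariate) setting demand more events than a randomized comparison of the same effect size.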
Using a Divided Bar Apparatus to Measure Thermal Conductivity of Samples of Odd Sizes and Shapes
Crowell, J.; Gosnold, W. D.
2012-12-01
Standard procedure for measuring thermal conductivity using a divided bar apparatus requires a sample that has the same surface dimensions as the heat sink/source surface in the divided bar. Heat flow is assumed to be constant throughout the column, and thermal conductivity (K) is determined by measuring temperatures (T) across the sample and across standard layers and using the basic relationship Ksample = (Kstandard × (ΔT1 + ΔT2)/2)/ΔTsample. Sometimes samples are not large enough or of the correct proportions to match the surface of the heat sink/source; however, using the equations presented here, the thermal conductivity of these samples can still be measured with a divided bar. Measurements were done on the UND Geothermal Laboratories stationary divided bar apparatus (SDB). This SDB has been designed to mimic many in-situ conditions, with a temperature range of −20 °C to 150 °C and a pressure range of 0 to 10,000 psi for samples with parallel surfaces and 0 to 3,000 psi for samples with non-parallel surfaces. The heat sink/source surfaces are copper disks with a surface area of 1,772 mm² (2.74 in²). Layers of polycarbonate 6 mm thick with the same surface area as the copper disks are located in the heat sink and in the heat source as standards. For this study, all samples were prepared from a single piece of 4 inch limestone core. Thermal conductivities were measured for each sample as it was cut successively smaller. The above equation was adjusted to include the thicknesses (Th) of the samples and the standards and the surface areas (A) of the heat sink/source and of the sample: Ksample = (Kstandard × Astandard × Thsample × (ΔT1 + ΔT3))/(ΔTsample × Asample × 2 × Thstandard). Measuring the thermal conductivity of samples of multiple sizes, shapes, and thicknesses gave consistent values for samples with surfaces as small as 50% of the heat sink/source surface, regardless of the shape of the sample. Measuring samples with surfaces smaller than 50% of the heat sink/source surface
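The adjusted divided-bar relationship translates directly into code (a plain transcription of the equation in the abstract; parameter names are chosen for readability and units need only be mutually consistent):

```python
def k_sample(k_std, a_std, th_std, a_samp, th_samp, dT1, dT3, dT_samp):
    # Adjusted divided-bar equation: sample conductivity from the temperature
    # drops across the two standard layers (dT1, dT3) and across the sample
    # (dT_samp), corrected for mismatched areas (A) and thicknesses (Th).
    return (k_std * a_std * th_samp * (dT1 + dT3)) / (dT_samp * a_samp * 2 * th_std)
```

Two sanity checks follow from the formula: when the sample geometry matches the standards and all drops are equal it reduces to the standard's conductivity, and halving the sample area doubles the inferred conductivity for the same measured drops.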
Forest inventory using multistage sampling with probability proportional to size. [Brazil
Parada, N. D. J. (Principal Investigator); Lee, D. C. L.; Hernandezfilho, P.; Shimabukuro, Y. E.; Deassis, O. R.; Demedeiros, J. S.
1984-01-01
A multistage sampling technique, with probability proportional to size, for forest volume inventory using remote sensing data is developed and evaluated. The study area is located in southeastern Brazil. LANDSAT 4 digital data of the study area are used in the first stage for automatic classification of reforested areas. Four classes of pine and eucalypt with different tree volumes are classified utilizing a maximum likelihood classification algorithm. Color infrared aerial photographs are utilized in the second stage of sampling. In the third stage (ground level), the timber volume of each class is determined. The total timber volume of each class is expanded through a statistical procedure taking into account all three stages of sampling. This procedure results in an accurate timber volume estimate with a smaller number of aerial photographs and reduced time in field work.
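The expansion step in probability-proportional-to-size sampling can be sketched with the Hansen–Hurwitz estimator (a standard PPS-with-replacement expansion used here for illustration; the abstract does not spell out the study's exact multistage formulas):

```python
def hansen_hurwitz_total(sample_values, selection_probs):
    # PPS expansion: each sampled unit's value is inflated by 1/p_i, and the
    # inflated values are averaged to estimate the population total.
    n = len(sample_values)
    return sum(y / p for y, p in zip(sample_values, selection_probs)) / n
```

When the size measure used for selection is proportional to the target variable, every inflated value equals the true total and the estimator has zero variance, which is why PPS with a good size measure (such as classified area or volume strata) is efficient.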
Relative power and sample size analysis on gene expression profiling data
Directory of Open Access Journals (Sweden)
den Dunnen JT
2009-09-01
Full Text Available Abstract Background With the increasing number of expression profiling technologies, researchers today are confronted with choosing the technology that has sufficient power with minimal sample size, in order to reduce cost and time. These depend on data variability, partly determined by sample type, preparation and processing. Objective measures that help experimental design, given own pilot data, are thus fundamental. Results Relative power and sample size analysis were performed on two distinct data sets. The first set consisted of Affymetrix array data derived from a nutrigenomics experiment in which weak, intermediate and strong PPARα agonists were administered to wild-type and PPARα-null mice. Our analysis confirms the hierarchy of PPARα-activating compounds previously reported and the general idea that larger effect sizes positively contribute to the average power of the experiment. A simulation experiment was performed that mimicked the effect sizes seen in the first data set. The relative power was predicted but the estimates were slightly conservative. The second, more challenging, data set describes a microarray platform comparison study using hippocampal δC-doublecortin-like kinase transgenic mice that were compared to wild-type mice, which was combined with results from Solexa/Illumina deep sequencing runs. As expected, the choice of technology greatly influences the performance of the experiment. Solexa/Illumina deep sequencing has the highest overall power followed by the microarray platforms Agilent and Affymetrix. Interestingly, Solexa/Illumina deep sequencing displays comparable power across all intensity ranges, in contrast with microarray platforms that have decreased power in the low intensity range due to background noise. This means that deep sequencing technology is especially more powerful in detecting differences in the low intensity range, compared to microarray platforms. Conclusion Power and sample size analysis
Effect of sample size on the fluid flow through a single fractured granitoid
Directory of Open Access Journals (Sweden)
Kunal Kumar Singh
2016-06-01
Full Text Available Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures along with the behavior of the fluid flow through them, at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress–permeability relationship and it has been demonstrated that permeability of the fractured rock mass decreases with an increase in sample size. However, most of the researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameter ranging between 38 mm and 45 cm and the diameter-to-length ratio of 1:2 using different experimental methods. Also, the confining stress, σ3, has been considered to be less than 30 MPa and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on “laboratory simulation of flow through single fractured granite” was conducted, in which consistent fluid flow experiments were performed on cylindrical samples of granitoids of two different sizes (38 mm and 54 mm in diameter), containing a “rough walled single fracture”. These experiments were performed under varied confining pressure (σ3 = 5–40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff, and Q decreases with an increase in σeff. Also, the effects of sample size and fracture roughness do not persist when σeff ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory
Saccenti, Edoardo; Timmerman, Marieke E
2016-08-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical methods, such as principal component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA). No simple approaches to sample size determination exist for PCA and PLS-DA. In this paper we will introduce important concepts and offer strategies for (minimally) required sample size estimation when planning experiments to be analyzed using PCA and/or PLS-DA.
Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force
Energy Technology Data Exchange (ETDEWEB)
Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R
2008-05-22
We present a rapid and robust size-based separation method for high throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies including variations of the PZT driving frequencies and voltages as well as various particle sizes, compressibilities, and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2 with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.
Separability tests for high-dimensional, low sample size multivariate repeated measures data.
Simpson, Sean L; Edwards, Lloyd J; Styner, Martin A; Muller, Keith E
2014-01-01
Longitudinal imaging studies have moved to the forefront of medical research due to their ability to characterize spatio-temporal features of biological structures across the lifespan. Valid inference in longitudinal imaging requires enough flexibility of the covariance model to allow reasonable fidelity to the true pattern. On the other hand, the existence of computable estimates demands a parsimonious parameterization of the covariance structure. Separable (Kronecker product) covariance models provide one such parameterization in which the spatial and temporal covariances are modeled separately. However, evaluating the validity of this parameterization in high dimensions remains a challenge. Here we provide a scientifically informed approach to assessing the adequacy of separable (Kronecker product) covariance models when the number of observations is large relative to the number of independent sampling units (sample size). We address both the general case, in which unstructured matrices are considered for each covariance model, and the structured case, which assumes a particular structure for each model. For the structured case, we focus on the situation where the within subject correlation is believed to decrease exponentially in time and space, as is common in longitudinal imaging studies. However, the provided framework equally applies to all covariance patterns used within the more general multivariate repeated measures context. Our approach provides useful guidance for high dimension, low sample size data that preclude using standard likelihood based tests. Longitudinal medical imaging data on caudate morphology in schizophrenia illustrate the approach's appeal. PMID:25342869
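The parsimony argument for separable models can be made concrete: a Kronecker product covariance is assembled from small spatial and temporal blocks. Below is a minimal numpy sketch; the AR(1)-style temporal correlation and the spatial matrix are illustrative assumptions, not values from the study.

```python
import numpy as np

# Hypothetical AR(1)-style temporal correlation for 3 time points (rho = 0.7)
rho = 0.7
T = np.array([[rho ** abs(i - j) for j in range(3)] for i in range(3)])

# Hypothetical spatial covariance for 2 locations
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])

# Separable (Kronecker product) covariance for all 6 location-time pairs
omega = np.kron(S, T)
print(omega.shape)  # (6, 6)

# Parsimony: the separable model has only the parameters of S and T,
# versus 6 * 7 / 2 = 21 free parameters for an unstructured 6x6 covariance.
```

The separability test described in the abstract asks whether such a product structure is adequate for the observed data, which matters precisely because the full unstructured covariance is not estimable when sample size is small relative to dimension.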
Efficient adaptive designs with mid-course sample size adjustment in clinical trials
Bartroff, Jay
2011-01-01
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or alternative of interest are unknown or likely to be misspecified before the trial. Whereas most previous works on adaptive designs and mid-course sample size re-estimation have focused on two-stage or group sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. Not only does this approach maintain the prescribed type I error probability, but it also provides a simple but asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to the existing two-stage designs and also to adaptive group sequential designs when the al...
Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size
Directory of Open Access Journals (Sweden)
Zhihua Wang
2014-01-01
Full Text Available Reasonable prediction makes significant practical sense to stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting or 1-step-ahead prediction, a novel autoregressive (AR prediction approach with rolling mechanism is proposed. In the modeling procedure, a new developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window, for the next step ahead forecasting, rolls on by adding the most recent derived prediction result while deleting the first value of the former used sample data set. This rolling mechanism is an efficient technique for its advantages of improved forecasting accuracy, applicability in the case of limited and unstable data situations, and requirement of little computational effort. The general performance, influence of sample size, nonlinearity dynamic mechanism, and significance of the observed trends, as well as innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
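The rolling mechanism described above (fit an AR model on a fixed-width window, predict one step ahead, append the prediction, drop the oldest value) can be sketched as follows. This is a generic illustration, not the paper's exact formulation: the AR coefficients are fit by ordinary least squares, and the order and window width are assumed values.

```python
import numpy as np

def fit_ar(x, p):
    """Fit AR(p) by least squares: x[t] ~ c + a1*x[t-1] + ... + ap*x[t-p]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    X = np.column_stack([np.ones(len(X)), X])
    y = x[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef  # [c, a1, ..., ap]

def rolling_forecast(data, window, p, steps):
    """One-step-ahead rolling AR forecasts: each new prediction is appended
    to the data window while the oldest value is dropped."""
    window_data = list(data[-window:])
    preds = []
    for _ in range(steps):
        coef = fit_ar(np.array(window_data), p)
        lags = window_data[-1:-p - 1:-1]          # most recent value first
        pred = coef[0] + float(np.dot(coef[1:], lags))
        preds.append(pred)
        window_data = window_data[1:] + [pred]    # roll the window forward
    return preds
```

For example, on an exactly geometric series `[1, 0.5, 0.25, 0.125, 0.0625, 0.03125]` with `p=1`, the fitted AR(1) coefficient is 0.5 and the rolling forecasts continue the halving pattern.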
Power and sample-size estimation for microbiome studies using pairwise distances and PERMANOVA
Kelly, Brendan J.; Gross, Robert; Bittinger, Kyle; Sherrill-Mix, Scott; Lewis, James D.; Collman, Ronald G.; Bushman, Frederic D.; Li, Hongzhe
2015-01-01
Motivation: The variation in community composition between microbiome samples, termed beta diversity, can be measured by pairwise distance based on either presence–absence or quantitative species abundance data. PERMANOVA, a permutation-based extension of multivariate analysis of variance to a matrix of pairwise distances, partitions within-group and between-group distances to permit assessment of the effect of an exposure or intervention (grouping factor) upon the sampled microbiome. Within-group distance and exposure/intervention effect size must be accurately modeled to estimate statistical power for a microbiome study that will be analyzed with pairwise distances and PERMANOVA. Results: We present a framework for PERMANOVA power estimation tailored to marker-gene microbiome studies that will be analyzed by pairwise distances, which includes: (i) a novel method for distance matrix simulation that permits modeling of within-group pairwise distances according to pre-specified population parameters; (ii) a method to incorporate effects of different sizes within the simulated distance matrix; (iii) a simulation-based method for estimating PERMANOVA power from simulated distance matrices; and (iv) an R statistical software package that implements the above. Matrices of pairwise distances can be efficiently simulated to satisfy the triangle inequality and incorporate group-level effects, which are quantified by the adjusted coefficient of determination, omega-squared (ω2). From simulated distance matrices, available PERMANOVA power or necessary sample size can be estimated for a planned microbiome study. Availability and implementation: http://github.com/brendankelly/micropower. Contact: brendank@mail.med.upenn.edu or hongzhe@upenn.edu PMID:25819674
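A generic version of the simulation-based power estimate can be sketched as follows. This is an illustrative stand-in, not the micropower package itself: it simulates Euclidean data with a group mean shift rather than the paper's direct distance-matrix simulation, computes the PERMANOVA pseudo-F from squared pairwise distances, and estimates power as the fraction of simulations with a significant permutation p-value.

```python
import numpy as np

def permanova_f(d2, labels):
    """PERMANOVA pseudo-F (Anderson 2001) from squared pairwise distances."""
    n = len(labels)
    groups = np.unique(labels)
    sst = d2[np.triu_indices(n, 1)].sum() / n
    ssw = 0.0
    for g in groups:
        idx = np.where(labels == g)[0]
        sub = d2[np.ix_(idx, idx)]
        ssw += sub[np.triu_indices(len(idx), 1)].sum() / len(idx)
    a = len(groups)
    return ((sst - ssw) / (a - 1)) / (ssw / (n - a))

def permanova_power(n_per_group, shift, n_sims=200, n_perms=99,
                    alpha=0.05, seed=0):
    """Monte Carlo power for a two-group mean shift in 5-dimensional data."""
    rng = np.random.default_rng(seed)
    labels = np.array([0] * n_per_group + [1] * n_per_group)
    hits = 0
    for _ in range(n_sims):
        x = rng.normal(size=(2 * n_per_group, 5))
        x[n_per_group:] += shift                      # group-level effect
        d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(-1)
        f_obs = permanova_f(d2, labels)
        exceed = sum(permanova_f(d2, rng.permutation(labels)) >= f_obs
                     for _ in range(n_perms))
        if (exceed + 1) / (n_perms + 1) <= alpha:     # permutation p-value
            hits += 1
    return hits / n_sims
```

With a large shift the estimated power approaches 1, while with no shift it stays near the nominal alpha, which is the basic behavior the paper's framework quantifies for realistic within-group distance distributions.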
Directory of Open Access Journals (Sweden)
D Johan Kotze
Full Text Available Temporal variation in the detectability of a species can bias estimates of relative abundance if not handled correctly. For example, when effort varies in space and/or time it becomes necessary to take variation in detectability into account when data are analyzed. We demonstrate the importance of incorporating seasonality into the analysis of data with unequal sample sizes due to lost traps at a particular density of a species. A case study of count data was simulated using a spring-active carabid beetle. Traps were 'lost' randomly during high beetle activity in high abundance sites and during low beetle activity in low abundance sites. Five different models were fitted to datasets with different levels of loss. If sample sizes were unequal and a seasonality variable was not included in models that assumed the number of individuals was log-normally distributed, the models severely under- or overestimated the true effect size. Results did not improve when seasonality and number of trapping days were included in these models as offset terms, but only performed well when the response variable was specified as following a negative binomial distribution. Finally, if seasonal variation of a species is unknown, which is often the case, seasonality can be added as a free factor, resulting in well-performing negative binomial models. Based on these results we recommend (a) adding sampling effort (number of trapping days in our example) to the models as an offset term; (b) if precise information is available on seasonal variation in detectability of a study object, adding seasonality to the models as an offset term; (c) if information on seasonal variation in detectability is inadequate, adding seasonality as a free factor; and (d) specifying the response variable of count data as following a negative binomial or over-dispersed Poisson distribution.
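The role of the effort offset can be illustrated with a deliberately simple Poisson model containing only a group factor, for which the maximum-likelihood rate per trap-day has a closed form (total count divided by total effort within each group). The counts and trap-day values below are made up for illustration; real analyses would use a negative binomial GLM as the abstract recommends.

```python
# Hypothetical counts and trapping effort (trap-days) for two groups of sites.
# High-abundance group A lost traps during peak activity, so its effort is lower.
counts_a = [30, 24, 18]          # group A counts
effort_a = [10, 8, 6]            # group A trap-days
counts_b = [6, 5, 4]             # group B counts
effort_b = [10, 10, 8]           # group B trap-days

# Naive effect estimate ignoring effort: ratio of mean counts.
naive_ratio = (sum(counts_a) / len(counts_a)) / (sum(counts_b) / len(counts_b))

# A Poisson GLM with a log(effort) offset and a group factor has the
# closed-form MLE rate_g = total count / total effort within each group.
rate_a = sum(counts_a) / sum(effort_a)
rate_b = sum(counts_b) / sum(effort_b)
offset_ratio = rate_a / rate_b   # effect size on the per-trap-day rate scale

print(round(naive_ratio, 2), round(offset_ratio, 2))
```

Here the naive ratio (4.8) understates the per-effort rate ratio (5.6) because group A's counts were collected with fewer trap-days, which is exactly the bias the offset term removes.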
Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples
Energy Technology Data Exchange (ETDEWEB)
Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)
2010-01-15
In this contribution superconducting specimens of YBa{sub 2}Cu{sub 3}O{sub 7-{delta}} were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.
Berlinger, B; Bugge, M D; Ulvestad, B; Kjuus, H; Kandler, K; Ellingsen, D G
2015-12-01
Air samples were collected by personal sampling with five stage Sioutas cascade impactors and respirable cyclones in parallel among tappers and crane operators in two manganese (Mn) alloy smelters in Norway to investigate PM fractions. The mass concentrations of PM collected by using the impactors and the respirable cyclones were critically evaluated by comparing the results of the parallel measurements. The geometric mean (GM) mass concentrations of the respirable fraction and the <10 μm PM fraction were 0.18 and 0.39 mg m(-3), respectively. Particle size distributions were determined using the impactor data in the range from 0 to 10 μm and by stationary measurements by using a scanning mobility particle sizer in the range from 10 to 487 nm. On average 50% of the particulate mass in the Mn alloy smelters was in the range from 2.5 to 10 μm, while the rest was distributed between the lower stages of the impactors. On average 15% of the particulate mass was found in the <0.25 μm PM fraction. The comparisons of the different PM fraction mass concentrations related to different work tasks or different workplaces showed statistically significant differences in many cases; however, the particle size distribution of PM in the fraction <10 μm d(ae) was independent of the plant, furnace or work task. PMID:26498986
Zheng, Yi; Hu, Junqiang; Lin, Qiao
2012-01-01
Electrohydrodynamic (EHD) generation, a commonly used method in BioMEMS, has played a significant role in pulsed-release drug delivery systems for a decade. In this paper, an EHD-based drug delivery system is designed that can generate a single drug droplet as small as 2.83 nL in 8.5 ms with a total device size of 2 × 2 × 3 mm^3 and an external supplied voltage of 1500 V. Theoretically, we derive the expressions for the size and the formation time of a droplet generated by the EHD method, while taking into account the drug supply rate, properties of the liquid, gap between electrodes, nozzle size, and charged droplet neutralization. This work demonstrates a repeatable, stable, and controllable droplet generation and delivery system based on the EHD method.
Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.
2011-01-01
Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009]. Dosimetric evaluations against Monte Carlo dose calculations are conducted on 10 IMRT treatment plans (5 head-and-neck c...
DEFF Research Database (Denmark)
Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten;
2009-01-01
of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced...
Johnson, David R; Bachan, Lauren K
2013-08-01
In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
Directory of Open Access Journals (Sweden)
Smedslund Geir
2013-02-01
Full Text Available Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, from 96 to 71 (73) for fatigue, from 57 to 51 (48) for disease activity, from 59 to 44 (45) for self-care, and from 47 to 37 (33) for emotional wellbeing. Conclusions Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
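The power gain from repeated measurement can be reproduced approximately with the standard result that averaging k measurements with within-subject correlation ρ shrinks the outcome variance by (1 + (k − 1)ρ)/k. The sketch below plugs that into a normal-approximation two-sample formula; the inputs (an effect of 0.5 SD units, ρ = 0.6) are illustrative assumptions, not the trial's actual values.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sd, k, rho, alpha=0.05, power=0.80):
    """Normal-approximation two-sample n per group when the outcome is the
    mean of k repeated measures with within-subject correlation rho."""
    z = NormalDist().inv_cdf
    var_mean = sd ** 2 * (1 + (k - 1) * rho) / k   # variance of the k-measure mean
    n = 2 * var_mean * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return math.ceil(n)

# Illustrative inputs: single measurement versus the mean of three
print(n_per_group(delta=0.5, sd=1.0, k=1, rho=0.6),
      n_per_group(delta=0.5, sd=1.0, k=3, rho=0.6))
```

With these assumed inputs the required n per group drops from 63 to 47 when three measurements are averaged, a reduction of roughly a quarter, qualitatively in line with the savings the abstract reports.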
DEFF Research Database (Denmark)
Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.;
2013-01-01
SUMMARY Disease cases are often clustered within herds or generally groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average...... measure of the data heterogeneity, has been used to modify formulae for individual sample size estimation. However, subgroups of animals sharing common characteristics, may exhibit excessively less or more heterogeneity. Hence, sample size estimates based on the ICC may not achieve the desired precision...... subsp. paratuberculosis infection, in Danish dairy cattle and a study on critical control points for Salmonella cross-contamination of pork, in Greek slaughterhouses....
Module development for steam explosion load estimation-I. methodology and sample calculation
International Nuclear Information System (INIS)
A methodology has been suggested to develop a module able to estimate the steam explosion load under the integral code structure. As the first step of module development, TEXAS-V, a one-dimensional mechanistic code for steam explosion analysis, was selected and sample calculations were done. At this stage, the characteristics of the TEXAS-V code were identified and the analysis capability was set up. A sensitivity study was also performed on uncertain code parameters such as mesh number, mesh cross-sectional area, mixing completion condition, and triggering magnitude. A melt jet with a diameter of 0.15 m and a velocity of 9 m/s was poured into water at 1 atm, 363 K, and 1.1 m depth for 0.74 s. Of the total 947 kg of melt, 197 kg was mixed with the water. The explosion peak pressure, propagation speed, and conversion ratio considering the mixed melt were evaluated as 40 MPa, 1500 m/s, and 2%, respectively. The triggering magnitude showed no effect on the explosion strength once the explosion had started. The explosion violence was sensitive to the mesh number, mesh area, and mixing completion condition, mainly because the mixture condition depends on these parameters. Additional study of these parameters needs to be done
Calculation of the effective diffusion coefficient during the drying of clay samples
Vasić Miloš; Radojević Zagorka; Grbavčić Željko
2012-01-01
The aim of this study was to calculate the effective diffusion coefficient based on experimentally recorded drying curves for two masonry clays obtained from different localities. A calculation method and two computer programs based on Fick's second law and the Crank diffusion equation were developed. Masonry product shrinkage during drying was taken into consideration for the first time and the appropriate correction was entered into the calculation. ...
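One common route to the effective diffusion coefficient, consistent with the Crank slab solution mentioned above, is to keep only the first series term at long drying times, so that D_eff follows from the slope of ln(moisture ratio) versus time: MR ≈ (8/π²) exp(−π² D t / (4 L²)) for a slab of half-thickness L. The sketch below uses synthetic data and an assumed half-thickness, not the paper's measurements, and ignores the shrinkage correction.

```python
import math

# Synthetic long-time drying data: ln(MR) is linear in t with
# slope -pi^2 * D / (4 * L^2)  (first term of Crank's slab solution)
L = 0.005                      # assumed slab half-thickness in m
D_true = 2.0e-9                # "true" effective diffusivity in m^2/s
times = [t * 600.0 for t in range(1, 11)]            # 10-minute steps, in s
mr = [(8 / math.pi ** 2) * math.exp(-math.pi ** 2 * D_true * t / (4 * L ** 2))
      for t in times]

# Least-squares slope of ln(MR) versus t
n = len(times)
xm = sum(times) / n
ym = sum(math.log(m) for m in mr) / n
slope = (sum((t - xm) * (math.log(m) - ym) for t, m in zip(times, mr))
         / sum((t - xm) ** 2 for t in times))

# Invert the slope to recover the effective diffusion coefficient
D_eff = -slope * 4 * L ** 2 / math.pi ** 2
print(D_eff)
```

Because the synthetic data are exactly log-linear, the regression recovers D_true; with real drying curves the early-time points (where higher series terms matter) would be excluded from the fit.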
What about N? A methodological study of sample-size reporting in focus group studies
Directory of Open Access Journals (Sweden)
Glenton Claire
2011-03-01
Full Text Available Abstract Background Focus group studies are increasingly published in health related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals and, secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also did a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results We identified 220 papers published in 117 journals. In these papers insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. The issue of having too much data as a study weakness was not discussed in any of the reviewed papers. Conclusions Based on these findings we suggest that journals adopt more stringent requirements for focus group method
Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.
2016-09-01
Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
Analyzing insulin samples by size-exclusion chromatography: a column degradation study.
Teska, Brandon M; Kumar, Amit; Carpenter, John F; Wempe, Michael F
2015-04-01
Investigating insulin analogs and probing their intrinsic stability at physiological temperature, we observed significant degradation in the size-exclusion chromatography (SEC) signal over a moderate number of insulin sample injections, which generated concerns about the quality of the separations. Therefore, our research goal was to identify the cause(s) for the observed signal degradation and attempt to mitigate the degradation in order to extend SEC column lifespan. In these studies, we used multiangle light scattering, nuclear magnetic resonance, and gas chromatography-mass spectrometry methods to evaluate column degradation. The results from these studies illustrate: (1) that zinc ions introduced by the insulin product produced the observed column performance issues; and (2) that including ethylenediaminetetraacetic acid, a zinc chelator, in the mobile phase helped to maintain column performance.
Sample Size Dependence of Second Magnetization Peak in Type-II Superconductors
Institute of Scientific and Technical Information of China (English)
[Author not listed]
2003-01-01
We show that the second magnetization peak (SMP), i.e., an increase in the magnetization hysteresis loop width in type-II superconductors, vanishes for samples smaller than a critical size. We argue that the SMP is not related to critical current enhancement but can be well explained within the framework of the thermomagnetic flux-jump instability theory, where flux jumps reduce the absolute irreversible magnetization relative to the isothermal critical state value at low enough magnetic fields. The recovery of the isothermal critical state with increasing field leads to the SMP. The low-field SMP takes place in both low-Tc conventional and high-Tc unconventional superconductors. Our results show that the restoration of the isothermal critical state is responsible for the SMP occurrence in both cases.
Directory of Open Access Journals (Sweden)
Vanessa Colombo-Corbi
2011-06-01
Full Text Available Glycolytic activities of eight enzymes in size-fractionated water samples from a eutrophic tropical reservoir are presented in this study, including enzymes assayed for the first time in a freshwater environment. Among these enzymes, rhamnosidase, arabinosidase and fucosidase presented high activity in the free-living fraction, while glucosidase, mannosidase and galactosidase exhibited high activity in the attached fraction. The low activity registered for rhamnosidase, arabinosidase and fucosidase in the attached fraction seemed to contribute to the integrity of the aggregate and, based on this fact, a protective role for these structures was proposed. The enzyme profiles presented and the differences in the relative activities probably reflect the organic matter composition as well as the metabolic requirements of the bacterial community, suggesting that bacteria attached to particulate matter have phenotypic traits distinct from those of free-living bacteria.
Directory of Open Access Journals (Sweden)
Christian Damgaard
2011-12-01
Full Text Available Increasingly, the survival rates in experimental ecology are presented using odds ratios or log response ratios, but the use of ratio metrics has a problem when all the individuals have either died or survived in only one replicate. In the empirical ecological literature, the problem often has been ignored or circumvented by different, more or less ad hoc approaches. Here, it is argued that the best summary statistic for communicating ecological results of frequency data in studies with small unbalanced samples may be the mean of the posterior distribution of the survival rate. The developed approach may be particularly useful when effect size indexes, such as odds ratios, are needed to compare frequency data between treatments, sites or studies.
Institute of Scientific and Technical Information of China (English)
Ya Li; Qiang Fua; Meng Liu; Yuan-Yuan Jiao; Wei Du; Chong Yu; Jing Liu; Chun Chang; Jian Lu
2012-01-01
In order to prepare a high capacity packing material for solid-phase extraction with specific recognition ability for trace ractopamine in biological samples, uniformly-sized, molecularly imprinted polymers (MIPs) were prepared by a multi-step swelling and polymerization method using methacrylic acid as the functional monomer, ethylene glycol dimethacrylate as the cross-linker, and toluene as the porogen. Scanning electron microscopy and specific surface area measurements were employed to identify the characteristics of the MIPs. Ultraviolet spectroscopy, Fourier transform infrared spectroscopy, Scatchard analysis and a kinetic study were performed to interpret the specific recognition ability and the binding process of the MIPs. The results showed that, compared with other reports, the MIPs synthesized in this study showed high adsorption capacity in addition to specific recognition ability. The adsorption capacity of the MIPs was 0.063 mmol/g at a ractopamine concentration of 1 mmol/L, with a distribution coefficient of 1.70. The resulting MIPs could be used as solid-phase extraction materials for the separation and enrichment of trace ractopamine in biological samples.
Weighted piecewise LDA for solving the small sample size problem in face verification.
Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis
2007-03-01
A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
Directory of Open Access Journals (Sweden)
Fotini Kokou
2016-05-01
Full Text Available One of the main concerns in gene expression studies is the calculation of statistical significance, which in most cases remains low due to limited sample size. Increasing the number of biological replicates translates into more effective gains in power, which is of great importance especially in nutritional experiments, where individual variation in growth performance parameters and feed conversion is high. The present study investigates the gilthead sea bream Sparus aurata, one of the most important Mediterranean aquaculture species. For 24 gilthead sea bream individuals (biological replicates), the effects of gradual substitution of fish meal by plant ingredients (0% (control), 25%, 50% and 75%) in the diets were studied by looking at expression levels of four immune- and stress-related genes in intestine, head kidney and liver. The present results showed that only the lowest substitution percentage is tolerated and that liver is the most sensitive tissue for detecting gene expression variations in relation to fish meal substituted diets. Additionally, the usage of three independent biological replicates was evaluated by calculating the averages of all possible triplets, in order to assess the suitability of the selected genes for stress indication as well as the impact of the experimental set-up, here the impact of fish meal substitution. Gene expression was altered depending on the selected biological triplicate. Only for two genes in liver (hsp70 and tgf) was significant differential expression assured independently of the triplets used. These results underline the importance of choosing an adequate sample number, especially when significant but minor differences in gene expression levels are observed.
Thrane, Susan; Cohen, Susan M
2014-12-01
The objective of this study was to calculate the effect of Reiki therapy for pain and anxiety in randomized clinical trials. A systematic search of PubMed, ProQuest, Cochrane, PsychInfo, CINAHL, Web of Science, Global Health, and Medline databases was conducted using the search terms pain, anxiety, and Reiki. The Center for Reiki Research also was examined for articles. Studies that used randomization and a control or usual care group, used Reiki therapy in one arm of the study, were published in 2000 or later in peer-reviewed journals in English, and measured pain or anxiety were included. After removing duplicates, 49 articles were examined and 12 articles received full review. Seven studies met the inclusion criteria: four articles studied cancer patients, one examined post-surgical patients, and two analyzed community dwelling older adults. Effect sizes were calculated for all studies using Cohen's d statistic. Effect sizes for within-group differences ranged from d = 0.24 for decrease in anxiety in women undergoing breast biopsy to d = 2.08 for decreased pain in community dwelling adults. The between-group differences ranged from d = 0.32 for decrease of pain in a Reiki versus rest intervention for cancer patients to d = 4.5 for decrease in pain in community dwelling adults. Although the number of studies is limited, based on the size of the Cohen's d statistics calculated in this review, there is evidence to suggest that Reiki therapy may be effective for pain and anxiety. Continued research using Reiki therapy with larger sample sizes, consistently randomized groups, and standardized treatment protocols is recommended. PMID:24582620
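The within- and between-group effect sizes reported above are Cohen's d values. A minimal sketch of the calculation, using hypothetical pain scores rather than data from any of the reviewed trials:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d using the pooled standard deviation of the two groups."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Hypothetical post-intervention pain scores (0-10 scale, lower is better).
reiki = [3.1, 2.8, 4.0, 3.5, 2.9, 3.3]
rest  = [5.0, 4.6, 5.4, 4.8, 5.2, 4.9]
d = cohens_d(rest, reiki)  # positive d: pain lower in the Reiki arm
```

By convention, d around 0.2 is a small effect, 0.5 medium, and 0.8 or more large, which is why the d values above 2 quoted in the review are striking.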
How to Calculate Range and Population Size for the Otter? The Irish Approach as a Case Study
Directory of Open Access Journals (Sweden)
Dierdre Lynn
2011-01-01
Full Text Available All EU Member States are obliged to submit reports to the EU Commission every 6 years, detailing the conservation status of species and habitats listed on the Habitats Directive. The otter (Lutra lutra) is one such species. Despite a number of national surveys that showed that the otter was widespread across the country, in Ireland's 2007 conservation status assessment the otter was considered to be in unfavourable condition. While the Range, Habitat and Future Prospects categories were all considered favourable, Population was deemed to be unfavourable. This paper examines the data behind the 2007 assessment by Ireland, which included three national otter surveys and a series of radio-tracking studies. Range was mapped and calculated based on the results of national distribution surveys together with records submitted from the public. Population size was estimated by calculating the extent of available habitats (rivers, lakes and coasts), dividing that by the typical home range size and then multiplying the result by the proportion of positive sites in the most recent national survey. While the Range of the otter in Ireland did not decrease between the 1980/81 and the 2004/05 surveys, Population trend was calculated as -23.7%. As a consequence, the most recent national Red Data List for Ireland lists the species as Near Threatened (Marnell et al., 2009).
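The population estimate described above (habitat extent divided by typical home range, scaled by the proportion of positive survey sites) reduces to simple arithmetic. A sketch with hypothetical figures, not the actual Irish survey values:

```python
def estimate_population(habitat_extent_km, home_range_km, prop_positive):
    """Population ≈ (habitat extent / typical home range) × occupancy rate."""
    return habitat_extent_km / home_range_km * prop_positive

# Hypothetical inputs: 20,000 km of river/lake/coast habitat,
# a 10 km typical home range, and 70% of survey sites positive.
n = estimate_population(20000, 10, 0.70)  # → 1400.0 individuals
```

The same three inputs also make the sensitivity of the estimate explicit: halving the assumed home range doubles the population estimate, which is one reason the radio-tracking studies mattered for the assessment.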
McCarthy, K.
2008-01-01
Semipermeable membrane devices (SPMDs) were deployed in the Columbia Slough, near Portland, Oregon, on three separate occasions to measure the spatial and seasonal distribution of dissolved polycyclic aromatic hydrocarbons (PAHs) and organochlorine compounds (OCs) in the slough. Concentrations of PAHs and OCs in SPMDs showed spatial and seasonal differences among sites and indicated that unusually high flows in the spring of 2006 diluted the concentrations of many of the target contaminants. However, the same PAHs - pyrene, fluoranthene, and the alkylated homologues of phenanthrene, anthracene, and fluorene - and OCs - polychlorinated biphenyls, pentachloroanisole, chlorpyrifos, dieldrin, and the metabolites of dichlorodiphenyltrichloroethane (DDT) - predominated throughout the system during all three deployment periods. The data suggest that storm washoff may be a predominant source of PAHs in the slough but that OCs are ubiquitous, entering the slough by a variety of pathways. Comparison of SPMDs deployed on the stream bed with SPMDs deployed in the overlying water column suggests that even for the very hydrophobic compounds investigated, bed sediments may not be a predominant source in this system. Perdeuterated phenanthrene (phenanthrene-d10), spiked at a rate of 2 µg per SPMD, was shown to be a reliable performance reference compound (PRC) under the conditions of these deployments. Post-deployment concentrations of the PRC revealed differences in sampling conditions among sites and between seasons, but indicate that for SPMDs deployed throughout the main slough channel, differences in sampling rates were small enough to make site-to-site comparisons of SPMD concentrations straightforward. © Springer Science+Business Media B.V. 2007.
Frank van Rijnsoever
2015-01-01
This paper explores the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the codes in the population have been observed once in the sample. I delineate three different scenarios to sample information sources: “random chance,” which is based on probability sampling, “minima...
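The "random chance" scenario described above can be sketched as a coupon-collector style simulation: information sources are drawn with replacement until every code present in the population has been observed at least once. All parameters here (30 sources, 12 codes, 3 codes per source) are hypothetical, chosen only to illustrate the mechanism:

```python
import random

def sample_until_saturation(codes_per_source, rng):
    """Draw sources at random (with replacement) until every code present
    in the population has been observed; return the sample size needed."""
    all_codes = set()
    for codes in codes_per_source:
        all_codes |= codes
    seen, n = set(), 0
    while seen != all_codes:
        seen |= rng.choice(codes_per_source)
        n += 1
    return n

rng = random.Random(42)
# Hypothetical population: 30 sources, each holding 3 of 12 possible codes.
population = [set(rng.sample(range(12), 3)) for _ in range(30)]
sizes = [sample_until_saturation(population, rng) for _ in range(200)]
avg = sum(sizes) / len(sizes)
```

Repeating the simulation gives a distribution of saturation points rather than a single number, which is the paper's central point: the sample size required for theoretical saturation depends on how codes are spread across sub-populations.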
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D90, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm³ dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
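The variance reduction behind CMC, tallying the difference between highly correlated histories rather than differencing two independent runs, can be illustrated with a toy one-dimensional example. The integrands below are illustrative stand-ins, not the PTRAN transport physics:

```python
import random
import statistics

def difference_estimates(n, rng):
    """Estimate E[f_het(U)] - E[f_hom(U)] two ways: with independent
    samples for each term, and with correlated (shared) samples."""
    f_hom = lambda u: u ** 2              # toy "homogeneous" response
    f_het = lambda u: u ** 2 + 0.05 * u   # slightly perturbed "heterogeneous" one
    # Independent sampling: each term gets its own random draws.
    indep = [f_het(rng.random()) - f_hom(rng.random()) for _ in range(n)]
    # Correlated sampling: one shared "history" evaluated in both geometries.
    corr = []
    for _ in range(n):
        u = rng.random()
        corr.append(f_het(u) - f_hom(u))
    return statistics.stdev(indep), statistics.stdev(corr)

rng = random.Random(0)
sd_indep, sd_corr = difference_estimates(20000, rng)
# sd_corr is far smaller: the shared randomness cancels in the difference.
```

Because the two responses differ only slightly, the correlated difference has tiny variance while the independent difference carries the full variance of both terms, which is the mechanism behind the large efficiency gains reported above.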
Directory of Open Access Journals (Sweden)
Danilo Eduardo Rozane
2007-08-01
Full Text Available Of all the steps involved in foliar analysis, sampling remains the one most subject to error. The present study aimed to determine the size of leaf samples and the sampling error variation for the collection of leaves in mango orchards. The experiment used a completely randomized design with six replicates and four treatments, which consisted of the collection of one leaf at each of the four cardinal points from 5, 10, 20 and 40 plants. Based on the nutrient content results, means, variances, standard errors of the means, confidence intervals for the mean and the percentage error relative to the mean were calculated, using the semi-amplitude of the confidence interval expressed as a percentage of the mean. It was concluded that, for the chemical determination of macronutrients, 10 mango trees are sufficient, collecting one leaf at each of the four cardinal points of the plant. For micronutrients, at least 20 plants are needed and, if Fe is considered, at least 30 plants should be sampled.
DEFF Research Database (Denmark)
Stevens, Thomas; Lu, HY
2009-01-01
Understanding loess sedimentation rates is crucial for constraining past atmospheric dust dynamics, regional climatic change and local depositional environments. However, the derivation of loess sedimentation rates is complicated by the lack of available methods for independent calculation; this limits interpretation of the environmental changes revealed by the loess record. In particular, while the Quaternary/Neogene Chinese loess and Red Clay sequences have the potential to provide detailed records of past sedimentation and climate change, there is great uncertainty concerning: (i) the influences on sediment grain-size and accumulation; and (ii) their relationship through time and across the depositional region. This uncertainty has led to the widespread use of assumptions concerning the relationship between sedimentation rate and grain-size in order to derive age models and climate...
Gu, Xuejun; Jelen, Urszula; Li, Jinsheng; Jia, Xun; Jiang, Steve B.
2011-06-01
Targeting at the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on graphics processing unit (GPU). This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework (Gu et al 2009 Phys. Med. Biol. 54 6287-97). Dosimetric evaluations against Monte Carlo dose calculations are conducted on ten IMRT treatment plans (five head-and-neck cases and five lung cases). For all cases, there is improvement with the 3D-density correction over the conventional FSPB algorithm and for most cases the improvement is significant. Regarding the efficiency, because of the appropriate arrangement of memory access and the usage of GPU intrinsic functions, the dose calculation for an IMRT plan can be accomplished well within 1 s (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, this new algorithm, though slightly sacrificing the computational efficiency (~5-15% lower), has significantly improved the dose calculation accuracy, making it more suitable for online IMRT replanning.
Optimizing Stream Water Mercury Sampling for Calculation of Fish Bioaccumulation Factors
Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive ...
Calculation of the effective diffusion coefficient during the drying of clay samples
Directory of Open Access Journals (Sweden)
Vasić Miloš
2012-01-01
Full Text Available The aim of this study was to calculate the effective diffusion coefficient based on experimentally recorded drying curves for two masonry clays obtained from different localities. The calculation method and two computer programs, based on the mathematical solution of Fick's second law and the Crank diffusion equation, were developed. Masonry product shrinkage during drying was taken into consideration for the first time and the appropriate correction was entered into the calculation. The results presented in this paper show that the values of the effective diffusion coefficient determined by the designed computer programs (with and without the correction for shrinkage) are similar to those available in the literature for the same coefficient for different clays. Based on the mathematically determined prognostic value of the effective diffusion coefficient, it was concluded that, whatever the initial mineralogical composition of the clay, there is 90% agreement of the calculated prognostic drying curves with the experimentally recorded ones. When a shrinkage correction of the masonry products is introduced into the calculation step, this agreement is even better.
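A common way to obtain an effective diffusion coefficient from a drying curve is to fit the first term of Crank's solution for an infinite slab, so that the slope of ln(MR) versus time yields Deff. The sketch below uses synthetic data generated with a known coefficient; whether this matches the authors' exact implementation (and their shrinkage correction) is an assumption:

```python
import math

def estimate_deff(times, moisture_ratio, half_thickness):
    """Estimate Deff from a drying curve using the first term of Crank's
    slab solution, MR ≈ (8/pi^2) * exp(-pi^2 * Deff * t / (4 * L^2)),
    so the slope of ln(MR) versus t equals -pi^2 * Deff / (4 * L^2)."""
    y = [math.log(mr) for mr in moisture_ratio]
    n = len(times)
    tbar = sum(times) / n
    ybar = sum(y) / n
    slope = (sum((t - tbar) * (yi - ybar) for t, yi in zip(times, y))
             / sum((t - tbar) ** 2 for t in times))
    return -slope * 4 * half_thickness ** 2 / math.pi ** 2

# Synthetic drying curve with a known Deff = 2e-9 m^2/s, half-thickness 5 mm.
L, D_true = 0.005, 2e-9
times = [t * 3600.0 for t in range(1, 11)]  # 1..10 hours, in seconds
mr = [(8 / math.pi ** 2) * math.exp(-math.pi ** 2 * D_true * t / (4 * L ** 2))
      for t in times]
D_est = estimate_deff(times, mr, L)
```

Because the synthetic curve follows the one-term model exactly, the regression recovers the input coefficient; on real data the early, multi-term part of the curve is usually excluded from the fit.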
Directory of Open Access Journals (Sweden)
Jie Yang
2011-09-01
Full Text Available We report a relatively precise method of conductivity measurement in a diamond anvil cell with axis-symmetrical electrodes and finite difference calculation. The axis-symmetrical electrodes are composed of two parts: one is a round thin-film electrode deposited on the diamond facet and the other is the inside wall of the metal gasket. Due to the asymmetrical configuration of the two electrodes, the finite difference method can be applied to calculate the conductivity of the sample, which can reduce the measurement error.
Directory of Open Access Journals (Sweden)
Barbara Di Camillo
Full Text Available MOTIVATION: The identification of robust lists of molecular biomarkers related to a disease is a fundamental step for early diagnosis and treatment. However, methodologies for the discovery of biomarkers using microarray data often provide results with limited overlap. These differences are imputable to (1) dataset size (few subjects with respect to the number of features); (2) heterogeneity of the disease; (3) heterogeneity of experimental protocols and computational pipelines employed in the analysis. In this paper, we focus on the first two issues and assess, both on simulated (through an in silico regulation network model) and real clinical datasets, the consistency of candidate biomarkers provided by a number of different methods. METHODS: We extensively simulated the effect of heterogeneity characteristic of complex diseases on different sets of microarray data. Heterogeneity was reproduced by simulating both intrinsic variability of the population and the alteration of regulatory mechanisms. Population variability was simulated by modeling evolution of a pool of subjects; then, a subset of them underwent alterations in regulatory mechanisms so as to mimic the disease state. RESULTS: The simulated data allowed us to outline advantages and drawbacks of different methods across multiple studies and varying number of samples and to evaluate precision of feature selection on a benchmark with known biomarkers. Although comparable classification accuracy was reached by different methods, the use of external cross-validation loops is helpful in finding features with a higher degree of precision and stability. Application to real data confirmed these results.
Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations
Directory of Open Access Journals (Sweden)
Guillermo Macbeth
2011-05-01
Full Text Available The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-values interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis testing. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software that performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed and a basic linear algebra proof of its equivalence is formally presented. Two worked examples in cognitive psychology are commented. A visual interpretation of Cliff's Delta is suggested. Availability, installation and applications of the program are presented and discussed.
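Cliff's delta itself is straightforward to compute: it is the difference between the proportions of cross-group pairs in which one group exceeds the other. A minimal sketch with hypothetical data (not an implementation of the authors' program):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(X > Y) - P(X < Y), estimated over all pairs.
    Ranges from -1 (complete inferiority) to +1 (complete superiority);
    ties contribute to neither count."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

# Hypothetical reaction times (ms) for two experimental conditions.
control = [420, 450, 470, 480, 510]
treated = [380, 400, 410, 430, 440]
delta = cliffs_delta(control, treated)  # → 0.84
```

Unlike Cohen's d, this statistic depends only on the ordering of observations, so it needs no normality or equal-variance assumptions, which is the complementarity the abstract emphasizes.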
Energy Technology Data Exchange (ETDEWEB)
Federico Jimenez-Cruz; Georgina C. Laredo [Instituto Mexicano del Petroleo, Mexico (Mexico). Programa de Tratamiento de Crudo Maya
2004-11-01
A good approximation of the critical molecular dimensions of 35 linear and branched C5-C8 paraffins by DFT quantum chemical calculations at the B3LYP/6-31G** level of theory in gas phase is described. In this context, we found that either the determined molecular width or width-height average values can be used as critical measures in the analysis for selection of molecular sieve materials, depending on their pore size and shape. The molecular width values for linear and monosubstituted paraffins are 4.2 and 5.5 Å, respectively. In the case of disubstituted paraffins, the values are 5.5 Å for 2,3-, 2,4-, 2,5- and 3,4-disubstituted paraffins and 6.7-7.1 Å for 2,2- and 3,3-disubstituted paraffins. The values for ethyl-substituted paraffins are 6.1-6.7 Å and for trisubstituted isoparaffins 6.7 Å. In order to select a porous material for the selective separation of isoparaffins and paraffins, the zeolite diffusivity can be correlated with the critical diameter of the paraffins according to the geometry-limited diffusion concept and the effective minimum dimensions of the molecules. The calculated values of CPK molecular volume of the titled paraffins showed a good discrimination between the number of carbons and molecular size. 25 refs., 4 figs., 2 tabs.
Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.
2016-09-01
Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size <0.3 mm), a "paired samples t-test" can provide a statistical comparison between a particular sample and known lunar basalts. Of the fragments analyzed here, three are found to belong to each of the previously identified olivine and ilmenite basalt suites, four to the pigeonite basalt suite, one is an olivine cumulate, and two could not be categorized because of their coarse grain sizes and lack of appropriate mineral phases. Our approach introduces methods that can be used to investigate small sample sizes (i.e., fines) from future sample return missions to investigate lava flow diversity and petrological significance.
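The parent melt Mg# mentioned above is the atomic ratio Mg/(Mg + Fe), conventionally computed from MgO and FeO wt% via their molar masses. A sketch with a hypothetical olivine composition (molar masses rounded; values are not from the Apollo 12 samples):

```python
def mg_number(mgo_wt, feo_wt):
    """Atomic Mg# = 100 * Mg / (Mg + Fe), from MgO and FeO in wt%.
    Approximate molar masses: MgO ≈ 40.30 g/mol, FeO ≈ 71.84 g/mol."""
    mg = mgo_wt / 40.30   # moles of Mg per 100 g of sample
    fe = feo_wt / 71.84   # moles of Fe per 100 g of sample
    return 100 * mg / (mg + fe)

# Hypothetical olivine analysis: 35.0 wt% MgO, 25.0 wt% FeO.
mg_no = mg_number(35.0, 25.0)  # ≈ 71, a fairly magnesian composition
```

Because olivine-melt Fe-Mg partitioning is well constrained, an olivine Mg# of this kind is what lets the authors back-calculate an equilibrium parent melt composition for each fragment.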
Kelley, Ken
2008-01-01
Methods of sample size planning are developed from the accuracy in parameter approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size…
Efficient calculation of risk measures by importance sampling -- the heavy tailed case
Hult, Henrik
2009-01-01
Computation of extreme quantiles and tail-based risk measures using standard Monte Carlo simulation can be inefficient. A method to speed up computations is provided by importance sampling. We show that importance sampling algorithms, designed for efficient tail probability estimation, can significantly improve Monte Carlo estimators of tail-based risk measures. In the heavy-tailed setting, when the random variable of interest has a regularly varying distribution, we provide sufficient conditions for the asymptotic relative error of importance sampling estimators of risk measures, such as Value-at-Risk and expected shortfall, to be small. The results are illustrated by some numerical examples.
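The core idea, estimating a small tail probability by sampling from a heavier-tailed proposal and reweighting by the likelihood ratio, can be sketched for a Pareto-distributed loss. This is a toy example, not one of the estimators analyzed in the paper:

```python
import random

def pareto_tail_is(alpha, t, n, rng, proposal_alpha=1.0):
    """Importance-sampling estimate of P(X > t) for X ~ Pareto(alpha, x_m=1),
    drawing from a heavier-tailed Pareto(proposal_alpha) proposal and
    weighting each hit by the likelihood ratio f(x)/g(x)."""
    total = 0.0
    for _ in range(n):
        u = rng.random()
        x = u ** (-1.0 / proposal_alpha)  # inverse-CDF draw from the proposal
        if x > t:
            f = alpha * x ** (-alpha - 1)                    # target density
            g = proposal_alpha * x ** (-proposal_alpha - 1)  # proposal density
            total += f / g
    return total / n

rng = random.Random(1)
p_hat = pareto_tail_is(alpha=3.0, t=10.0, n=200000, rng=rng)
# The exact tail probability is 10**-3 = 0.001.
```

With plain Monte Carlo only about 0.1% of draws would land in the tail; under the Pareto(1) proposal roughly 10% do, which is the mechanism behind the reduced relative error discussed in the abstract.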
40 CFR 90.426 - Dilute emission sampling calculations-gasoline fueled engines.
2010-07-01
... = 1 for two-stroke gasoline engines. (f)-(g) (h) The fuel mass flow rate, Fi, can be either measured... calculations for NOX and is equal to one for all other emissions. KHi is also equal to 1 for all two-stroke...-gasoline fueled engines. 90.426 Section 90.426 Protection of Environment ENVIRONMENTAL PROTECTION...
Directory of Open Access Journals (Sweden)
Tamer Dawod
2015-01-01
Full Text Available Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. A full set of measurements for symmetric and asymmetric fields, including inplane and crossplane profiles at various depths and percentage depth doses (PDDs), was obtained on the linear accelerator. Results: The results showed that asymmetric collimation can lead to significant errors (up to approximately 7%) in dose calculations if changes in primary beam intensity and beam quality are not accounted for. The largest differences in the isodose curves were found in the buildup and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.
Gu, Xuejun; Li, Jinsheng; Jia, Xun; Jiang, Steve B
2011-01-01
Targeting at developing an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009]. Dosimetric evaluations against MCSIM Monte Carlo dose calculations are conducted on 10 IMRT treatment plans with heterogeneous treatment regions (5 head-and-neck cases and 5 lung cases). For head and neck cases, when cavities exist near the target, the improvement with the 3D-density correction over the conventional FSPB algorithm is significant. However, when there are high-density dental filling materials in beam paths, the improvement is small and the accuracy of the new algorithm is still unsatisfactory. On the other hand, significant improvement of dose calculation accuracy is observed in all lung cases. Especially when the target is in the m...
Mielke, Steven L; Truhlar, Donald G
2016-01-21
Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function. PMID:26801023
Directory of Open Access Journals (Sweden)
Hiroshi Nishiura
Full Text Available BACKGROUND: Seroepidemiological studies before and after the epidemic wave of H1N1-2009 are useful for estimating population attack rates with a potential to validate early estimates of the reproduction number, R, in modeling studies. METHODOLOGY/PRINCIPAL FINDINGS: Since the final epidemic size, the proportion of individuals in a population who become infected during an epidemic, is not the result of a binomial sampling process because infection events are not independent of each other, we propose the use of an asymptotic distribution of the final size to compute approximate 95% confidence intervals of the observed final size. This allows the comparison of the observed final sizes against predictions based on the modeling study (R = 1.15, 1.40 and 1.90), which also yields simple formulae for determining sample sizes for future seroepidemiological studies. We examine a total of eleven published seroepidemiological studies of H1N1-2009 that took place after observing the peak incidence in a number of countries. Observed seropositive proportions in six studies appear to be smaller than that predicted from R = 1.40; four of the six studies sampled serum less than one month after the reported peak incidence. The comparison of the observed final sizes against R = 1.15 and 1.90 reveals that all eleven studies appear not to deviate significantly from the prediction with R = 1.15, but final sizes in nine studies indicate overestimation if the value R = 1.90 is used. CONCLUSIONS: Sample sizes of published seroepidemiological studies were too small to assess the validity of model predictions except when R = 1.90 was used. We recommend the use of the proposed approach in determining the sample size of post-epidemic seroepidemiological studies, calculating the 95% confidence interval of observed final size, and conducting relevant hypothesis testing instead of the use of methods that rely on a binomial proportion.
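The predicted attack rates for R = 1.15, 1.40 and 1.90 come from the standard final-size relation for a homogeneously mixing population, z = 1 - exp(-Rz), which can be solved numerically by fixed-point iteration:

```python
import math

def final_size(R, tol=1e-12):
    """Solve the final-size equation z = 1 - exp(-R*z) for R > 1
    by fixed-point iteration starting from z = 1 (the z = 0 root
    is the trivial no-epidemic solution)."""
    z = 1.0
    while True:
        z_new = 1.0 - math.exp(-R * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new

attack_rates = {R: final_size(R) for R in (1.15, 1.40, 1.90)}
# Roughly 25%, 51% and 77% of the population infected, respectively.
```

The steep dependence of the final size on R near 1 is why the observed seroprevalence data could discriminate R = 1.90 from R = 1.15 but not distinguish values in between.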
Importance of Sample Size for the Estimation of Repeater F Waves in Amyotrophic Lateral Sclerosis
Directory of Open Access Journals (Sweden)
Jia Fang
2015-01-01
Full Text Available Background: In amyotrophic lateral sclerosis (ALS), repeater F waves are increased. Accurate assessment of repeater F waves requires an adequate sample size. Methods: We studied the F waves of left ulnar nerves in ALS patients. Based on the presence or absence of pyramidal signs in the left upper limb, the ALS patients were divided into two groups: one group with pyramidal signs designated as the P group and the other without pyramidal signs designated as the NP group. The Index repeating neurons (RN) and Index repeater F waves (Freps) were compared among the P, NP and control groups following 20 and 100 stimuli, respectively. For each group, the Index RN and Index Freps obtained from 20 and 100 stimuli were compared. Results: In the P group, the Index RN (P = 0.004) and Index Freps (P = 0.001) obtained from 100 stimuli were significantly higher than from 20 stimuli. For F waves obtained from 20 stimuli, no significant differences were identified between the P and NP groups for Index RN (P = 0.052) and Index Freps (P = 0.079); the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than in the control group; the Index RN (P = 0.002) of the NP group was significantly higher than in the control group. For F waves obtained from 100 stimuli, the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than in the NP group; the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P and NP groups were significantly higher than in the control group. Conclusions: Increased repeater F waves reflect increased excitability of the motor neuron pool and indicate upper motor neuron dysfunction in ALS. For an accurate evaluation of repeater F waves in ALS patients, especially those with moderate to severe muscle atrophy, 100 stimuli would be required.
[Tobacco smoking in a sample of middle-size city inhabitants aged 35-55].
Maniecka-Bryła, Irena; Maciak, Aleksandra; Kowalska, Alina; Bryła, Marek
2008-01-01
Tobacco smoking constitutes a common risk factor for the majority of civilization diseases, such as cardiovascular diseases, malignant neoplasms, and digestive and respiratory system disorders. Tobacco-related disorders include exacerbation of chronic diseases, for example diabetes and multiple sclerosis. Poland is one of the countries where the prevalence of smoking is especially widespread: 42% of men and 25% of women in Poland smoke cigarettes, and the number of addicted people amounts to approximately 10 million. The latest data, from 2003, show that the number of cigarettes smoked per citizen in Poland has risen fourfold since the beginning of the 21st century. This paper presents an analysis of the prevalence of tobacco smoking among inhabitants aged 35-55 years of a middle-size city in the Lodz province. The study sample comprised 124 people, including 75 females and 49 males. The research tool was a questionnaire survey containing questions concerning cigarette smoking. The study found that 39.5% of respondents (41.3% of females and 36.7% of males) smoked cigarettes. The percentage of former smokers amounted to 15.3%, and the percentage of non-smokers, at 44.8%, was higher than that of regular smokers. The results showed that the majority of smokers were in the age interval of 45 to 49 years. Cigarette smoking affected smokers' health: blood pressure and lipid levels were higher among smokers than among people who did not smoke cigarettes. The results of the conducted study confirm a strong need for implementing programmes to limit tobacco smoking, which may contribute to lowering the risk of tobacco-related diseases. PMID:19189562
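The reported percentages are internally consistent, which can be checked with a few lines of arithmetic on the stated group sizes (rounding smoker counts to whole people):

```python
# Group sizes and smoking rates as reported in the abstract
females, males = 75, 49
female_smokers = round(0.413 * females)   # 41.3% of 75 -> 31
male_smokers = round(0.367 * males)       # 36.7% of 49 -> 18

total = females + males                   # 124 respondents
smokers = female_smokers + male_smokers   # 49 smokers overall
prevalence = 100 * smokers / total        # overall smoking prevalence, %
print(f"{smokers}/{total} smokers = {prevalence:.1f}%")
```

The overall prevalence recomputed this way matches the 39.5% reported.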
Evaluation of Pump Pulsation in Respirable Size-Selective Sampling: Part I. Pulsation Measurements
Lee, Eun Gyung; Lee, Larry; Möhlmann, Carsten; Flemmer, Michael M.; Kashon, Michael; Harper, Martin
2013-01-01
Pulsations generated by personal sampling pumps modulate the airflow through the sampling trains, thereby varying sampling efficiencies, and possibly invalidating collection or monitoring. The purpose of this study was to characterize pulsations generated by personal sampling pumps relative to a nominal flow rate at the inlet of different respirable cyclones. Experiments were conducted using a factorial combination of 13 widely used sampling pumps (11 medium and 2 high volumetric flow rate pu...
Chock, Jeffrey Mun Kong
1999-01-01
Blast profiles and two primary methods of determining them were reviewed for use in the creation of a computer program for calculating blast pressures which serves as a design tool to aid engineers or analysts in the study of structures subjected to explosive air blast. These methods were integrated into a computer program, BLAST.F, to generate air blast pressure profiles by one of these two differing methods. These two methods were compared after the creation of the program and can conserv...
Esseiva, Pierre; Anglada, Frederic; Dujourdy, Laurence; Taroni, Franco; Margot, Pierre; Pasquier, Eric Du; Dawson, Michael; Roux, Claude; Doble, Philip
2005-08-15
Artificial neural networks (ANNs) were utilised to validate illicit drug classification in the profiling method used at the "Institut de Police Scientifique" of the University of Lausanne (IPS). This method established links between samples using a combination of principal component analysis (PCA) and calculation of a correlation value between samples. Heroin seizures sent to the IPS laboratory were analysed using gas chromatography (GC) to separate the major alkaloids present in illicit heroin. Statistical analysis was then performed on 3371 samples. Initially, PCA was performed as a preliminary screen to identify samples of a similar chemical profile. A correlation value was then calculated for each sample previously identified with PCA. This correlation value was used to determine links between drug samples. These links were then recorded in an Ibase® database. From this database the notion of "chemical class" arises, where samples with similar chemical profiles are grouped together. Currently, about 20 "chemical classes" have been identified. The normalised peak areas of six target compounds were then used to train an ANN to classify each sample into its appropriate class. Four hundred and sixty-eight samples were used as a training data set. Sixty samples were treated as blinds and 370 as non-linked samples. The results show that in 96% of cases the neural network attributed the seizure to the right "chemical class". The application of a neural network was found to be a useful tool to validate the classification of new drug seizures in existing chemical classes. This tool should be increasingly used in such situations involving profile comparisons and classifications.
Medhat, M. E.; Demir, Nilgun; Akar Tarim, Urkiye; Gurler, Orhan
2014-08-01
Monte Carlo simulations with FLUKA and Geant4 were performed to study mass attenuation for various types of soil at 59.5, 356.5, 661.6, 1173.2 and 1332.5 keV photon energies. Appreciable variations are noted for all parameters when changing the photon energy and the chemical composition of the sample. The simulation results were compared with experimental data and with the XCOM program. The simulations show that the calculated mass attenuation coefficient values were closer to the experimental values than those obtained theoretically from the XCOM database for the same soil samples. The results indicate that Geant4 and FLUKA can be applied to estimate mass attenuation for various biological materials at different energies. The Monte Carlo method may be employed to make additional calculations on the photon attenuation characteristics of different soil samples collected from other places.
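For context on the quantity being simulated, the mass attenuation coefficient enters the Beer-Lambert law, which gives the fraction of photons transmitted through a sample. The numbers below are illustrative placeholders, not values from the study:

```python
import math

def transmitted_fraction(mass_att_cm2_per_g, density_g_per_cm3, thickness_cm):
    """Beer-Lambert law: I/I0 = exp(-(mu/rho) * rho * t),
    where mu/rho is the mass attenuation coefficient."""
    return math.exp(-mass_att_cm2_per_g * density_g_per_cm3 * thickness_cm)

# Illustrative inputs: mu/rho ~ 0.077 cm^2/g near 661.6 keV,
# a soil density of 1.6 g/cm^3, and a 5 cm thick sample
f = transmitted_fraction(0.077, 1.6, 5.0)
print(f"Transmitted fraction: {f:.3f}")
```

Comparing such computed transmissions against measured ones is essentially how simulated attenuation coefficients are validated against experiment.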
Evaluation of 1H NMR relaxometry for the assessment of pore size distribution in soil samples
Jaeger, F.; Bowe, S.; As, van H.; Schaumann, G.E.
2009-01-01
1H NMR relaxometry is used in earth science as a non-destructive and time-saving method to determine pore size distributions (PSD) in porous media with pore sizes ranging from nm to mm. This is a broader range than generally reported for results from X-ray computed tomography (X-ray CT) scanning, wh...
Energy Technology Data Exchange (ETDEWEB)
Leifer, R. Z. [Environmental Measurements Lab. (EML), New York, NY (United States); Jacob, E. M. [Environmental Measurements Lab. (EML), New York, NY (United States); Marschke, S. F. [Environmental Measurements Lab. (EML), New York, NY (United States); Pranitis, D. M. [Environmental Measurements Lab. (EML), New York, NY (United States); Jaw, H-R. Kristina [Environmental Measurements Lab. (EML), New York, NY (United States)
2000-03-01
A rotating drum impactor was co-located with a high-volume air sampler for ~1 y at the fence line of the U.S. Department of Energy's Fernald Environmental Management Project site. Data on the size distribution of uranium-bearing atmospheric aerosols from 0.065 µm to 100 µm in diameter were obtained and used to compute dose using several different models. During most of the year, the mass of ²³⁸U above 15 µm exceeded 70% of the total uranium mass from all particulates. Above 4.3 µm, the ²³⁸U mass exceeded 80% of the total uranium mass from all particulates. During any sampling period the size distribution was bimodal. In the winter/spring period, the modes appeared at 0.29 µm and 3.2 µm. During the summer period, the lower mode shifted up to ~0.45 µm. In the fall/winter, the upper mode shifted to ~1.7 µm, while the lower mode stayed at 0.45 µm. These differences reflect changes in site activities. Thorium concentrations were comparable to the uranium concentrations during the late spring and summer period and decreased to ~25% of the ²³⁸U concentration in the late summer. The thorium size-distribution trend also differed from the uranium trend. The current calculational method used to demonstrate compliance with regulations assumes that the airborne particulates are characterized by an activity median diameter of 1 µm. This assumption results in an overestimate of the dose to offsite receptors by as much as a factor of seven relative to values derived using the latest ICRP 66 lung model with more appropriate particle sizes. Further evaluation of the size distribution for each radionuclide would substantially improve the dose estimates.
International Nuclear Information System (INIS)
The study aimed to appraise the dose differences between Acuros XB (AXB) and the Anisotropic Analytical Algorithm (AAA) in stereotactic body radiotherapy (SBRT) treatment for lung cancer with flattening filter free (FFF) beams. Additionally, the potential role of the calculation grid size (CGS) in the dose differences between the two algorithms was also investigated. SBRT plans with 6X and 10X FFF beams produced from the CT scan data of 10 patients suffering from stage I lung cancer were enrolled in this study. Clinically acceptable treatment plans with AAA were recalculated using AXB with the same monitor units (MU) and identical multileaf collimator (MLC) settings. Furthermore, different CGS (2.5 mm and 1 mm) in the two algorithms were also employed to investigate their dosimetric impact. Doses to planning target volumes (PTV) and organs at risk (OARs) were compared between the two algorithms. The PTV was separated into PTV-soft (density in the soft-tissue range) and PTV-lung (density in the lung range) for comparison. The dose to PTV-lung predicted by AXB was found to be 1.33 ± 1.12% (6XFFF beam with 2.5 mm CGS), 2.33 ± 1.37% (6XFFF beam with 1 mm CGS), 2.81 ± 2.33% (10XFFF beam with 2.5 mm CGS) and 3.34 ± 1.76% (10XFFF beam with 1 mm CGS) lower than that predicted by AAA, respectively. However, the dose to PTV-soft was comparable. For OARs, AXB predicted a slightly lower dose to the aorta, chest wall, spinal cord and esophagus, regardless of whether the 6XFFF or 10XFFF beam was utilized. Exceptionally, the dose to the ipsilateral lung was significantly higher with AXB. AXB principally predicts a lower dose to PTV-lung compared to AAA, and the CGS contributes to the relative dose difference between the two algorithms.
Simple and efficient way of speeding up transmission calculations with k-point sampling
Directory of Open Access Journals (Sweden)
Jesper Toft Falkenberg
2015-07-01
The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally “cheap” post-processing scheme to interpolate transmission functions over k-points to get smooth well-converged average transmission functions. This is relevant for data obtained using typical “expensive” first principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
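The post-processing idea can be sketched minimally: given transmissions computed on a coarse k-grid, interpolate in k to a fine grid and then average. This toy uses plain linear interpolation and a synthetic smooth T(E, k); the authors' actual interpolation scheme and data are more sophisticated, so treat this purely as an illustration of the workflow:

```python
import numpy as np

def k_averaged_transmission(T_coarse, k_coarse, n_fine=200):
    """Interpolate T(E, k) from a coarse k-grid onto a fine one, then average
    over k. T_coarse has shape (n_energies, n_kpoints)."""
    k_fine = np.linspace(k_coarse[0], k_coarse[-1], n_fine)
    T_fine = np.array([np.interp(k_fine, k_coarse, row) for row in T_coarse])
    return T_fine.mean(axis=1)

# Toy transmission model, smooth in k (an assumption, not the paper's data)
E = np.linspace(-1.0, 1.0, 101)
k = np.linspace(0.0, 0.5, 6)  # 6 "expensive" k-points
T = 1.0 / (1.0 + np.exp(-5 * (E[:, None] - np.cos(2 * np.pi * k)[None, :])))
T_avg = k_averaged_transmission(T, k)
```

The speed-up comes from evaluating the first-principles transmission at only a handful of k-points and letting cheap interpolation supply the dense k-average.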
Simple and efficient way of speeding up transmission calculations with $k$-point sampling
Falkenberg, Jesper Toft
2015-01-01
The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally "cheap" post-processing scheme to interpolate transmission functions over $k$-points to get smooth well-converged average transmission functions. This is relevant for data obtained using typical "expensive" first principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
Bauer, Stefan; Ibanez, Ana B
2015-01-01
Background: Increasing sample throughput is needed when large numbers of samples have to be processed. In chromatography, one strategy is to reduce column length for decreased analysis time. Therefore, the feasibility of analyzing samples simply on a guard column was explored using refractive index and ultraviolet detection. Results from the guard columns were compared to the analyses using the standard 300 mm Aminex HPX-87H column, which is widely applied to the analysis of samples from many b...
Core size effect on the dry and saturated ultrasonic pulse velocity of limestone samples.
Ercikdi, Bayram; Karaman, Kadir; Cihangir, Ferdi; Yılmaz, Tekin; Aliyazıcıoğlu, Şener; Kesimal, Ayhan
2016-12-01
This study presents the effect of core length on the saturated (UPVsat) and dry (UPVdry) P-wave velocities of four different biomicritic limestone samples, namely light grey (BL-LG), dark grey (BL-DG), reddish (BL-R) and yellow (BL-Y), using core samples of different lengths (25-125 mm) at a constant diameter (54.7 mm). The saturated P-wave velocity (UPVsat) of all core samples generally decreased with increasing sample length. However, the dry P-wave velocity (UPVdry) of samples obtained from the BL-LG and BL-Y limestones increased with increasing sample length. In contrast to the literature, the dry P-wave velocity (UPVdry) values of core samples with lengths of 75, 100 and 125 mm were consistently higher (2.8-46.2%) than the saturated (UPVsat) values. Chemical and mineralogical analyses have shown that the P-wave velocity is very sensitive to the calcite and clay minerals potentially leading to the weakening/disintegration of rock samples in the presence of water. Severe fluctuations in UPV values were observed between sample lengths of 25 and 75 mm; thereafter, a trend of stabilization was observed. The maximum variation of UPV values between the sample lengths of 75 mm and 125 mm was only 7.3%. Therefore, the threshold core sample length was interpreted as 75 mm for UPV measurement in the biomicritic limestone samples used in this study.
Importance of Sample Size for the Estimation of Repeater F Waves in Amyotrophic Lateral Sclerosis
Institute of Scientific and Technical Information of China (English)
Jia Fang; Ming-Sheng Liu; Yu-Zhou Guan; Bo Cui; Li-Ying Cui
2015-01-01
Background: In amyotrophic lateral sclerosis (ALS), repeater F waves are increased. Accurate assessment of repeater F waves requires an adequate sample size. Methods: We studied the F waves of the left ulnar nerves in ALS patients. Based on the presence or absence of pyramidal signs in the left upper limb, the ALS patients were divided into two groups: one group with pyramidal signs designated as the P group and the other without pyramidal signs designated as the NP group. The Index repeating neurons (RN) and Index repeater F waves (Freps) were compared among the P, NP and control groups following 20 and 100 stimuli respectively. For each group, the Index RN and Index Freps obtained from 20 and 100 stimuli were compared. Results: In the P group, the Index RN (P = 0.004) and Index Freps (P = 0.001) obtained from 100 stimuli were significantly higher than from 20 stimuli. For F waves obtained from 20 stimuli, no significant differences were identified between the P and NP groups for Index RN (P = 0.052) and Index Freps (P = 0.079); the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than the control group; the Index RN (P = 0.002) of the NP group was significantly higher than the control group. For F waves obtained from 100 stimuli, the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than the NP group; the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P and NP groups were significantly higher than the control group. Conclusions: Increased repeater F waves reflect increased excitability of the motor neuron pool and indicate upper motor neuron dysfunction in ALS. For an accurate evaluation of repeater F waves in ALS patients, especially those with moderate to severe muscle atrophy, 100 stimuli would be required.
Multiscale sampling of plant diversity: Effects of minimum mapping unit size
Stohlgren, T.J.; Chong, G.W.; Kalkhan, M.A.; Schell, L.D.
1997-01-01
Only a small portion of any landscape can be sampled for vascular plant diversity because of constraints of cost (salaries, travel time between sites, etc.). Often, the investigator decides to reduce the cost of creating a vegetation map by increasing the minimum mapping unit (MMU), and/or by reducing the number of vegetation classes to be considered. Questions arise about what information is sacrificed when map resolution is decreased. We compared plant diversity patterns from vegetation maps made with 100-ha, 50-ha, 2-ha, and 0.02-ha MMUs in a 754-ha study area in Rocky Mountain National Park, Colorado, United States, using four 0.025-ha and 21 0.1-ha multiscale vegetation plots. We developed and tested species-log(area) curves, correcting the curves for within-vegetation-type heterogeneity with Jaccard's coefficients. Total species richness in the study area was estimated from vegetation maps at each resolution (MMU), based on the corrected species-area curves, the total area of each vegetation type, and species overlap among vegetation types. With the 0.02-ha MMU, six vegetation types were recovered, resulting in an estimated 552 species (95% CI = 520-583 species) in the 754-ha study area (330 plant species were observed in the 25 plots). With the 2-ha MMU, five vegetation types were recognized, resulting in an estimated 473 species for the study area. With the 50-ha MMU, 439 plant species were estimated for the four vegetation types recognized in the study area. With the 100-ha MMU, only three vegetation types were recognized, resulting in an estimated 341 plant species for the study area. Locally rare species and keystone ecosystems (areas of high or unique plant diversity) were missed at the 2-ha, 50-ha, and 100-ha scales. Evaluating the effects of minimum mapping unit size requires: (1) an initial stratification of homogeneous, heterogeneous, and rare habitat types; and (2) an evaluation of within-type and between-type heterogeneity generated by environmental...
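The extrapolation step described above can be sketched by fitting a species-log(area) model to plot data and projecting to the full study area. The plot values below are hypothetical stand-ins, not the paper's data, and the fit omits the Jaccard heterogeneity correction the authors apply:

```python
import numpy as np

# Hypothetical plot data: sampled area (ha) and cumulative species richness
areas = np.array([0.025, 0.1, 1.0, 10.0, 100.0])
richness = np.array([40, 60, 95, 130, 170])

# Fit the species-log(area) model S = c + z * log10(A)
z, c = np.polyfit(np.log10(areas), richness, 1)

# Extrapolate to the 754-ha study area
predicted_total = c + z * np.log10(754.0)
print(f"slope z = {z:.1f}, predicted richness at 754 ha = {predicted_total:.0f}")
```

The slope z controls how quickly richness accumulates with area, so underestimating within-type heterogeneity (as with a coarse MMU) flattens the curve and lowers the landscape-level richness estimate.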
Amano, Ken-ichi
2012-01-01
Recent surface force apparatus (SFA) and atomic force microscopy (AFM) can measure force curves between a probe and a sample surface in solvent. The force curve is sometimes interpreted as the solvation structure, because its shape is generally oscillatory and the pitch of the oscillation is about the same as the diameter of the solvent molecules. However, it is not the solvation structure; it is only the force between the probe and the sample surface. Therefore, this brief paper presents a method for calculating the solvation structure from the force curve. The method is constructed using integral equation theory, a statistical mechanics of liquids (the Ornstein-Zernike equation coupled with the hypernetted-chain closure). This method is considered to be important for elucidation of the solvation structure on a sample surface.
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
The proportionator is a novel and radically different approach to sampling with microscopes based on well-known statistical theory (probability proportional to size - PPS sampling). It uses automatic image analysis, with a large range of options, to assign to every field of view in the section ... Because of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to be automatically estimated (not just predicted), unbiased, for all estimators and at no extra cost to the user.
Directory of Open Access Journals (Sweden)
Badawi Mohamed S.
2015-01-01
When using gamma-ray spectrometry for radioactivity analysis of environmental samples (such as soil, sediment or the ash of a living organism), the relevant linear attenuation coefficients should be known in order to calculate self-absorption in the sample bulk. This parameter is additionally important since the unidentified samples are normally different in composition and density from the reference ones (the latter being, e.g., liquid sources, commonly used for detection efficiency calibration in radioactivity monitoring). This work aims at introducing a numerical simulation method for the calculation of linear attenuation coefficients without the use of a collimator. The method is primarily based on calculations of the effective solid angles, compound parameters accounting for the emission and detection probabilities, as well as for the source-to-detector geometrical configuration. The efficiency transfer principle and average path lengths through the samples themselves are employed, too. The results obtained are compared with those from the NIST-XCOM database; close agreement confirms the validity of the numerical simulation approach.
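The self-absorption the abstract refers to can be illustrated with the simplest slab approximation: photons born uniformly through the sample thickness each survive with probability exp(-mu*x), so the average survival is the integral below. This is a textbook sketch, not the effective-solid-angle method the authors develop:

```python
import math

def self_absorption_factor(mu_linear_per_cm, thickness_cm):
    """Average photon survival for a uniform slab source viewed face-on:
    (1/L) * integral_0^L exp(-mu * x) dx = (1 - exp(-mu*L)) / (mu*L)."""
    x = mu_linear_per_cm * thickness_cm
    return (1.0 - math.exp(-x)) / x

# Illustrative inputs: mu = 0.2 cm^-1, 3 cm deep sample container
f = self_absorption_factor(0.2, 3.0)
print(f"Self-absorption correction factor: {f:.3f}")
```

A factor of ~0.75 here would mean a quarter of the emitted photons are absorbed inside the sample itself, which is why efficiency calibrations done with thin liquid references must be corrected for dense environmental matrices.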
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
Studies were conducted on specific core collections constructed on the basis of different traits and sample sizes by the method of stepwise clustering, with three sampling strategies based on genotypic values of cotton. A total of 21 traits (11 agronomy traits, 5 fiber traits and 5 seed traits) were used to construct the main core collections. Specific core collections, as representatives of the initial collection, were constructed by agronomy, fiber or seed traits, respectively. Compared with the main core collection, specific core collections tended to have similar properties for maintaining the genetic diversity of agronomy, seed or fiber traits. Core collections developed with sample sizes of about 17% (P2 = 0.17) and 24% (P1 = 0.24) under the three sampling strategies could be quite representative of the initial collection.
Homeopathy: statistical significance versus the sample size in experiments with Toxoplasma gondii
Directory of Open Access Journals (Sweden)
Ana Lúcia Falavigna Guilherme
2011-09-01
..., examined in its full length. This study was approved by the Ethics Committee for animal experimentation of the UEM - Protocol 036/2009. The data were compared using the Mann-Whitney and Bootstrap [7] tests with the statistical software BioStat 5.0. Results and discussion: There was no significant difference when analyzed with the Mann-Whitney test, even multiplying the "n" ten times (p = 0.0618). The number of cysts observed was 4.5 ± 3.3 in the BIOT 200DH group and 12.8 ± 9.7 in the CONTROL group. Table 1 shows the results obtained using the bootstrap analysis for each data set enlarged from 2n up to 2n+5, with their respective p-values. With the inclusion of more elements in the different groups, tested one by one, randomly, gradually increasing the samples, we observed the sample size needed to statistically confirm the results seen experimentally. Using 17 mice in the BIOT 200DH group and 19 in the CONTROL group, we already observed statistical significance. This result suggests that experiments involving highly diluted substances and infection of mice with T. gondii should work with experimental groups of at least 17 animals. Despite the current and relevant ethical discussions about the number of animals used for experimental procedures, the number of animals involved in each experiment must meet the characteristics of each item to be studied. In the case of experiments involving highly diluted substances, experimental animal models are still rudimentary and the biological effects observed appear to be individualized, as described in the literature for homeopathy [8]. The fact that statistical significance was achieved by increasing the sample observed in this trial tells us about a rare event, with a strong individual behavior, difficult to demonstrate in a result set treated simply with a comparison of means or medians. Conclusion: Bootstrap seems to be an interesting methodology for the analysis of data obtained from experiments with highly diluted...
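A bootstrap comparison of two small groups, as used above, can be sketched in a few lines. This is a generic permutation-style bootstrap of the difference in means under pooled resampling, not necessarily BioStat's exact routine, and the cyst counts are hypothetical values shaped like the reported group means:

```python
import random

def bootstrap_pvalue(a, b, n_boot=10000, seed=1):
    """Two-sided bootstrap test for a difference in means: resample both
    groups from the pooled data and count differences at least as extreme
    as the observed one."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    extreme = 0
    for _ in range(n_boot):
        ra = [rng.choice(pooled) for _ in a]
        rb = [rng.choice(pooled) for _ in b]
        if abs(sum(ra) / len(ra) - sum(rb) / len(rb)) >= observed:
            extreme += 1
    return extreme / n_boot

# Hypothetical counts with means near the reported 4.5 and 12.8
biot = [2, 4, 5, 3, 8, 1, 7, 6]
control = [10, 15, 9, 22, 4, 18, 11, 13]
p = bootstrap_pvalue(biot, control)
```

Repeating this while gradually enlarging the groups, as the authors do from 2n to 2n+5, shows directly at what sample size the p-value stabilizes below the significance threshold.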
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
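One of the two summary statistics named above, the folded allele frequency spectrum, is straightforward to compute from unpolarized SNP data: since the ancestral allele is unknown, the minor-allele count is used. A minimal sketch on a toy 0/1 haplotype matrix (not PopSizeABC's implementation):

```python
import numpy as np

def folded_afs(haplotypes):
    """Folded allele frequency spectrum from a 0/1 haplotype matrix
    (rows = haploid sequences, columns = SNPs). Folding takes the minor
    allele count min(c, n - c) at each SNP, so no polarization is needed."""
    n = haplotypes.shape[0]
    counts = haplotypes.sum(axis=0)
    folded = np.minimum(counts, n - counts)
    return np.bincount(folded, minlength=n // 2 + 1)

# Toy data: 4 haploid sequences, 4 SNPs
haps = np.array([[0, 1, 1, 0],
                 [1, 1, 0, 0],
                 [0, 1, 1, 1],
                 [0, 0, 1, 0]])
afs = folded_afs(haps)
```

Here every SNP has minor-allele count 1 (singletons), so the spectrum is concentrated in the first frequency class, the pattern expected after a recent population expansion.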
International Nuclear Information System (INIS)
We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.
Energy Technology Data Exchange (ETDEWEB)
Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo; Rossi, Giulia, E-mail: giulia.rossi@gmail.com [Physics Department, University of Genoa and CNR-IMEM, Via Dodecaneso 33, 16146 Genoa (Italy); Monticelli, Luca [Bases Moléculaires et Structurales des Systèmes Infectieux (BMSSI), CNRS UMR 5086, 7 Passage du Vercors, 69007 Lyon (France)
2015-10-14
We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.
Bochicchio, Davide; Panizon, Emanuele; Ferrando, Riccardo; Monticelli, Luca; Rossi, Giulia
2015-10-01
We compare the performance of two well-established computational algorithms for the calculation of free-energy landscapes of biomolecular systems, umbrella sampling and metadynamics. We look at benchmark systems composed of polyethylene and polypropylene oligomers interacting with lipid (phosphatidylcholine) membranes, aiming at the calculation of the oligomer water-membrane free energy of transfer. We model our test systems at two different levels of description, united-atom and coarse-grained. We provide optimized parameters for the two methods at both resolutions. We devote special attention to the analysis of statistical errors in the two different methods and propose a general procedure for the error estimation in metadynamics simulations. Metadynamics and umbrella sampling yield the same estimates for the water-membrane free energy profile, but metadynamics can be more efficient, providing lower statistical uncertainties within the same simulation time.
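The statistical-error analysis emphasized in this abstract commonly relies on block averaging, which corrects the naive standard error for time correlation in simulation data. The sketch below is a generic block-averaging estimator on synthetic data, not the specific procedure the authors propose for metadynamics:

```python
import numpy as np

def block_average_error(samples, n_blocks=10):
    """Standard error of the mean estimated from block averages: split the
    (possibly correlated) series into blocks, average each block, and take
    the standard error of the block means."""
    blocks = np.array_split(np.asarray(samples), n_blocks)
    means = np.array([b.mean() for b in blocks])
    return means.std(ddof=1) / np.sqrt(n_blocks)

# Synthetic uncorrelated series just to exercise the estimator
rng = np.random.default_rng(0)
err = block_average_error(rng.normal(0.0, 1.0, 10000))
```

For correlated data one increases the block length until the estimate plateaus; that plateau value is the honest uncertainty to attach to a free-energy estimate.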
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve the characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both the arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four out of six source areas consisted of silt and clay particles less than 32 µm in size. Distributions of particles up to 500 µm in size were highly variable both within and between source areas. The results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent in a fixed-point sampler.
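The median particle sizes quoted above (d50 values) are read off a cumulative mass distribution: the size at which 50% of the mass is finer. A minimal sketch with hypothetical sieve data, purely to illustrate the calculation:

```python
import numpy as np

def d50(sizes_um, cumulative_percent_finer):
    """Median particle size: interpolate the size at which the cumulative
    mass distribution crosses 50% finer."""
    return float(np.interp(50.0, cumulative_percent_finer, sizes_um))

# Hypothetical sieve analysis (particle size in µm, cumulative % finer)
sizes = [2, 8, 32, 63, 125, 250, 500]
cum = [5, 15, 40, 55, 75, 90, 100]
median = d50(sizes, cum)
print(f"d50 = {median:.1f} µm")
```

A proper analysis interpolates on a logarithmic size axis, since sieve sizes are log-spaced; the linear version above is the simplest possible form.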
Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.
2016-07-01
Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases, which highlights differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate the parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size ...), fines from future sample return missions could be used to investigate lava flow diversity and petrological significance.
Directory of Open Access Journals (Sweden)
Liu Songtao
2012-12-01
Full Text Available Abstract Background: Cardiac magnetic resonance (CMR) T1 mapping has been used to characterize myocardial diffuse fibrosis. The aim of this study is to determine the reproducibility and sample size of CMR fibrosis measurements that would be applicable in clinical trials. Methods: A modified Look-Locker with inversion recovery (MOLLI) sequence was used to determine myocardial T1 values pre-, and 12 and 25 min post-administration of a gadolinium-based contrast agent at 3 Tesla. For 24 healthy subjects (8 men; 29 ± 6 years), two separate scans were obtained, (a) with a bolus of 0.15 mmol/kg of gadopentate dimeglumine and (b) with 0.1 mmol/kg of gadobenate dimeglumine, respectively, with an average of 51 ± 34 days between the two scans. Separately, 25 heart failure subjects (12 men; 63 ± 14 years) were evaluated after a bolus of 0.15 mmol/kg of gadopentate dimeglumine. The myocardial partition coefficient (λ) was calculated as ΔR1myocardium/ΔR1blood, and ECV was derived from λ by adjusting for (1 - hematocrit). Results: Mean ECV and λ were both significantly higher in HF subjects than in healthy subjects (ECV: 0.287 ± 0.034 vs. 0.267 ± 0.028, p = 0.002; λ: 0.481 ± 0.052 vs. 0.442 ± 0.037, p …). Conclusion: ECV and λ quantification have a low variability across scans, and could be a viable tool for evaluating clinical trial outcomes.
Method to study sample object size limit of small-angle x-ray scattering computed tomography
Choi, Mina; Ghammraoui, Bahaa; Badal, Andreu; Badano, Aldo
2016-03-01
Small-angle x-ray scattering (SAXS) imaging is an emerging medical tool that can be used for detailed in vivo tissue characterization and has the potential to provide added contrast to conventional x-ray projection and CT imaging. We used the publicly available MC-GPU code to simulate x-ray trajectories in a SAXS-CT geometry for a target material embedded in a water background material with varying sample sizes (1, 3, 5, and 10 mm). Our target materials were an aqueous solution of gold nanoparticle (GNP) spheres with a radius of 6 nm and an aqueous solution of bovine serum albumin (BSA) proteins, chosen for their well-characterized scatter profiles at small angles and their highly scattering properties. Our objective was to study how the reconstructed scatter profile degrades at larger target imaging depths and increasing sample sizes. We found that scatter profiles of the GNP in water can still be reconstructed at depths up to 5 mm embedded at the center of a 10 mm sample. Scatter profiles of BSA in water were also reconstructed at depths up to 5 mm in a 10 mm sample, but with noticeable signal degradation compared to the GNP sample. This work presents a method to study the sample size limits for future SAXS-CT imaging systems.
The accuracy of instrumental neutron activation analysis of kilogram-size inhomogeneous samples.
Blaauw, M; Lakmaker, O; van Aller, P
1997-07-01
The feasibility of quantitative instrumental neutron activation analysis (INAA) of samples in the kilogram range without internal standardization has been demonstrated by Overwater et al. (Anal. Chem. 1996, 68, 341). In their studies, however, they demonstrated only the agreement between the "corrected" γ-ray spectrum of homogeneous large samples and that of small samples of the same material. In this paper, the k0 calibration of the IRI facilities for large samples is described, and, this time in terms of (trace) element concentrations, some of Overwater's results for homogeneous materials are presented again, as well as results obtained from inhomogeneous materials and subsamples thereof. It is concluded that large-sample INAA can be as accurate as ordinary INAA, even when applied to inhomogeneous materials.
Directory of Open Access Journals (Sweden)
L. Luquot
2015-11-01
Full Text Available The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, surface-to-volume ratio, pore size distribution, permeability, tortuosity and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. This strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that, for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory measurement. We observed that pore size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical and petrophysical parameters provides results similar to those measured in the laboratory but with much shorter durations.
Luquot, Linda; Hebert, Vanessa; Rodriguez, Olivier
2016-04-01
The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, surface-to-volume ratio, pore size distribution, permeability, tortuosity and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. This strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that, for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory measurement. We observed that pore size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical and petrophysical parameters provides results similar to those measured in the laboratory but with much shorter durations.
Luquot, Linda; Hebert, Vanessa; Rodriguez, Olivier
2016-03-01
The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, pore-size distribution, tortuosity, and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. This strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and enhance the use of experimental techniques or XMT images depending on the rock heterogeneity. We established that for most of the parameters calculated here, the values obtained by computing XMT images are in agreement with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory measurement. We observed that pore-size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we concluded that computing XMT images to determine transport, geometrical, and petrophysical parameters provides results similar to those measured in the laboratory but with much shorter durations.
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
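As a rough illustration of the sample size selection problem mentioned above (for the simple-correlation case, not the paper's own programs for the squared multiple correlation), a Fisher z approximation can be sketched as follows; the default quantiles assume a two-sided alpha of 0.05 and 80% power, and the formula is the standard n = ((z_alpha + z_beta)/atanh(r))^2 + 3 approximation.

```python
import math

def n_for_correlation(r, z_alpha=1.959964, z_beta=0.841621):
    """Approximate sample size needed to detect a population correlation
    of magnitude r, via the Fisher z transformation. Defaults correspond
    to a two-sided alpha = 0.05 test at 80% power; this is a generic
    textbook approximation, not the procedure from the cited paper."""
    fisher_z = math.atanh(r)  # variance-stabilizing transform of r
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

# Smaller target correlations demand much larger samples:
# r = 0.3 requires n = 85, while r = 0.5 requires only n = 30.
```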
This paper compares the collection characteristics of a new rotating impactor for ultra fine aerosols (FLB) with the industry standard (Hock). The volume and droplet size distribution collected by the rotating impactors were measured via spectroscopy and microscopy. The rotary impactors were co-lo...
40 CFR 761.243 - Standard wipe sample method and size.
2010-07-01
... surface areas, when small diameter pipe, a small valve, or a small regulator. When smaller surfaces are sampled, convert the... pipe segment or pipeline section using a standard wipe test as defined in § 761.123. Detailed...
Energy Technology Data Exchange (ETDEWEB)
Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear
2011-07-01
A cavity-type cesium-137 research irradiation facility at CTEx has been modeled using the Monte Carlo code MCNPX. The irradiator has been used daily in experiments to optimize the use of ionizing radiation for the conservation of many kinds of food and to improve material properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However, that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples, which can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus, this work aims to further investigate such uncertainties by calculating the dose rate distribution inside the treated items, so that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds) have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately detect 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution before initiating such a mark-recapture effort, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
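The CI-overlap criterion used in these power analyses can be caricatured with a much simpler simulation. The sketch below is not the Burnham joint model: it treats each annual survival estimate as a binomial proportion under a normal approximation and counts how often the two 95% confidence intervals separate; the sample sizes and survival rates are illustrative only.

```python
import math
import random

def power_ci_overlap(n, s_before, s_after, sims=4000, seed=42):
    """Crude power estimate: fraction of simulated studies in which the
    95% CIs of two annual survival estimates (normal approximation to
    the binomial with n marked individuals) do not overlap, i.e. the
    decline is 'detected'. A toy stand-in for the paper's simulations."""
    rng = random.Random(seed)
    se1 = math.sqrt(s_before * (1 - s_before) / n)
    se2 = math.sqrt(s_after * (1 - s_after) / n)
    detected = 0
    for _ in range(sims):
        p1 = rng.gauss(s_before, se1)  # simulated estimate, year 1
        p2 = rng.gauss(s_after, se2)   # simulated estimate, year 2
        if (p1 - 1.96 * se1) > (p2 + 1.96 * se2):
            detected += 1
    return detected / sims
```

Even this toy version reproduces the qualitative finding: a 50% drop in survival is detectable with large samples, while a 10% drop with a small sample almost never separates the intervals.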
Shi, Wei-Yu; Su, Li-Jun; Song, Yi; Ma, Ming-Guo; Du, Sheng
2015-10-01
Soil CO2 emission is recognized as one of the largest fluxes in the global carbon cycle. Small errors in its estimation can result in large uncertainties and have important consequences for climate model predictions. The Monte Carlo approach is efficient for estimating and reducing spatial-scale sampling errors, but it has not previously been used in soil CO2 emission studies. Here, soil respiration data from 51 PVC collars were measured within maize farmland covering 25 km(2) during the growing season. Based on the Monte Carlo approach, optimal sample sizes for soil temperature, soil moisture, and soil CO2 emission were determined, and models of soil respiration could be effectively assessed: the soil temperature model was the most effective of the three models at increasing accuracy. The study demonstrated that the Monte Carlo approach can improve the accuracy of soil respiration estimates with a limited sample size, which will be valuable for reducing uncertainties in the global carbon cycle. PMID:26664693
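A minimal version of the Monte Carlo subsampling idea can be sketched as follows: repeatedly draw subsamples of size n from a field of measurements and record the relative error of the subsample mean against the full-sample mean. The 51-collar "flux" values below are synthetic, not the study's data.

```python
import random
import statistics

def subsample_error(values, n, draws=2000, seed=0):
    """Monte Carlo estimate of the mean relative sampling error when only
    n of the available measurement points are used."""
    rng = random.Random(seed)
    true_mean = statistics.fmean(values)
    errs = [abs(statistics.fmean(rng.sample(values, n)) - true_mean) / true_mean
            for _ in range(draws)]
    return statistics.fmean(errs)

# Synthetic 'soil respiration' field of 51 collar measurements (hypothetical
# mean 3.0 and sd 0.8, in arbitrary flux units)
rng = random.Random(7)
flux = [rng.gauss(3.0, 0.8) for _ in range(51)]
errors = {n: subsample_error(flux, n) for n in (5, 15, 30, 45)}
# The error curve vs n indicates how small a sample can be tolerated
# for a target accuracy.
```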
Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît
2012-11-13
An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multidimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternatively along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
PMID:26605623
Barbara Di Camillo; Tiziana Sanavia; Matteo Martini; Giuseppe Jurman; Francesco Sambo; Annalisa Barla; Margherita Squillario; Cesare Furlanello; Gianna Toffolo; Claudio Cobelli
2012-01-01
MOTIVATION: The identification of robust lists of molecular biomarkers related to a disease is a fundamental step for early diagnosis and treatment. However, methodologies for the discovery of biomarkers using microarray data often provide results with limited overlap. These differences are attributable to 1) dataset size (few subjects relative to the number of features); 2) heterogeneity of the disease; 3) heterogeneity of the experimental protocols and computational pipelines employed in the a...
RISK-ASSESSMENT PROCEDURES AND ESTABLISHING THE SIZE OF SAMPLES FOR AUDITING FINANCIAL STATEMENTS
Directory of Open Access Journals (Sweden)
Daniel Botez
2014-12-01
Full Text Available In auditing financial statements, the procedures for assessing risks and calculating materiality differ from one auditor to another, according to audit firm policy or professional-body guidance. All, however, refer to the International Standards on Auditing: ISA 315, "Identifying and assessing the risks of material misstatement through understanding the entity and its environment", and ISA 320, "Materiality in planning and performing an audit". Drawing on the specific practices of auditors in Romania, the article presents detailed examples of these aspects: the evaluation of general inherent risk, of a specific inherent risk, and of control risk, and the calculation of materiality.
Study and Calculation of Feeding Size for R-pendulum Mill
Institute of Scientific and Technical Information of China (English)
谭开强
2013-01-01
This paper derives a formula for calculating the feeding size of the R-pendulum mill, calculates the feeding size of the R-pendulum mill under working conditions, and reasonably determines its maximum feeding size.
Item Characteristic Curve Parameters: Effects of Sample Size on Linear Equating.
Ree, Malcom James; Jensen, Harald E.
By means of computer simulation of test responses, the reliability of item analysis data and the accuracy of equating were examined for hypothetical samples of 250, 500, 1000, and 2000 subjects for two tests with 20 equating items plus 60 additional items on the same scale. Birnbaum's three-parameter logistic model was used for the simulation. The…
Effect of model choice and sample size on statistical tolerance limits
International Nuclear Information System (INIS)
Statistical tolerance limits are estimates of large (or small) quantiles of a distribution, quantities which are very sensitive to the shape of the tail of the distribution. The exact nature of this tail behavior cannot be ascertained from small samples, so statistical tolerance limits are frequently computed using a statistical model chosen on the basis of theoretical considerations or prior experience with similar populations. This report illustrates the effects of such choices on the computations.
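The sensitivity to model choice can be illustrated with a small sketch comparing a normal-model upper tolerance limit against a crude nonparametric one on skewed data. The k-factor below is the tabulated one-sided 95% coverage / 95% confidence value for n = 30, and the lognormal data are synthetic; on such skewed samples the two limits can differ noticeably, which is the report's point.

```python
import random
import statistics

rng = random.Random(3)
# Skewed synthetic data: a lognormal sample whose tail a normal model misreads
sample = [rng.lognormvariate(0.0, 0.5) for _ in range(30)]

mu = statistics.fmean(sample)
sd = statistics.stdev(sample)

# One-sided tolerance k-factor for n = 30, 95% coverage / 95% confidence
# (tabulated value from standard tolerance-limit tables)
k = 2.220

normal_utl = mu + k * sd      # model-based upper tolerance limit
nonparam_utl = max(sample)    # crude nonparametric alternative: sample maximum
```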
Ultrasonic detection and sizing of cracks in cast stainless steel samples
International Nuclear Information System (INIS)
The test consisted of 15 samples of cast stainless steel, each with a weld. Some of the specimens were provided with artificially produced thermal fatigue cracks. The inspection was performed with the P-scan method. The investigations showed an improvement in recognizability relative to earlier investigations. One probe, a dual-element, 45-degree longitudinal-wave type at a low frequency of 0.5-1 MHz, gives the best results. (G.B.)
Energy Technology Data Exchange (ETDEWEB)
Moore, Bria M.; Brady, Samuel L., E-mail: samuel.brady@stjude.org; Kaufman, Robert A. [Department of Radiological Sciences, St Jude Children' s Research Hospital, Memphis, Tennessee 38105 (United States); Mirro, Amy E. [Department of Biomedical Engineering, Washington University, St Louis, Missouri 63130 (United States)
2014-07-15
Purpose: To investigate the correlation of the size-specific dose estimate (SSDE) with absorbed organ dose, and to develop a simple methodology for estimating patient organ dose in a pediatric population (5-55 kg). Methods: Four physical anthropomorphic phantoms representing a range of pediatric body habitus were scanned with metal oxide semiconductor field effect transistor (MOSFET) dosimeters placed at 23 organ locations to determine absolute organ dose. Phantom absolute organ dose was divided by phantom SSDE to determine the correlation between organ dose and SSDE. Organ dose correlation factors (CF_SSDE^organ) were then multiplied by patient-specific SSDE to estimate patient organ dose. The CF_SSDE^organ were used to retrospectively estimate individual organ doses from 352 chest and 241 abdominopelvic pediatric CT examinations, where mean patient weight was 22 kg ± 15 (range 5-55 kg) and mean patient age was 6 yr ± 5 (range 4 months to 23 yr). Patient organ dose estimates were compared to published pediatric Monte Carlo study results. Results: Phantom effective diameters were matched with patient population effective diameters to within 4 cm, thus showing appropriate scalability of the phantoms across the entire pediatric population in this study. Individual CF_SSDE^organ were determined for a total of 23 organs in the chest and abdominopelvic region across nine weight subcategories. For organs fully covered by the scan volume, correlation in the chest (average 1.1; range 0.7-1.4) and abdominopelvic region (average 0.9; range 0.7-1.3) was near unity. For organs/tissues that extended beyond the scan volume (i.e., skin, bone marrow, and bone surface), correlation was determined to be poor (average 0.3; range 0.1-0.4) for both the chest and abdominopelvic regions, respectively. A means to estimate patient organ dose was demonstrated. Calculated patient organ dose, using patient SSDE and CF_SSDE^organ, was compared to
Energy Technology Data Exchange (ETDEWEB)
Faye, C.B.; Amodeo, T.; Fréjafon, E. [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France); Delepine-Gilon, N. [Institut des Sciences Analytiques, 5 rue de la Doua, 69100 Villeurbanne (France); Dutouquet, C., E-mail: christophe.dutouquet@ineris.fr [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France)
2014-01-01
Pollution of water is a matter of concern all over the earth. Particles are known to play an important role in the transportation of pollutants in this medium. In addition, the emergence of new materials such as NOAA (Nano-Objects, their Aggregates and their Agglomerates) emphasizes the need to develop adapted instruments for their detection. Surveillance of pollutants in particulate form in the waste waters of industries involved in nanoparticle manufacturing and processing is a telling example of possible applications of such instrumental development. The LIBS (laser-induced breakdown spectroscopy) technique coupled with a liquid jet as the sampling mode for suspensions was deemed a potential candidate for on-line and real-time monitoring. With the final aim of obtaining the best detection limits, the interaction of nanosecond laser pulses with the liquid jet was examined. The evolution of the volume sampled by laser pulses was estimated as a function of the laser energy, applying conditional analysis when analyzing a suspension of micrometric-sized particles of borosilicate glass. An estimation of the sampled depth was made. Along with the estimation of the sampled volume, the evolution of the SNR (signal-to-noise ratio) as a function of the laser energy was investigated as well. Eventually, the laser energy and the corresponding fluence optimizing both the sampling volume and the SNR were determined. The obtained results highlight intrinsic limitations of the liquid jet sampling mode when using 532 nm nanosecond laser pulses with suspensions. - Highlights: • Micrometric-sized particles in suspensions are analyzed using LIBS and a liquid jet. • The evolution of the sampling volume is estimated as a function of laser energy. • The sampling volume happens to saturate beyond a certain laser fluence. • Its value was found much lower than the beam diameter times the jet thickness. • Particles proved not to be entirely vaporized.
VanCuren, Richard A.; Cahill, Thomas; Burkhart, John; Barnes, David; Zhao, Yongjing; Perry, Kevin; Cliff, Steven; McConnell, Joe
2012-06-01
An ongoing program to continuously collect time- and size-resolved aerosol samples from ambient air at Summit Station, Greenland (72.6 N, 38.5 W) is building a long-term database to both record individual transport events and provide long-term temporal context for past and future intensive studies at the site. As a "first look" at this data set, analysis of samples collected from summer 2005 to spring 2006 demonstrates the utility of continuous sampling to characterize air masses over the ice pack, document individual aerosol transport events, and develop a long-term record. Seven source-related aerosol types were identified in this analysis: Asian dust, Saharan dust, industrial combustion, marine with combustion tracers, fresh coarse volcanic tephra, aged volcanic plume with fine tephra and sulfate, and the well-mixed background "Arctic haze". The Saharan dust is a new discovery; the other types are consistent with those reported from previous work using snow pits and intermittent ambient air sampling during intensive study campaigns. Continuous sampling complements the fundamental characterization of Greenland aerosols developed in intensive field programs by providing a year-round record of aerosol size and composition at all temporal scales relevant to ice core analysis, ranging from individual deposition events and seasonal cycles to a record of inter-annual variability of aerosols from both natural and anthropogenic sources.
Amano, Ken-ich
2013-01-01
Recent frequency-modulated atomic force microscopy (FM-AFM) can measure the three-dimensional force distribution between a probe and a sample surface in liquid. This force distribution is, in the present circumstances, assumed to represent the solvation structure on the sample surface, because the force distribution and the solvation structure have somewhat similar shapes. However, the force distribution is not exactly the solvation structure. To obtain the solvation structure with liquid AFM, a method for transforming the force distribution into the solvation structure is necessary. In this letter, we therefore present the transformation method in brief. We refer to this method as the solution of an inverse problem, because in the general calculation process the solvation structure is obtained first and the force distribution is computed from it.
Lugo, Jorge; Sosa, Victor
1999-10-01
The repulsion force between a cylindrical superconductor in the Meissner state and a small permanent magnet was calculated under the assumption that the superconductor was formed by a continuous array of dipoles distributed in the finite volume of the sample. After summing up the dipole-dipole interactions with the magnet, we obtained analytical expressions for the levitation force as a function of the superconductor-magnet distance, radius and thickness of the sample. We analyzed two configurations, with the magnet in a horizontal or vertical orientation.
IN SITU NON-INVASIVE SOIL CARBON ANALYSIS: SAMPLE SIZE AND GEOSTATISTICAL CONSIDERATIONS.
Energy Technology Data Exchange (ETDEWEB)
WIELOPOLSKI, L.
2005-04-01
I discuss a new approach for quantitative carbon analysis in soil based on INS. Although this INS method is not simple, it offers critical advantages not available with other newly emerging modalities. The key advantages of the INS system include the following: (1) It is a non-destructive method, i.e., no samples of any kind are taken. A neutron generator placed above the ground irradiates the soil, stimulating carbon characteristic gamma-ray emission that is counted by a detection system also placed above the ground. (2) The INS system can undertake multielemental analysis, thus expanding its usefulness. (3) It can be used in either static or scanning modes. (4) The volume sampled by the INS method is large, with a large footprint; when operating in a scanning mode, the sampled volume is continuous. (5) Except for a moderate initial cost of about $100,000 for the system, no additional expenses are required for its operation over two to three years, after which the neutron generator has to be replenished with a new tube at an approximate cost of $10,000, regardless of the number of sites analyzed. In light of these characteristics, the INS system appears invaluable for monitoring changes in the carbon content in the field. For this purpose no calibration is required; by establishing a carbon index, changes in carbon yield can be followed over time in exactly the same location, thus giving a percent change. On the other hand, with calibration, it can be used to determine the carbon stock in the ground, thus estimating the soil's carbon inventory. However, this requires revising the standard practices for deciding upon the number of sites required to attain a given confidence level, in particular for the purposes of upward scaling. Geostatistical considerations should then be incorporated to account properly for the averaging effects of the large volumes sampled by the INS system, which would require revising standard practices in the field for determining the number of spots to
Williams, Mark R.; King, Kevin W.; Macrae, Merrin L.; Ford, William; Van Esbroeck, Chris; Brunke, Richard I.; English, Michael C.; Schiff, Sherry L.
2015-11-01
Accurate estimates of annual nutrient loads are required to evaluate trends in water quality following changes in land use or management and to calibrate and validate water quality models. While much emphasis has been placed on understanding the uncertainty of nutrient load estimates in large, naturally drained watersheds, few studies have focused on tile-drained fields and small tile-drained headwater watersheds. The objective of this study was to quantify uncertainty in annual dissolved reactive phosphorus (DRP) and nitrate-nitrogen (NO3-N) load estimates from four tile-drained fields and two small tile-drained headwater watersheds in Ohio, USA and Ontario, Canada. High temporal resolution datasets of discharge (10-30 min) and nutrient concentration (2 h to 1 d) were collected over a 1-2 year period at each site and used to calculate a reference nutrient load. Monte Carlo simulations were used to subsample the measured data to assess the effects of sample frequency, calculation algorithm, and compositing strategy on the uncertainty of load estimates. Results showed that uncertainty in annual DRP and NO3-N load estimates was influenced by both the sampling interval and the load estimation algorithm. Uncertainty in annual nutrient load estimates increased with increasing sampling interval for all of the load estimation algorithms tested. Continuous discharge measurements and linear interpolation of nutrient concentrations yielded the least amount of uncertainty, but still tended to underestimate the reference load. Compositing strategies generally improved the precision of load estimates compared to discrete grab samples; however, they often reduced the accuracy. Based on the results of this study, we recommended that nutrient concentration be measured every 13-26 h for DRP and every 2.7-17.5 d for NO3-N in tile-drained fields and small tile-drained headwater watersheds to accurately (±10%) estimate annual loads.
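The subsampling experiment described above can be sketched on a synthetic record: compute a reference load from a high-resolution series, then re-estimate it with concentration sampled at a coarser interval and linearly interpolated (one of the calculation algorithms tested). All numbers below are illustrative, not the study's data.

```python
import random

# Synthetic high-resolution record: hourly discharge and concentration for
# 60 days (the study used 10-30 min discharge and 2 h to 1 d concentration).
rng = random.Random(11)
hours = 24 * 60
flow = [1.0 + 0.5 * rng.random() for _ in range(hours)]    # discharge, m3/h
conc = [0.10 + 0.05 * rng.random() for _ in range(hours)]  # concentration, g/m3
reference_load = sum(q * c for q, c in zip(flow, conc))    # 'true' load, g

def interpolated_load(interval):
    """Load estimate with continuous discharge but concentration sampled
    every `interval` hours and linearly interpolated between samples."""
    est = 0.0
    for t in range(hours):
        lo = (t // interval) * interval        # previous sampling time
        hi = min(lo + interval, hours - 1)     # next sampling time
        frac = (t - lo) / (hi - lo) if hi > lo else 0.0
        c = conc[lo] + frac * (conc[hi] - conc[lo])
        est += flow[t] * c
    return est

# Relative error typically grows as the sampling interval lengthens
bias_24h = abs(interpolated_load(24) - reference_load) / reference_load
bias_7d = abs(interpolated_load(24 * 7) - reference_load) / reference_load
```

Repeating this over many random realizations (or random sampling offsets) gives the Monte Carlo uncertainty envelopes the study reports for each interval and algorithm.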
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data. PMID:27410085
RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes
Directory of Open Access Journals (Sweden)
Danny J. Kelly
2005-01-01
Full Text Available We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of low-RNA-derived gene signals to gene signals obtained from standard RNA was poor for genes of low to moderate abundance. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference-sample-based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai
2015-09-16
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p, small n" paradigm, the traditional Hotelling's T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
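A minimal sketch of a diagonal Hotelling-type statistic with variance shrinkage, in pure Python. The fixed shrinkage weight `lam` and the median-variance shrinkage target are simplifying assumptions for illustration; the paper derives data-driven shrinkage intensities and approximate null distributions.

```python
import random
import statistics

random.seed(0)

def diag_hotelling_shrunk(data, mu0, lam=0.3):
    """One-sample diagonal Hotelling-type statistic with variance shrinkage.

    data: n samples, each a list of p expression values. Each per-gene
    variance is shrunk toward the median variance (lam is a fixed weight
    here; the paper's weights are estimated from the data).
    """
    n = len(data)
    means = [statistics.fmean(col) for col in zip(*data)]
    varis = [statistics.variance(col) for col in zip(*data)]
    target = statistics.median(varis)
    shrunk = [lam * target + (1 - lam) * v for v in varis]
    # Diagonal statistic: no off-diagonal covariances, so no singularity
    # even though p >> n.
    return n * sum((m - m0) ** 2 / s for m, m0, s in zip(means, mu0, shrunk))

# "Large p, small n": p = 100 genes, n = 5 samples.
p, n = 100, 5
mu0 = [0.0] * p
null_data = [[random.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
shift_data = [[random.gauss(0.8, 1.0) for _ in range(p)] for _ in range(n)]

t2_null = diag_hotelling_shrunk(null_data, mu0)
t2_shift = diag_hotelling_shrunk(shift_data, mu0)
```

The classical T² would require inverting a p×p sample covariance matrix of rank at most n−1; the diagonal statistic sidesteps that entirely.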
A Rounding by Sampling Approach to the Minimum Size k-Arc Connected Subgraph Problem
Laekhanukit, Bundit; Singh, Mohit
2012-01-01
In the k-arc connected subgraph problem, we are given a directed graph G and an integer k, and the goal is to find a subgraph of minimum cost such that there are at least k arc-disjoint paths between any pair of vertices. We give a simple (1 + 1/k)-approximation for the unweighted variant of the problem, where all arcs of G have the same cost. This improves on the (1 + 2/k)-approximation of Gabow et al. [GGTW09]. Similar to the 2-approximation algorithm for this problem [FJ81], our algorithm simply takes the union of a k in-arborescence and a k out-arborescence. The main difference is in the selection of the two arborescences. Here, inspired by the recent applications of the rounding by sampling method (see e.g. [AGM+ 10, MOS11, OSS11, AKS12]), we select the arborescences randomly by sampling from a distribution on unions of k arborescences that is defined based on an extreme point solution of the linear programming relaxation of the problem. In the analysis, we crucially utilize the sparsity property of the ext...
Directory of Open Access Journals (Sweden)
Oum Keltoum Hakam
2015-09-01
Full Text Available Purpose: As a way of prevention, we have measured the activities of uranium and radium isotopes (234U, 238U, 226Ra, 228Ra) for 30 drinking water samples collected from 11 wells, 9 springs (6 hot and 3 cold), 3 commercialised mineral waters, and 7 tap water samples. Methods: Activities of the Ra isotopes were measured by ultra-gamma spectrometry using a low-background, high-efficiency well-type germanium detector. The U isotopes were counted in an alpha spectrometer. Results: The measured uranium and radium activities are similar to those published for other non-polluted regions of the world. Except for one commercialised sparkling water sample and two hot-spring water samples, the calculated effective doses over one year are below the reference level of 0.1 mSv/year recommended by the International Commission on Radiological Protection. Conclusion: These activities do not present any risk to public health in Morocco. The sparkling water of Oulmes is occasionally consumed as table water, and the waters of warm springs are not used as main sources of drinking water.
Sutor, Malinda M.; Dagg, Michael J.
2008-06-01
The effects of vertical sampling resolution on estimates of plankton biomass and grazing calculations were examined using data collected in two different areas with vertically stratified water columns. Data were collected from one site in the upwelling region off Oregon and from four sites in the Northern Gulf of Mexico, three within the Mississippi River plume and one in adjacent oceanic waters. Plankton were found to be concentrated in discrete layers with sharp vertical gradients at all the stations. Phytoplankton distributions were correlated with gradients in temperature and salinity, but microzooplankton and mesozooplankton distributions were not. Layers of zooplankton were sometimes collocated with layers of phytoplankton, but this was not always the case. Simulated calculations demonstrate that when averages are taken over the water column, or coarser scale vertical sampling resolution is used, biomass and mesozooplankton grazing and filtration rates can be greatly underestimated. This has important implications for understanding the ecological significance of discrete layers of plankton and for assessing rates of grazing and production in stratified water columns.
The effect of sample size on fresh plasma thromboplastin ISI determination
DEFF Research Database (Denmark)
Poller, L; Van Den Besselaar, A M; Jespersen, J;
1999-01-01
The possibility of reducing the number of fresh coumarin and normal plasmas has been studied in a multicentre manual prothrombin time (PT) calibration of high international sensitivity index (ISI) rabbit and low ISI human reference thromboplastins at 14 laboratories. The number of calibrant plasmas was reduced progressively by a computer program which generated random numbers to provide 1000 different selections for each reduced sample at each participant laboratory. Results were compared with those of the full set of 20 normal and 60 coumarin plasma calibrations. With the human reagent, 20 coumarins and seven normals still achieved the W.H.O. precision limit (3% CV of the slope), but with the rabbit reagent, reduction of coumarins with 17 normal plasmas led to an unacceptable CV. Little reduction from the full set of 80 fresh plasmas therefore appears advisable. For maximum confidence, when calibrating...
International Nuclear Information System (INIS)
We demonstrate the quantitative evaluation of the sharp classification of manganese-doped zinc sulfide (ZnS:Mn) quantum dots by size-selective precipitation. The particles were characterized by direct conversion of absorbance spectra to particle size distributions (PSDs) and by high-resolution transmission electron microscopy (HRTEM). Gradual addition of a poor solvent (2-propanol) to the aqueous colloid led to the flocculation of larger particles. Although the starting suspension after synthesis had an already narrow PSD between 1.5 and 3.2 nm, different particle size fractions were subsequently isolated by careful adjustment of the good solvent/poor solvent ratio. Moreover, because the size distributions were available for the analysis of the classification results, an in-depth understanding of the quality of the distinct classification steps could be achieved. From the PSDs of the feed, as well as of the coarse and fine fractions with their corresponding yields determined after each classification step, an optimum after the first addition of poor solvent was identified, with a maximal separation sharpness κ as high as 0.75. Only through such quantitative evaluation of classification results, leading to an in-depth understanding of the relevant driving forces, will a future transfer of this lab-scale post-processing to larger quantities be possible.
Wind tunnel study of twelve dust samples by large particle size
Shannak, B.; Corsmeier, U.; Kottmeier, Ch.; Al-azab, T.
2014-12-01
Owing to the lack of data for large dust and sand particles, the fluid dynamic characteristics, and hence the collection efficiencies, of twelve different dust samplers have been experimentally investigated. Wind tunnel tests were carried out at wind velocities ranging from 1 up to 5.5 m s−1. Polystyrene pellets (STYRO Beads) of 0.5 and 1 mm in diameter were used as large solid particles instead of sand or dust. The results demonstrate that the collection efficiency is acceptable for only eight of the tested samplers, lying between 60 and 80% depending on the wind velocity and particle size. These samplers are: the Cox Sand Catcher (CSC), the British Standard Directional Dust Gauge (BSD), the Big Spring Number Eight (BSNE), the Suspended Sediment Trap (SUSTRA), the Modified Wilson and Cooke (MWAC), the Wedge Dust Flux Gauge (WDFG), the Model Series Number 680 (SIERRA) and the Pollet Catcher (POLCA). They can generally be recommended, with caution, as suitable dust samplers, but with collection errors of 20 up to 40%. The BSNE showed the best performance, with a catching error of about 20%, and can be selected with caution as a suitable dust sampler. By contrast, the other four tested samplers, the Marble Dust Collector (MDCO), the United States Geological Survey (USGS), the Inverted Frisbee Sampler (IFS) and the Inverted Frisbee Shaped Collecting Bowl (IFSCB), cannot be recommended owing to their very low collection efficiencies of 5 up to 40%. In total, the efficiency of a sampler may fall below 0.5, depending on the frictional losses (caused by the sampler geometry) in the fluid and the particle's motion, and on the intensity of airflow acceleration near the sampler inlet. The literature data on dust collection are therefore defective and insufficient. To avoid false collection data, and hence inaccurate mass-flux modelling, the geometry of the dust sampler should be considered and further improved.
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product-moment correlation coefficient (r_p) and the Spearman rank correlation coefficient (r_s) are widely used in psychological research. We compare r_p and r_s on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, r_p and r_s have similar expected values but r_s is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, r_p is more variable than r_s. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, r_p had lower variability than r_s in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, r_s had lower variability than r_p, and often corresponded more accurately to the population Pearson correlation coefficient (ρ) than r_p did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing r_s instead of r_p. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of r_p and r_s. In conclusion, r_p is suitable for light-tailed distributions, whereas r_s is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research.
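The variability comparison lends itself to a small simulation. This sketch uses assumed parameters (N = 50, ρ = 0.8, 500 replications) and ignores ties in the rank computation; it only illustrates the normal-data case, where the rank correlation is the more variable estimator.

```python
import math
import random
import statistics

random.seed(1)

def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def rank(v):
    """1-based ranks; ties are effectively absent for continuous draws."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    return pearson(rank(x), rank(y))

def simulate(n=50, rho=0.8, reps=500):
    """Sampling variability of both coefficients for bivariate normal data."""
    rp, rs = [], []
    for _ in range(reps):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        y = [rho * a + math.sqrt(1 - rho ** 2) * random.gauss(0.0, 1.0)
             for a in x]
        rp.append(pearson(x, y))
        rs.append(spearman(x, y))
    return statistics.stdev(rp), statistics.stdev(rs)

sd_rp, sd_rs = simulate()
```

Repeating this with heavy-tailed draws (e.g. exponentiated normals) reverses the ordering, which is the abstract's practical point.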
Durán Pacheco, Gonzalo; Hattendorf, Jan; Colford, John M; Mäusezahl, Daniel; Smith, Thomas
2009-10-30
Many different methods have been proposed for the analysis of cluster randomized trials (CRTs) over the last 30 years. However, the evaluation of methods on overdispersed count data has been based mostly on the comparison of results using empiric data; i.e. when the true model parameters are not known. In this study, we assess via simulation the performance of five methods for the analysis of counts in situations similar to real community-intervention trials. We used the negative binomial distribution to simulate overdispersed counts of CRTs with two study arms, allowing the period of time under observation to vary among individuals. We assessed different sample sizes, degrees of clustering and degrees of cluster-size imbalance. The compared methods are: (i) the two-sample t-test of cluster-level rates, (ii) generalized estimating equations (GEE) with empirical covariance estimators, (iii) GEE with model-based covariance estimators, (iv) generalized linear mixed models (GLMM) and (v) Bayesian hierarchical models (Bayes-HM). Variation in sample size and clustering led to differences between the methods in terms of coverage, significance, power and random-effects estimation. GLMM and Bayes-HM performed better in general with Bayes-HM producing less dispersed results for random-effects estimates although upward biased when clustering was low. GEE showed higher power but anticonservative coverage and elevated type I error rates. Imbalance affected the overall performance of the cluster-level t-test and the GEE's coverage in small samples. Important effects arising from accounting for overdispersion are illustrated through the analysis of a community-intervention trial on Solar Water Disinfection in rural Bolivia. PMID:19672840
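Method (i), the two-sample t-test of cluster-level rates, can be illustrated on simulated negative binomial counts. The cluster counts, rates and dispersion below are invented for illustration, and equal observation times are assumed, unlike in the paper's simulations.

```python
import math
import random
import statistics

random.seed(7)

def poisson(lam):
    """Knuth's multiplicative Poisson sampler (adequate for small rates)."""
    L, p, k = math.exp(-lam), 1.0, 0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def nb_count(mean, k):
    """Overdispersed count: Poisson with a Gamma-distributed rate,
    giving a negative binomial with variance = mean + mean**2 / k."""
    return poisson(random.gammavariate(k, mean / k))

# Overdispersion check: the sample variance should exceed the sample mean.
draws = [nb_count(2.0, 1.0) for _ in range(2000)]
overdispersed = statistics.variance(draws) > statistics.fmean(draws)

def cluster_rates(n_clusters, m, mean, k):
    """Mean count per individual, aggregated to the cluster level."""
    return [statistics.fmean(nb_count(mean, k) for _ in range(m))
            for _ in range(n_clusters)]

control = cluster_rates(10, 50, mean=0.5, k=2.0)
treated = cluster_rates(10, 50, mean=0.3, k=2.0)

def t_statistic(a, b):
    """Two-sample t statistic on cluster-level rates (method (i))."""
    se = math.sqrt(statistics.variance(a) / len(a)
                   + statistics.variance(b) / len(b))
    return (statistics.fmean(a) - statistics.fmean(b)) / se

t = t_statistic(control, treated)
```

Aggregating to cluster summaries sidesteps the within-cluster correlation entirely, which is why the cluster-level t-test remains a robust, if conservative, baseline against GEE and mixed-model approaches.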
Graf, Michael M H; Maurer, Manuela; Oostenbrink, Chris
2016-11-01
Previous free-energy calculations have shown that the seemingly simple transformation of the tripeptide KXK to KGK in water holds some unobvious challenges concerning the convergence of the forward and backward thermodynamic integration processes (i.e., hysteresis). In the current study, the central residue X was either alanine, serine, glutamic acid, lysine, phenylalanine, or tyrosine. Interestingly, the transformation from alanine to glycine yielded the highest hysteresis in relation to the extent of the chemical change of the side chain. The reason could be attributed to poor sampling of the φ2/ψ2 dihedral angles along the transformation. Altering the nature of alanine's Cβ atom drastically improved the sampling and at the same time identified high energy barriers as the cause of the poor sampling. Consequently, simple strategies to overcome these barriers are to increase the simulation time (computationally expensive) or to use enhanced sampling techniques such as Hamiltonian replica exchange molecular dynamics and one-step perturbation. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Luís G. Dias
2014-09-01
Full Text Available In this work, the main organic acids (citric, malic and ascorbic acids) and sugars (glucose, fructose and sucrose) present in commercial fruit beverages (fruit carbonated soft-drinks, fruit nectars and fruit juices) were determined. A novel size-exclusion high performance liquid chromatography isocratic green method, with ultraviolet and refractive index detectors coupled in series, was developed. This methodology enabled the simultaneous quantification of sugars and organic acids without any sample pre-treatment, even when peak interferences occurred. The method was in-house validated, showing good linearity (R > 0.999), adequate detection and quantification limits (20 and 280 mg L−1, respectively), satisfactory instrumental and method precision (relative standard deviations lower than 6%) and acceptable method accuracy (relative error lower than 5%). Sugar and organic acid profiles were used to calculate dose-over-threshold values, aiming to evaluate their individual sensory impact on beverage global taste perception. The results demonstrated that sucrose, fructose, ascorbic acid, citric acid and malic acid have the greatest individual sensory impact on the overall taste of a specific beverage. Furthermore, although organic acids were present in lower concentrations than sugars, their taste influence was significant and, in some cases, higher than the sugars' contribution towards the global sensory perception.
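Dose-over-threshold values are simply concentrations divided by taste thresholds, ranked to judge each compound's individual sensory impact. All numbers below are hypothetical illustration values, not the paper's measurements or published thresholds.

```python
# Dose-over-threshold (DoT) = concentration / taste threshold; a value
# above 1 suggests a perceptible contribution to the global taste.
# Concentrations and thresholds are invented illustration values in mg/L.
thresholds = {"sucrose": 5000.0, "fructose": 2400.0, "glucose": 8000.0,
              "citric acid": 40.0, "malic acid": 110.0, "ascorbic acid": 50.0}
beverage = {"sucrose": 60000.0, "fructose": 20000.0, "glucose": 15000.0,
            "citric acid": 1500.0, "malic acid": 300.0, "ascorbic acid": 150.0}

dot = {name: beverage[name] / thresholds[name] for name in beverage}
ranked = sorted(dot, key=dot.get, reverse=True)
```

This makes the abstract's closing observation concrete: an acid present at a fortieth of a sugar's concentration can still dominate the ranking because its taste threshold is orders of magnitude lower.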
Directory of Open Access Journals (Sweden)
Sebastian Wilhelm
2015-12-01
Full Text Available The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as "natural syneresis" and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). "Enforced syneresis" is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Two open key aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample-size dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30%, in comparison to measured values of 20% for natural syneresis.
Directory of Open Access Journals (Sweden)
Michael B.C. Khoo
2013-11-01
Full Text Available The double sampling (DS) X-bar chart, one of the most widely-used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X-bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X-bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X-bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X-bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.
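The distinction between MRL and ARL is easy to see by simulating run lengths. The sketch below uses a plain Shewhart X-bar chart with known parameters and 3-sigma limits — not the DS chart or the estimated-parameter case studied in the paper — but it reproduces the right skew that motivates using the median.

```python
import math
import random
import statistics

random.seed(3)

def run_length(shift, n=5, limit=3.0):
    """Number of subgroups until an X-bar chart with known parameters
    and 3-sigma limits signals; `shift` is the process-mean shift in
    units of the process standard deviation."""
    t = 0
    while True:
        t += 1
        xbar = statistics.fmean(random.gauss(shift, 1.0) for _ in range(n))
        if abs(xbar) > limit / math.sqrt(n):
            return t

in_control = [run_length(0.0) for _ in range(500)]  # no shift
shifted = [run_length(1.0) for _ in range(500)]     # 1-sigma mean shift

arl0 = statistics.fmean(in_control)    # average run length
mrl0 = statistics.median(in_control)   # median run length
mrl1 = statistics.median(shifted)
```

Because the in-control run-length distribution is roughly geometric, its mean sits well above its median, so quoting the ARL overstates how long a typical in-control run actually lasts.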
Energy Technology Data Exchange (ETDEWEB)
Takagi, Shigeyuki M [ORNL; Subedi, Alaska P [ORNL; Cooper, Valentino R [ORNL; Singh, David J [ORNL
2010-01-01
We investigate the effect of A-site size differences in the double perovskites BiScO3-MNbO3 (M = Na, K and Rb) using first-principles calculations. We find that the polarization of these materials is 70-90 μC/cm² along the rhombohedral direction. The main contribution to the high polarization comes from large off-centerings of Bi ions, which are strongly enhanced by the suppression of octahedral tilts as the M ion size increases. A high Born effective charge of Nb also contributes to the polarization and this contribution is also enhanced by increasing the M ion size.
Mirante, Fátima; Alves, Célia; Pio, Casimiro; Pindado, Oscar; Perez, Rosa; Revuelta, M.a. Aranzazu; Artiñano, Begoña
2013-10-01
Madrid, the largest city in Spain, has some unique air pollution problems, such as emissions from residential coal burning, a huge vehicle fleet and frequent African dust outbreaks, along with a lack of industrial emissions. The chemical composition of particulate matter (PM) was studied during summer and winter sampling campaigns, conducted in order to obtain size-segregated information at two different urban sites (roadside and urban background). PM was sampled with high-volume cascade impactors with 4 stages: 10-2.5, 2.5-1, 1-0.5 and < 0.5 μm. Samples were solvent-extracted, and organic compounds were identified and quantified by GC-MS. Alkanes, polycyclic aromatic hydrocarbons (PAHs), alcohols and fatty acids were chromatographically resolved. PM1-2.5 was the fraction with the highest mass percentage of organics. Acids were the organic compounds that dominated all particle size fractions. Different organic compounds presented apparently different seasonal characteristics, reflecting distinct emission sources such as vehicle exhausts and biogenic sources. The benzo[a]pyrene equivalent concentrations were lower than 1 ng m−3. The estimated carcinogenic risk is low.
Institute of Scientific and Technical Information of China (English)
Wenbin FANG; Hongfei SUN; Erde WANG; Yaohong GENG
2005-01-01
A new method using lead-coated glass fiber to produce continuous wire for the battery grids of electric vehicles (EVs) and hybrid electric vehicles (HEVs) was introduced. Under equal flow, both the maximum and minimum theoretical values of the gap size were studied and an estimation equation was established. The experimental results show that the gap size is a key parameter for the continuous coating extrusion process. Its maximum value (Hmax) is 0.24 mm and its minimum (Hmin) is 0.12 mm. At a gap size of 0.18 mm, the maximum metal extrusion per unit time and the optimal coating speed could be obtained.
Li, Yanwei; Zhang, Ruiming; Du, Likai; Zhang, Qingzhu; Wang, Wenxing
2016-01-01
The quantum mechanics/molecular mechanics (QM/MM) method (e.g., density functional theory (DFT)/MM) is important in elucidating enzymatic mechanisms. It is indispensable to study "multiple" conformations of enzymes to get unbiased energetic and structural results. One challenging problem, however, is to determine the minimum number of conformations for DFT/MM calculations. Here, we propose two convergence criteria, namely the Boltzmann-weighted average barrier and the disproportionate effect, to tentatively address this issue. The criteria were tested by defluorination reaction catalyzed by fluoroacetate dehalogenase. The results suggest that at least 20 conformations of enzymatic residues are required for convergence using DFT/MM calculations. We also tested the correlation of energy barriers between small QM regions and big QM regions. A roughly positive correlation was found. This kind of correlation has not been reported in the literature. The correlation inspires us to propose a protocol for more efficient sampling. This saves 50% of the computational cost in our current case. PMID:27556449
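The Boltzmann-weighted average barrier criterion can be sketched as follows. The barrier values, the fixed kT, and the simple within-tolerance convergence check are assumptions for illustration; the paper's criteria (and the disproportionate-effect check) are more elaborate.

```python
import math
import random
import statistics

random.seed(5)

KT = 0.593  # kcal/mol at about 298 K

def boltzmann_average(barriers):
    """Boltzmann-weighted average barrier over conformations:
    dE = -kT * ln( (1/N) * sum_i exp(-dE_i / kT) ).
    Low-barrier conformations dominate the average."""
    n = len(barriers)
    return -KT * math.log(sum(math.exp(-b / KT) for b in barriers) / n)

# Hypothetical per-conformation barriers (kcal/mol); real values would
# come from DFT/MM calculations on snapshots of the enzyme.
barriers = [random.gauss(15.0, 2.0) for _ in range(40)]
mean_barrier = statistics.fmean(barriers)

def converged_at(barriers, tol=0.5):
    """Smallest N such that every running average from N conformations
    onward stays within `tol` kcal/mol of the full-set value — a simple
    stand-in for the paper's convergence criteria."""
    final = boltzmann_average(barriers)
    n = len(barriers)
    while n > 2 and abs(boltzmann_average(barriers[:n - 1]) - final) <= tol:
        n -= 1
    return n

full = boltzmann_average(barriers)
n_needed = converged_at(barriers)
```

By construction the Boltzmann average lies between the lowest barrier and the arithmetic mean, which is why a handful of low-barrier conformations can control the result and why enough conformations must be sampled to pin it down.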
Vasiliu, Daniel; Clamons, Samuel; McDonough, Molly; Rabe, Brian; Saha, Margaret
2015-01-01
Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
DEFF Research Database (Denmark)
Henriksen, Jens Henrik Sahl
1983-01-01
) exchange of endogenous macromolecules. A significant 'sieving' is present in this barrier to the largest macromolecule (IgM). Calculations of pore-size equivalent to the observed permselectivity of macromolecules suggest microvascular gaps (or channels) with an average radius of about 300 Å, i...
Otto, S.; Trautmann, T.; M. Wendisch
2011-01-01
Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface ...
Otto, S.; Trautmann, T.; M. Wendisch
2010-01-01
Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surfa...
Institute of Scientific and Technical Information of China (English)
LI Xiao-ling; LU Yong-gen; LI Jin-quan; Xu Hai-ming; Muhammad Qasim SHAHID
2011-01-01
The development of a core collection could enhance the utilization of germplasm collections in crop improvement programs and simplify their management. Selection of an appropriate sampling strategy is an important prerequisite to construct a core collection of appropriate size, in order to adequately represent the genetic spectrum and maximally capture the genetic diversity of available crop collections. The present study was initiated to construct nested core collections to determine the appropriate sample size to represent the genetic diversity of a rice landrace collection, based on 15 quantitative traits and 34 qualitative traits of 2 262 rice accessions. The results showed that nested core collections of 50-225 accessions, corresponding to sampling rates of 2.2%-9.9%, were sufficient to maintain the maximum genetic diversity of the initial collection. Of these, 150 accessions (6.6%) could capture the maximal genetic diversity of the initial collection. Three data types, i.e. qualitative traits (QT1), quantitative traits (QT2) and integrated qualitative and quantitative traits (QTT), were compared for their efficiency in constructing core collections based on the weighted pair-group average method combined with stepwise clustering and preferred sampling on adjusted Euclidean distances. Every combining scheme constructed eight rice core collections (225, 200, 175, 150, 125, 100, 75 and 50 accessions). The results showed that the QTT data were the best for constructing a core collection, as indicated by the genetic diversity of the resulting core collections. A core collection constructed only on the information of QT1 could not represent the initial collection effectively; qualitative and quantitative traits (QTT) should therefore be used together to construct a productive core collection.
François, Filip; Maenhaut, Willy; Colin, Jean-Louis; Losno, Remi; Schulz, Michael; Stahlschmidt, Thomas; Spokes, Lucinda; Jickells, Timothy
During an intercomparison field experiment, organized at the Atlantic coast station of Mace Head, Ireland, in April 1991, aerosol samples were collected by four research groups. A variety of samplers was used, combining both high- and low-volume devices, with different types of collection substrates: Hi-Vol Whatman 41 filter holders, single Nuclepore filters and stacked filter units, as well as PIXE cascade impactors. The samples were analyzed by each participating group, using in-house analytical techniques and procedures. The intercomparison of the daily concentrations for 15 elements, measured by two or more participants, revealed a good agreement for the low-volume samplers for the majority of the elements, but also indicated some specific analytical problems, owing to the very low concentrations of the non-sea-salt elements at the sampling site. With the Hi-Vol Whatman 41 filter sampler, on the other hand, much higher results were obtained in particular for the sea-salt and crustal elements. The discrepancy was dependent upon the wind speed and was attributed to a higher collection efficiency of the Hi-Vol sampler for the very coarse particles, as compared to the low-volume devices under high wind speed conditions. The elemental mass size distribution, as derived from parallel cascade impactor samplings by two groups, showed discrepancies in the submicrometer aerosol fraction, which were tentatively attributed to differences in stage cut-off diameters and/or to bounce-off or splintering effects on the quartz impactor slides used by one of the groups. However, the atmospheric concentrations (sums over all stages) were rather similar in the parallel impactor samples and were only slightly lower than those derived from stacked filter unit samples taken in parallel.
O'Brien, R. E.; Laskin, A.; Laskin, J.; Weber, R.; Goldstein, A. H.
2011-12-01
This project focuses on analyzing the identities of molecules that comprise oligomers in size resolved aerosol fractions. Since oligomers are generally too large and polar to be measured by typical GC/MS analysis, soft ionization with high resolution mass spectrometry is used to extend the range of observable compounds. Samples collected with a microorifice uniform deposition impactor (MOUDI) during CALNEX Bakersfield in June 2010 have been analyzed with nanospray desorption electrospray ionization (nano-DESI) and an Orbitrap mass spectrometer. The nano-DESI is a soft ionization technique that allows molecular ions to be observed and the Orbitrap has sufficient resolution to determine the elemental composition of almost all species above the detection limit. A large fraction of SOA is made up of high molecular weight oligomers which are thought to form through acid catalyzed reactions of photo-chemically processed volatile organic compounds (VOC). The formation of oligomers must be influenced by the VOCs available, the amount of atmospheric sulfate and nitrate, and the magnitude of photo-chemical processing, among other potential influences. We present the elemental composition of chemical species in SOA in the 0.18 to 0.32 micron size range, providing the first multi-day data set for the study of these oligomers in atmospheric samples. Possible formation pathways and sources of observed compounds will be examined by comparison to other concurrent measurements at the site.
Directory of Open Access Journals (Sweden)
Schwermer Heinzpeter
2007-05-01
Full Text Available Abstract Background International trade regulations require that countries document their livestock's sanitary status in general, and freedom from specific infective agents in detail, where import restrictions apply. The latter is generally achieved by large national serological surveys and risk assessments. The paper describes the basic structure and application of a generic stochastic model for risk-based sample size calculation of consecutive national surveys to document freedom from contagious disease agents in livestock. Methods In the model, disease spread during the time period between two consecutive surveys was considered, either from undetected infections within the domestic population or from imported infected animals. The @Risk model consists of the domestic spread in-between two national surveys; the infection of domestic herds from animals imported from countries with a sanitary status comparable to Switzerland or a lower sanitary status; and the summary sheet, which summed up the numbers of resulting infected herds of all infection pathways to derive the pre-survey prevalence in the domestic population. Thereof the pre-survey probability of freedom from infection and required survey sample sizes were calculated. A scenario for detection of infected herds by general surveillance was included optionally. Results The model highlights the importance of residual domestic infection spread and characteristics of different import pathways. The sensitivity analysis revealed that the number of infected but undetected domestic herds and the multiplicative between-survey-spread factor were most correlated with the pre-survey probability of freedom from infection and the resulting sample size, respectively. Compared to the deterministic precursor model, the stochastic model was therefore more sensitive to the previous survey's results. Undetected spread of infection in the domestic population between two surveys gained more
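The survey logic in the abstract above rests on a standard freedom-from-infection sample size calculation. A minimal deterministic sketch is given below; this is not the paper's stochastic @Risk model, and the parameter values in the example are illustrative only:

```python
import math

def freedom_sample_size(design_prev, confidence=0.95, sensitivity=1.0):
    """Smallest n such that the probability of detecting at least one
    infected herd is >= `confidence`, when the true herd prevalence
    equals `design_prev` and the test has the given sensitivity.
    Assumes sampling with replacement (valid for large populations)."""
    p_positive = design_prev * sensitivity  # chance one sampled herd tests positive
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_positive))

# e.g. 95% confidence of detecting a 1% herd prevalence with a perfect test
n = freedom_sample_size(0.01)  # -> 299 herds
```

The classic "sample 59 to detect 5% prevalence with 95% confidence" rule falls out of the same formula.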
Effects of sample size on the second magnetization peak in Bi2Sr2CaCu2O8+δ at low temperatures
Indian Academy of Sciences (India)
B Kalisky; A Shaulov; Y Yeshurun
2006-01-01
Effects of sample size on the second magnetization peak (SMP) in Bi2Sr2CaCu2O8+δ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in the order-disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order-disorder transition induction in samples of different size.
Standard Deviation for Small Samples
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
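One such "neater representation" is the pairwise-difference identity for the sample variance, which avoids computing the mean. The sketch below illustrates the kind of formula the abstract describes (it is not necessarily the authors' exact expression):

```python
from itertools import combinations
from statistics import variance

def pairwise_variance(xs):
    """Sample variance via pairwise squared differences:
    s^2 = sum_{i<j} (x_i - x_j)^2 / (n * (n - 1)).
    For n = 3 or 4 with integer data this is easy mental arithmetic."""
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

# n = 3: ((1-2)^2 + (1-3)^2 + (2-3)^2) / 6 = 6/6 = 1
assert pairwise_variance([1, 2, 3]) == variance([1, 2, 3]) == 1.0
```

The identity follows from sum_{i<j}(x_i - x_j)^2 = n * sum_i (x_i - x_bar)^2, so dividing by n(n-1) recovers the usual (n-1)-denominator sample variance.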
New Empirical Formula of Size Coefficient of Notched Samples
Institute of Scientific and Technical Information of China (English)
黄宁; 黄明辉; 湛利华
2012-01-01
In order to comprehensively analyze the size effect of notched samples, the fatigue performance of notched samples was experimentally investigated for two kinds of materials (45# steel and Q235 steel) under cyclic tension. The results show that, for geometrically similar specimens, the effect of notch shape on the fatigue life can be neglected because it is too slight, and that there exists a thickness effect apart from the stress gradient, namely a critical thickness beyond which the fatigue life may reduce. Moreover, by introducing the characteristic parameter L/G that represents the effect of geometric size on the fatigue performance, the test results of fatigue strength of geometrically similar specimens are fitted and compared with the calculated values obtained by the Peterson method, finding that the proposed fitting expression is more effective. Finally, an empirical formula of the size coefficient of notched samples under tension, which accords well with the actual situation, is presented according to the equation of fatigue strength and the definition of the size coefficient.
Institute of Scientific and Technical Information of China (English)
K.ALARY; D.BABRE; L.CANER; F.FEDER; M.SZWARC; M.NAUDAN; G.BOURGEON
2013-01-01
The possibilities of combining the dissolution of short-range-order minerals (SROMs) like allophane and imogolite by ammonium oxalate with a particle size distribution analysis performed by the pipette method were investigated by tests on a soil sample from Reunion, a volcanic island located in the Indian Ocean, having a large SROM content. The need to work with moist soil samples was again emphasized because the microaggregates formed during air-drying are resistant to the reagent. The SROM content increased, but irregularly, with the number of dissolutions by ammonium oxalate: 334 and 470 mg g-1 of SROMs were dissolved after one and three dissolutions, respectively. Six successive dissolutions with ammonium oxalate on the same soil sample showed that 89% of the sum of oxides extracted by the 6 dissolutions was extracted by the first dissolution (mean 304 mg g-1). A compromise needs to be found between the total removal of SROMs by large quantities of ammonium oxalate and the preservation of clay minerals, which were unexpectedly dissolved by this reagent. These tests enabled a description of the clay assemblage of the soil (gibbsite, smectite, and traces of kaolinite) in an area where such information was lacking due to the difficulties encountered in recovery of the clay fraction.
Directory of Open Access Journals (Sweden)
Govardhani.Immadi
2014-05-01
Full Text Available With the increased demand for long-distance telecommunication, satellite communication systems were developed. Satellite communications utilize the L, C, Ku and Ka frequency bands to fulfil these requirements. Utilization of higher frequencies causes severe attenuation due to rain. Rain attenuation is noticeable for frequencies above 10 GHz. The amount of attenuation depends on whether the operating wavelength is comparable with the raindrop diameter. In this paper the main focus is on drop size distribution using empirical methods, especially the Marshall and Palmer distribution. Empirical methods deal with power-law relations between the rain rate (mm/h) and radar reflectivity (dBZ). Finally, the rain rate variation, radar reflectivity and drop size distribution are discussed for two rain events at K L University, Vijayawada, on 4th September 2013 and 18th August 2013.
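The Marshall-Palmer distribution and a Z-R power law of the kind mentioned above can be sketched as follows. N0 = 8000 m^-3 mm^-1, Lambda = 4.1 R^-0.21 and Z = 200 R^1.6 are the classic published constants, used here as illustrative values rather than the fits from this particular study:

```python
import math

N0 = 8000.0  # m^-3 mm^-1, Marshall-Palmer intercept parameter

def marshall_palmer_dsd(D_mm, rain_rate_mm_h):
    """Drop concentration N(D) = N0 * exp(-Lambda * D), with
    Lambda = 4.1 * R^-0.21 (mm^-1), after Marshall & Palmer (1948)."""
    lam = 4.1 * rain_rate_mm_h ** (-0.21)
    return N0 * math.exp(-lam * D_mm)

def reflectivity_dbz(rain_rate_mm_h):
    """Empirical Z-R power law Z = 200 * R^1.6 (mm^6 m^-3), in dBZ."""
    z = 200.0 * rain_rate_mm_h ** 1.6
    return 10.0 * math.log10(z)

# A 10 mm/h rain rate maps to roughly 39 dBZ under this Z-R relation
dbz = reflectivity_dbz(10.0)
```

Note the exponential DSD means large drops are rare but dominate reflectivity, since Z weights the drop diameter to the sixth power.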
Takagi, S.; A. Subedi; Cooper, V. R.; Singh, D. J.
2010-01-01
We investigate the effect of $A$-site size differences in the double perovskites BiScO$_3$-$M$NbO$_3$ ($M$$=$Na, K and Rb) using first-principles calculations. We find that the polarization of these materials is 70$\\sim$90 $\\mu$C/cm$^2$ along the rhombohedral direction. The main contribution to the high polarization comes from large off-centerings of Bi ions, which are strongly enhanced by the suppression of octahedral tilts as the $M$ ion size increases. A high Born effective charge of Nb al...
Energy Technology Data Exchange (ETDEWEB)
Hassan, Jamal, E-mail: jamal.hassan@kustar.ac.ae [Department of Applied Mathematics and Sciences, KU (United Arab Emirates); Department of Physics and Astronomy, University of Waterloo, Waterloo, Ontario, N2L 3G1 (Canada)
2012-09-15
The pore size distribution (PSD) of the nano-material MCM-41 is determined using two different approaches: N₂ adsorption-desorption and the ¹H NMR signal of water confined in the silica nano-pores of MCM-41. The first approach is based on the recently modified Kelvin equation [J.V. Rocha, D. Barrera, K. Sapag, Top. Catal. 54 (2011) 121-134], which addresses the known underestimation of pore size for mesoporous materials such as MCM-41 by introducing a correction factor to the classical Kelvin equation. The second method employs the Gibbs-Thomson equation, using NMR, for the melting point depression of a liquid in confined geometries. The results show that both approaches give similar pore size distributions to some extent, and that the NMR technique can be considered an alternative direct method to obtain quantitative results, especially for mesoporous materials. The pore diameter estimated for the nano-material used in this study was about 35 and 38 Å for the modified Kelvin and NMR methods, respectively. A comparison between these methods and the classical Kelvin equation is also presented.
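The Gibbs-Thomson route from melting-point depression to pore size can be sketched as below. The constant k_gt and the non-freezing surface-layer thickness are assumed illustrative values (typical of water in silica), not the calibration used in the paper:

```python
def pore_diameter_nm(melting_depression_K, k_gt=50.0, t_layer_nm=0.6):
    """Gibbs-Thomson estimate of cylindrical pore diameter from the
    melting-point depression of confined water:
        Delta_T = k_gt / (d - 2 * t_layer)
    where t_layer is a non-freezing surface water layer that does not
    participate in the phase transition. k_gt (K*nm) and t_layer (nm)
    are illustrative, assumed values."""
    return k_gt / melting_depression_K + 2.0 * t_layer_nm

# A 20 K depression implies d = 50/20 + 1.2 = 3.7 nm (37 Angstrom),
# of the same order as the ~35-38 Angstrom reported for MCM-41
d = pore_diameter_nm(20.0)
```

In practice the NMR experiment scans temperature and converts the melted-fraction curve, via this relation, into a full pore size distribution.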
Hughes, William O.; McNelis, Anne M.
2010-01-01
The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.
Tan, Ming; Fang, Hong-Bin; Tian, Guo-Liang; Houghton, Peter J
2003-07-15
In anticancer drug development, the combined use of two drugs is an important strategy to achieve greater therapeutic success. Often combination studies are performed in animal (mostly mice) models before clinical trials are conducted. These experiments on mice are costly, especially with combination studies. However, experimental designs and sample size derivations for the joint action of drugs are not currently available except for a few cases where strong model assumptions are made. For example, Abdelbasit and Plackett proposed an optimal design assuming that the dose-response relationship follows some specified linear models. Tallarida et al. derived a design by fixing the mixture ratio and used a t-test to detect the simple similar action. The issue is that in reality we usually do not have enough information on the joint action of the two compounds before experiment and to understand their joint action is exactly our study goal. In this paper, we first propose a novel non-parametric model that does not impose such strong assumptions on the joint action. We then propose an experimental design for the joint action using uniform measure in this non-parametric model. This design is optimal in the sense that it reduces the variability in modelling synergy while allocating the doses to minimize the number of experimental units and to extract maximum information on the joint action of the compounds. Based on this design, we propose a robust F-test to detect departures from the simple similar action of two compounds and a method to determine sample sizes that are economically feasible. We illustrate the method with a study of the joint action of two new anticancer agents: temozolomide and irinotecan. PMID:12820275
Directory of Open Access Journals (Sweden)
Valéria Schimitz Marodim
2000-10-01
Full Text Available This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa) grown under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Hydroponic Crops of the Horticulture Department of the Federal University of Santa Maria and was based on plant weight data. Under hydroponic conditions on concrete benches with six ducts, the most suitable experimental design for lettuce is randomised blocks if the experimental unit is a strip transversal to the ducts, and completely randomised if the bench is the experimental unit. For plant weight, the sample size should be 40 plants for a confidence-interval half-width equal to 5% of the mean (d) and 7 plants for d equal to 20%.
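The sample-size rule used above (confidence-interval half-width as a percentage of the mean) can be approximated with a normal quantile. The paper's t-based iteration gives somewhat different values at small n, and the coefficient of variation below is an assumed input, not the study's estimate:

```python
import math

def sample_size_for_precision(cv_percent, d_percent, z=1.96):
    """Sample size so that the half-width of the confidence interval
    equals d% of the mean, given a coefficient of variation cv%:
        n = (z * CV / d)^2   (normal approximation).
    A t-based version would iterate, since t depends on n."""
    return math.ceil((z * cv_percent / d_percent) ** 2)

# With an assumed CV of 16%, a 5% half-width needs about 40 plants
n = sample_size_for_precision(16.0, 5.0)  # -> 40
```

Halving the precision requirement (doubling d) cuts the required sample roughly fourfold, which is why the study's d = 20% case needs far fewer plants than d = 5%.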
Mélachio, Tanekou Tito Trésor; Njiokou, Flobert; Ravel, Sophie; Simo, Gustave; Solano, Philippe; De Meeûs, Thierry
2015-07-01
Human and animal trypanosomiases are two major constraints to development in Africa. These diseases are mainly transmitted by tsetse flies in particular by Glossina palpalis palpalis in Western and Central Africa. To set up an effective vector control campaign, prior population genetics studies have proved useful. Previous studies on population genetics of G. p. palpalis using microsatellite loci showed high heterozygote deficits, as compared to Hardy-Weinberg expectations, mainly explained by the presence of null alleles and/or the mixing of individuals belonging to several reproductive units (Wahlund effect). In this study we implemented a system of trapping, consisting of a central trap and two to four satellite traps around the central one to evaluate a possible role of the Wahlund effect in tsetse flies from three Cameroon human and animal African trypanosomiases foci (Campo, Bipindi and Fontem). We also estimated effective population sizes and dispersal. No difference was observed between the values of allelic richness, genetic diversity and Wright's FIS, in the samples from central and from satellite traps, suggesting an absence of Wahlund effect. Partitioning of the samples with Bayesian methods showed numerous clusters of 2-3 individuals as expected from a population at demographic equilibrium with two expected offspring per reproducing female. As previously shown, null alleles appeared as the most probable factor inducing these heterozygote deficits in these populations. Effective population sizes varied from 80 to 450 individuals while immigration rates were between 0.05 and 0.43, showing substantial genetic exchanges between different villages within a focus. These results suggest that the "suppression" with establishment of physical barriers may be the best strategy for a vector control campaign in this forest context.
International Nuclear Information System (INIS)
Purpose: To demonstrate the feasibility of using existing data stored within the DICOM header of certain CT localizer radiographs as a patient size metric for calculating CT size-specific dose estimates (SSDE). Methods: For most Siemens CT scanners, the CT localizer radiograph (topogram) contains a private DICOM field that stores an array of numbers describing AP and LAT attenuation-based measures of patient dimension. The square root of the product of the AP and LAT size data, which provides an estimate of water-equivalent-diameter (WED), was calculated retrospectively from topogram data of 20 patients who received clinically-indicated abdomen/pelvis (n=10) and chest (n=10) scans (WED-topo). In addition, slice-by-slice water-equivalent-diameter (WED-image) and effective diameter (ED-image) values were calculated from the respective image data. Using TG-204 lookup tables, size-dependent conversion factors were determined based upon WED-topo, WED-image and ED-image values. These conversion factors were used with the reported CTDIvol to calculate slice-by-slice SSDE for each method. Averaging over all slices, a single SSDE value was determined for each patient and size metric. Patient-specific SSDE and CTDIvol values were then compared with patient-specific organ doses derived from detailed Monte Carlo simulations of fixed tube current scans. Results: For abdomen/pelvis scans, the average difference between liver dose and CTDIvol, SSDE(WED-topo), SSDE(WED-image), and SSDE(ED-image) was 18.70%, 8.17%, 6.84%, and 7.58%, respectively. For chest scans, the average difference between lung dose and CTDIvol, SSDE(WED-topo), SSDE(WED-image), and SSDE(ED-image) was 25.80%, 3.33%, 4.11%, and 7.66%, respectively. Conclusion: SSDE calculated using WED derived from data in the DICOM header of the topogram was comparable to SSDE calculated using WED and ED derived from axial images; each of these estimated organ dose to within 10% for both abdomen/pelvis and chest CT examinations
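The SSDE computation described above can be sketched as follows. The published exponential fit to the AAPM TG-204 conversion-factor table for the 32 cm body phantom is used here as a stand-in for the paper's lookup tables, and the inputs are illustrative:

```python
import math

def water_equiv_diameter_cm(ap_cm, lat_cm):
    """Patient size metric from topogram data: geometric mean (square
    root of the product) of the AP and LAT attenuation-based dimensions."""
    return math.sqrt(ap_cm * lat_cm)

def ssde_mGy(ctdi_vol_mGy, wed_cm):
    """SSDE = f(size) * CTDIvol, with f from the exponential fit to the
    TG-204 32 cm body-phantom conversion-factor table. Smaller patients
    get a larger factor: the same CTDIvol deposits more dose in less tissue."""
    f = 3.704369 * math.exp(-0.03671937 * wed_cm)
    return f * ctdi_vol_mGy

# Illustrative abdomen scan: AP 16 cm, LAT 25 cm -> WED = 20 cm
wed = water_equiv_diameter_cm(16.0, 25.0)
dose = ssde_mGy(10.0, wed)  # SSDE exceeds the reported CTDIvol of 10 mGy
```

The monotone decrease of f with size is the key behavior: CTDIvol alone under-reports dose to small patients and over-reports it for large ones.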
Directory of Open Access Journals (Sweden)
S. Otto
2010-11-01
Full Text Available Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface equivalent spheroids with prolate shape are most realistic: particle non-sphericity only slightly affects single scattering albedo and asymmetry parameter but may enhance extinction coefficient by up to 10%. At the bottom of the atmosphere (BOA) the Saharan mineral dust always leads to a loss of solar radiation, while the sign of the forcing at the top of the atmosphere (TOA) depends on surface albedo: solar cooling/warming over a mean ocean/land surface. In the thermal spectral range the dust inhibits the emission of radiation to space and warms the BOA. The most realistic case of particle non-sphericity causes changes of total (solar plus thermal) forcing by 55/5% at the TOA over ocean/land and 15% at the BOA over both land and ocean and enhances total radiative heating within the dust plume by up to 20%. Large dust particles significantly contribute to all the radiative effects reported.
Johari, G. P.; Khouri, J.
2013-03-01
Certain distributions of relaxation times can be described in terms of a non-exponential response parameter, β, of value between 0 and 1. Both β and the relaxation time, τ0, of a material depend upon the probe used for studying its dynamics and the value of β is qualitatively related to the non-Arrhenius variation of viscosity and τ0. A solute adds to the diversity of an intermolecular environment and is therefore expected to reduce β, i.e., to increase the distribution and to change τ0. We argue that the calorimetric value βcal determined from the specific heat [Cp = T(dS/dT)p] data is a more appropriate measure of the distribution of relaxation times arising from configurational fluctuations than β determined from other properties, and report a study of βcal of two sets of binary mixtures, each containing a different molecule of ˜2 nm size. We find that βcal changes monotonically with the composition, i.e., solute molecules modify the nano-scale composition and may increase or decrease τ0, but do not always decrease βcal. (Plots of βcal against the composition do not show a minimum.) We also analyze the data from the literature, and find that (i) βcal of an orientationally disordered crystal is less than that of its liquid, (ii) βcal varies with the isomer's nature, and chiral centers in a molecule decrease βcal, and (iii) βcal decreases when a sample's thickness is decreased to the nm-scale. After examining the difference between βcal and β determined from other properties we discuss the consequences of our findings for theories of non-exponential response, and suggest that studies of βcal may be more revealing of structure-freezing than studies of the non-Arrhenius behavior. On the basis of previous reports that β → 1 for dielectric relaxation of liquids of centiPoise viscosity observed at GHz frequencies, we argue that its molecular mechanism is the same as that of the Johari-Goldstein (JG) relaxation. Its spectrum becomes broader on
Terzyk, Artur P; Furmaniak, Sylwester; Harris, Peter J F; Gauden, Piotr A; Włoch, Jerzy; Kowalczyk, Piotr; Rychlicki, Gerhard
2007-11-28
A plausible model for the structure of non-graphitizing carbon is one which consists of curved, fullerene-like fragments grouped together in a random arrangement. Although this model was proposed several years ago, there have been no attempts to calculate the properties of such a structure. Here, we determine the density, pore size distribution and adsorption properties of a model porous carbon constructed from fullerene-like elements. Using the method proposed recently by Bhattacharya and Gubbins (BG), which was tested in this study for ideal and defective carbon slits, the pore size distributions (PSDs) of the initial model and two related carbon models are calculated. The obtained PSD curves show that two structures are micro-mesoporous (with different ratio of micro/mesopores) and the third is strictly microporous. Using the grand canonical Monte Carlo (GCMC) method, adsorption isotherms of Ar (87 K) are simulated for all the structures. Finally PSD curves are calculated using the Horvath-Kawazoe, non-local density functional theory (NLDFT), Nguyen and Do, and Barrett-Joyner-Halenda (BJH) approaches, and compared with those predicted by the BG method. This is the first study in which different methods of calculation of PSDs for carbons from adsorption data can be really verified, since absolute (i.e. true) PSDs are obtained using the BG method. This is also the first study reporting the results of computer simulations of adsorption on fullerene-like carbon models.
Landguth, Erin L.; Gedy, Bradley C.; Oyler-McCance, Sara J.; Garey, Andrew L.; Emel, Sarah L.; Mumma, Matthew; Wagner, Helene H.; Fortin, Marie-Josée; Cushman, Samuel A.
2012-01-01
The influence of study design on the ability to detect the effects of landscape pattern on gene flow is one of the most pressing methodological gaps in landscape genetic research. To investigate the effect of study design on landscape genetics inference, we used a spatially-explicit, individual-based program to simulate gene flow in a spatially continuous population inhabiting a landscape with gradual spatial changes in resistance to movement. We simulated a wide range of combinations of number of loci, number of alleles per locus and number of individuals sampled from the population. We assessed how these three aspects of study design influenced the statistical power to successfully identify the generating process among competing hypotheses of isolation-by-distance, isolation-by-barrier, and isolation-by-landscape resistance using a causal modelling approach with partial Mantel tests. We modelled the statistical power to identify the generating process as a response surface for equilibrium and non-equilibrium conditions after introduction of isolation-by-landscape resistance. All three variables (loci, alleles and sampled individuals) affect the power of causal modelling, but to different degrees. Stronger partial Mantel r correlations between landscape distances and genetic distances were found when more loci were used and when loci were more variable, which makes comparisons of effect size between studies difficult. Number of individuals did not affect the accuracy through mean equilibrium partial Mantel r, but larger samples decreased the uncertainty (increasing the precision) of equilibrium partial Mantel r estimates. We conclude that amplifying more (and more variable) loci is likely to increase the power of landscape genetic inferences more than increasing number of individuals.
HyPEP FY-07 Report: Initial Calculations of Component Sizes, Quasi-Static, and Dynamics Analyses
Energy Technology Data Exchange (ETDEWEB)
Chang Oh
2007-07-01
The Very High Temperature Gas-Cooled Reactor (VHTR) coupled to the High Temperature Steam Electrolysis (HTSE) process is one of two reference integrated systems being investigated by the U.S. Department of Energy and Idaho National Laboratory for the production of hydrogen. In this concept a VHTR outlet temperature of 900 °C provides thermal energy and high efficiency electricity for the electrolysis of steam in the HTSE process. In the second reference system the Sulfur Iodine (SI) process is coupled to the VHTR to produce hydrogen thermochemically. This report describes component sizing studies and control system strategies for achieving plant production and operability goals for these two reference systems. The optimal size and design condition for the intermediate heat exchanger, one of the most important components for integration of the VHTR and HTSE plants, was estimated using an analytic model. A partial load schedule and control system was designed for the integrated plant using a quasi-static simulation. Reactor stability for temperature perturbations in the hydrogen plant was investigated using both a simple analytic method and a dynamic simulation. Potential efficiency improvements over the VHTR/HTSE plant were investigated for an alternative design that directly couples a High Temperature Steam Rankine Cycle (HTRC) to the HTSE process. This work was done using the HYSYS code and results for the HTRC/HTSE system were compared to the VHTR/HTSE system. Integration of the VHTR with SI process plants was begun, and the efficiency was estimated using the ASPEN Plus code. Finally, this report describes planning for the validation and verification of the HyPEP code.
Directory of Open Access Journals (Sweden)
S. Otto
2011-05-01
Full Text Available Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006), dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface equivalent spheroids with prolate shape are most realistic: particle non-sphericity only slightly affects single scattering albedo and asymmetry parameter but may enhance extinction coefficient by up to 10 %. At the bottom of the atmosphere (BOA) the Saharan mineral dust always leads to a loss of solar radiation, while the sign of the forcing at the top of the atmosphere (TOA) depends on surface albedo: solar cooling/warming over a mean ocean/land surface. In the thermal spectral range the dust inhibits the emission of radiation to space and warms the BOA. The most realistic case of particle non-sphericity causes changes of total (solar plus thermal) forcing by 55/5 % at the TOA over ocean/land and 15 % at the BOA over both land and ocean and enhances total radiative heating within the dust plume by up to 20 %. Large dust particles significantly contribute to all the radiative effects reported. They strongly enhance the absorbing properties and forward scattering in the solar range and increase predominantly, e.g., the total TOA forcing of the dust over land.
Rzama, A.; Erramli, H.; Misdaq, M. A.
1994-09-01
Induced gamma-activities of different disk-shaped samples and standards irradiated with 14 MeV neutrons have been determined by using a Monte Carlo calculation method adapted to the experimental conditions. The self-absorption of the multienergetic emitted gamma rays has been taken into account in the final sample activities. The influence of the different activation parameters has been studied. Na, K, Cl and P contents in biological (red beet) samples have been determined.
Directory of Open Access Journals (Sweden)
Frank L Forcino
Full Text Available Community ecologists commonly perform multivariate techniques (e.g., ordination, cluster analysis) to assess patterns and gradients of taxonomic variation. A critical requirement for a meaningful statistical analysis is accurate information on the taxa found within an ecological sample. However, oversampling (too many individuals counted per sample) also comes at a cost, particularly for ecological systems in which identification and quantification is substantially more resource consuming than the field expedition itself. In such systems, an increasingly larger sample size will eventually result in diminishing returns in improving any pattern or gradient revealed by the data, but will also lead to continually increasing costs. Here, we examine 396 datasets: 44 previously published and 352 created datasets. Using meta-analytic and simulation-based approaches, the research within the present paper seeks (1) to determine minimal sample sizes required to produce robust multivariate statistical results when conducting abundance-based, community ecology research. Furthermore, we seek (2) to determine the dataset parameters (i.e., evenness, number of taxa, number of samples) that require larger sample sizes, regardless of resource availability. We found that in the 44 previously published and the 220 created datasets with randomly chosen abundances, a conservative estimate of a sample size of 58 produced the same multivariate results as all larger sample sizes. However, this minimal number varies as a function of evenness, where increased evenness resulted in increased minimal sample sizes. Sample sizes as small as 58 individuals are sufficient for a broad range of multivariate abundance-based research. In cases when resource availability is the limiting factor for conducting a project (e.g., small university, time to conduct the research project), statistically viable results can still be obtained with less of an investment.
Dendukuri, Nandini; Bélisle, Patrick; Joseph, Lawrence
2010-11-20
Diagnostic tests rarely provide perfect results. The misclassification induced by imperfect sensitivities and specificities of diagnostic tests must be accounted for when planning prevalence studies or investigations into properties of new tests. Previous work has shown that applying a single imperfect test to estimate prevalence can often result in very large sample size requirements, and that sometimes even an infinite sample size is insufficient for precise estimation because the problem is non-identifiable. Adding a second test can sometimes reduce the sample size substantially, but infinite sample sizes can still occur as the problem remains non-identifiable. We investigate the further improvement possible when three diagnostic tests are to be applied. We first develop methods required for studies when three conditionally independent tests are available, using different Bayesian criteria. We then apply these criteria to prototypic scenarios, showing that large sample size reductions can occur compared to when only one or two tests are used. As the problem is now identifiable, infinite sample sizes cannot occur except in pathological situations. Finally, we relax the conditional independence assumption, demonstrating that in this once again non-identifiable situation sample sizes may grow substantially and possibly be infinite. We apply our methods to the planning of two infectious disease studies, the first designed to estimate the prevalence of Strongyloides infection, and the second relating to estimating the sensitivity of a new test for tuberculosis transmission. The much smaller sample sizes that are typically required when three as compared to one or two tests are used should encourage researchers to plan their studies using more than two diagnostic tests whenever possible. User-friendly software is available for both design and analysis stages, greatly facilitating the use of these methods.
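Under the conditional independence assumption described above, the likelihood of any result pattern is a mixture of the diseased and disease-free branches. The sketch below (hypothetical prevalence, sensitivities and specificities, not values from the paper) shows how a pattern probability is assembled for three tests:

```python
from itertools import product

def pattern_prob(pattern, prev, se, sp):
    """Probability of a (t1, t2, t3) result pattern (1 = positive) when three
    conditionally independent imperfect tests are applied to one subject."""
    p_pos, p_neg = prev, 1.0 - prev
    for t, se_i, sp_i in zip(pattern, se, sp):
        p_pos *= se_i if t else (1.0 - se_i)        # diseased branch
        p_neg *= (1.0 - sp_i) if t else sp_i        # disease-free branch
    return p_pos + p_neg

# Hypothetical prevalence, sensitivities and specificities (not from the paper)
prev, se, sp = 0.15, (0.85, 0.80, 0.90), (0.95, 0.90, 0.97)
probs = {pat: pattern_prob(pat, prev, se, sp) for pat in product((0, 1), repeat=3)}
# the eight patterns partition the sample space, so the probabilities sum to 1
```

Bayesian sample size criteria of the kind used in the paper would then be evaluated against the multinomial likelihood these eight probabilities define.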
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of [Formula: see text] , the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model.
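The sample-size model's invariance prediction can be illustrated numerically: if a fixed pool of noisy samples is divided evenly over the display items and per-item sensitivity grows with the square root of samples per item, the sum of squared sensitivities does not depend on display size. A sketch with hypothetical numbers (not data from the study):

```python
import math

def dprime_per_item(total_samples, set_size, unit_dprime=2.0):
    """Sample-size model: a fixed resource of noisy stimulus samples is
    divided evenly over the items, and per-item sensitivity (d') grows with
    the square root of samples per item. unit_dprime is a hypothetical
    sensitivity for a single sample."""
    return unit_dprime * math.sqrt(total_samples / set_size)

K = 100  # hypothetical total number of stimulus samples (the VSTM resource)
# sum of squared sensitivities across items, for several display sizes
totals = [m * dprime_per_item(K, m) ** 2 for m in (1, 2, 4, 8)]
```

The attention-weighted version favoured for phase discrimination breaks this even division by giving one attended item a disproportionate share of `K`.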
Gruijter, de J.J.; Braak, ter C.J.F.
1992-01-01
Two fundamentally different sources of randomness exist on which design and inference in spatial sampling can be based: (a) variation that would occur on resampling the same spatial population with other sampling configurations generated by the same design, and (b) variation occurring on sampling ot
International Nuclear Information System (INIS)
Due to the importance of sensitivity and uncertainty calculations in engineering, and especially in the nuclear field, we present the main features of SAMPLER, a new module in the new version of SCALE 6.2 (currently the beta 3 version). This module allows the calculation of uncertainty in a wide range of cross sections, neutron parameters, compositions and physical parameters. However, the calculation of sensitivity is not present in the beta 3 release. Even so, this module can be helpful to participants in the benchmark proposed by the Expert Group on Uncertainty Analysis in Modelling (UAM-LWR), as well as to analysts in general. (Author)
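The approach SAMPLER implements is stochastic uncertainty propagation: inputs are sampled from their assumed distributions, the calculation is repeated, and the spread of the outputs is the uncertainty estimate. A toy illustration of the idea (the response function is a placeholder, not a transport solver, and the cross sections and uncertainties are hypothetical):

```python
import random
import statistics

def keff_model(sigma_f, sigma_a):
    """Toy response standing in for a full transport calculation: a
    multiplication factor that rises with the fission cross section and
    falls with the absorption cross section."""
    return sigma_f / sigma_a

random.seed(1)  # reproducible perturbations
# Hypothetical nominal cross sections with 2% (1-sigma) relative uncertainty
samples = []
for _ in range(1000):
    sf = random.gauss(1.00, 0.02)
    sa = random.gauss(0.98, 0.02)
    samples.append(keff_model(sf, sa))

k_mean = statistics.fmean(samples)
k_unc = statistics.stdev(samples)  # sampled output uncertainty
```

The real module draws correlated perturbations from evaluated covariance data, but the mean-and-spread post-processing is the same in spirit.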
Al-Kabab, FA; Ghoname, NA; Banabilh, SM
2014-01-01
Objective: The aim was to formulate a prediction regression equation for Yemenis and to compare it with Moyer's method for the prediction of the size of the un-erupted permanent canines and premolars. Subjects and Methods: Measurements of the mesio-distal width of the four permanent mandibular incisors, as well as of the canines and premolars in both arches, were obtained from a sample of 400 school children aged 12-14 years (mean age 13.80 ± 0.42 years) using an electronic digital calliper. The data were subjected to statistical and linear regression analysis and then compared with Moyer's prediction tables. Results: The mean mesio-distal tooth widths of the canines and premolars in the maxillary arch were significantly larger in boys than in girls (P < 0.05), and the predicted widths differed significantly (P < 0.05) from Moyer's tables in almost all percentile levels, including the recommended 50% and 75% levels. Conclusions: The Moyer's probability tables significantly overestimate the mesio-distal widths of the un-erupted permanent canines and premolars of Yemenis in almost all percentile levels, including the commonly used 50% and 75% levels. Therefore, it was suggested with caution that the proposed prediction regression equations and tables developed in the present study could be considered an alternative and more precise method for mixed dentition space analysis in Yemenis. PMID:25143930
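A mixed dentition prediction of this kind is a simple linear regression on the incisor sum. The sketch below uses illustrative placeholder coefficients, not the Yemeni-specific values derived in the study:

```python
def predict_canine_premolar_width(incisor_sum_mm, a=9.41, b=0.527):
    """Moyers-style linear prediction of the combined mesio-distal width of
    the un-erupted canine and premolars in one quadrant from the sum of the
    four mandibular incisors. The intercept a and slope b are illustrative
    placeholders, not the coefficients reported in the study."""
    return a + b * incisor_sum_mm

# e.g. a child whose four mandibular incisors sum to 23.0 mm
predicted_mm = predict_canine_premolar_width(23.0)
```

A population-specific study like this one would fit `a` and `b` (and percentile tables) to its own sample rather than reuse coefficients derived elsewhere.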
Hardouin, Jean-Benoit; Blanchin, Myriam; Feddag, Mohand-Larbi; Le Néel, Tanguy; Perrot, Bastien; Sébille, Véronique
2015-07-20
The analysis of patient-reported outcomes or other psychological traits can be realized using the Rasch measurement model. When the objective of a study is to compare groups of individuals, it is important to define, before the study, a sample size such that the group comparison test will attain a given power. The Raschpower procedure (RP) allows doing so with dichotomous items. Here, the RP is extended to polytomous items. Several computational issues were identified, and adaptations have been proposed. The performance of this new version of RP is assessed using simulations. This adaptation of RP allows obtaining a good estimate of the expected power of a test to compare groups of patients in a large number of practical situations. A Stata module, as well as its implementation online, is proposed to perform the RP. Two versions of the RP for polytomous items are proposed (deterministic and stochastic versions). These two versions produce similar results in all of the tested cases. We recommend the use of the deterministic version when the measure is obtained using small questionnaires or items with a small number of response categories, and the stochastic version otherwise, so as to optimize computing time. PMID:25787270
Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.
2015-11-01
Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and subsequently how this is reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to take images of the PM samples. Chemical composition and the induced toxicological responses of the size segregated PM samples showed considerable size dependent differences as well as day to night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. Instead, almost all the PM samples were equally cytotoxic and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity in the PM
Nasir, M.; Pratama, D.; Anam, C.; Haryanto, F.
2016-03-01
The aim of this research was to calculate Size Specific Dose Estimates (SSDE) generated by the Varian OBI CBCT v1.4 X-ray tube working at 100 kV using EGSnrc Monte Carlo simulations. The EGSnrc Monte Carlo code used in this simulation was divided into two parts. Phase space file data resulting from the first part of the simulation became an input to the second part. This research was performed with varying phantom diameters of 5 to 35 cm and varying phantom lengths of 10 to 25 cm. Dose distribution data were used to calculate SSDE values using the trapezoidal rule (trapz) function in a Matlab program. The SSDE obtained from this calculation was compared to that in the AAPM report and to experimental data. The normalized SSDE value for each phantom diameter was between 1.00 and 3.19, and for each phantom length between 0.96 and 1.07. The statistical error in this simulation was 4.98% for varying phantom diameters and 5.20% for varying phantom lengths. This study demonstrated the accuracy of the Monte Carlo technique in simulating the dose calculation. In the future, the influence of the cylindrical phantom material on SSDE will be studied.
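The trapz step above averages the simulated dose distribution over the phantom. A pure-Python equivalent of that integration, applied to a hypothetical Gaussian-shaped dose profile (not data from the study):

```python
import math

def trapz(y, x):
    """Composite trapezoidal rule, a pure-Python stand-in for Matlab's trapz."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

def mean_dose(z, dose):
    """Average a simulated dose profile D(z) over the phantom length."""
    return trapz(dose, z) / (z[-1] - z[0])

# Hypothetical dose profile peaked at the phantom centre (arbitrary units)
n = 200
z = [-10.0 + 20.0 * i / n for i in range(n + 1)]          # cm along the axis
dose = [math.exp(-(zi / 8.0) ** 2) for zi in z]
avg_dose = mean_dose(z, dose)
```

Normalizing such averaged doses against a reference phantom is what yields the SSDE conversion-style values compared against the AAPM report.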
Ciarleglio, Maria M; Arendt, Christopher D; Makuch, Robert W; Peduzzi, Peter N
2015-03-01
Specification of the treatment effect that a clinical trial is designed to detect (θA) plays a critical role in sample size and power calculations. However, no formal method exists for using prior information to guide the choice of θA. This paper presents a hybrid classical and Bayesian procedure for choosing an estimate of the treatment effect to be detected in a clinical trial that formally integrates prior information into this aspect of trial design. The value of θA is found such that the pre-specified frequentist power equals the conditional expected power of the trial. The conditional expected power averages the traditional frequentist power curve using the conditional prior distribution of the true unknown treatment effect θ as the averaging weight. The Bayesian prior distribution summarizes current knowledge of both the magnitude of the treatment effect and the strength of the prior information through the assumed spread of the distribution. By using a hybrid classical and Bayesian approach, we are able to formally integrate prior information on the uncertainty and variability of the treatment effect into the design of the study, mitigating the risk that the power calculation will be overly optimistic while maintaining a frequentist framework for the final analysis. The value of θA found using this method may be written as a function of the prior mean μ0 and standard deviation τ0, with a unique relationship for a given ratio of μ0/τ0. Results are presented for Normal, Uniform, and Gamma priors for θ. PMID:25583273
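The procedure can be sketched numerically: average the power curve over the conditional prior to get the conditional expected power, then solve for the θA at which the frequentist power equals that average. In the sketch below a known-variance two-sided z-test stands in for the trial's analysis, and the Normal prior parameters are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power(theta, se=1.0, z_alpha=1.959964):
    """Approximate power of a two-sided z-test for effect theta, with the
    standard error of the estimate assumed known (upper tail only)."""
    return norm_cdf(theta / se - z_alpha)

def conditional_expected_power(mu0, tau0, se=1.0, n_grid=4000):
    """Average the power curve over a Normal(mu0, tau0) prior for theta,
    conditional on theta > 0 (midpoint rule on a truncated grid)."""
    lo, hi = 0.0, mu0 + 8.0 * tau0
    step = (hi - lo) / n_grid
    num = den = 0.0
    for i in range(n_grid):
        t = lo + (i + 0.5) * step
        w = math.exp(-0.5 * ((t - mu0) / tau0) ** 2)  # prior weight
        num += power(t, se) * w
        den += w
    return num / den

# Hypothetical prior: effect 3.0 +/- 1.0 on the scale of the known se
cep = conditional_expected_power(3.0, 1.0)
# theta_A is the effect at which frequentist power equals cep (bisection)
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if power(mid) < cep else (lo, mid)
theta_A = 0.5 * (lo + hi)
```

Because the power curve is averaged over the prior's spread, θA shifts away from the prior mean, which is how the method tempers an overly optimistic single-point effect assumption.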
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration, and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same
A Method of Sampling with Replacement That Keeps the Effective Sample Size Fixed
Institute of Scientific and Technical Information of China (English)
艾小青; 金勇进
2012-01-01
Compared with sampling without replacement, sampling with replacement is simpler to implement, but its drawback is that units may be drawn repeatedly, so the effective sample size is at most the nominal sample size and is not fixed. Applying the principle of inverse sampling, this paper designs a sampling-with-replacement method under which the effective sample size is fixed and the resulting estimator has good properties.
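The inverse-sampling idea can be sketched directly: keep drawing with replacement until a pre-set number of distinct units has appeared, so the effective sample size is fixed by design while the number of draws stays random. A minimal illustration (the population and target are hypothetical, and the paper's estimator is not reproduced here):

```python
import random

def draws_until_m_distinct(population, m, rng=random):
    """Inverse-sampling sketch: draw with replacement until m distinct units
    have appeared, fixing the effective sample size at m by design.
    Returns the ordered draws (random length >= m) and the distinct units."""
    draws, distinct = [], set()
    while len(distinct) < m:
        u = rng.choice(population)
        draws.append(u)
        distinct.add(u)
    return draws, distinct

random.seed(0)  # reproducible illustration
draws, distinct = draws_until_m_distinct(list(range(100)), 10)
```

Estimation then has to account for the random stopping rule, which is where the paper's estimator and its properties come in.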
Directory of Open Access Journals (Sweden)
Myriam Blanchin
Full Text Available BACKGROUND: Patient-reported outcomes (PRO) that comprise all self-reported measures by the patient are important as endpoint in clinical trials and epidemiological studies. Models from the Item Response Theory (IRT) are increasingly used to analyze these particular outcomes that bring into play a latent variable as these outcomes cannot be directly observed. Preliminary developments have been proposed for sample size and power determination for the comparison of PRO in cross-sectional studies comparing two groups of patients when an IRT model is intended to be used for analysis. The objective of this work was to validate these developments in a large number of situations reflecting real-life studies. METHODOLOGY: The method to determine the power relies on the characteristics of the latent trait and of the questionnaire (distribution of the items), the difference between the latent variable mean in each group and the variance of this difference estimated using Cramer-Rao bound. Different scenarios were considered to evaluate the impact of the characteristics of the questionnaire and of the variance of the latent trait on performances of the Cramer-Rao method. The power obtained using Cramer-Rao method was compared to simulations. PRINCIPAL FINDINGS: Powers achieved with the Cramer-Rao method were close to powers obtained from simulations when the questionnaire was suitable for the studied population. Nevertheless, we have shown an underestimation of power with the Cramer-Rao method when the questionnaire was less suitable for the population. Besides, the Cramer-Rao method stays valid whatever the values of the variance of the latent trait. CONCLUSIONS: The Cramer-Rao method is adequate to determine the power of a test of group effect at design stage for two-group comparison studies including patient-reported outcomes in health sciences. At the design stage, the questionnaire used to measure the intended PRO should be carefully chosen in relation
Blanchin, Myriam; Hardouin, Jean-Benoit; Guillemin, Francis; Falissard, Bruno; Sébille, Véronique
2013-01-01
Background Patient-reported outcomes (PRO) that comprise all self-reported measures by the patient are important as endpoint in clinical trials and epidemiological studies. Models from the Item Response Theory (IRT) are increasingly used to analyze these particular outcomes that bring into play a latent variable as these outcomes cannot be directly observed. Preliminary developments have been proposed for sample size and power determination for the comparison of PRO in cross-sectional studies comparing two groups of patients when an IRT model is intended to be used for analysis. The objective of this work was to validate these developments in a large number of situations reflecting real-life studies. Methodology The method to determine the power relies on the characteristics of the latent trait and of the questionnaire (distribution of the items), the difference between the latent variable mean in each group and the variance of this difference estimated using Cramer-Rao bound. Different scenarios were considered to evaluate the impact of the characteristics of the questionnaire and of the variance of the latent trait on performances of the Cramer-Rao method. The power obtained using Cramer-Rao method was compared to simulations. Principal Findings Powers achieved with the Cramer-Rao method were close to powers obtained from simulations when the questionnaire was suitable for the studied population. Nevertheless, we have shown an underestimation of power with the Cramer-Rao method when the questionnaire was less suitable for the population. Besides, the Cramer-Rao method stays valid whatever the values of the variance of the latent trait. Conclusions The Cramer-Rao method is adequate to determine the power of a test of group effect at design stage for two-group comparison studies including patient-reported outcomes in health sciences. At the design stage, the questionnaire used to measure the intended PRO should be carefully chosen in relation to the studied
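The power computation described above reduces to plugging a Cramer-Rao variance bound for the estimated group difference into a standard normal-approximation power formula. A sketch with hypothetical design values (not ones from the paper):

```python
import math

def norm_cdf(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_from_variance(delta, var, z_alpha=1.959964):
    """Normal-approximation power of a two-sided group-comparison test when
    the variance of the estimated latent-mean difference is taken from a
    Cramer-Rao bound, as in the method above."""
    z = delta / math.sqrt(var)
    return norm_cdf(z - z_alpha) + norm_cdf(-z - z_alpha)

# Hypothetical design: latent difference 0.5, Cramer-Rao variance 0.02
expected_power = power_from_variance(0.5, 0.02)
```

The substance of the method lies in deriving `var` from the IRT model and item characteristics; with an unsuitable questionnaire the bound is loose, which matches the power underestimation reported above.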
Johnson, Kenneth L.; White, K, Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
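Acceptance sampling by variables, in its simplest single-limit form, accepts a lot when the sample mean clears the specification limit by at least k sample standard deviations. A sketch of that acceptance rule (the k value and measurements are illustrative, not taken from the NESC procedure):

```python
def accept_lot(measurements, lower_spec, k):
    """Single lower-limit acceptance sampling by variables: accept the lot
    when (mean - L) / s >= k, where k is the acceptability constant of the
    sampling plan. The k used below is an illustrative value."""
    n = len(measurements)
    mean = sum(measurements) / n
    s = (sum((x - mean) ** 2 for x in measurements) / (n - 1)) ** 0.5
    return (mean - lower_spec) / s >= k

# Hypothetical lot of 8 strength measurements with lower spec limit 9.5
lot = [10.2, 10.5, 10.1, 10.4, 10.3, 10.6, 10.2, 10.4]
decision = accept_lot(lot, lower_spec=9.5, k=2.0)
```

Because each measurement carries magnitude information, variables plans typically need far fewer samples than attributes plans to demonstrate the same conformance probability, which is the resource saving the assessment targets.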
International Nuclear Information System (INIS)
The objective of this study was to make a robust 137Cs inventory calculation for Reservoir 11 in the Mayak Production Association's industrial cascade of reservoirs. High resolution satellite photographs provided information about the original Techa River and floodplain environment before, during and also after Reservoir 11 was constructed. The images provided important clues about the old Techa River system and also showed the extent of the contaminated area. Sediment cores were sampled along a transect in Reservoir 11, and perpendicular to the Techa River in 1996. 137Cs contamination densities were in the range 344-4030 kBq m-2 and appear to exhibit a strong dependence on relative position along the transect, with more central positions exhibiting the highest actual levels. The contamination densities derived from the seven cores were combined with (published) data from four core samples collected in a 1994 field campaign. Sample data and images were integrated in a geographical information system (GIS). The GIS aided in the definition of distinctive floodplain classes, the so-called sediment 'facies', area calculations and in the positioning of 1994 and 1996 sample profiles in the reservoir. The resulting inventory estimate for Reservoir 11 is a factor 4-10 lower than earlier calculations. Analysis of the uncertainty in the calculations has been carried out and provides support for the integrity of the new method
Viechtbauer, Wolfgang
2007-01-01
Standardized effect sizes and confidence intervals thereof are extremely useful devices for comparing results across different studies using scales with incommensurable units. However, exact confidence intervals for standardized effect sizes can usually be obtained only via iterative estimation procedures. The present article summarizes several…
Oum Keltoum Hakam; Abdelmajid Choukri; Aicha Abbad; Ahmed Elharfi
2015-01-01
Purpose: As a preventive measure, we measured the activities of uranium and radium isotopes (234U, 238U, 226Ra, 228Ra) for 30 drinking water samples collected from 11 wells, 9 springs (6 hot and 3 cold), 3 commercialised mineral waters, and 7 tap water samples. Methods: Activities of the Ra isotopes were measured by ultra-gamma spectrometry using a low background and high efficiency well type germanium detector. The U isotopes were counted in an alpha spectrometer.Results: The measured Ura...
Voordouw, Gerrit; Menon, Priyesh; Pinnock, Tijan; Sharma, Mohita; Shen, Yin; Venturelli, Amanda; Voordouw, Johanna; Sexton, Aoife
2016-01-01
Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for eight produced waters with high numbers (10(5)/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for five central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment and 0.036 mm/yr for 2 CPF tank bottom sludges, which had high numbers of APB (10(6)/ml) and SRB (10(8)/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads, and injected with 10 ml/day of SRB-containing medium for 256 days gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion including determination of the distribution of corrosion rates, which allows
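The weight-loss method used above converts a bead's mass loss into a penetration rate via the metal density and exposed surface area. A sketch under stated assumptions (spherical bead, uniform attack, carbon steel density of about 7.85 g/cm³; the mass loss and exposure time are hypothetical):

```python
import math

def bead_corrosion_rate(weight_loss_mg, diameter_cm, days, density_g_cm3=7.85):
    """Weight-loss corrosion rate for a spherical carbon steel bead:
    metal volume lost per unit surface area per unit time, expressed in
    mm/yr. Assumes uniform attack over the whole bead surface."""
    area_cm2 = math.pi * diameter_cm ** 2                 # sphere area = pi*d^2
    volume_cm3 = (weight_loss_mg / 1000.0) / density_g_cm3
    cm_per_yr = volume_cm3 / area_cm2 / (days / 365.0)
    return cm_per_yr * 10.0                               # cm/yr -> mm/yr

# Hypothetical: a 0.238 cm bead losing 0.5 mg over a 30-day incubation
rate = bead_corrosion_rate(0.5, 0.238, 30)
```

The resulting rate falls in the same 0.01-0.1 mm/yr range as the field values reported above; the spread of per-bead rates is what the study uses to characterize uneven, pitting-type attack.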
Voordouw, Gerrit; Menon, Priyesh; Pinnock, Tijan; Sharma, Mohita; Shen, Yin; Venturelli, Amanda; Voordouw, Johanna; Sexton, Aoife
2016-01-01
Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured, and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for eight produced waters with high numbers (10⁵/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for five central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment, and 0.036 mm/yr for 2 CPF tank bottom sludges, which had high numbers of APB (10⁶/ml) and SRB (10⁸/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads and injected with 10 ml/day of SRB-containing medium for 256 days, gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion, including determination of the distribution of corrosion rates, which allows
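The weight-loss method used in this abstract converts a measured mass loss into a corrosion rate. A minimal sketch of the standard conversion (the ASTM G1-style constant, the spherical-bead surface area, and the 1 mg loss are illustrative assumptions, not values reported by the study):

```python
import math

def corrosion_rate_mm_per_yr(weight_loss_g, area_cm2, hours, density_g_cm3=7.87):
    """Weight-loss corrosion rate CR = K*W/(A*t*rho), with K = 8.76e4 giving mm/yr
    (ASTM G1-style units: W in g, A in cm^2, t in hours, rho in g/cm^3)."""
    K = 8.76e4
    return K * weight_loss_g / (area_cm2 * hours * density_g_cm3)

# Surface area of one bead, assuming a sphere of diameter 0.238 cm: A = pi * d^2
d = 0.238
area = math.pi * d ** 2  # about 0.178 cm^2

# Hypothetical example: 1 mg lost from one bead over a one-year incubation
cr = corrosion_rate_mm_per_yr(1e-3, area, 365 * 24)
print(round(cr, 4))  # about 0.0071 mm/yr, the same order as the rates reported above
```

Doubling the assumed weight loss doubles the rate, which is why the per-bead iron load (1 versus 40 beads per incubation) changes the apparent CR so strongly.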
Institute of Scientific and Technical Information of China (English)
None
2007-01-01
Investigations into forest soils face the problem of the high level of spatial variability that is an inherent property of all forest soils. In order to investigate the effect of changes in residue management practices on soil properties in hoop pine (Araucaria cunninghamii Aiton ex A. Cunn.) plantations of subtropical Australia it was important to understand the intensity of sampling effort required to overcome the spatial variability induced by those changes. Harvest residues were formed into windrows to prevent nitrogen (N) losses through volatilisation and erosion that had previously occurred as a result of pile and burn operations. We selected second rotation (2R) hoop pine sites where the windrows (10-15 m apart) had been formed 1, 2 and 3 years prior to sampling in order to examine the spatial variability in soil carbon (C) and N and in potential mineralisable N (PMN) in the areas beneath and between (inter-) the windrows. We examined the implications of soil variability on the number of samples required to detect differences in means for specific soil properties, at different ages and at specified levels of accuracy. Sample size needed to accurately reflect differences between means was not affected by the position where the samples were taken relative to the windrows but differed according to the parameter to be sampled. The relative soil sampling size required for detecting differences between means of a soil property in the inter-windrow and beneath-windrow positions was highly dependent on the soil property assessed and the acceptable relative sampling error. An alternative strategy for soil sampling should be considered if the estimated sample size exceeds 50 replications. The possible solution to this problem is collection of composite soil samples, allowing a substantial reduction in the number of samples required for chemical analysis without loss in the precision of the mean estimates for a particular soil property.
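The sampling-effort question above — how many replicates are needed to estimate a spatially variable property's mean within a given relative error — is commonly answered with the textbook formula n = (z·CV/E)². A hedged sketch (the paper does not state its exact estimator, so the normal approximation and the example coefficient of variation are assumptions):

```python
import math
from statistics import NormalDist

def sample_size_for_relative_error(cv, rel_error, confidence=0.95):
    """Replicates needed so that the mean of a property with coefficient of
    variation `cv` is estimated to within `rel_error` (both as fractions),
    using the normal approximation n = (z * CV / E)**2, rounded up."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return math.ceil((z * cv / rel_error) ** 2)

# e.g. a soil property with 50% spatial CV, estimated to within 10%:
print(sample_size_for_relative_error(0.50, 0.10))  # 97 replicates
```

A result like 97 exceeds the 50-replication threshold mentioned above, which is exactly the situation where the authors recommend switching to composite sampling.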
Energy Technology Data Exchange (ETDEWEB)
Nasrabadi, M.N. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)], E-mail: mnnasri@kashanu.ac.ir; Jalali, M. [Isfahan Nuclear Science and Technology Research Institute, Atomic Energy organization of Iran (Iran, Islamic Republic of); Mohammadi, A. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)
2007-10-15
In this work thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three dimensional simulations of a neutron source, neutron detector and sample of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF₃ detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the average thermal neutron flux within the sample volume is required.
Barghouty, A. F.
2013-01-01
Accurate estimates of electron-capture cross sections at energies relevant to ENA modeling (approximately a few MeV per nucleon) and for multi-electron ions must rely on a detailed, but computationally expensive, quantum-mechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts, we shall briefly present this approach along with sample applications and report on current progress.
Energy Technology Data Exchange (ETDEWEB)
Bobbitt, N. Scott; Kim, Minjung [Department of Chemical Engineering, The University of Texas at Austin, Austin, Texas 78712 (United States); Sai, Na [Department of Physics, The University of Texas at Austin, Austin, Texas 78712 (United States); Marom, Noa [Department of Physics and Engineering Physics, Tulane University, New Orleans, Louisiana, 70118 (United States); Chelikowsky, James R. [Center for Computational Materials, Institute for Computational Engineering and Sciences, Departments of Physics and Chemical Engineering, The University of Texas at Austin, Austin, Texas 78712 (United States)
2014-09-07
Zinc oxide is often used as a popular inexpensive transparent conducting oxide. Here, we employ density functional theory and local density approximation to examine the effects of quantum confinement in doped nanocrystals of this material. Specifically, we examine the addition of Ga and Al dopants to ZnO nanocrystals on the order of 1.0 nm. We find that the inclusion of these dopants is energetically less favorable in smaller particles and that the electron binding energy, which is associated with the dopant activation, decreases with the nanocrystal size. We find that the introduction of impurities does not alter significantly the Kohn-Sham eigenspectrum for small nanocrystals of ZnO. The added electron occupies the lowest existing state, i.e., no new bound state is introduced in the gap. We verify this assertion with hybrid functional calculations.
Rodríguez-Kessler, P. L.; Rodríguez-Domínguez, A. R.
2015-11-01
Size and structure effects on the oxygen reduction reaction on PtN clusters with N = 12-13 atoms have been investigated using periodic density functional theory calculations with the generalized gradient approximation. To describe the catalytic activity, we calculated the O and OH adsorption energies on the cluster surface. Oxygen binding on the 3-fold hollow sites of stable Pt12-13 cluster models was more favorable for the reaction with O than on the Pt13(Ih) and Pt55(Ih) icosahedral particles, in which O binds strongly. However, the rate-limiting step proved to be the removal of the OH species, due to strong adsorption on the vertex sites, reducing the utility of the catalyst surface. On the other hand, the active sites of Pt12-13 clusters are localized on the edge sites. In particular, the OH adsorption on a bilayer Pt12 cluster is the closest to the optimal target, at 0.0-0.2 eV weaker than on the Pt(111) surface. However, more progress is necessary to activate the vertex sites of the clusters. The d-band center of PtN clusters shows that structure is a decisive factor in cluster reactivity.
Energy Technology Data Exchange (ETDEWEB)
Rodríguez-Kessler, P. L., E-mail: peter.rodriguez@ipicyt.edu.mx [Instituto Potosino de Investigación Científica y Tecnológica, San Luis Potosí 78216 (Mexico); Rodríguez-Domínguez, A. R. [Instituto de Física, Universidad Autónoma de San Luis Potosí, San Luis Potosí 78000 (Mexico)
2015-11-14
Size and structure effects on the oxygen reduction reaction on PtN clusters with N = 12–13 atoms have been investigated using periodic density functional theory calculations with the generalized gradient approximation. To describe the catalytic activity, we calculated the O and OH adsorption energies on the cluster surface. Oxygen binding on the 3-fold hollow sites of stable Pt12-13 cluster models was more favorable for the reaction with O than on the Pt13(Ih) and Pt55(Ih) icosahedral particles, in which O binds strongly. However, the rate-limiting step proved to be the removal of the OH species, due to strong adsorption on the vertex sites, reducing the utility of the catalyst surface. On the other hand, the active sites of Pt12-13 clusters are localized on the edge sites. In particular, the OH adsorption on a bilayer Pt12 cluster is the closest to the optimal target, at 0.0-0.2 eV weaker than on the Pt(111) surface. However, more progress is necessary to activate the vertex sites of the clusters. The d-band center of PtN clusters shows that structure is a decisive factor in cluster reactivity.
International Nuclear Information System (INIS)
The present paper summarizes calculation results for an international benchmark proposed by the Sodium-cooled Fast Reactor core Feed-back and transient response (SFR-FT) task force under the framework of the Working Party on scientific issues of Reactor Systems (WPRS) of the Nuclear Energy Agency of the OECD. It focuses on the large-size oxide-fueled SFR. The library effect on core performance characteristics and reactivity feedback coefficients is analyzed using sensitivity analysis. The effect of ultra-fine energy group calculation in effective cross section generation is also analyzed. The discrepancy in the neutron multiplication factor is about 0.4% when JENDL-4.0 is replaced by JEFF-3.1, and about -0.1% when it is replaced by ENDF/B-VII.1. The main contributions to the discrepancy between JENDL-4.0 and ENDF/B-VII.1 are ²⁴⁰Pu capture, ²³⁸U inelastic scattering and ²³⁹Pu fission. Those to the discrepancy between JENDL-4.0 and JEFF-3.1 are ²³Na inelastic scattering, ⁵⁶Fe inelastic scattering, ²³⁸Pu fission, ²⁴⁰Pu capture, ²⁴⁰Pu fission, ²³⁸U inelastic scattering, ²³⁹Pu fission and the ²³⁹Pu nu-value. As for the sodium void reactivity, JEFF-3.1 and ENDF/B-VII.1 underestimate it by about 8% compared with JENDL-4.0. The main contributions to the discrepancy between JENDL-4.0 and ENDF/B-VII.1 are ²³Na elastic scattering, ²³Na inelastic scattering and ²³⁹Pu fission. That to the discrepancy between JENDL-4.0 and JEFF-3.1 is ²³Na inelastic scattering. The ultra-fine energy group calculation increases the sodium void reactivity by 2%. (author)
Energy Technology Data Exchange (ETDEWEB)
Romero, L.; Travesi, A.
1983-07-01
A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of results of total alpha-beta activity measurements in environmental samples. The code converts activities measured as total counts into pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. Furthermore, it estimates the standard deviation of each result and calculates the lower limit of detection for each measurement. The code runs interactively through a screen-operator dialogue, prompting on screen for the data needed for each activity calculation. It can be executed from any screen-and-keyboard terminal attached to a computer that accepts FORTRAN IV, with a printer connected to that computer. (Author) 5 refs.
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
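The model comparison described above can be illustrated with a toy Monte Carlo: a limited-sampling observer averages only k of the items, while a global-sampling-with-noise observer averages all items but adds internal noise. Comparing the variance of the two estimates shows why the models are distinguishable (the subset size k = 4, the noise level, and the uniform size distribution are all hypothetical stand-ins, not the study's parameters):

```python
import random

def estimate_variance(observer, trials=20000, set_size=8, seed=1):
    """Monte Carlo variance of an observer's mean-size estimate for sets of
    item sizes drawn uniformly from [0, 1]."""
    rng = random.Random(seed)
    ests = []
    for _ in range(trials):
        items = [rng.random() for _ in range(set_size)]
        ests.append(observer(items, rng))
    m = sum(ests) / trials
    return sum((e - m) ** 2 for e in ests) / trials

# Limited sampling: average a random subset of k = 4 items (assumed k)
limited = lambda items, rng: sum(rng.sample(items, 4)) / 4
# Global sampling with noise: average all items, then add internal Gaussian noise
noisy_global = lambda items, rng: sum(items) / len(items) + rng.gauss(0, 0.05)

v_lim = estimate_variance(limited)
v_glob = estimate_variance(noisy_global)
print(v_lim > v_glob)  # True: global sampling with modest noise is the more efficient strategy
```

Statistical efficiency analyses of the kind used in the paper compare human precision against such ideal-observer variances to decide which sampling account fits best.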
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Particle size distributions (PSD) have long been used to more accurately estimate the PM10 fraction of total particulate matter (PM) stack samples taken from agricultural sources. These PSD analyses were typically conducted using a Coulter Counter with 50 micrometer aperture tube. With recent increa...
Hancock, Gregory R.; Freeman, Mara J.
2001-01-01
Provides select power and sample size tables and interpolation strategies associated with the root mean square error of approximation test of not close fit under standard assumed conditions. The goal is to inform researchers conducting structural equation modeling about power limitations when testing a model. (SLD)
DEFF Research Database (Denmark)
Thorlund, Kristian; Anema, Aranka; Mills, Edward
2010-01-01
To illustrate the utility of statistical monitoring boundaries in meta-analysis, and provide a framework in which meta-analysis can be interpreted according to the adequacy of sample size. To propose a simple method for determining how many patients need to be randomized in a future trial before...
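In its simplest form, the question of how many patients a future trial must randomize is the classical two-sample size calculation; a sketch of that textbook formula (the paper's own monitoring-boundary method is more elaborate, and the effect size here is hypothetical):

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Patients per arm to detect a mean difference `delta` (SD `sigma`) with a
    two-sample z-test: n = 2 * (z_{1-a/2} + z_{1-b})**2 * (sigma/delta)**2."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2
    return math.ceil(n)

# e.g. detecting a 0.5-SD effect with 80% power at the two-sided 5% level:
print(n_per_group(0.5, 1.0))  # 63 per arm
```

Summing such per-trial requirements gives the "required information size" against which meta-analysis monitoring boundaries judge whether accumulated evidence is adequate.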
Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael
2016-01-01
Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…
Rogan, Joanne C.; Keselman, H. J.
1977-01-01
The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
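The empirical Type I error rate of the ANOVA F-test under variance heterogeneity can be probed with a small simulation of the kind such studies use. A sketch (the group sizes, SDs, and trial count are arbitrary choices, and it assumes SciPy is available):

```python
import numpy as np
from scipy.stats import f_oneway

def type1_error_rate(ns, sds, trials=5000, alpha=0.05, seed=0):
    """Empirical Type I error of the ANOVA F-test when all group means are
    equal (H0 true) but the group SDs differ."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(trials):
        groups = [rng.normal(0.0, sd, size=n) for n, sd in zip(ns, sds)]
        if f_oneway(*groups).pvalue < alpha:
            rejections += 1
    return rejections / trials

# Equal n but strongly unequal variances: the empirical rate can drift
# away from the nominal 5%, illustrating the robustness question above.
print(type1_error_rate(ns=[5, 5, 5], sds=[1.0, 1.0, 4.0]))
```

Varying `ns` and `sds` reproduces the paper's observation that the error rate depends on the degree of heterogeneity even when sample sizes are equal.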
DEFF Research Database (Denmark)
Aukland, S M; Westerhausen, R; Plessen, K J;
2011-01-01
BACKGROUND AND PURPOSE: Several studies suggest that VLBW is associated with a reduced CC size later in life. We aimed to clarify this in a prospective, controlled study of 19-year-olds, hypothesizing that those with LBWs had smaller subregions of CC than the age-matched controls, even after corr...
Institute of Scientific and Technical Information of China (English)
秦艳杰; 金迪; 初冠囡; 李霞; 李永仁
2012-01-01
This study examines the effects of sample size and number of AFLP primer pairs on genetic structure estimates in cultured sea urchin (Strongylocentrotus intermedius). Eight different sample sizes (10, 20, 30, 40, 50, 60, 70 and 80 individuals) were used to calculate genetic parameters including Nei's gene diversity (H), Shannon's information index (I) and the percentage of polymorphic loci (PP). These indicators increased rapidly with sample size and then stabilized. When sample sizes were equal to or greater than 50, PP showed no significant differences as sample size rose; H and I showed no significant differences beyond 30 and 20 individuals, respectively. The genetic parameters were also calculated from band information detected by each of one, two, three, four and five AFLP primer pairs, with values calculated from six AFLP primer pairs taken as the control. H, I and PP all showed significant differences between results from one or two primer pairs and the control. When the number of AFLP primer pairs was equal to or greater than three, no genetic parameter showed significant differences. Accordingly, we suggest that when AFLP markers are used to estimate the genetic diversity of a sea urchin population, the minimum sample size should be no less than 50, and at least 3 AFLP primer pairs (or more than 100 loci) are required when the sample size is large enough (80 individuals).
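The three indices named above have standard per-locus definitions: Nei's gene diversity H = 1 − Σp², Shannon's index I = −Σp·ln p, and PP, the fraction of loci whose most common allele does not exceed a threshold. A hedged sketch (the allele frequencies and the 95% polymorphism criterion are illustrative assumptions):

```python
import math

def diversity_indices(allele_freqs, poly_threshold=0.95):
    """Average Nei's gene diversity H = 1 - sum(p_i^2), Shannon's index
    I = -sum(p_i * ln p_i), and percent polymorphic loci PP over loci.
    `allele_freqs` is a list of per-locus allele-frequency lists."""
    H = [1 - sum(p * p for p in ps) for ps in allele_freqs]
    I = [-sum(p * math.log(p) for p in ps if p > 0) for ps in allele_freqs]
    poly = [max(ps) <= poly_threshold for ps in allele_freqs]
    n = len(allele_freqs)
    return sum(H) / n, sum(I) / n, 100 * sum(poly) / n

# Hypothetical two-allele AFLP loci (band present/absent frequencies):
H, I, PP = diversity_indices([[0.6, 0.4], [0.9, 0.1], [0.99, 0.01]])
print(round(H, 3), round(I, 3), round(PP, 1))  # 0.227 0.351 66.7
```

Because all three indices are frequency averages, they stabilize once the sample is large enough for the band frequencies themselves to stabilize, which is why the thresholds above differ per index.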
Energy Technology Data Exchange (ETDEWEB)
Pedrotti, Alceu [Sergipe Univ., Sao Cristovao, SE (Brazil). Dept. de Engenharia Agronomica]. E-mail: apedroti@ufs.br; Pauletto, Antonio [Universidade Federal de Pelotas, RS (Brazil). Faculdade de Agronomia Eliseu Maciel. Dept. de Solos; Crestana, Silvio; Cruvinel, Paulo Estevao; Vaz, Carlos Manoel Pedro; Naime, Joao de Mendonca; Silva, Alvaro Macedo da [Empresa Brasileira de Pesquisa Agropecuaria, Sao Carlos, SP (Brazil). Instrumentacao Agropecuaria
2003-12-01
Computerized tomography (CT) is an important tool in Soil Science for noninvasive measurement of density and water content of soil samples. This work aims to describe the aspects of sample size adequacy for Planosol (Albaqualf) and to evaluate procedures for statistical analysis, using a CT scanner with a ²⁴¹Am source. Density errors attributed to the equipment are 0.051 and 0.046 Mg m⁻³ for horizons A and B, respectively. The theoretical value for sample thickness for the Planosol, using this equipment, is 4.0 cm for the horizons A and B. The ideal thickness of samples is approximately 6.0 cm, being smaller for samples of the horizon B in relation to A. Alternatives for the improvement of the efficiency analysis and the reliability of the results obtained by CT are also discussed, and indicate good precision and adaptability of the application of this technology in Planosol (Albaqualf) studies. (author)
Directory of Open Access Journals (Sweden)
Congcong Li
2014-01-01
Full Text Available Although a large number of new image classification algorithms have been developed, they are rarely tested with the same classification task. In this research, with the same Landsat Thematic Mapper (TM) data set and the same classification scheme over Guangzhou City, China, we tested two unsupervised and 13 supervised classification algorithms, including a number of machine learning algorithms that became popular in remote sensing during the past 20 years. Our analysis focused primarily on the spectral information provided by the TM data. We assessed all algorithms in a per-pixel classification decision experiment and all supervised algorithms in a segment-based experiment. We found that when sufficiently representative training samples were used, most algorithms performed reasonably well. Lack of training samples led to greater classification accuracy discrepancies than classification algorithms themselves. Some algorithms were more tolerant of insufficient (less representative) training samples than others. Many algorithms improved the overall accuracy marginally with per-segment decision making.
All-reflective UV-VIS-NIR transmission and fluorescence spectrometer for μm-sized samples
Kirchner, Friedrich O.; Lahme, Stefan; Riedle, Eberhard; Baum, Peter
2014-07-01
We report on an optical transmission spectrometer optimized for tiny samples. The setup is based on all-reflective parabolic optics and delivers broadband operation from 215 to 1030 nm. A fiber-coupled light source is used for illumination and a fiber-coupled miniature spectrometer for detection. The diameter of the probed area is less than 200 μm for all wavelengths. We demonstrate the capability to record transmission, absorption, reflection, fluorescence and refractive indices of tiny and ultrathin sample flakes with this versatile device. The performance is validated with a solid state wavelength standard and with dye solutions.
Emami Riedmaier, Ariane; Burt, Howard; Abduljalil, Khaled; Neuhoff, Sibylle
2016-07-01
Rosuvastatin is a substrate of choice in clinical studies of organic anion-transporting polypeptide (OATP)1B1- and OATP1B3-associated drug interactions; thus, understanding the effect of OATP1B1 polymorphisms on the pharmacokinetics of rosuvastatin is crucial. Here, physiologically based pharmacokinetic (PBPK) modeling was coupled with a power calculation algorithm to evaluate the influence of sample size on the ability to detect an effect (80% power) of OATP1B1 phenotype on pharmacokinetics of rosuvastatin. Intestinal, hepatic, and renal transporters were mechanistically incorporated into a rosuvastatin PBPK model using permeability-limited models for intestine, liver, and kidney, respectively, nested within a full PBPK model. Simulated plasma rosuvastatin concentrations in healthy volunteers were in agreement with previously reported clinical data. Power calculations were used to determine the influence of sample size on study power while accounting for OATP1B1 haplotype frequency and abundance in addition to its correlation with OATP1B3 abundance. It was determined that 10 poor-transporter and 45 intermediate-transporter individuals are required to achieve 80% power to discriminate the AUC0-48h of rosuvastatin from that of the extensive-transporter phenotype. This number was reduced to 7 poor-transporter and 40 intermediate-transporter individuals when the reported correlation between OATP1B1 and 1B3 abundance was taken into account. The current study represents the first example in which PBPK modeling in conjunction with power analysis has been used to investigate sample size in clinical studies of OATP1B1 polymorphisms. This approach highlights the influence of interindividual variability and correlation of transporter abundance on study power and should allow more informed decision making in pharmacogenomic study design. PMID:27385171
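Stripped of the PBPK machinery, the power-versus-sample-size search described above reduces to finding the smallest n whose test power reaches 80%. A simplified sketch using a normal-approximation two-sample test (the standardized effect size is hypothetical, not derived from the rosuvastatin model):

```python
import math
from statistics import NormalDist

def power_two_sample(n_per_group, effect_size, alpha=0.05):
    """Approximate power of a two-sample z-test with `n_per_group` subjects per
    arm and standardized effect size d, via the normal approximation."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    ncp = effect_size * math.sqrt(n_per_group / 2)  # noncentrality of the test statistic
    return 1 - NormalDist().cdf(z - ncp)

def smallest_n_for_power(effect_size, target=0.80):
    """Scan upward for the smallest per-group n reaching the target power."""
    n = 2
    while power_two_sample(n, effect_size) < target:
        n += 1
    return n

# Hypothetical standardized difference in log-AUC between phenotype groups:
print(smallest_n_for_power(0.9))  # 20 per group under these assumptions
```

In the actual study this inner power loop is fed by PBPK-simulated AUC distributions per OATP1B1 phenotype, which is how correlated transporter abundances shrink the required n.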
Directory of Open Access Journals (Sweden)
Lan Do
2015-01-01
Full Text Available Hyaluronan is a negatively charged polydisperse polysaccharide where both its size and tissue concentration play an important role in many physiological and pathological processes. The various functions of hyaluronan depend on its molecular size. Up to now, it has been difficult to study the role of hyaluronan in diseases with pathological changes in the extracellular matrix where availability is low or tissue samples are small. Difficulty to obtain large enough biopsies from human diseased tissue or tissue from animal models has also restricted the study of hyaluronan. In this paper, we demonstrate that a gas-phase electrophoretic molecular mobility analyzer (GEMMA) can be used to estimate the distribution of hyaluronan molecular sizes in biological samples with a limited amount of hyaluronan. The low detection level of the GEMMA method allows for estimation of hyaluronan molecular sizes from different parts of small organs. Hence, the GEMMA method opens the opportunity to obtain a profile of the distribution of hyaluronan molecular sizes and estimate changes caused by disease or experimental conditions that it has not been possible to obtain before.
Holdeman, James D.; Clisset, James R.; Moder, Jeffrey P.
2010-01-01
The primary purpose of this jet-in-crossflow study was to calculate expected results for two configurations for which limited or no experimental results have been published: (1) cases of opposed rows of closely-spaced jets from inline and staggered round holes and (2) rows of jets from alternating large and small round holes. Simulations of these configurations were performed using an Excel (Microsoft Corporation) spreadsheet implementation of a NASA-developed empirical model which had been shown in previous publications to give excellent representations of mean experimental scalar results suggesting that the NASA empirical model for the scalar field could confidently be used to investigate these configurations. The supplemental Excel spreadsheet is posted with the current report on the NASA Glenn Technical Reports Server (http://gltrs.grc.nasa.gov) and can be accessed from the Supplementary Notes section as TM-2010-216100-SUPPL1.xls. Calculations for cases of opposed rows of jets with the orifices on one side shifted show that staggering can improve the mixing, particularly for cases where jets would overpenetrate slightly if the orifices were in an aligned configuration. The jets from the larger holes dominate the mixture fraction for configurations with a row of large holes opposite a row of smaller ones although the jet penetration was about the same. For single and opposed rows with mixed hole sizes, jets from the larger holes penetrated farther. For all cases investigated, the dimensionless variance of the mixture fraction decreased significantly with increasing downstream distance. However, at a given downstream distance, the variation between cases was small.
Townsend, James T.; Colonius, Hans
2005-01-01
The maximum and minimum of a sample from a probability distribution are extremely important random variables in many areas of psychological theory, methodology, and statistics. For instance, the behavior of the mean of the maximum or minimum processing time, as a function of the number of component random processing times ("n"), has been studied…
Directory of Open Access Journals (Sweden)
Morecroft Michael D
2001-07-01
Full Text Available Abstract Background The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for the predictions was mixed and depended on year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.
Energy Technology Data Exchange (ETDEWEB)
Damiani, Rick [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-02-08
This manual summarizes the theory and preliminary verifications of the JacketSE module, which is an offshore jacket sizing tool that is part of the Wind-Plant Integrated System Design & Engineering Model toolbox. JacketSE is based on a finite-element formulation and on user-prescribed inputs and design standards' criteria (constraints). The physics are highly simplified, with a primary focus on satisfying ultimate limit states and modal performance requirements. Preliminary validation work included comparing industry data and verification against ANSYS, a commercial finite-element analysis package. The results are encouraging, and future improvements to the code are recommended in this manual.
Energy Technology Data Exchange (ETDEWEB)
Ota, T. A. [AWE, Aldermaston, Reading, Berkshire RG7 4PR (United Kingdom)
2013-10-15
Photonic Doppler velocimetry, also known as heterodyne velocimetry, is a widely used optical technique that requires the analysis of frequency modulated signals. This paper describes an investigation into the errors of short time Fourier transform analysis. The number of variables requiring investigation was reduced by means of an equivalence principle. Error predictions, as the number of cycles, samples per cycle, noise level, and window type were varied, are presented. The results were found to be in good agreement with analytical models.
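The kind of frequency-estimation error studied above can be reproduced with a toy single-column STFT: window a noisy sinusoid, take the DFT, and read off the peak bin. The sampling rate, beat frequency, noise level, and window length below are arbitrary stand-ins for the paper's parameters:

```python
import cmath
import math
import random

def stft_peak_frequency(signal, sample_rate):
    """Frequency of the largest DFT magnitude bin of a Hann-windowed snippet,
    i.e. the peak of a single STFT column (naive O(n^2) DFT for clarity)."""
    n = len(signal)
    windowed = [s * (0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)))
                for i, s in enumerate(signal)]
    mags = []
    for k in range(n // 2):
        acc = sum(w * cmath.exp(-2j * math.pi * k * i / n)
                  for i, w in enumerate(windowed))
        mags.append(abs(acc))
    return mags.index(max(mags)) * sample_rate / n

rng = random.Random(0)
fs, f0, n = 1000.0, 137.0, 256          # hypothetical beat frequency and window length
sig = [math.sin(2 * math.pi * f0 * i / fs) + rng.gauss(0, 0.3) for i in range(n)]
f_est = stft_peak_frequency(sig, fs)
print(abs(f_est - f0) < fs / n)          # True: peak-bin error stays within one bin width
```

Shrinking the window (fewer cycles per transform) widens the bins and inflates the error, which is the trade-off the error study above quantifies.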
Grażyna E Sroga; Karim, Lamya; Colón, Wilfredo; Vashishth, Deepak
2011-01-01
There is growing evidence supporting the need for a broad scale investigation of the proteins and protein modifications in the organic matrix of bone and the use of these measures to predict fragility fractures. However, limitations in sample availability and high heterogeneity of bone tissue cause unique experimental and/or diagnostic problems. We addressed these by an innovative combination of laser capture microscopy with our newly developed liquid chromatography separation methods, follow...
Lakayan, Dina; Haselberg, Rob; Niessen, Wilfried M A; Somsen, Govert W; Kool, Jeroen
2016-06-24
Surface plasmon resonance (SPR) is an optical technique that measures biomolecular interactions. Stand-alone SPR cannot distinguish different binding components present in one sample. Moreover, sample matrix components may show non-specific binding to the sensor surface, leading to detection interferences. This study describes the development of coupled size-exclusion chromatography (SEC) SPR sensing for the separation of sample components prior to their on-line bio-interaction analysis. A heterogeneous polyclonal human serum albumin antibody (anti-HSA) sample, which was characterized by proteomics analysis, was used as test sample. The proposed SEC-SPR coupling was optimized by studying system parameters, such as injection volume, flow rate and sample concentration, using immobilized HSA on the sensor chip. Automated switch valves were used for on-line regeneration of the SPR sensor chip in between injections and for potential chromatographic heart cutting experiments, allowing SPR detection of individual components. The performance of the SEC-SPR system was evaluated by the analysis of papain-digested anti-HSA sampled at different incubation time points. The new on-line SEC-SPR methodology allows specific label-free analysis of real-time interactions of eluting antibody sample constituents towards their antigenic target. PMID:27215465
DEFF Research Database (Denmark)
Rousing, Tine; Møller, Steen Henrik; Hansen, Steffen W
2012-01-01
The European Fur Breeders' Association initiated the "WelFur" project in 2009, aiming at developing an applicable on-farm welfare assessment protocol for mink based on the Welfare Quality® principles. Such a welfare assessment system should be "high" in validity, reliability, as well as feasibility, the latter both as regards time and economic costs. This paper, based on empirical data, addresses the question of the sample size needed for a robust herd assessment of animal-based measures, namely the animal-based part of the full WelFur protocol, which includes 9 animal-based measures expressed as herd prevalences of the mentioned parameters. Statistical analyses showed that a sample size of 125 adult mink was a robust estimate of the herd level of animal-based measures.
DEFF Research Database (Denmark)
Shetty, Nisha; Min, Tai-Gi; Gislum, René;
2011-01-01
The effects of the number of seeds in a training sample set on the ability to predict the viability of cabbage or radish seeds are presented and discussed. The supervised classification method extended canonical variates analysis (ECVA) was used to develop a classification model. Calibration sub… using all 600 seeds in the calibration set. Thus, the number of seeds in the calibration set can be reduced by up to 67% without significant loss of classification accuracy, which will effectively enhance the cost-effectiveness of NIR spectral analysis. Wavelength regions important…
Gerrit Voordouw; Priyesh Menon; Tijan Pinnock; Mohita Sharma; Yin Shen; Amanda Venturelli; Johanna Voordouw; Aoife Sexton
2016-01-01
Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured and CRs ba...
Chang, G. S.; Lillo, M. A.
2009-08-01
The National Nuclear Security Administration's (NNSA) Reduced Enrichment for Research and Test Reactors (RERTR) program assigned to the Idaho National Laboratory (INL) the responsibility of developing and demonstrating high-uranium-density research reactor fuel forms to enable the use of low-enriched uranium (LEU) in research and test reactors around the world. A series of full-size fuel plate experiments have been proposed for irradiation testing in the center flux trap (CFT) position of the Advanced Test Reactor (ATR). These full-size fuel plate tests are designated as the AFIP tests. The AFIP nominal fuel zone is rectangular in shape, having a designed length of 21.5-in (54.61-cm), width of 1.6-in (4.064-cm), and uniform thickness of 0.014-in (0.03556-cm). This gives a nominal fuel zone volume of 0.482 in3 (7.89 cm3) per fuel plate. The AFIP test assembly has two test positions. Each test position is designed to hold 2 full-size plates, for a total of 4 full-size plates per test assembly. The AFIP test plates will be irradiated at a peak surface heat flux of about 350 W/cm2 and discharged at a peak U-235 burn-up of about 70 at.%. Based on limited irradiation testing of the monolithic (U-10Mo) fuel form, it is desirable to keep the peak fuel temperature below 250°C; to achieve this, it will be necessary to keep plate heat fluxes below 500 W/cm2. Due to the heavy U-235 loading and a plate width of 1.6-in (4.064-cm), the neutron self-shielding will increase the local-to-average-ratio (L2AR) fission power near the sides of the fuel plates. To demonstrate that the AFIP experiment will meet the ATR safety requirements, a very detailed 2-dimensional (2D) Y-Z fission power profile was evaluated in order to best predict the fuel plate temperature distribution. The ability to accurately predict fuel plate power and burnup is essential to both the design of the AFIP tests and the evaluation of the irradiated fuel performance. To support this need, a detailed MCNP Y
Kolak, Jon; Hackley, Paul C.; Ruppert, Leslie F.; Warwick, Peter D.; Burruss, Robert
2015-01-01
To investigate the potential for mobilizing organic compounds from coal beds during geologic carbon dioxide (CO2) storage (sequestration), a series of solvent extractions using dichloromethane (DCM) and using supercritical CO2 (40 °C and 10 MPa) were conducted on a set of coal samples collected from Louisiana and Ohio. The coal samples studied range in rank from lignite A to high volatile A bituminous, and were characterized using proximate, ultimate, organic petrography, and sorption isotherm analyses. Sorption isotherm analyses of gaseous CO2 and methane show a general increase in gas storage capacity with coal rank, consistent with findings from previous studies. In the solvent extractions, both dry, ground coal samples and moist, intact core plug samples were used to evaluate effects of variations in particle size and moisture content. Samples were spiked with perdeuterated surrogate compounds prior to extraction, and extracts were analyzed via gas chromatography–mass spectrometry. The DCM extracts generally contained the highest concentrations of organic compounds, indicating the existence of additional hydrocarbons within the coal matrix that were not mobilized during supercritical CO2 extractions. Concentrations of aliphatic and aromatic compounds measured in supercritical CO2 extracts of core plug samples generally are lower than concentrations in corresponding extracts of dry, ground coal samples, due to differences in particle size and moisture content. Changes in the amount of extracted compounds and in surrogate recovery measured during consecutive supercritical CO2 extractions of core plug samples appear to reflect the transition from a water-wet to a CO2-wet system. Changes in coal core plug mass during supercritical CO2 extraction range from 3.4% to 14%, indicating that a substantial portion of coal moisture is retained in the low-rank coal samples. Moisture retention within core plug samples, especially in low-rank coals, appears to inhibit
International Nuclear Information System (INIS)
In the α-particle spectrometric technique for radium isotope analysis, the effects of the barium carrier thickness and pore size of the membrane filter (used for the main Ba-Ra sulphate filtration) on the resolution of the α-particle energy peaks were investigated. With 0.45 μm Millipore membrane filters, the full width at half maximum (FWHM) for 4.78 MeV α-decay peak of 226Ra decreased from 221.7 to 121.3 keV with reduction in barium carrier additions from 320 to 20 μg Ba. The resolution further improved to 67.0 keV for 20 μg Ba carrier when 0.2 μm Nucleopore filters were used. There was also a significant decrease (20-50%) in the retention of radon and its daughters compared to their equilibrium concentrations as the barium carrier thickness was reduced. A correlation study between 133Ba tracer and 226Ra isotopes recovery factors gave a recovery factor 226Ra/133Ba ratio of 0.93 ± 0.08 in the main Ba-Ra sulphate precipitate under various pH and sulphate concentration conditions. (author)
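The peak-resolution figures quoted above (FWHM of the 4.78 MeV line) can be extracted from a spectrum numerically. The sketch below computes a FWHM by interpolating the half-maximum crossings of a single peak; the Gaussian test spectrum is synthetic, standing in for a real alpha spectrum, and the 67 keV width is taken from the abstract only as an example value.

```python
import numpy as np

def fwhm(energy_kev, counts):
    """Full width at half maximum of a single peak, found by linear
    interpolation where the counts cross half of the peak height."""
    half = counts.max() / 2.0
    above = np.where(counts >= half)[0]
    lo, hi = above[0], above[-1]
    # interpolate the left and right half-maximum crossings
    e_left = np.interp(half, [counts[lo - 1], counts[lo]],
                       [energy_kev[lo - 1], energy_kev[lo]])
    e_right = np.interp(half, [counts[hi + 1], counts[hi]],
                        [energy_kev[hi + 1], energy_kev[hi]])
    return e_right - e_left

# Synthetic 4784 keV peak with a 67 keV FWHM (sigma = FWHM / 2.355).
e = np.arange(4500.0, 5100.0, 1.0)
spectrum = 1e4 * np.exp(-0.5 * ((e - 4784.0) / (67.0 / 2.355)) ** 2)
print(fwhm(e, spectrum))   # close to 67 keV
```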
Energy Technology Data Exchange (ETDEWEB)
Plionis, Alexander A [Los Alamos National Laboratory; Peterson, Dominic S [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory; Lamont, Stephen P [Los Alamos National Laboratory
2009-01-01
Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
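A lognormal particle-size distribution of the kind the impactor data are binned into is summarized by its activity median aerodynamic diameter (AMAD) and geometric standard deviation. The sketch below computes activity-weighted log-moments from per-stage data; the stage diameters and activities are hypothetical illustrations, not values from the study.

```python
import numpy as np

def lognormal_params(diam_um, activity):
    """Activity-weighted geometric mean (the AMAD if the distribution is
    lognormal) and geometric standard deviation from impactor-stage data."""
    w = np.asarray(activity, dtype=float)
    w = w / w.sum()
    log_d = np.log(diam_um)
    mu = np.sum(w * log_d)                       # mean of ln(d)
    sigma = np.sqrt(np.sum(w * (log_d - mu) ** 2))
    return np.exp(mu), np.exp(sigma)             # AMAD (um), GSD

# Hypothetical stage midpoints (um) and alpha activities (Bq) -- not
# measured values from the study.
stages = np.array([0.5, 0.9, 1.6, 3.5, 6.0, 9.8])
activity = np.array([2.0, 5.0, 11.0, 9.0, 4.0, 1.0])
amad, gsd = lognormal_params(stages, activity)
print(amad, gsd)
```

A distribution whose log-histogram shows two or three separated modes (bimodal or trimodal, as the abstract distinguishes) would be fitted per mode rather than with these single-mode moments.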
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multi-layer topology that is inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or are extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which are different from the standard computer vision tasks of object recognition. Thus, in order to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared towards medical imaging related tasks and can work in a setting where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a subregion of the original image that contains the organ of interest. By layering several such stacks together a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate
Directory of Open Access Journals (Sweden)
M.I. Baranov
2016-06-01
Purpose. Calculation and experimental studies of the electro-thermal resistibility of steel sheet samples to the action of the standard pulsed current components of artificial lightning, with amplitude-time parameters (ATP) corresponding to the requirements of the US normative documents SAE ARP 5412 and SAE ARP 5416. Methodology. Electrophysical foundations of high-voltage and large pulsed current (LIC) techniques, as well as the scientific and technical bases for designing high-voltage pulse devices and for measuring LIC in them. Current amplitude ImA = ±200 kA (tolerance ±10 %); current action integral JA = 2·10^6 A²·s (tolerance ±20 %); time to current amplitude tmA ≤ 50 μs; current flow duration τpA ≤ 500 μs. Results. Results of calculated and experimental studies of the electro-thermal resistance of 0.5 m × 0.5 m stainless steel plates, 1 mm thick, to artificial lightning impulse currents with ATP rated per SAE ARP 5412 and SAE ARP 5416. The pulsed A-component had a first amplitude of 192 kA with a corresponding time of 34 μs, and the aperiodic C-component an amplitude of 804 A with a corresponding time of 9 ms. It is shown that the long C-component of the artificial lightning current can burn through (keyhole) these samples; the diameter of the holes formed in this thin steel sheet by the C-component current can reach 15 mm. Calculation and experiment agree within 28 %. Originality. For the first time in world practice, experimental studies of the resistibility of sheet steel samples to artificial lightning currents with critical parameters were carried out on a large pulsed current generator. Practical value. Using the results obtained in the practice of lightning protection will significantly improve the
Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I
1998-05-01
A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date from one single site, and with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. Anatomical regions investigated are scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shaft; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample only the humeral midshaft perimeter shows an unusually high variation (only when it is expressed by the maximum ratio, not by the coefficient of variation). In spite of that, the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range. The maximum ratio is in the central part of the distribution of modern human samples. Thus, the hypothesis of a greater sexual dimorphism in Middle Pleistocene populations than in modern populations is not supported by either cranial or postcranial evidence from Sima de los Huesos. PMID:9590522
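The bootstrap comparison described can be sketched as follows: resample a modern reference population at the fossil sample's size many times, and ask how often the resampled maximum ratio (largest value over smallest) reaches the fossil value. The reference distribution and the fossil numbers below are hypothetical illustrations, not the study's measurements or its exact procedure.

```python
import numpy as np

def bootstrap_max_ratio(reference, fossil_n, fossil_ratio,
                        n_boot=10_000, seed=1):
    """Fraction of bootstrap samples (size fossil_n, drawn with replacement
    from a modern reference population) whose max/min ratio reaches the
    ratio observed in the fossil sample."""
    rng = np.random.default_rng(seed)
    boots = rng.choice(reference, size=(n_boot, fossil_n), replace=True)
    ratios = boots.max(axis=1) / boots.min(axis=1)
    return np.mean(ratios >= fossil_ratio)

# Hypothetical modern humeral-perimeter reference (mm) and fossil values.
rng = np.random.default_rng(0)
modern = rng.normal(loc=62.0, scale=4.5, size=200)
p = bootstrap_max_ratio(modern, fossil_n=8, fossil_ratio=1.35)
print(p)   # a small fraction flags unusually high variation for that n
```

The maximum ratio is more sensitive to single extreme specimens than the coefficient of variation, which is why the two statistics can disagree, as the abstract notes for the humeral midshaft perimeter.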
Directory of Open Access Journals (Sweden)
AR Silva
2011-03-01
The objective of this study was to determine the appropriate sample size, using a subsample simulation technique, for the characterization of morphological fruit traits in eight accessions (varieties) of four pepper species (Capsicum spp.) grown in an experimental area of the Universidade Federal da Paraíba (UFPB). Reduced sample sizes ranging from 3 to 29 fruits were analyzed, with 100 samples simulated for each size by sampling with data replacement. Analysis of variance was carried out, in a completely randomized design with two replications, on the minimum number of fruits per sample that represented the reference sample (30 fruits) for each studied variable; each data point was the first number of fruits in the simulated sample that showed no value outside the confidence interval of the reference sample and remained so up to the last subsample of the simulation. The simulation technique allowed sample-size reductions of around 50%, depending on the morphological variable, with the same precision as the 30-fruit sample and no differences among the accessions.
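The subsample-simulation idea can be sketched in a few lines: for each candidate size, draw many resamples with replacement and accept the smallest size whose simulated means stay inside the reference sample's confidence interval. The acceptance rule below (at least 90% of simulated means inside the interval) is a simplification of the published criterion, and the fruit-length data are simulated, not the study's measurements.

```python
import numpy as np

def minimum_sample_size(reference, sizes=range(3, 30), n_sim=100, seed=42):
    """Smallest subsample size whose simulated means mostly fall inside the
    reference sample's 95% confidence interval -- a simplified sketch of
    the subsample-simulation technique, not the published algorithm."""
    rng = np.random.default_rng(seed)
    ref_mean = reference.mean()
    half = 1.96 * reference.std(ddof=1) / np.sqrt(len(reference))
    for n in sizes:
        means = rng.choice(reference, size=(n_sim, n),
                           replace=True).mean(axis=1)
        if np.mean(np.abs(means - ref_mean) <= half) >= 0.90:
            return n
    return None   # no candidate size met the criterion

# Hypothetical reference sample of 30 fruit lengths (cm).
rng = np.random.default_rng(7)
fruits = rng.normal(loc=6.0, scale=0.8, size=30)
print(minimum_sample_size(fruits))
```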
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.
2015-01-01
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
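The combination of a singular value decomposition of the predictors with regularization can be illustrated with plain ridge regression computed through the SVD, which damps exactly the small-singular-value directions that multicollinearity produces. The LiDAR-like predictor matrix below is simulated, and the simple ridge penalty stands in for the paper's Bayesian formulation.

```python
import numpy as np

def ridge_svd(X, y, lam=1.0):
    """Ridge-regularized least squares via the SVD of the predictor matrix:
    beta = V diag(s / (s^2 + lam)) U^T y.  Directions with small singular
    values (near-collinear predictors) are shrunk most strongly."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ y))

# Hypothetical setup: 50 field plots, 30 strongly correlated LiDAR metrics
# built as six noisy copies of five underlying signals.
rng = np.random.default_rng(3)
base = rng.normal(size=(50, 5))
X = np.hstack([base + 0.05 * rng.normal(size=(50, 5)) for _ in range(6)])
beta_true = rng.normal(size=30)
y = X @ beta_true + rng.normal(scale=0.5, size=50)
beta = ridge_svd(X, y, lam=10.0)
pred = X @ beta
print(np.corrcoef(pred, y)[0, 1])   # in-sample fit remains high
```

With only 50 plots and 30 collinear predictors, ordinary least squares would overfit badly; the shrinkage factor s/(s²+λ) keeps the strong directions nearly intact while suppressing the unstable ones.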
International Nuclear Information System (INIS)
The aim of this study was to investigate the dose response relationship of dicentrics in human lymphocytes after CT scans at tube voltages of 80 and 140 kV. Blood samples from a healthy donor placed in tissue equivalent abdomen phantoms of standard, pediatric and adipose sizes were exposed at dose levels up to 0.1 Gy using a 64-slice CT scanner. It was found that both the tube voltage and the phantom size significantly influenced the CT scan-induced linear dose-response relationship of dicentrics in human lymphocytes. Using the same phantom (standard abdomen), 80 kV CT x-rays were biologically more effective than 140 kV CT x-rays. However, it could also be determined that the applied phantom size had much more influence on the biological effectiveness. Obviously, the increasing slopes of the CT scan-induced dose response relationships of dicentrics in human lymphocytes obtained in a pediatric, a standard and an adipose abdomen have been induced by scattering effects of photons, which strongly increase with increasing phantom size.
Jan, Pierre-Loup; Gracianne, Cécile; Fournet, Sylvain; Olivier, Eric; Arnaud, Jean-François; Porte, Catherine; Bardou-Valette, Sylvie; Denis, Marie-Christine; Petit, Eric J
2016-03-01
The sustainability of modern agriculture relies on strategies that can control the ability of pathogens to overcome chemicals or genetic resistances through natural selection. This evolutionary potential, which depends partly on effective population size (Ne), is greatly influenced by human activities. In this context, wild pathogen populations can provide valuable information for assessing the long-term risk associated with crop pests. In this study, we estimated the effective population size of the beet cyst nematode, Heterodera schachtii, by sampling 34 populations infecting the sea beet Beta vulgaris spp. maritima twice within a one-year period. Only 20 populations produced enough generations to analyze the variation in allele frequencies, with the remaining populations showing a high mortality rate of the host plant after only 1 year. The 20 analyzed populations showed surprisingly low effective population sizes, with most having Ne close to 85 individuals. We attribute these low values to the variation in population size through time, systematic inbreeding, and unbalanced sex-ratios. Our results suggest that H. schachtii has low evolutionary potential in natural environments. Pest control strategies in which populations on crops mimic wild populations may help prevent parasite adaptation to host resistance. PMID:26989440
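Estimating Ne from the shift in allele frequencies between two sampling dates is classically done with a temporal method. The sketch below uses the Nei–Tajima standardized variance Fc with Waples' sampling-size correction; this particular estimator is an assumption on our part, since the abstract does not specify which method was used, and the drift simulation (true Ne of 85, matching the abstract's headline value) is illustrative.

```python
import numpy as np

def temporal_ne(p0, pt, s0, st, generations):
    """Effective population size from temporal allele-frequency change:
    Nei-Tajima Fc averaged over loci, corrected for the sampling noise of
    s0 and st diploid individuals, then Ne = t / (2 * F_drift)."""
    p0, pt = np.asarray(p0, float), np.asarray(pt, float)
    fc = np.mean((p0 - pt) ** 2 / ((p0 + pt) / 2 - p0 * pt))
    f_drift = fc - 1 / (2 * s0) - 1 / (2 * st)
    return generations / (2 * f_drift)

# Simulated drift over 4 generations at true Ne = 85 (binomial sampling).
rng = np.random.default_rng(11)
ne_true, gens, n_loci = 85, 4, 200
p = rng.uniform(0.2, 0.8, n_loci)
q = p.copy()
for _ in range(gens):                        # genetic drift, one generation
    q = rng.binomial(2 * ne_true, q) / (2 * ne_true)
x = rng.binomial(2 * 50, p) / (2 * 50)       # sample of 50 at time 0
y = rng.binomial(2 * 50, q) / (2 * 50)       # sample of 50 at time t
print(temporal_ne(x, y, 50, 50, gens))       # estimate near 85
```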
George, Goldy C.; Hoelscher, Deanna M.; Nicklas, Theresa A.; Kelder, Steven H.
2016-01-01
Objective: To examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. Design: Cross-sectional data from the School Physical Activity and Nutrition study, a probability-based sample of schoolchildren. Children completed a questionnaire that assessed supplement use, food choices, diet-related attitudes, and physical activity; height and weight were measured. Setting: School classrooms. Participants: Representative sample of fourth-grade students in Texas (n = 5967; mean age = 9.7 years, standard error of the mean [SEM] = .03 years; 46% Hispanic, 11% African-American). Main Outcome Measures: Previous-day vitamin supplement consumption, diet- and body size-related attitudes, food choices, demographic factors, and physical activity. Analysis: Multivariable logistic regression models, P … body image and greater interest in trying new food. Relative to nonusers, supplement users were less likely to perceive that they always ate healthful food, although supplement use was associated with more healthful food choices in boys and girls (P < .001). Conclusions and Implications: The widespread use of supplements and the clustering of supplement use with a healthful diet and greater physical activity in fourth graders suggest that supplement use be closely investigated in studies of diet–disease precursor relations and lifestyle factors in children. PMID:19304254
International Nuclear Information System (INIS)
X-band electron magnetic resonance (EMR) measurements were done at 115≤T≤600 K on bulk and nanometer size-grain powder single-crystalline samples of La0.9Ca0.1MnO3, in order to study the impact of structural inhomogeneity on magnetic ordering. For the nano-crystal sample, two superimposed EMR lines are observed below 240 K, while for the bulk-crystal one, a second line emerges in a narrow temperature interval below 130 K. Temperature dependences of the resonance field and line width of the main and the secondary line are drastically different. EMR data and complementary magnetic measurements of the bulk-crystal sample reveal a mixed-magnetic phase, which agrees with the published phase diagram of bulk La1-xCaxMnO3. In marked contrast, the same analysis for the nano-crystal sample shows two phases, one of which is definitely ferromagnetic (FM) and the other likely such, or superparamagnetic. The data obtained are interpreted in terms of very different magnetic ground states in the two samples, which is attributed to different randomness of the indirect FM exchange interactions mediated by bound holes
Energy Technology Data Exchange (ETDEWEB)
Degirmenci, B. [Department of Radiology, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey)]. E-mail: bumin.degirmenci@gmail.com; Haktanir, A. [Department of Radiology, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey); Albayrak, R. [Department of Radiology, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey); Acar, M. [Department of Radiology, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey); Sahin, D.A. [Department of General Surgery, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey); Sahin, O. [Department of Pathology, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey); Yucel, A. [Department of Radiology, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey); Caliskan, G. [Department of Radiology, Faculty of Medicine, University of Kocatepe, Afyonkarahisar (Turkey)
2007-08-15
Aim: To evaluate the effects of the sonographic characteristics of thyroid nodules, the diameter of the needle used for sampling, and the sampling technique on obtaining sufficient cytological material (SCM). Materials and methods: We performed sonography-guided fine-needle biopsy (FNB) in 232 solid thyroid nodules. Size, echogenicity, vascularity, and localization of all nodules were evaluated by Doppler sonography before the biopsy. Needles of size 20, 22, and 24 G were used for biopsy. The biopsy specimen was acquired using two different methods after localisation. In the first method, the needle tip was advanced into the nodule in various positions using a to-and-fro motion whilst in the nodule, along with concurrent aspiration. In the second method, the needle was advanced vigorously using a to-and-fro motion within the nodule whilst being rotated on its axis (capillary-action technique). Results: The mean nodule size was 2.1 {+-} 1.3 cm (range 0.4-7.2 cm). SCM was acquired from 154 (66.4%) nodules by sonography-guided FNB. In 78 (33.6%) nodules, SCM could not be collected. There was no significant difference between nodules of different echogenicity and vascularity for SCM. Regarding the needle size, the lowest rate of SCM was obtained using 20 G needles (56.6%) and the highest rate of adequate material was obtained using 24 G needles (82.5%; p = 0.001). The SCM rate was 76.9% with the capillary-action technique versus 49.4% with the aspiration technique (p < 0.001). Conclusion: Selecting finer needles (24-25 G) for sonography-guided FNB of thyroid nodules and using the capillary-action technique decreased the rate of inadequate material in cytological examination.
Influence of sample size on bryophyte ecological indices
Institute of Scientific and Technical Information of China (English)
沈蕾; 郭水良; 宋洪涛; 娄玉霞; 曹同
2011-01-01
In order to analyze the influence of sample size on bryophyte ecological indices, plots were located by systematic sampling under relatively uniform ecological conditions, and bryophyte coverage was investigated by nested sampling with quadrat sizes of 20 cm×20 cm, 30 cm×30 cm, 40 cm×40 cm, 50 cm×50 cm and 60 cm×60 cm. A total of 73 plots comprising 365 quadrats were surveyed, with coverage recorded by visual estimation. The analyses showed that, as sampling area increased, the visually estimated coverage of dominant species and of total bryophytes tended to decrease, whereas the coverage of non-dominant and occasional species tended to increase; the larger the difference between quadrat sizes, the larger the differences among the resulting survey data. Diversity indices, niche width and niche overlap, and the mean number of bryophyte species per quadrat all increased with sampling area following a saturation curve. Sampling area also clearly affected the analysis of the relationship between environmental variables and bryophyte distribution. In relatively homogeneous terricolous habitats, a sampling area of 40 cm×40 cm to 50 cm×50 cm can be considered appropriate for bryophyte communities.
Estimation of sample size and testing power (Part 5)
Institute of Scientific and Technical Information of China (English)
胡良平; 鲍晓蕾; 关雪; 周诗国
2012-01-01
Estimation of sample size and testing power is an important component of research design. This article introduces methods of sample size and testing power estimation for difference tests on quantitative and qualitative data under the single-group, paired, and crossover designs. Specifically, it presents the estimation formulas for these three designs, their realization both from the formulas and with the POWER procedure of SAS, and worked examples, which should help researchers implement the repetition principle correctly.
Czuba, Jonathan A.; Straub, Timothy D.; Curran, Christopher A.; Landers, Mark N.; Domanski, Marian M.
2014-01-01
Laser-diffraction technology, recently adapted for in-stream measurement of fluvial suspended-sediment concentrations (SSCs) and particle-size distributions (PSDs), was tested with a streamlined (SL), isokinetic version of the Laser In-Situ Scattering and Transmissometry (LISST) for measuring volumetric SSCs and PSDs ranging from 1.8-415 µm in 32 log-spaced size classes. Measured SSCs and PSDs from the LISST-SL were compared to a suite of 22 datasets (262 samples in all) of concurrent suspended-sediment and streamflow measurements using a physical sampler and acoustic Doppler current profiler collected during 2010-12 at 16 U.S. Geological Survey streamflow-gaging stations in Illinois and Washington (basin areas: 38-69,264 km²). An unrealistically low computed effective density (mass SSC / volumetric SSC) of 1.24 g/ml (95% confidence interval: 1.05-1.45 g/ml) provided the best-fit value (R² = 0.95; RMSE = 143 mg/L) for converting volumetric SSC to mass SSC over 2 orders of magnitude of SSC (12-2,170 mg/L; covering a substantial range of the SSC that can be measured by the LISST-SL), despite being substantially lower than the sediment particle density of 2.67 g/ml (range: 2.56-2.87 g/ml, 23 samples). The PSDs measured by the LISST-SL were in good agreement with those derived from physical samples over the LISST-SL's measurable size range. Technical and operational limitations of the LISST-SL are provided to facilitate the collection of more accurate data in the future. Additionally, the spatial and temporal variability of SSC and PSD measured by the LISST-SL is briefly described to motivate its potential for advancing our understanding of suspended-sediment transport by rivers.
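The "effective density" above is the best-fit slope converting volumetric SSC to mass SSC. A minimal sketch of such a fit, a least-squares slope through the origin, on hypothetical paired measurements (not the study's data):

```python
def effective_density(vol_ul_per_l, mass_mg_per_l):
    """Least-squares slope through the origin for mass = rho * volume:
    rho = sum(m*v) / sum(v*v). With volume in ul/L and mass in mg/L,
    rho comes out in mg/ul, i.e. g/ml."""
    num = sum(m * v for m, v in zip(mass_mg_per_l, vol_ul_per_l))
    den = sum(v * v for v in vol_ul_per_l)
    return num / den

# Hypothetical paired measurements (illustrative values only):
vol = [10.0, 50.0, 200.0, 800.0]     # volumetric SSC, ul/L
mass = [12.0, 63.0, 250.0, 990.0]    # mass SSC, mg/L
rho = effective_density(vol, mass)   # -> ~1.24 g/ml for these numbers
```

Fitting through the origin reflects the physical constraint that zero volumetric concentration implies zero mass concentration.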
DEFF Research Database (Denmark)
Petersen, Kurt Erling
1986-01-01
… approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very … complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for the implementation of importance sampling are suggested.
Estimation of sample size and testing power (Part 4)
Institute of Scientific and Technical Information of China (English)
胡良平; 鲍晓蕾; 关雪; 周诗国
2012-01-01
Sample size estimation is necessary for any experimental or survey research, and an appropriate estimate based on known information and statistical knowledge is of great significance in practice. This article introduces methods of sample size estimation for difference tests under the design of one factor with two levels, including the estimation formulas and their realization both from the formulas and with the POWER procedure of SAS, for quantitative and qualitative data. Worked examples are presented to guide researchers in implementing the repetition principle during the design phase.
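For the two-group (one factor, two levels) comparison of means described above, the usual normal-approximation formula is n = 2((z_{1-α/2} + z_{1-β})σ/δ)² per group. A sketch of that calculation, a simplified stand-in for the SAS POWER procedure rather than the authors' own code:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-sample comparison of means,
    normal approximation: n = 2 * ((z_a + z_b) * sigma / delta)^2."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    return ceil(2 * (sigma * (z_a + z_b) / delta) ** 2)

# Detect a mean difference of 5 units when sigma = 10 (illustrative values):
n = n_per_group(delta=5, sigma=10)   # -> 63 per group
```

The exact t-based calculation (as in PROC POWER) gives a slightly larger n because it accounts for estimating sigma.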
Energy Technology Data Exchange (ETDEWEB)
Wang, Chuji [U.S. Army Research Laboratory, Adelphi, MD 20783 (United States); Mississippi State University, Starkville, MS, 39759 (United States); Pan, Yong-Le, E-mail: yongle.pan.civ@mail.mil [U.S. Army Research Laboratory, Adelphi, MD 20783 (United States); James, Deryck; Wetmore, Alan E. [U.S. Army Research Laboratory, Adelphi, MD 20783 (United States); Redding, Brandon [Yale University, New Haven, CT 06510 (United States)
2014-04-01
Highlights: • A dual wavelength UV-LIF spectra-rotating drum impactor (RDI) technique was developed. • The technique was demonstrated by direct on-strip analysis of size- and time-resolved LIF spectra of atmospheric aerosol particles. • More than 2000 LIF spectra of atmospheric aerosol particles collected over three weeks in Djibouti were obtained and assigned to various fluorescence clusters. • The LIF spectra showed size- and time-sensitivity behavior with a time resolution of 3.6 h. - Abstract: We report a novel atmospheric aerosol characterization technique, in which dual wavelength UV laser induced fluorescence (LIF) spectrometry marries an eight-stage rotating drum impactor (RDI), namely UV-LIF-RDI, to achieve size- and time-resolved analysis of aerosol particles on-strip. The UV-LIF-RDI technique measured LIF spectra via direct laser beam illumination onto the particles that were impacted on a RDI strip with a spatial resolution of 1.2 mm, equivalent to an averaged time resolution in the aerosol sampling of 3.6 h. Excited by a 263 nm or 351 nm laser, more than 2000 LIF spectra within a 3-week aerosol collection time period were obtained from the eight individual RDI strips that collected particles in eight different sizes ranging from 0.09 to 10 μm in Djibouti. Based on the known fluorescence database from atmospheric aerosols in the US, the LIF spectra obtained from the Djibouti aerosol samples were found to be dominated by fluorescence clusters 2, 5, and 8 (peaked at 330, 370, and 475 nm) when excited at 263 nm and by fluorescence clusters 1, 2, 5, and 6 (peaked at 390 and 460 nm) when excited at 351 nm. Size- and time-dependent variations of the fluorescence spectra revealed some size and time evolution behavior of organic and biological aerosols from the atmosphere in Djibouti. Moreover, this analytical technique could locate the possible sources and chemical compositions contributing to these fluorescence clusters. Advantages, limitations, and
A Note on Strategic Sampling in Agencies
Robert Bushman; Chandra Kanodia
1996-01-01
This paper studies sample design for process control in principal-agent settings where deterrence rather than ex post detection is the main issue. We show how the magnitude of gains from additional sampling can be calculated and traded off against sampling costs. It is shown that the optimal sample size shrinks as target rates are lowered.
D'Huys, Elke; Seaton, Daniel B; Poedts, Stefaan
2016-01-01
Many natural processes exhibit power-law behavior. The power-law exponent is linked to the underlying physical process and therefore its precise value is of interest. With respect to the energy content of nanoflares, for example, a power-law exponent steeper than 2 is believed to be a necessary condition to solve the enigmatic coronal heating problem. Studying power-law distributions over several orders of magnitudes requires sufficient data and appropriate methodology. In this paper we demonstrate the shortcomings of some popular methods in solar physics that are applied to data of typical sample sizes. We use synthetic data to study the effect of the sample size on the performance of different estimation methods and show that vast amounts of data are needed to obtain a reliable result with graphical methods (where the power-law exponent is estimated by a linear fit on a log-transformed histogram of the data). We revisit published results on power laws for the angular width of solar coronal mass ejections an...
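The contrast the authors draw, graphical fitting versus likelihood-based estimation, can be illustrated with the standard continuous power-law MLE, α̂ = 1 + n / Σ ln(x_i/x_min) (the Clauset-Shalizi-Newman estimator). A sketch on synthetic data; the sample size, exponent, and seed are arbitrary choices:

```python
import random
from math import log

def sample_powerlaw(n, alpha, xmin=1.0, seed=0):
    """Inverse-transform sampling from p(x) ~ x^-alpha for x >= xmin."""
    rng = random.Random(seed)
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def mle_exponent(xs, xmin=1.0):
    """Continuous power-law MLE: alpha_hat = 1 + n / sum(ln(x/xmin))."""
    return 1 + len(xs) / sum(log(x / xmin) for x in xs)

data = sample_powerlaw(5000, alpha=2.3)
alpha_hat = mle_exponent(data)   # close to the true 2.3 at n = 5000
```

Unlike a linear fit to a log-transformed histogram, the MLE needs no binning, which is one reason it behaves far better at the sample sizes typical in solar physics.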
Energy Technology Data Exchange (ETDEWEB)
Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non
DEFF Research Database (Denmark)
Guenther, Claudia C.; Temming, Axel; Baumann, Hannes;
2012-01-01
An individual-based length back-calculation method was developed for juvenile Baltic sprat (Sprattus sprattus), accounting for ontogenetic changes in the relationship between fish length and otolith length. In sprat, metamorphosis from larvae to juveniles is characterized by the coincidence of low...
Injuk, J.; Otten, Ph.; Laane, R.; Maenhaut, W.; Van Grieken, R.
In an effort to assess the atmospheric input of heavy metals to the Southern Bight of the North Sea, aircraft-based aerosol samplings in the lower troposphere were performed between September 1988 and October 1989. Total atmospheric particulate and size-differentiated concentrations of Cd, Cu, Pb and Zn were determined as a function of altitude, wind direction, air-mass history and season. The data obtained are compared with results of ship-based measurements carried out previously and with literature values of Cu, Pb and Zn for the marine troposphere of the southern North Sea. The results highlight the high variability of the concentrations with meteorological conditions, as well as with time and location. The experimentally determined particle size distributions are bimodal, with a significant difference between the fractions of small and large particles. These large aerosol particles have a direct and essential impact on the air-to-sea transfer of anthropogenic trace metals, in spite of their low numerical abundance and relatively low heavy-metal content.
Energy Technology Data Exchange (ETDEWEB)
Andre, F.; Cariou, R.; Antignac, J.P.; Le Bizec, B. [Ecole Nationale Veterinaire de Nantes (FR). Laboratoire d' Etudes des Residus et Contaminants dans les Aliments (LABERCA); Debrauwer, L.; Zalko, D. [Institut National de Recherches Agronomiques (INRA), 31-Toulouse (France). UMR 1089 Xenobiotiques
2004-09-15
The impact of brominated flame retardants on the environment and their potential risk to animal and human health are a present concern for the scientific community. Numerous studies related to the detection of tetrabromobisphenol A (TBBP-A) and polybrominated diphenylethers (PBDEs) have been developed over the last few years; they were mainly based on GC-ECD, GC-NCI-MS or GC-EI-HRMS, and recently GC-EI-MS/MS. The sample treatment is usually derived from the analytical methods used for dioxins, but recently some authors have proposed the use of solid-phase extraction (SPE) cartridges. In this study, a new analytical strategy is presented for the multi-residue analysis of TBBP-A and PBDEs from a single, reduced-size sample. The main objective of this analytical development is its application to background exposure assessment of French population groups to brominated flame retardants, for which, to our knowledge, no data exist. A second objective is to provide an efficient analytical tool to study the transfer of these contaminants through the environment to living organisms, including degradation reactions and metabolic biotransformations.
Belli, Sirio; Ellis, Richard S
2014-01-01
We analyze the stellar populations of a sample of 62 massive (log Mstar/Msun > 10.7) galaxies in the redshift range 1 < z < 1.6, with the main goal of investigating the role of recent quenching in the size growth of quiescent galaxies. We demonstrate that our sample is not biased toward bright, compact, or young galaxies, and thus is representative of the overall quiescent population. Our high signal-to-noise ratio Keck LRIS spectra probe the rest-frame Balmer break region which contains important absorption line diagnostics of recent star formation activity. We show that improved measures of the stellar population parameters, including the star-formation timescale tau, age and dust extinction, can be determined by fitting templates jointly to our spectroscopic and broad-band photometric data. These parameter fits allow us to backtrack the evolving trajectory of individual galaxies on the UVJ color-color plane. In addition to identifying which quiescent galaxies were recently quenched, we discover impor...
Study of strength degradation law of damaged rock sample and its size effect
Institute of Scientific and Technical Information of China (English)
靖洪文; 苏海健; 杨大林; 王辰; 孟波
2012-01-01
The degradation law of damaged rock samples is an important topic in rock mechanics. A new method was used to precast damaged samples, which were subjected to uniaxial and triaxial compression tests, and the results were compared with those of nearly homogeneous intact samples. Under uniaxial compression, splitting failure was accompanied by slant fracture failure. With increasing confining pressure, a new, nearly horizontal failure tended to appear at the cementation area. The strength degradation of the damaged sample relative to the intact sample increased with confining pressure, but the rate of increase gradually fell. Based on the laboratory tests and particle flow code (PFC) numerical software, the size effect on the uniaxial strength degradation of damaged samples was studied. The research shows that the uniaxial strength degradation fell with increasing height-diameter ratio but tended to level off. A theoretical model of strength degradation under uniaxial compression was proposed as Δσ = Δσ0(a + b/λ), where Δσ is the uniaxial strength degradation of any damaged sample, Δσ0 is the uniaxial strength degradation of the standard damaged sample, λ is the height-diameter ratio of the cylindrical sample, and a and b are material parameters. The theoretical curve is consistent with the test values. The calculation shows that as the size becomes infinitely large, the uniaxial strength degradation approaches ΔσR.
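The proposed model Δσ = Δσ0(a + b/λ) is linear in 1/λ, so the material parameters a and b can be obtained by ordinary least squares on measured degradation ratios. A sketch on hypothetical measurements (not the paper's data):

```python
def fit_degradation(lambdas, ratios):
    """Least-squares fit of ratio = a + b/lam, i.e. the model
    d_sigma = d_sigma0 * (a + b/lam). Returns (a, b)."""
    xs = [1 / lam for lam in lambdas]            # regress on 1/lambda
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ratios) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ratios))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

# Hypothetical ratios d_sigma / d_sigma0 at several height-diameter ratios:
lambdas = [1.0, 1.5, 2.0, 2.5, 3.0]
ratios = [1.40, 1.27, 1.20, 1.16, 1.13]
a, b = fit_degradation(lambdas, ratios)   # a near 1.0, b near 0.4 here
```

As λ grows, b/λ vanishes and the fitted a·Δσ0 plays the role of the asymptotic degradation ΔσR mentioned in the abstract.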
Institute of Scientific and Technical Information of China (English)
张忠启; 于法展; 于东升; 胡丹
2016-01-01
…, and then spatial distribution maps of SOC in 1982 and 2007 were plotted by means of Kriging. Through raster overlay of the two SOC content maps, the temporal variation of SOC content during 1982-2007 was characterized and then used as the basis for estimating the number of sampling sites needed to capture that variation. [Result] The mean SOC content increased from 14.18 to 16.27 g/kg over 1982-2007, a growth of 14.74%, while its coefficient of variation (CV) rose from 0.22 to 0.44, both large increases. SOC content varied with land use, however. Among the three main land-use patterns, paddy fields and forest lands rose in SOC from 15.10 to 18.02 g/kg and from 12.63 to 15.75 g/kg (by 19.34% and 24.70%, respectively) during 1982-2007, whereas uplands decreased from 11.62 to 9.07 g/kg, or by 21.94%. Meanwhile, all three patterns showed a drastic increase in the CV of SOC content. In terms of the spatial distribution of the variation, the northern and southwestern parts of Yujiang County showed a substantially increasing trend, while the central-eastern part showed a declining trend, closely related to the spatial distribution of land use. Based on the 1982 and 2007 sample data, with confidence intervals of 95% and 90%, the number of sampling sites required to reveal the temporal variability of SOC content in the whole county was calculated to be 186 and 147 for the two years, respectively. Based on the SOC variation under different land uses, the number of sampling sites for paddy field, dry land, and forest land was 68, 44, and 144, respectively, at the 95% confidence level, and 54, 34, and 112, respectively, at the 60% confidence level.
Generally, the sample size for upland should be 60% or over of that for paddy field, while that for forest
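The abstract does not give its exact site-number formula, but a common choice for this kind of estimate is n = (z·CV/E)², the number of sites needed to estimate a mean within relative error E at a given confidence level. A sketch using the 2007 county-wide CV of 0.44 and an assumed ±10% error tolerance (the tolerance is our assumption, not stated in the abstract):

```python
from math import ceil
from statistics import NormalDist

def n_sites(cv, rel_error, confidence=0.95):
    """Sites needed to estimate a mean within +/- rel_error (fraction of
    the mean), given coefficient of variation cv: n = (z * cv / rel_error)^2."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    return ceil((z * cv / rel_error) ** 2)

n95 = n_sites(0.44, 0.10, 0.95)   # -> 75 sites at 95% confidence
n90 = n_sites(0.44, 0.10, 0.90)   # -> 53 sites at 90% confidence
```

The study's larger numbers (186 and 147) presumably reflect its actual variance structure and error tolerance; the formula shows how the required n scales with CV and confidence.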
Ross, Kenneth N.
1987-01-01
This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)
Institute of Scientific and Technical Information of China (English)
LIANG Bingbing; YUE Xin; WANG Hongxia; LIU Baozhong
2016-01-01
The precise and accurate knowledge of genetic parameters is a prerequisite for making efficient selection strategies in breeding programs. A number of heritability estimates for important economic traits in many marine mollusks are available in the literature; however, very little research has evaluated the accuracy of genetic parameters estimated with different family structures. Thus, in the present study, the effect of parent sample size on the precision of genetic parameter estimates for four growth traits in the clam M. meretrix under factorial designs was analyzed through restricted maximum likelihood (REML) and Bayesian inference. The results showed that the average estimated heritabilities of growth traits obtained from REML were 0.23–0.32 for 9 and 16 full-sib families and 0.19–0.22 for 25 full-sib families. With Bayesian inference, the average estimated heritabilities were 0.11–0.12 for 9 and 16 full-sib families and 0.13–0.16 for 25 full-sib families. Compared with REML, Bayesian inference gave lower heritabilities, but they still remained at a medium level. When the number of parents increased from 6 to 10, the estimated heritabilities moved closer to 0.20 under REML and 0.12 under Bayesian inference. Genetic correlations among traits were positive and high, and did not differ significantly between designs of different sizes. The estimated breeding values from the 9 and 16 families were less precise than those from 25 families. Our results provide a basic genetic evaluation for growth traits and should be useful for the design and operation of a practical selective breeding program in the clam M. meretrix.
Estimation of sample size and testing power (Part 3)
Institute of Scientific and Technical Information of China (English)
胡良平; 鲍晓蕾; 关雪; 周诗国
2011-01-01
This article introduces the definitions and sample size estimation methods for three special tests (namely, the non-inferiority test, the equivalence test and the superiority test) for qualitative data under the design of one factor with two levels with a binary response variable. A non-inferiority trial aims to show that the efficacy of the experimental drug is not clinically inferior to that of the positive control drug. An equivalence trial aims to show that the experimental and control drugs are clinically equivalent in efficacy. A superiority trial aims to show that the efficacy of the experimental drug is clinically superior to that of the control drug. Through specific examples, this article presents the sample size estimation formulas for the three special tests and their realization in SAS.
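For the binary-outcome non-inferiority test described above, a standard normal-approximation formula is n = (z_{1-α} + z_{1-β})²[p_T(1-p_T) + p_C(1-p_C)] / (p_T - p_C + δ)² per group, with a one-sided α and margin δ. A sketch (a simplified stand-in for the article's SAS realization; the example rates and margin are assumptions):

```python
from math import ceil
from statistics import NormalDist

def n_noninferiority(p_t, p_c, margin, alpha=0.025, power=0.80):
    """Per-group n for a non-inferiority test of two proportions,
    H0: p_t - p_c <= -margin, one-sided alpha, normal approximation."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    var = p_t * (1 - p_t) + p_c * (1 - p_c)     # unpooled variance term
    return ceil((z_a + z_b) ** 2 * var / (p_t - p_c + margin) ** 2)

# Both drugs assumed 85% effective, 10% non-inferiority margin:
n = n_noninferiority(0.85, 0.85, 0.10)   # -> 201 per group
```

Equivalence tests use the same ingredients with two one-sided tests, which roughly doubles the required power input and hence inflates n.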
Tatiani Reis da Silveira; Marcos Toebe; Betânia Brum; Sidinei José Lopes; Alberto Cargnelutti Filho; Gabriele Casarotto
2012-01-01
In the study of linear relationships, it is important to define the sample size correctly in order to estimate the Pearson correlation coefficient among pairs of characters with acceptable reliability. The aim of this research was to determine the sample size (number of plants) needed to estimate the Pearson correlation coefficient among 21 characters of castor bean. 41 and 55 plants of the Sara and Lyra hybrids, respectively, were evaluated regarding characters of the seed, seedling, adult plant and ...
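A common way to size such a correlation study is via Fisher's z transform: n = ((z_{1-α/2} + z_{1-β})/C)² + 3, where C = ½ ln((1+r)/(1-r)). A sketch (the target correlation, α, and power are illustrative assumptions, not values from the paper):

```python
from math import ceil, log
from statistics import NormalDist

def n_for_correlation(r, alpha=0.05, power=0.80):
    """n needed to detect a Pearson correlation r (H0: rho = 0, two-sided)
    via Fisher's z transform: n = ((z_a + z_b) / C)^2 + 3."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    c = 0.5 * log((1 + r) / (1 - r))            # Fisher z of the target r
    return ceil(((z_a + z_b) / c) ** 2 + 3)

n = n_for_correlation(0.5)   # -> 30 plants with these defaults
```

The required n grows quickly as the target correlation weakens, e.g. n_for_correlation(0.3) gives 85.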
Directory of Open Access Journals (Sweden)
Marcel Holyoak
In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005-2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent
Holyoak, Marcel; Meese, Robert J; Graves, Emily E
2014-01-01
In metapopulations in which habitat patches vary in quality and occupancy it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005-2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent investigation. The method we
International Nuclear Information System (INIS)
To recommend the optimal plan parameter set of grid size and angular increment for dose calculations in treatment planning for lung stereotactic body radiation therapy (SBRT) using dynamic conformal arc therapy (DCAT), considering both accuracy and computational efficiency. Dose variations with varying grid sizes (2, 3, and 4 mm) and angular increments (2°, 4°, 6°, and 10°) were analyzed in a thorax phantom for 3 spherical target volumes and in 9 patient cases. A 2-mm grid size and 2° angular increment are assumed sufficient to serve as reference values. The dosimetric effect was evaluated using dose–volume histograms, monitor units (MUs), and dose to organs at risk (OARs) for a definite volume corresponding to the dose–volume constraint in lung SBRT. The times required for dose calculations using each parameter set were compared for clinical practicality. Larger grid sizes caused a dose increase to the structures and required higher MUs to achieve the target coverage. The discrete beam arrangements at each angular increment led to over- and under-estimated OAR doses due to the undulating dose distribution. When a 2° angular increment was used in both studies, a 4-mm grid size changed the dose variation by up to 3–4% (50 cGy) for the heart and the spinal cord, while a 3-mm grid size produced a dose difference of <1% (12 cGy) in all tested OARs. When a 3-mm grid size was employed, angular increments of 6° and 10° caused maximum dose variations of 3% (23 cGy) and 10% (61 cGy) in the spinal cord, respectively, while a 4° increment resulted in a dose difference of <1% (8 cGy) in all cases except for that of one patient. The 3-mm grid size and 4° angular increment enabled a 78% savings in computation time without any critical sacrifice of dose accuracy. A parameter set with a 3-mm grid size and a 4° angular increment is found to be appropriate for predicting patient dose distributions with a dose difference below 1% while reducing the
Estimation of sample size and testing power (Part 6)
Institute of Scientific and Technical Information of China (English)
胡良平; 鲍晓蕾; 关雪; 周诗国
2012-01-01
The design of one factor with k levels (k≥3) refers to research that involves only one experimental factor with k levels (k≥3), with no planned arrangement of any other important non-experimental factors. This paper introduces the estimation of sample size and testing power for quantitative data, and for qualitative data with a binary response variable, under the design of one factor with k levels (k≥3).
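As a concrete illustration of the kind of calculation this series describes, the minimal sketch below estimates the per-group sample size for a one-factor, k-level design with quantitative data, using the noncentral-F power function. The effect size, alpha, and power targets are illustrative choices, not values from the paper.

```python
# Sketch: minimum per-group sample size for a one-way, k-level (k >= 3) design,
# found by scanning n until the noncentral-F power reaches the target.
from scipy.stats import f as f_dist, ncf

def anova_sample_size(k, effect_f, alpha=0.05, target_power=0.80):
    """Smallest n per group so the one-way ANOVA F-test reaches target power."""
    n_per_group = 2
    while True:
        n_total = k * n_per_group
        df1, df2 = k - 1, n_total - k
        nc = n_total * effect_f ** 2                  # noncentrality parameter
        f_crit = f_dist.ppf(1 - alpha, df1, df2)      # rejection threshold
        power = 1 - ncf.cdf(f_crit, df1, df2, nc)     # P(reject | effect_f)
        if power >= target_power:
            return n_per_group, power
        n_per_group += 1

n, p = anova_sample_size(k=3, effect_f=0.25)   # Cohen's "medium" effect f
```

For k = 3 groups and a medium effect (f = 0.25), this lands near the familiar value of roughly 50 subjects per group.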
Zhou, H; Gong, J; Brisbin, J T; Yu, H; Sanei, B; Sabour, P; Sharif, S
2007-12-01
The bacterial microbiota in the broiler gastrointestinal tract are crucial for chicken health and growth. Their composition can vary among individual birds. To evaluate the composition of chicken microbiota in response to environmental disruption accurately, 4 different pools made up of 2, 5, 10, and 15 individuals were used to determine how many individuals in each pool were required to assess the degree of variation when using the PCR-denaturing gradient gel electrophoresis (DGGE) profiling technique. The correlation coefficients among 3 replicates within each pool group indicated that the optimal sample size for comparing PCR-DGGE bacterial profiles and downstream applications (such as identifying treatment effects) was 5 birds per pool for cecal microbiota. Subsequently, digesta from 5 birds was pooled to investigate the effects on the microbiota composition of the 2 most commonly used dietary antibiotics (virginiamycin and bacitracin methylene disalicylate) at 2 different doses by using PCR-DGGE, DNA sequencing, and quantitative PCR techniques. Thirteen DGGE DNA bands were identified, representing bacterial groups that had been affected by the antibiotics. Nine of them were validated. The effect of dietary antibiotics on the microbiota composition appeared to be dose and age dependent. These findings provide a working model for elucidating the mechanisms of antibiotic effects on the chicken intestinal microbiota and for developing alternatives to dietary antibiotics. PMID:18029800
Guilleux, Alice; Blanchin, Myriam; Hardouin, Jean-Benoit; Sébille, Véronique
2014-01-01
Patient-reported outcomes (PRO) have gained importance in clinical and epidemiological research and aim at assessing quality of life, anxiety or fatigue for instance. Item Response Theory (IRT) models are increasingly used to validate and analyse PRO. Such models relate observed variables to a latent variable (unobservable variable) which is commonly assumed to be normally distributed. A priori sample size determination is important to obtain adequately powered studies to determine clinically important changes in PRO. In previous developments, the Raschpower method has been proposed for the determination of the power of the test of group effect for the comparison of PRO in cross-sectional studies with an IRT model, the Rasch model. The objective of this work was to evaluate the robustness of this method (which assumes a normal distribution for the latent variable) to violations of distributional assumption. The statistical power of the test of group effect was estimated by the empirical rejection rate in data sets simulated using a non-normally distributed latent variable. It was compared to the power obtained with the Raschpower method. In both cases, the data were analyzed using a latent regression Rasch model including a binary covariate for group effect. For all situations, both methods gave comparable results whatever the deviations from the model assumptions. Given the results, the Raschpower method seems to be robust to the non-normality of the latent trait for determining the power of the test of group effect. PMID:24427276
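The empirical-rejection-rate idea can be sketched generically. This is not the Raschpower implementation: a plain two-group t-test stands in for the latent regression Rasch model, purely to keep the sketch short, and all parameter values are illustrative. The point it shows is the method described above: simulate many data sets with a non-normally distributed latent variable, test the group effect in each, and report the fraction of rejections.

```python
# Power estimated as the empirical rejection rate over simulated data sets
# in which the latent trait is deliberately non-normal (a skewed gamma).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def empirical_power(n_per_group=100, group_effect=0.4, n_sim=500, alpha=0.05):
    """Fraction of simulated data sets in which the group effect is detected."""
    rejections = 0
    for _ in range(n_sim):
        # Centred skewed-gamma "latent" scores; second group shifted by the effect.
        g0 = rng.gamma(2.0, 1.0, n_per_group) - 2.0
        g1 = rng.gamma(2.0, 1.0, n_per_group) - 2.0 + group_effect
        if ttest_ind(g0, g1).pvalue < alpha:
            rejections += 1
    return rejections / n_sim

power = empirical_power()
```

Running the same function with `group_effect=0.0` checks that the rejection rate under the null stays near alpha, which is the usual sanity check for this kind of simulation.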
Cooling rate calculations for silicate glasses.
Birnie, D. P., III; Dyar, M. D.
1986-03-01
Series solution calculations of cooling rates are applied to a variety of samples with different thermal properties, including an analog of an Apollo 15 green glass and a hypothetical silicate melt. Cooling rates for the well-studied green glass and a generalized silicate melt are tabulated for different sample sizes, equilibration temperatures and quench media. Results suggest that cooling rates are heavily dependent on sample size and quench medium and are less dependent on values of physical properties. Thus cooling histories for glasses from planetary surfaces can be estimated on the basis of size distributions alone. In addition, the variation of cooling rate with sample size and quench medium can be used to control quench rate.
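The qualitative dependence on sample size and quench medium can be illustrated with a far simpler model than the series solution used in the paper: a lumped (Newtonian) cooling sketch. All property values below are made-up placeholders for a silicate melt, not the paper's data.

```python
# Hedged sketch: for lumped-capacitance cooling of a sphere, the initial
# cooling rate scales with the quench medium's heat-transfer coefficient h
# and inversely with sample radius (A/V = 3/r for a sphere).
import math

def initial_cooling_rate(radius_m, h_quench, rho=2800.0, cp=1400.0,
                         t_sample=1700.0, t_medium=300.0):
    """dT/dt (K/s) at the moment of quenching (illustrative properties)."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    area = 4.0 * math.pi * radius_m ** 2
    return h_quench * area * (t_sample - t_medium) / (rho * cp * volume)

# Smaller samples and more aggressive quench media cool faster:
rate_small = initial_cooling_rate(0.5e-3, h_quench=1000.0)  # 0.5 mm, water-like
rate_large = initial_cooling_rate(5e-3, h_quench=1000.0)    # 5 mm sphere
rate_air = initial_cooling_rate(0.5e-3, h_quench=50.0)      # air-like quench
```

A tenfold reduction in radius gives exactly a tenfold increase in initial cooling rate under this model, which is the size-dominance the abstract reports (the real series solution refines, but does not overturn, this scaling).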
Gritti, Fabrice; Guiochon, Georges
2009-06-01
A general reduced HETP (height equivalent to a theoretical plate) equation is proposed that accounts for the mass transfer of a wide range of molecular-weight compounds in monolithic columns. The detailed derivation of each of the individual, independent mass transfer contributions (longitudinal diffusion, eddy dispersion, film mass transfer resistance, and trans-skeleton mass transfer resistance) is discussed. The reduced HETPs of a series of small molecules (phenol, toluene, acenaphthene, and amylbenzene) and of a larger molecule, insulin, were measured on three research-grade monolithic columns (M150, M225, M350) having different average pore sizes (approximately 150, 225, and 350 Å, respectively) but the same dimensions (100 mm x 4.6 mm). The first and second central moments of 2 µL samples were measured and corrected for the extra-column contributions. The h data were fitted to the new HETP equation in order to identify which contribution controls the band broadening in monolithic columns. The contribution of the B-term was found to be negligible compared to that of the A-term, even at very low reduced velocities (nu […]) mass transfer across the column. Experimental chromatograms exhibited variable degrees of systematic peak fronting, depending on the column studied. The heterogeneity of the distribution of eluent velocities from the column center to its wall (average 5%) is the source of this peak fronting. At high reduced velocities (nu>5), the C-term of the monolithic columns is controlled by film mass transfer resistance between the eluent circulating in the large throughpores and the eluent stagnant inside the thin porous skeleton. The experimental Sherwood number measured on the monolith columns increases from 0.05 to 0.22 while the adsorption energy increases by nearly 6 kJ/mol. Stronger adsorption leads to an increase in the value of the estimated film mass transfer coefficient when a first-order film mass transfer rate is assumed (j proportional
DEFF Research Database (Denmark)
Schmitz, T.; Blaickner, M.; Schütz, C.;
2010-01-01
... biological effectiveness (RBE) of liver and cancer cells in our mixed neutron and gamma field. We work with alanine detectors in combination with Monte Carlo simulations, where we can measure and characterize the dose. To verify our calculations we perform neutron flux measurements using gold foil activation and pin-diodes. Material and methods. When L-α-alanine is irradiated with ionizing radiation, it forms a stable radical which can be detected by electron spin resonance (ESR) spectroscopy. The value of the ESR signal correlates to the amount of absorbed dose. The dose for each pellet is calculated using... to the neutron fluence directly. Results and discussion. Gold foil activation and the pin-diode are reliable fluence measurement systems for the TRIGA reactor, Mainz. Alanine dosimetry of the photon field and charged particle field from secondary reactions can in principle be carried out in combination with MC-calculations...
Energy Technology Data Exchange (ETDEWEB)
Kostou, T; Papadimitroulas, P; Kagadis, GC [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technical Educational Institute of Athens, Aigaleo, Attiki (Greece)
2014-06-15
Purpose: Commonly used radiopharmaceuticals were tested to define the most important dosimetric factors in preclinical studies. Dosimetric calculations were applied in two different whole-body mouse models, with varying organ size, so as to determine their impact on absorbed doses and S-values. Organ mass influence was evaluated with computational models and Monte Carlo (MC) simulations. Methods: MC simulations were executed on GATE to determine dose distribution in the 4D digital MOBY mouse phantom. Two mouse models, of 28 and 34 g respectively, were constructed based on realistic preclinical exams to calculate the absorbed doses and S-values of five radionuclides commonly used in SPECT/PET studies (18F, 68Ga, 177Lu, 111In and 99mTc). Radionuclide biodistributions were obtained from the literature. Realistic statistics (uncertainty lower than 4.5%) were acquired using the standard physical model in Geant4. Comparisons of the dosimetric calculations on the two different phantoms for each radiopharmaceutical are presented. Results: Dose per organ in mGy was calculated for all radiopharmaceuticals. The two models introduced a difference of 0.69% in their brain masses, while the largest differences were observed in the marrow (18.98%) and thyroid (18.65%) masses. Furthermore, S-values of the most important target organs were calculated for each isotope. The source organ was selected to be the whole mouse body. Differences in the S-factors were observed in the 6.0–30.0% range. Tables with all the calculations were developed as reference dosimetric data. Conclusion: Accurate dose per organ and the most appropriate S-values are derived for specific preclinical studies. The impact of mouse model size is rather high (up to 30% for a 17.65% difference in total mass), and thus accurate definition of organ mass is a crucial parameter for self-absorbed S-value calculation. Our goal is to extend the study to accurate estimations in small animal imaging, whereas it is known
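The organ-mass sensitivity stressed in the conclusion can be sketched with the standard self-absorbed approximation S ≈ Δ/m (energy emitted per decay, assumed fully absorbed locally, divided by organ mass). This is a simplification of the Monte Carlo calculation, and the energy value below is an arbitrary placeholder.

```python
# Sketch of the self-absorbed S-value mass dependence: for source = target and
# locally deposited energy, S scales as 1/mass, so a ~17.6% smaller mass
# inflates S by ~21%. Values are illustrative, not from the study.
def s_value(energy_per_decay_j, organ_mass_kg):
    """Self-absorbed S-value (Gy per decay), full local absorption assumed."""
    return energy_per_decay_j / organ_mass_kg

s_28g = s_value(1.0e-14, 0.028)      # 28 g whole-body mouse model
s_34g = s_value(1.0e-14, 0.034)      # 34 g whole-body mouse model
rel_diff = (s_28g - s_34g) / s_34g   # relative change in S between the models
```

This back-of-envelope scaling is consistent in magnitude with the 6–30% S-factor differences the study reports between the two phantoms.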
Institute of Scientific and Technical Information of China (English)
刘循
2013-01-01
The existing methods for train operation calculation have problems such as large calculation error, low calculation efficiency and application risks. Therefore, a method for train operation calculation in rail transit based on variable-step-size iterative approximation was presented. According to the train operation path, the guaranteed emergency braking limit points and the stopping service braking points were determined in order to calculate the train traction operation curve. A variable-step-size iterative approximation method was adopted to calculate the positions of the guaranteed emergency braking trigger point and the stopping service braking trigger point. The positions of the guaranteed emergency braking trigger points were taken as the non-stopping service braking trigger point positions. Based on these positions, the train uniform-speed operation curve, non-stopping service braking curve and stopping service braking curve were calculated, shaping the most efficient train operation curve. The method was applied in an example calculation, and the results show that both the efficiency and the accuracy of train operation calculation are high. The calculation results accord with the safety control principle for actual train operation. The accuracy and efficiency of the calculation can be effectively controlled by adjusting the threshold value of the permissible position error. The calculated train operation curve basically coincides with the actual train operation curve.
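A minimal sketch of the variable-step-size iterative approximation (details assumed; the real method also distinguishes emergency and service braking curves and works on speed-position profiles): advance toward the stopping point with a coarse step, halve the step whenever the candidate trigger position would overshoot, and stop when the step falls below the permissible position error.

```python
# Hypothetical illustration of variable-step-size iteration: locate the latest
# position at which braking may be triggered so the train stops at `stop_at`.
def find_trigger_point(braking_distance, stop_at, start=0.0,
                       step=100.0, tol=0.01):
    """Trigger position such that pos + braking_distance(pos) <= stop_at.

    `braking_distance(pos)` returns the stopping distance if braking starts
    at `pos`; it is only assumed to be continuous.
    """
    pos = start
    while step > tol:
        nxt = pos + step
        if nxt + braking_distance(nxt) <= stop_at:
            pos = nxt          # still safe: advance by the current step
        else:
            step /= 2.0        # would overshoot: shrink the step and retry
    return pos

# Toy usage: constant 500 m braking distance, stop point at km 2.0.
trigger = find_trigger_point(lambda pos: 500.0, stop_at=2000.0)
```

Tightening `tol` trades computation time for position accuracy, which is the efficiency/accuracy control knob the abstract describes.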
Directory of Open Access Journals (Sweden)
Thomas Newton Martin
2008-06-01
The purposes of this study were to delimit homogeneous regions and to estimate the number of years of observation needed to evaluate sunshine, global solar radiation and photosynthetically active radiation in the State of São Paulo. Monthly mean data on sunshine, solar radiation and photosynthetically active radiation from 18 locations in the State of São Paulo were used. The homogeneity of variances among the months of the year for the 18 locations (temporal variability) and among locations within each month (spatial variability) was tested with Bartlett's test of homogeneity. In addition, the sample size for each location was estimated throughout the year. The results show the existence of temporal and spatial variability in the estimates of sunshine, solar radiation and photosynthetically active radiation for the 18 municipalities evaluated. Moreover, the sample size required for sunshine, solar radiation and photosynthetically active radiation depends on the location and the time of year in the State of São Paulo.
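The two analysis steps described above can be sketched with made-up monthly data: Bartlett's test for homogeneity of variances across months, and a simple sample-size (years of observation) estimate from the t-based formula n = (t·CV/E)², which is commonly used for this kind of agrometeorological data (the formula choice and all numbers are assumptions, not the paper's).

```python
# Sketch: Bartlett homogeneity test across 12 months of synthetic sunshine
# data, then a years-of-observation estimate for one month.
import numpy as np
from scipy.stats import bartlett, t

rng = np.random.default_rng(1)
months = [rng.normal(200.0, 20.0, 30) for _ in range(12)]   # sunshine hours

stat, p_value = bartlett(*months)   # H0: equal variances across months

def years_needed(values, rel_error=0.05, alpha=0.05):
    """n = (t * CV / E)^2: years needed to estimate the mean within E."""
    cv = np.std(values, ddof=1) / np.mean(values)   # coefficient of variation
    t_crit = t.ppf(1 - alpha / 2, df=len(values) - 1)
    return int(np.ceil((t_crit * cv / rel_error) ** 2))

n_years = years_needed(months[0])
```

With a coefficient of variation near 10% and a 5% allowed error, this formula gives on the order of 15-20 years, illustrating why the required record length varies with the local variability.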
US Fish and Wildlife Service, Department of the Interior — Provides guidelines concerning sampling effort to achieve appropriate level of precision regarding avian point count sampling in the MAV. To compare efficacy of...
Particle size distribution in the tilapia Recirculating Aquaculture System
Stokic, Jelena
2012-01-01
The aim of this study was to evaluate methods for measuring and describing the particle size distribution at three different spots in a tilapia recirculating system at the Norwegian University of Life Sciences in Ås, Norway. For this purpose, serial filtration over different mesh sizes and parallel filtration over different mesh sizes were compared. Water samples were taken from before the drum filter, after the drum filter and after the bio-filter (MBBR), filtered through eight different mesh size classes and calculated in ...
Henjum, Michael B; Hozalski, Raymond M; Wennen, Christine R; Novak, Paige J; Arnold, William A
2010-01-01
A network of in situ sensors and nutrient analyzers was deployed to measure nitrate, specific conductance (surrogate for chloride), and turbidity (surrogate for total suspended solids (TSS)) for 28 days in two urban streams near Minneapolis, MN. The primary objectives of the study were: (1) to determine the accuracy associated with quantifying pollutant loading using periodic discrete (i.e., grab) samples in comparison to in situ near real-time monitoring and (2) to identify pollutant sources. Within a highly impervious drainage area (>35%) the majority of pollutant load (>90% for nitrate, chloride, and TSS) was observed to be discharged in a small percentage of time (<20%). Consequently, periodic sampling is prone to underestimate pollutant loads. Additionally, when compared to loads based on near real-time sampling, average errors of 19-200% were associated with sampling 1-2 times a month. There are also limitations of periodic sampling with respect to pollutant source determination. Resulting implications with regard to total maximum daily load (TMDL) assessments are discussed.
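The core comparison above, sparse grab samples versus near-continuous monitoring, can be sketched on synthetic data. The record below is invented (units are kept informal; only the relative error matters): a few storm pulses carry most of the load, as in the study's streams, so two grab samples per month badly misestimate the total.

```python
# Sketch: 28-day synthetic flow/concentration record at 15-min resolution,
# with six storm pulses; compare the load from 2 grab samples against the
# load integrated from the near-continuous record.
import numpy as np

rng = np.random.default_rng(2)
n = 28 * 24 * 4                       # 15-minute intervals over 28 days
flow = np.full(n, 0.5)                # baseflow, m3/s
conc = np.full(n, 1.0)                # baseline concentration, mg/L
for start in rng.integers(0, n - 200, size=6):   # six 1-day storm pulses
    flow[start:start + 96] += 8.0
    conc[start:start + 96] += 20.0

dt = 15 * 60                                     # seconds per interval
true_load = np.sum(flow * conc * dt)             # "near real-time" load
grab_idx = np.arange(0, n, n // 2)               # ~2 grab samples per month
grab_load = np.mean(flow[grab_idx] * conc[grab_idx]) * n * dt

rel_error = abs(grab_load - true_load) / true_load
```

Whether the grabs happen to hit baseflow (load underestimated) or a storm pulse (load overestimated), the relative error is large, mirroring the 19-200% errors reported for 1-2 samples a month.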
Energy Technology Data Exchange (ETDEWEB)
Helton, Jon Craig (Arizona State University, Tempe, AZ); Sallaberry, Cedric J. PhD. (.; .)
2007-04-01
A deep geologic repository for high level radioactive waste is under development by the U.S. Department of Energy at Yucca Mountain (YM), Nevada. As mandated in the Energy Policy Act of 1992, the U.S. Environmental Protection Agency (EPA) has promulgated public health and safety standards (i.e., 40 CFR Part 197) for the YM repository, and the U.S. Nuclear Regulatory Commission has promulgated licensing standards (i.e., 10 CFR Parts 2, 19, 20, etc.) consistent with 40 CFR Part 197 that the DOE must establish are met in order for the YM repository to be licensed for operation. Important requirements in 40 CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. relate to the determination of expected (i.e., mean) dose to a reasonably maximally exposed individual (RMEI) and the incorporation of uncertainty into this determination. This presentation describes and illustrates how general and typically nonquantitative statements in 40 CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. can be given a formal mathematical structure that facilitates both the calculation of expected dose to the RMEI and the appropriate separation in this calculation of aleatory uncertainty (i.e., randomness in the properties of future occurrences such as igneous and seismic events) and epistemic uncertainty (i.e., lack of knowledge about quantities that are poorly known but assumed to have constant values in the calculation of expected dose to the RMEI).
Anderson, Richard B; Doherty, Michael E; Berg, Neil D; Friedrich, Jeff C
2005-01-01
Simulations examined the hypothesis that small samples can provide better grounds for inferring the existence of a population correlation, ρ, than can large samples. Samples of 5, 7, 10, 15, or 30 data pairs were drawn either from a population with ρ = 0 or from one with ρ > 0. When decision accuracy was assessed independently for each level of the decision criterion, there was a criterion-specific small-sample advantage. For liberal criteria, accuracy was greater for large than for small samples, but for conservative criteria, the opposite result occurred. There was no small-sample advantage when accuracy was measured as the area under a receiver operating characteristic curve or as the posterior probability of a hit. The results show that small-sample advantages can occur, but under limited conditions. PMID:15631599
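A minimal re-creation of the simulation design (parameter values are illustrative, not the paper's): draw n data pairs from a null (ρ = 0) or alternative (ρ > 0) bivariate normal population, and decide "correlated" whenever the sample correlation r exceeds a fixed criterion. Because r is more variable in small samples, a conservative criterion is crossed more often by small samples, for hits and for false alarms alike.

```python
# Sketch: hit and false-alarm rates of a fixed-criterion decision on the
# sample correlation, for small (n=5) vs large (n=30) samples.
import numpy as np

rng = np.random.default_rng(3)

def hit_and_fa_rates(n, rho=0.5, criterion=0.6, n_sim=2000):
    """(hit rate, false-alarm rate) for deciding 'correlated' when r > criterion."""
    hits = fas = 0
    for _ in range(n_sim):
        null = rng.multivariate_normal([0, 0], [[1, 0], [0, 1]], n)
        alt = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], n)
        if np.corrcoef(alt[:, 0], alt[:, 1])[0, 1] > criterion:
            hits += 1
        if np.corrcoef(null[:, 0], null[:, 1])[0, 1] > criterion:
            fas += 1
    return hits / n_sim, fas / n_sim

hit_small, fa_small = hit_and_fa_rates(n=5)
hit_large, fa_large = hit_and_fa_rates(n=30)
```

At this conservative criterion (0.6 > ρ = 0.5), the small samples produce more hits but also many more false alarms than the large samples, which is exactly the criterion-specific trade-off the abstract describes.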
Holterman, H.J.
2009-01-01
In agriculture, spray drift research is carried out in field experiments and by computer simulation. Regarding the latter approach, accurate knowledge of the initial spray is required. Not only is the overall drop size distribution of the spray an important factor in the spraying process, but also its local variation within the spray cone below a nozzle. Furthermore, the velocity distribution of drops in the spray cone has to be considered, which is a function of drop size and location in the...
Schmitz, Tobias; Blaickner, Matthias; Schütz, Christian; Wiehl, Norbert; Kratz, Jens V; Bassler, Niels; Holzscheiter, Michael H; Palmans, Hugo; Sharpe, Peter; Otto, Gerd; Hampel, Gabriele
2010-10-01
To establish Boron Neutron Capture Therapy (BNCT) for non-resectable liver metastases and for in vitro experiments at the TRIGA Mark II reactor at the University of Mainz, Germany, it is necessary to have a reliable dose monitoring system. The in vitro experiments are used to determine the relative biological effectiveness (RBE) of liver and cancer cells in our mixed neutron and gamma field. We work with alanine detectors in combination with Monte Carlo simulations, where we can measure and characterize the dose. To verify our calculations we perform neutron flux measurements using gold foil activation and pin-diodes. Material and methods. When L-α-alanine is irradiated with ionizing radiation, it forms a stable radical which can be detected by electron spin resonance (ESR) spectroscopy. The value of the ESR signal correlates to the amount of absorbed dose. The dose for each pellet is calculated using FLUKA, a multipurpose Monte Carlo transport code. The pin-diode is augmented by a lithium fluoride foil. This foil converts the neutrons into alpha and tritium particles which are products of the (7)Li(n,α)(3)H-reaction. These particles are detected by the diode and their amount correlates to the neutron fluence directly. Results and discussion. Gold foil activation and the pin-diode are reliable fluence measurement systems for the TRIGA reactor, Mainz. Alanine dosimetry of the photon field and charged particle field from secondary reactions can in principle be carried out in combination with MC-calculations for mixed radiation fields and the Hansen & Olsen alanine detector response model. With the acquired data about the background dose and charged particle spectrum, and with the acquired information of the neutron flux, we are capable of calculating the dose to the tissue. Conclusion. Monte Carlo simulation of the mixed neutron and gamma field of the TRIGA Mainz is possible in order to characterize the neutron behavior in the thermal column. Currently we also
Energy Technology Data Exchange (ETDEWEB)
Garcez, R.W.D.; Lopes, J.M.; Silva, A.X., E-mail: marqueslopez@yahoo.com.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/PEN/UFRJ), Rio de Janeiro, RJ (Brazil). Centro de Tecnologia; Domingues, A.M. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Instituto de Fisica; Lima, M.A.F. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Instituto de Biologia
2014-07-01
A method based on gamma spectroscopy and on the use of voxel phantoms to calculate the dose due to ingestion of 40K contained in bean samples is presented in this work. To quantify the activity of the radionuclide, an HPGe detector was used and the data were entered in the input file of the MCNP code. The highest equivalent dose value was 7.83 μSv·y⁻¹ in the stomach for white beans, whose activity of 452.4 Bq·kg⁻¹ was the highest of the five samples analyzed. The tool proved appropriate for calculating organ doses due to the ingestion of food. (author)
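The scale of such an ingestion dose can be checked with back-of-envelope arithmetic. Note that this computes a whole-body committed effective dose, a different quantity from the organ equivalent dose obtained with the MCNP voxel-phantom calculation above, and the dose coefficient and annual consumption figure are assumptions for illustration only.

```python
# Back-of-envelope sketch: committed effective dose from eating beans
# containing 40K, via dose = activity x intake x dose coefficient.
K40_DOSE_COEFF = 6.2e-9      # Sv/Bq, adult ingestion coefficient for 40K (assumed)
activity_bq_per_kg = 452.4   # measured 40K activity in white beans (from text)
annual_intake_kg = 15.0      # assumed annual bean consumption

committed_dose_sv = activity_bq_per_kg * annual_intake_kg * K40_DOSE_COEFF
committed_dose_usv = committed_dose_sv * 1e6   # express in microsieverts/year
```

The result lands in the tens of microsieverts per year, the same order of magnitude as typical dietary 40K contributions, which makes the few-μSv organ doses reported above plausible.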
International Nuclear Information System (INIS)
Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving safety or reliability. Due to plant complexity and safety and availability requirements, sophisticated tools are needed that are both flexible and efficient. Such tools have been developed over the last 20 years, and they must be continuously refined to meet growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have in some cases been introduced for the calculation of the reliability of structures or components; a new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. To increase the applicability of these programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied, and procedures for the implementation of importance sampling are suggested. (author)
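The variance-reduction idea mentioned for systems reliability can be shown with a minimal importance-sampling sketch. This is a textbook tail-probability example, not the reactor analysis code: crude Monte Carlo wastes almost every sample on the safe region, whereas sampling from a proposal centred on the failure region and reweighting recovers the small probability with far fewer samples.

```python
# Sketch: estimate the small failure probability P(X > 4), X ~ N(0,1),
# by importance sampling from a shifted proposal N(4,1).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 10_000
threshold = 4.0

# Crude Monte Carlo: essentially no samples land in the failure region.
crude = np.mean(rng.standard_normal(n) > threshold)

# Importance sampling: draw from N(threshold, 1), weight by the density ratio.
x = rng.normal(threshold, 1.0, n)
weights = norm.pdf(x) / norm.pdf(x, loc=threshold)
is_estimate = np.mean((x > threshold) * weights)

true_p = 1.0 - norm.cdf(threshold)   # analytic value, ~3.2e-5
```

With the same 10,000 samples, the crude estimator is usually exactly zero while the importance-sampling estimator sits within a few percent of the true value, which is why such techniques are worthwhile for rare-event system reliability.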
Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars
2016-10-01
The present work aims to show that high-throughput imaging systems can be useful to estimate mesozooplankton community size and taxonomic descriptors that can serve as the basis for consistent large-scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) in order to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems. Time- and cost-effective automatic techniques are of high interest in this context. An imaging-based protocol has been applied to a high-frequency time series (every second day between April 2003 and April 2004, on average) of zooplankton obtained at a coastal site of the NW Mediterranean Sea, Villefranche Bay. One hundred and eighty-four mesozooplankton net-collected samples were analysed with a ZooScan and an associated semi-automatic classification technique. The constitution of a learning set designed to maximize copepod identification, with more than 10,000 objects, enabled the automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty-seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and 7 other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). Total copepod size spectra also underwent significant changes during the sampling period. These changes could be partially related to changes in the copepod assemblage taxonomic composition and size distributions. This study shows that the use of high-throughput imaging systems is of great interest to extract relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics. Innovative
International Nuclear Information System (INIS)
A new report on food habits of the Austrian population in the year 2006/2007 was released in 2008. Mixed diets and foodstuffs are measured within a monitoring program according to the Austrian radiation protection law, food law, and the Commission Recommendation 2000/473/Euratom on the application of Article 36 of the Euratom Treaty concerning the monitoring of the levels of radioactivity in the environment for the purpose of assessing the exposure of the population as a whole. In addition, drinking and mineral water samples are measured for natural and artificial radionuclides. The ingestion dose for the Austrian population is recalculated based on the results of these measurements, literature data, and the data of the new report on food habits. In general, the major part of the ingestion dose is caused by natural radionuclides, especially 40K. (author)
DEFF Research Database (Denmark)
Qiao, Jixin; Hou, Xiaolin; Roos, Per
2013-01-01
A novel method for bioassay of large volumes of human urine samples using manganese dioxide coprecipitation for preconcentration was developed for rapid determination of 237Np. 242Pu was utilized as a nonisotopic tracer to monitor the chemical yield of 237Np. A sequential injection extraction...
Institute of Scientific and Technical Information of China (English)
辛飞飞; 陈小鸿; 林航飞
2009-01-01
Taxis were selected as probe cars to collect Floating Car Data (FCD), which can be used to evaluate traffic conditions in a road network. To study the uncertainty introduced when probe cars travel through an urban road network, the concept of Detecting Capability (DC) was put forward. The relationship between the DC of FCD and the probe-car sample size was investigated, which can help identify a reasonable sample size. Two indexes, Detecting Intensity (DI) and Detecting Rate (DR), were designed to analyse the change in DC for 10 samples of different sizes obtained by simple random sampling, followed by regression analysis. The results show that (1) DI and DR have similar time-varying characteristics under different sample sizes; (2) the average DI in the network increases linearly with sample size, and the growth rate of DI on arterial roads is much higher than on access roads; (3) DR in the road network increases non-linearly with sample size, and the DR of arterial roads reaches its steady-state value more rapidly than that of access roads as the sample size increases; and (4) the probe-car sample size can be reduced when FCD are used to evaluate traffic conditions on arterial roads.
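The non-linear, saturating growth of DR with sample size can be reproduced with a toy simulation: draw probe-car subsets of increasing size by simple random sampling and measure the fraction of road links covered by at least one car. Everything below (link counts, per-car coverage) is invented for illustration, not the study's data:

```python
# Hypothetical sketch of the sampling experiment: a "detecting rate"
# (fraction of links covered by >= 1 probe car) estimated over repeated
# simple random samples of probe cars. All parameters are illustrative.
import random

def detecting_rate(car_links, sample_size, n_links, trials=200, seed=0):
    """Mean fraction of links covered by a random sample of probe cars."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = rng.sample(car_links, sample_size)
        covered = set().union(*sample)
        total += len(covered) / n_links
    return total / trials

rng = random.Random(42)
N_LINKS, N_CARS = 500, 300
# each simulated taxi traverses a random subset of 20 links
car_links = [set(rng.sample(range(N_LINKS), 20)) for _ in range(N_CARS)]

rates = {n: detecting_rate(car_links, n, N_LINKS) for n in (10, 50, 150)}
for n, r in rates.items():
    print(f"sample size {n:3d}: DR = {r:.2f}")
```

DR rises steeply at small sample sizes and then flattens, which is why a reduced probe-car sample can suffice once coverage is near saturation.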
Relative eye size in elasmobranchs.
Lisney, Thomas J; Collin, Shaun P
2007-01-01
Variation in relative eye size was investigated in a sample of 46 species of elasmobranch: 32 species of sharks and 14 species of batoids (skates and rays). To obtain a measure of eye size relative to body size, eye axial diameter was scaled against body mass using least-squares linear regression, using both raw species data, where species are treated as independent data points, and phylogenetically independent contrasts. Residual values calculated for each species from the regression equations describing these scaling relationships were then used as a measure of relative eye size. Relative and absolute eye size vary considerably in elasmobranchs, although sharks have significantly larger relative eye sizes than batoids. The sharks with the relatively largest eyes are oceanic species: either pelagic sharks that move between the epipelagic (0-200 m) and 'upper' mesopelagic (200-600 m) zones, or benthic and benthopelagic species that live in the mesopelagic (200-1,000 m) and, to a lesser extent, bathypelagic (1,000-4,000 m) zones. The elasmobranchs with the relatively smallest eyes tend to be coastal, often benthic, batoids and sharks. Active benthopelagic and pelagic species, which prey on active, mobile prey, also have relatively larger eyes than more sluggish, benthic elasmobranchs that feed on more sedentary prey such as benthic invertebrates. A significant positive correlation was found between absolute and relative eye size, but some very large sharks, such as Carcharodon carcharias, have absolutely large eyes yet relatively small eyes in relation to body mass. PMID:17314474
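The residuals-from-scaling approach can be sketched in a few lines: regress log eye diameter on log body mass by least squares, and read each species' residual as its relative eye size. The species values below are invented for illustration (the study additionally repeats this with phylogenetically independent contrasts):

```python
# Minimal sketch: relative eye size as the residual of a log-log
# least-squares regression of eye diameter on body mass.
# Species names and measurements are hypothetical.
import math

species = {               # name: (body mass [kg], eye axial diameter [mm])
    "oceanic_shark": (80.0, 45.0),
    "coastal_batoid": (5.0, 8.0),
    "pelagic_shark": (200.0, 60.0),
    "benthic_shark": (12.0, 10.0),
}

x = [math.log10(m) for m, _ in species.values()]
y = [math.log10(d) for _, d in species.values()]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
intercept = my - slope * mx

# positive residual = larger eye than predicted for that body mass
residuals = {name: yi - (intercept + slope * xi)
             for name, xi, yi in zip(species, x, y)}
for name, r in sorted(residuals.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} residual = {r:+.3f}")
```

Because the regression includes an intercept, the residuals sum to zero; species are compared by where they fall above or below the shared scaling line.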
Institute of Scientific and Technical Information of China (English)
任金鑫; 班俊生; 张燕婕; 刘桂珍; 杨莹雪; 郑寅
2016-01-01
This paper introduces a small vacuum packaging machine for sealing samples, which enables a hydrostatic hook electronic balance to make fast measurements of the small volumetric weight (bulk density) of geological samples. Experiments show that, using vacuum bags from the same batch to measure the small volumetric weight of geological samples with different physical properties (dense, hard quartzite, bauxite, magnetite, lead-zinc ore, an aluminium block, and absorbent ores with large porosity), accurate and reliable results are obtained, consistent with those of the traditional wax-sealing method. The relative standard deviation of the measured density of vacuum bags from the same batch ranges between 0.08% and 0.10%. Therefore, in routine testing only a single measurement of the mass and density of each batch of vacuum bags is required, and the small volumetric weight of a geological sample can be calculated by weighing the sealed sample and its vacuum bag in water. Compared with the traditional wax-sealing method, vacuum-package sealing is cost-effective and simple to operate. In particular, when measuring the small volumetric weight of absorbent geological samples, vacuum sealing keeps the ore in its in situ state, with no need to measure the moisture content of the sample or apply any related correction. Moreover, a hydrostatic hook electronic balance gives a more accurate result than an ordinary electronic balance when weighing samples in water. The method deserves application and promotion in rock and mineral analysis laboratories.
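The underlying calculation is Archimedes' principle with a bag correction: buoyancy on the sealed package gives its total volume, and subtracting the bag's volume (from its pre-measured batch mass and density) leaves the sample volume. A hedged sketch with invented masses:

```python
# Hedged sketch of the hydrostatic density calculation for a
# vacuum-sealed sample. All input numbers are illustrative placeholders.

RHO_WATER = 0.9982  # g/cm^3 at ~20 degC

def sample_density(m_package_air, m_package_water, m_bag, rho_bag,
                   rho_water=RHO_WATER):
    """Bulk density [g/cm^3] of a vacuum-sealed sample weighed in water."""
    v_package = (m_package_air - m_package_water) / rho_water  # Archimedes
    v_bag = m_bag / rho_bag            # bag volume from batch mass/density
    m_sample = m_package_air - m_bag
    return m_sample / (v_package - v_bag)

rho = sample_density(m_package_air=152.40, m_package_water=95.10,
                     m_bag=2.40, rho_bag=0.95)
print(f"sample density ~ {rho:.2f} g/cm^3")
```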
Energy Technology Data Exchange (ETDEWEB)
Gasco, C.; Navarro, N.; Gonzalez, P.; Heras, M. C.; Gapan, M. P.; Alonso, C.; Calderon, A.; Sanchez, D.; Morante, R.; Fernandez, M.; Gajate, A.; Alvarez, A.
2008-08-06
The Department of Vigilancia Radiológica y Radiactividad Ambiental at CIEMAT has developed an analytical methodology for the sequential determination of Fe-55 and Ni-63 in environmental samples, based on the procedure used by the Risø laboratories. This report presents the experimental results on the behaviour of major and minor elements (soil and air constituents) in the different types of resins used to separate Fe-55 and Ni-63. The liquid scintillation counting of both isotopes has been optimized with Ultima Gold cocktail at different concentrations of the stable elements Fe and Ni. The decontamination factors for different gamma emitters in the presence of a soil matrix were determined experimentally. The Fe-55 and Ni-63 activity concentrations and their associated uncertainties were calculated from the counting data and the sample preparation. A computer application has been implemented in Visual Basic within Excel sheets for: (I) obtaining the counting data from the spectrometer and the counts in each window, (II) graphically representing the background and sample spectra, (III) determining the activity concentration and its associated uncertainty, and (IV) calculating the characteristic limits according to ISO 11929 (2007) at various confidence levels. (Author) 30 refs.
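For a simple counting measurement, the ISO 11929 characteristic limits reduce to well-known closed forms. A hedged sketch of the textbook simplification (not the full standard; k values assume alpha = beta = 0.05, and the detection-limit line is a common approximation rather than the standard's iterative solution):

```python
# Hedged sketch of ISO 11929 characteristic limits for a net count rate
# estimated from gross and background counting. Counting times and
# counts below are illustrative.
import math

K = 1.645  # one-sided 95% quantile of the standard normal

def decision_threshold(n_bkg, t_bkg, t_gross, k=K):
    """Decision threshold [counts/s] for the net count rate."""
    r_bkg = n_bkg / t_bkg
    return k * math.sqrt(r_bkg * (1.0 / t_gross + 1.0 / t_bkg))

def detection_limit(n_bkg, t_bkg, t_gross, k=K):
    """Approximate detection limit [counts/s], alpha = beta."""
    a_star = decision_threshold(n_bkg, t_bkg, t_gross, k)
    return 2.0 * a_star + k * k / t_gross   # common approximation

a_star = decision_threshold(n_bkg=3600, t_bkg=36000, t_gross=3600)
d_limit = detection_limit(n_bkg=3600, t_bkg=36000, t_gross=3600)
print(f"decision threshold ~ {a_star:.4f} counts/s")
print(f"detection limit    ~ {d_limit:.4f} counts/s")
```

Dividing these rate limits by the counting efficiency, chemical yield and sample mass converts them into activity-concentration limits, as done in step (IV) of the spreadsheet application.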
Ferrari, Ulisse
2016-08-01
Maximum entropy models provide the least constrained probability distributions that reproduce statistical properties of experimental datasets. In this work we characterize the learning dynamics that maximizes the log-likelihood in the case of large but finite datasets. We first show that steepest-descent dynamics is not optimal, as it is slowed down by the inhomogeneous curvature of the model parameter space. We then provide a way of rectifying this space that relies only on dataset properties and does not require large computational effort. We conclude by solving the long-time limit of the parameter dynamics, including the randomness generated by the systematic use of Gibbs sampling. In this stochastic framework, rather than converging to a fixed point, the dynamics reaches a stationary distribution, which for the rectified dynamics reproduces the posterior distribution of the parameters. We sum up all these insights in a "rectified" data-driven algorithm that is fast and, by sampling from the parameters' posterior, avoids both under- and overfitting along all directions of the parameter space. Through the learning of pairwise Ising models from recordings of a large population of retinal neurons, we show how our algorithm outperforms the steepest-descent method.
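The baseline the paper improves on, plain gradient ascent on the log-likelihood of a pairwise Ising model, amounts to iteratively matching data and model moments (the gradient is their difference). A minimal sketch on a system small enough to compute model averages exactly by enumeration, instead of Gibbs sampling; the data are synthetic and the learning-rate/iteration choices are illustrative:

```python
# Hedged sketch of maximum-entropy (pairwise Ising) learning by plain
# log-likelihood ascent: gradient = data moments - model moments.
# Tiny system (N=4 spins) so the partition function is computed exactly.
import itertools
import math
import random

N = 4  # number of binary units (spins in {-1, +1})
states = list(itertools.product((-1, 1), repeat=N))

def moments(samples, weights=None):
    """Mean <s_i> and pairwise <s_i s_j> under the given state weights."""
    if weights is None:
        weights = [1.0 / len(samples)] * len(samples)
    m = [sum(w * s[i] for s, w in zip(samples, weights)) for i in range(N)]
    c = {(i, j): sum(w * s[i] * s[j] for s, w in zip(samples, weights))
         for i in range(N) for j in range(i + 1, N)}
    return m, c

def model_moments(h, J):
    """Exact model moments via enumeration of all 2^N states."""
    e = [sum(h[i] * s[i] for i in range(N))
         + sum(J[i, j] * s[i] * s[j] for (i, j) in J) for s in states]
    z = sum(math.exp(v) for v in e)
    w = [math.exp(v) / z for v in e]
    return moments(states, w)

rng = random.Random(0)  # synthetic "data": random spin configurations
data = [tuple(rng.choice((-1, 1)) for _ in range(N)) for _ in range(500)]
dm, dc = moments(data)

h = [0.0] * N
J = {(i, j): 0.0 for i in range(N) for j in range(i + 1, N)}
lr = 0.2
for _ in range(500):                     # steepest ascent
    mm, mc = model_moments(h, J)
    for i in range(N):
        h[i] += lr * (dm[i] - mm[i])
    for key in J:
        J[key] += lr * (dc[key] - mc[key])

mm, mc = model_moments(h, J)
err = max(max(abs(dm[i] - mm[i]) for i in range(N)),
          max(abs(dc[k] - mc[k]) for k in J))
print(f"max moment mismatch after training: {err:.5f}")
```

In realistic settings the model moments must themselves be estimated by Gibbs sampling, which injects the noise whose stationary distribution the paper analyses; the "rectification" it proposes preconditions these same gradient steps against the parameter-space curvature.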