On power and sample size calculation in ethnic sensitivity studies.
Zhang, Wei; Sethuraman, Venkat
2011-01-01
In ethnic sensitivity studies, it is of interest to know whether the same dose has the same effect in populations from different regions. Glasbrenner and Rosenkranz (2006) proposed a criterion for ethnic sensitivity studies in the context of different dose-exposure models. Their method is liberal in the sense that their sample size will not achieve the target power. We show that the power function can be easily calculated by numerical integration, and that the sample size can be determined by bisection.
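The bisection step described in this abstract can be sketched in a few lines. This is purely illustrative: a two-sided two-sample z-test power function stands in for the paper's dose-exposure criterion (whose power requires numerical integration), and all names below are ours, not the authors'.

```python
from math import sqrt, erf

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(n, delta, sigma, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test, n per group."""
    z_alpha = 1.959963984540054  # 97.5% normal quantile
    se = sigma * sqrt(2.0 / n)
    return norm_cdf(delta / se - z_alpha) + norm_cdf(-delta / se - z_alpha)

def sample_size_bisection(target_power, delta, sigma, lo=2, hi=100000):
    """Smallest per-group n achieving the target power, found by bisection.

    Relies only on the power function being monotone increasing in n,
    which is what makes the bisection approach in the abstract work."""
    while lo < hi:
        mid = (lo + hi) // 2
        if power_two_sample(mid, delta, sigma) >= target_power:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, for a standardized difference of 0.5 at 80% power this returns the familiar 63 subjects per group.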
Sample size calculation for comparing two negative binomial rates.
Zhu, Haiyuan; Lakkis, Hassan
2014-02-10
The negative binomial model has been increasingly used to model count data in recent clinical trials. It is frequently chosen over the Poisson model for the overdispersed count data commonly seen in clinical trials. One of the challenges of applying the negative binomial model in clinical trial design is sample size estimation. In practice, simulation methods have frequently been used for this purpose. In this paper, an explicit formula is developed to calculate sample size based on the negative binomial model. Depending on the approach used to estimate the variance under the null hypothesis, three variations of the sample size formula are proposed and discussed. Important characteristics of the formula include its accuracy and its ability to explicitly incorporate the dispersion parameter and exposure time. The performance of each variation of the formula is assessed using simulations.
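As an illustration of the kind of explicit formula the abstract describes, here is a hedged sketch of a Wald-type per-group sample size for the log rate ratio. It uses the alternative-hypothesis variance on both sides of the formula; the paper's three variations differ in how the null variance is estimated, and the variance expression below (for the parameterization Var(Y) = μ + θμ²) is our assumption, not necessarily the authors' exact formula.

```python
from math import ceil, log

def nb_sample_size(rate0, rate1, dispersion, t, alpha=0.05, power=0.80):
    """Per-group sample size for comparing two negative binomial rates.

    Based on the Wald statistic for the log rate ratio, with the
    alternative-hypothesis variance used throughout (one simple variant;
    other variants substitute a null-hypothesis variance estimate).
    `dispersion` is theta in Var(Y) = mu + theta * mu^2; `t` is the
    common exposure time."""
    z_a = 1.959963984540054   # two-sided alpha = 0.05
    z_b = 0.8416212335729143  # power = 0.80
    # n * Var(log(rate1_hat / rate0_hat)) under the alternative
    v = 1.0 / (t * rate0) + 1.0 / (t * rate1) + 2.0 * dispersion
    return ceil(v * (z_a + z_b) ** 2 / log(rate1 / rate0) ** 2)
```

For example, detecting a rate reduction from 1.0 to 0.7 events per unit time with dispersion 0.8 and one unit of exposure gives 249 subjects per group under this variant.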
Consultants' forum: should post hoc sample size calculations be done?
Walters, Stephen J
2009-01-01
Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently studies will recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produces a "non-statistically significant result" then investigators are frequently tempted to ask the question "Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?" The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation, after the data have been collected and analysed.
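For concreteness, this is what the post hoc ("observed power") calculation the article debates looks like under a simple two-sample normal approximation; the sketch is ours, and it also makes visible the standard objection, namely that observed power is a one-to-one function of the observed p-value.

```python
from math import sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def post_hoc_power(observed_diff, sd, n_per_group, alpha=0.05):
    """'Observed power': the power to detect the observed difference at the
    achieved sample size (two-sided two-sample z approximation). Because it
    is computed from the same quantities as the test statistic, it adds no
    information beyond the p-value, which is the core of the argument
    against reporting it."""
    z_a = 1.959963984540054
    z = observed_diff / (sd * sqrt(2.0 / n_per_group))
    return norm_cdf(z - z_a) + norm_cdf(-z - z_a)
```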
Power and Sample Size Calculations for Contrast Analysis in ANCOVA.
Shieh, Gwowen
2017-01-01
Analysis of covariance (ANCOVA) is commonly used in behavioral and educational research to reduce error variance and improve the power of analysis of variance by adjusting for covariate effects. For planning and evaluating randomized ANCOVA designs, a simple sample-size formula has been proposed that accounts for the variance deflation factor in the comparison of two treatment groups. The objective of this article is to highlight an overlooked and potentially serious problem with the existing approximation and to provide an exact alternative for power and sample size assessment when testing treatment contrasts. Numerical investigations are conducted to reveal the relative performance of the two procedures as reliable techniques for accommodating the covariate features that make the ANCOVA design distinctive. The described approach has important advantages over the current method in general applicability, methodological justification, and overall accuracy. To enhance practical usefulness, computer algorithms are presented to implement the recommended power calculations and sample-size determinations.
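To make the "variance deflation factor" concrete, here is the simple approximation the article critiques (not the exact method it recommends): the usual two-sample formula with the error variance deflated by 1 - ρ², where ρ is the covariate-outcome correlation. Function names and the normal approximation are ours; the exact approach uses noncentral t/F distributions accounting for the estimated covariate effects.

```python
from math import ceil

def ancova_sample_size(delta, sigma, rho, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-group randomized ANCOVA.

    This is the simple variance-deflation approximation: ANCOVA reduces
    the error variance from sigma^2 to sigma^2 * (1 - rho^2), so the
    standard two-sample normal-approximation formula is applied to the
    deflated variance."""
    z_a = 1.959963984540054
    z_b = 0.8416212335729143
    var_adj = sigma ** 2 * (1.0 - rho ** 2)
    return ceil(2.0 * var_adj * (z_a + z_b) ** 2 / delta ** 2)
```

With ρ = 0 this reduces to the plain two-sample formula; a covariate with ρ = 0.5 cuts the required n per group from 63 to 48 in this example.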
Sample size calculation for meta-epidemiological studies.
Giraudeau, Bruno; Higgins, Julian P T; Tavernier, Elsa; Trinquart, Ludovic
2016-01-30
Meta-epidemiological studies are used to compare treatment effect estimates between randomized clinical trials with and without a characteristic of interest. To our knowledge, there is presently no guidance to help researchers specify a priori the required number of meta-analyses to be included in a meta-epidemiological study. We derived a theoretical power function and sample size formula in the framework of a hierarchical model that allows for variation in the impact of the characteristic between trials within a meta-analysis and between meta-analyses. A simulation study revealed that the theoretical function overestimated power (because of the assumption of equal weights for each trial within and between meta-analyses). We also propose a simulation approach that relaxes the constraints of the theoretical approach and is more accurate. We illustrate that the two variables that most influence power are the number of trials per meta-analysis and the proportion of trials with the characteristic of interest. In summary, we derived a closed-form power function and sample size formula for estimating the impact of trial characteristics in meta-epidemiological studies. Our analytical results can be used as a 'rule of thumb' for sample size calculation for a meta-epidemiological study; a more accurate sample size can be derived with a simulation study.
Sample Size Calculations for Precise Interval Estimation of the Eta-Squared Effect Size
Shieh, Gwowen
2015-01-01
Analysis of variance is one of the most frequently used statistical analyses in the behavioral, educational, and social sciences, and special attention has been paid to the selection and use of an appropriate effect size measure of association in analysis of variance. This article presents the sample size procedures for precise interval estimation…
Vallejo, Adriana; Muniesa, Ana; Ferreira, Chelo; de Blas, Ignacio
2013-10-01
The formula commonly used to calculate the sample size for estimating a proportion (such as a prevalence) is based on the Normal approximation; however, it could instead be based on the Binomial distribution, whose confidence interval can be calculated using the Wilson score method. Comparing the two formulae (Normal and Binomial), the difference in confidence interval width is most relevant in the tails and the center of the curves. To calculate the required sample size, we simulated an iterative sampling procedure, which shows that the Normal-based formula underestimates the sample size for prevalence values close to 0 or 1 and overestimates it for values close to 0.5. Based on these results, we propose an algorithm based on the Wilson score method that yields sample sizes similar to those obtained empirically by simulation.
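The contrast between the two approaches can be sketched as follows: the classical Wald (Normal-approximation) sample size versus an iterative search for the smallest n whose Wilson score interval achieves the target precision. This is our illustration of the idea in the abstract, not the authors' published algorithm.

```python
from math import sqrt, ceil

Z = 1.959963984540054  # 95% confidence

def wilson_halfwidth(p, n, z=Z):
    """Half-width of the Wilson score interval for a proportion."""
    denom = 1.0 + z * z / n
    return (z / denom) * sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))

def wilson_sample_size(p, precision, z=Z):
    """Smallest n whose Wilson interval half-width is <= precision
    (simple linear search, mirroring the iterative idea in the abstract)."""
    n = 1
    while wilson_halfwidth(p, n, z) > precision:
        n += 1
    return n

def wald_sample_size(p, precision, z=Z):
    """Classical Normal-approximation (Wald) sample size, for comparison."""
    return ceil(z * z * p * (1.0 - p) / precision ** 2)
```

Consistent with the abstract, near p = 0.5 the Wald formula asks for slightly more subjects than the Wilson-based search (385 versus 381 for a ±5% interval).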
Power and Sample Size Calculations for Logistic Regression Tests for Differential Item Functioning
Li, Zhushan
2014-01-01
Logistic regression is a popular method for detecting uniform and nonuniform differential item functioning (DIF) effects. Theoretical formulas for the power and sample size calculations are derived for likelihood ratio tests and Wald tests based on the asymptotic distribution of the maximum likelihood estimators for the logistic regression model.…
Sample Size Calculation for Estimating or Testing a Nonzero Squared Multiple Correlation Coefficient
Krishnamoorthy, K.; Xia, Yanping
2008-01-01
The problems of hypothesis testing and interval estimation of the squared multiple correlation coefficient of a multivariate normal distribution are considered. It is shown that available one-sided tests are uniformly most powerful, and the one-sided confidence intervals are uniformly most accurate. An exact method of calculating sample size to…
Sample size calculation for microarray experiments with blocked one-way design
Directory of Open Access Journals (Sweden)
Jung Sin-Ho
2009-05-01
Background: One of the main objectives of microarray analysis is to identify genes differentially expressed across different types of cells or treatments. Many statistical methods have been proposed to assess treatment effects in microarray experiments. Results: In this paper, we consider discovery of genes that are differentially expressed among K (> 2) treatments when each set of K arrays forms a block. In this case, the array data across the K treatments tend to be correlated because of the block effect. We propose to use the blocked one-way ANOVA F-statistic to test whether each gene is differentially expressed among the K treatments. Marginal p-values are calculated using a permutation method that accounts for the block effect, and the multiplicity of the testing procedure is adjusted by controlling the false discovery rate (FDR). We propose a sample size calculation method for microarray experiments with a blocked one-way design: with the FDR level and the effect sizes of the genes specified, our formula provides the sample size needed for a given number of true discoveries. Conclusion: Simulations show that the calculated sample size provides an accurate number of true discoveries while controlling the FDR at the desired level.
Shao, Quanxi; Wang, You-Gan
2009-09-01
Power calculation and sample size determination are critical in designing environmental monitoring programs. The traditional approach based on comparing mean values may become statistically inappropriate, and even invalid, when substantial proportions of the response values are below detection limits or otherwise censored, because strong distributional assumptions must be made about the censored observations when implementing the traditional procedures. In this paper, we propose a quantile methodology that is robust to outliers and can also handle data with a substantial proportion of below-detection-limit observations without the need to impute the censored values. As a demonstration, we applied the methods to a nutrient monitoring project that is part of the Perth Long-Term Ocean Outlet Monitoring Program. In this example, the sample size required by our quantile methodology is in fact smaller than that required by the traditional t-test, illustrating the merit of our method.
Sample-size calculations for multi-group comparison in population pharmacokinetic experiments.
Ogungbenro, Kayode; Aarons, Leon
2010-01-01
This paper describes an approach for calculating sample size for population pharmacokinetic experiments that involve hypothesis testing based on multi-group comparison detecting the difference in parameters between groups under mixed-effects modelling. This approach extends what has been described for generalized linear models and nonlinear population pharmacokinetic models that involve only binary covariates to more complex nonlinear population pharmacokinetic models. The structural nonlinear model is linearized around the random effects to obtain the marginal model and the hypothesis testing involving model parameters is based on Wald's test. This approach provides an efficient and fast method for calculating sample size for hypothesis testing in population pharmacokinetic models. The approach can also handle different design problems such as unequal allocation of subjects to groups and unbalanced sampling times between and within groups. The results obtained following application to a one compartment intravenous bolus dose model that involved three different hypotheses under different scenarios showed good agreement between the power obtained from NONMEM simulations and nominal power.
Larson, Michael J; Carbine, Kaylie A
2017-01-01
There is increasing focus across scientific fields on adequate sample sizes to ensure non-biased and reproducible effects. Very few studies, however, report sample size calculations or even the information needed to accurately calculate sample sizes for grants and future research. We systematically reviewed 100 randomly selected clinical human electrophysiology studies from six high impact journals that frequently publish electroencephalography (EEG) and event-related potential (ERP) research to determine the proportion of studies that reported sample size calculations, as well as the proportion of studies reporting the necessary components to complete such calculations. Studies were coded by the two authors blinded to the other's results. Inter-rater reliability was 100% for the sample size calculations and kappa above 0.82 for all other variables. Zero of the 100 studies (0%) reported sample size calculations. 77% utilized repeated-measures designs, yet zero studies (0%) reported the necessary variances and correlations among repeated measures to accurately calculate future sample sizes. Most studies (93%) reported study statistical values (e.g., F or t values). Only 40% reported effect sizes, 56% reported mean values, and 47% reported indices of variance (e.g., standard deviations/standard errors). Absence of such information hinders accurate determination of sample sizes for study design, grant applications, and meta-analyses of research and whether studies were adequately powered to detect effects of interest. Increased focus on sample size calculations, utilization of registered reports, and presenting information detailing sample size calculations and statistics for future researchers are needed and will increase sample size-related scientific rigor in human electrophysiology research.
Directory of Open Access Journals (Sweden)
Pitchaiah Mandava
provide the user with programs to calculate and incorporate errors into sample size estimation.
Reliable calculation in probabilistic logic: Accounting for small sample size and model uncertainty
Energy Technology Data Exchange (ETDEWEB)
Ferson, S. [Applied Biomathematics, Setauket, NY (United States)
1996-12-31
A variety of practical computational problems arise in risk and safety assessments, forensic statistics and decision analyses in which the probability of some event or proposition E is to be estimated from the probabilities of a finite list of related subevents or propositions F, G, H, .... In practice, the analyst's knowledge may be incomplete in two ways. First, the probabilities of the subevents may be imprecisely known from statistical estimations, perhaps based on very small sample sizes. Second, relationships among the subevents may be known imprecisely. For instance, there may be only limited information about their stochastic dependencies. Representing probability estimates as interval ranges has been suggested as a way to address the first source of imprecision. A suite of AND, OR and NOT operators, defined with reference to the classical Fréchet inequalities, permits these probability intervals to be used in calculations that address the second source of imprecision, in many cases in a best-possible way. Using statistical confidence intervals as inputs, however, unravels the closure properties of this approach, requiring that probability estimates be characterized by a nested stack of intervals for all possible levels of statistical confidence, from a point estimate (0% confidence) to the entire unit interval (100% confidence). The corresponding logical operations implied by convolutive application of the logical operators for every possible pair of confidence intervals reduce by symmetry to a manageably simple level-wise iteration. The resulting calculus can be implemented in software that allows users to compute comprehensive, and often level-wise best-possible, bounds on probabilities for logical functions of events.
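The Fréchet-bounded operators the abstract refers to are short enough to state directly. The sketch below applies the classical Fréchet inequalities to interval probabilities under unknown dependence; function names are ours.

```python
def frechet_and(p, q):
    """Fréchet bounds for P(E and F) given interval probabilities
    p = (p_lo, p_hi) and q = (q_lo, q_hi), with no assumption about
    the dependence between E and F."""
    return (max(0.0, p[0] + q[0] - 1.0), min(p[1], q[1]))

def frechet_or(p, q):
    """Fréchet bounds for P(E or F) under unknown dependence."""
    return (max(p[0], q[0]), min(1.0, p[1] + q[1]))

def negate(p):
    """NOT operator on an interval probability."""
    return (1.0 - p[1], 1.0 - p[0])
```

The level-wise iteration described in the abstract would apply these same operators at each confidence level of the nested stack of intervals.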
Directory of Open Access Journals (Sweden)
Elsa Tavernier
We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of the standard deviation, the success rate in the control group and the survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell short of the nominal 80%), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
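The "retro-fitting" step for the continuous-outcome case can be sketched directly: fix n using the assumed standard deviation, then recompute the achieved power under the true one. This is our simplified normal-approximation version of the idea, not the authors' exact simulation code.

```python
from math import ceil, sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

Z_A, Z_B = 1.959963984540054, 0.8416212335729143

def planned_n(delta, sd_assumed):
    """Per-group n from the usual two-sample formula at 80% nominal power."""
    return ceil(2.0 * (sd_assumed * (Z_A + Z_B) / delta) ** 2)

def real_power(delta, sd_assumed, rel_error):
    """Retro-fit: the power actually achieved when the true SD is
    sd_assumed * (1 + rel_error) but n was fixed using sd_assumed."""
    n = planned_n(delta, sd_assumed)
    sd_true = sd_assumed * (1.0 + rel_error)
    return norm_cdf(delta / (sd_true * sqrt(2.0 / n)) - Z_A)
```

For example, underestimating the standard deviation by 20% drops the real power of a nominally 80%-powered trial to roughly 65% in this sketch.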
Bayesian sample size calculation for estimation of the difference between two binomial proportions.
Pezeshk, Hamid; Nematollahi, Nader; Maroufy, Vahed; Marriott, Paul; Gittins, John
2013-12-01
In this study, we discuss a decision theoretic or fully Bayesian approach to the sample size question in clinical trials with binary responses. Data are assumed to come from two binomial distributions. A Dirichlet distribution is assumed to describe prior knowledge of the two success probabilities p1 and p2. The parameter of interest is p = p1 - p2. The optimal size of the trial is obtained by maximising the expected net benefit function. The methodology presented in this article extends previous work by the assumption of dependent prior distributions for p1 and p2.
DEFF Research Database (Denmark)
Chan, A.W.; Hrobjartsson, A.; Jorgensen, K.J.;
2008-01-01
OBJECTIVE: To evaluate how often sample size calculations and methods of statistical analysis are pre-specified or changed in randomised trials. DESIGN: Retrospective cohort study. DATA SOURCE: Protocols and journal publications of published randomised parallel-group trials initially approved in 1994-5 by the scientific-ethics committees for Copenhagen and Frederiksberg, Denmark (n=70). MAIN OUTCOME MEASURE: Proportion of protocols and publications that did not provide key information about sample size calculations and statistical methods; proportion of trials with discrepancies between information presented in the protocol and the publication. RESULTS: Only 11/62 trials described existing sample size calculations fully and consistently in both the protocol and the publication. The method of handling protocol deviations was described in 37 protocols and 43 publications. The method…
Exact calculation of power and sample size in bioequivalence studies using two one-sided tests.
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2015-01-01
The number of subjects in a pharmacokinetic two-period two-treatment crossover bioequivalence study is typically small, most often less than 60. The most common approach to testing for bioequivalence is the two one-sided tests procedure. No explicit mathematical formula for the power function in the context of the two one-sided tests procedure exists in the statistical literature, although the exact power based on Owen's special case of bivariate noncentral t-distribution has been tabulated and graphed. Several approximations have previously been published for the probability of rejection in the two one-sided tests procedure for crossover bioequivalence studies. These approximations and associated sample size formulas are reviewed in this article and compared for various parameter combinations with exact power formulas derived here, which are computed analytically as univariate integrals and which have been validated by Monte Carlo simulations. The exact formulas for power and sample size are shown to improve markedly in realistic parameter settings over the previous approximations.
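The exact power the abstract describes comes from a bivariate noncentral t-distribution; a hedged Monte Carlo stand-in is easy to write and useful for sanity-checking any formula. The sketch below treats the within-subject variance as known (z-tests instead of t-tests) and uses a common 2x2-crossover standard-error approximation on the log scale; both simplifications are ours.

```python
import random
from math import sqrt, log

def tost_power_mc(n, cv, gmr, lo=0.80, hi=1.25, alpha=0.05,
                  nsim=100_000, seed=1):
    """Monte Carlo power of the two one-sided tests (TOST) procedure for a
    2x2 crossover analysed on the log scale, with n total subjects and
    within-subject coefficient of variation cv. Variance is treated as
    known for simplicity; the paper's exact formulas instead involve the
    bivariate noncentral t-distribution."""
    rng = random.Random(seed)
    sigma_w = sqrt(log(1.0 + cv * cv))   # log-scale within-subject SD
    se = sigma_w * sqrt(2.0 / n)         # SE of the estimated log GMR
    z_a = 1.6448536269514722             # one-sided 5% critical value
    d, th_lo, th_hi = log(gmr), log(lo), log(hi)
    hits = 0
    for _ in range(nsim):
        dhat = rng.gauss(d, se)
        # TOST: reject both one-sided nulls
        if (dhat - th_lo) / se > z_a and (dhat - th_hi) / se < -z_a:
            hits += 1
    return hits / nsim
```

With 24 subjects, a 25% CV and a true geometric mean ratio of 1.0, this returns a power of roughly 0.86 under the stated simplifications.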
The impact of metrology study sample size on uncertainty in IAEA safeguards calculations
Directory of Open Access Journals (Sweden)
Burr Tom
2016-01-01
Quantitative conclusions by the International Atomic Energy Agency (IAEA) regarding States' nuclear material inventories and flows are provided in the form of material balance evaluations (MBEs). MBEs use facility estimates of the material unaccounted for together with verification data to monitor for possible nuclear material diversion. Verification data consist of paired measurements (usually operators' declarations and inspectors' verification results) that are analysed one item at a time to detect significant differences. Also, to check for patterns, an overall difference of the operator-inspector values, using a "D" (difference) statistic, is used. The estimated detection probability (DP) and false alarm probability (FAP) depend on the assumed measurement error model and its random and systematic error variances, which are estimated using data from previous inspections (used in metrology studies to characterize measurement error variance components). Therefore, the sample sizes in both the previous and current inspections will impact the estimated DP and FAP, as is illustrated by simulated numerical examples. The examples include application of a new expression for the variance of the D statistic assuming a multiplicative measurement error model, and a new application of both random and systematic error variances in one-item-at-a-time testing.
45 CFR Appendix C to Part 1356 - Calculating Sample Size for NYTD Follow-Up Populations
2010-10-01
1. Using Finite Population Correction: The Finite Population Correction (FPC) is applied when the sample is drawn from a population of one to 5,000 youth, because the sample is more…
Reich, Nicholas G; Myers, Jessica A; Obeng, Daniel; Milstone, Aaron M; Perl, Trish M
2012-01-01
In recent years, the number of studies using a cluster-randomized design has grown dramatically. In addition, the cluster-randomized crossover design has been touted as a methodological advance that can increase the efficiency of cluster-randomized studies in certain situations. While the cluster-randomized crossover trial has become a popular tool, standards of design, analysis, reporting and implementation have not been established for this emergent design. We address one particular aspect of cluster-randomized and cluster-randomized crossover trial design: estimating statistical power. We present a general framework for estimating power via simulation in cluster-randomized studies with or without one or more crossover periods. We have implemented this framework in the clusterPower software package for R, freely available online from the Comprehensive R Archive Network. Our simulation framework is easy to implement and users may customize the methods used for data analysis. We give four examples of using the software in practice. The clusterPower package could play an important role in the design of future cluster-randomized and cluster-randomized crossover studies. This work is the first to establish a universal method for calculating power for both cluster-randomized and cluster-randomized crossover clinical trials. More research is needed to develop standardized and recommended methodology for cluster-randomized crossover studies.
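The simulation approach implemented in clusterPower (an R package) can be illustrated with a minimal Python stand-in: simulate cluster means with between- and within-cluster variance, analyse with a z-test on cluster means, and count rejections. The cluster-means analysis is a deliberate simplification of the mixed-model analysis such packages perform.

```python
import random
from math import sqrt

def crt_power_sim(n_clusters_per_arm, cluster_size, delta, sd, icc,
                  nsim=2000, alpha=0.05, seed=7):
    """Simulation-based power for a two-arm parallel cluster-randomized
    trial with a normal outcome and intracluster correlation `icc`,
    analysed by a z-test on cluster means (a simple stand-in for a
    mixed-model analysis)."""
    rng = random.Random(seed)
    var_b, var_w = icc * sd * sd, (1.0 - icc) * sd * sd
    sd_cluster_mean = sqrt(var_b + var_w / cluster_size)
    z_a = 1.959963984540054
    hits = 0
    for _ in range(nsim):
        m0 = [rng.gauss(0.0, sd_cluster_mean) for _ in range(n_clusters_per_arm)]
        m1 = [rng.gauss(delta, sd_cluster_mean) for _ in range(n_clusters_per_arm)]
        diff = sum(m1) / len(m1) - sum(m0) / len(m0)
        se = sd_cluster_mean * sqrt(2.0 / n_clusters_per_arm)
        if abs(diff / se) > z_a:
            hits += 1
    return hits / nsim
```

With 10 clusters of 20 per arm, a standardized effect of 0.5 and ICC = 0.05, the simulated power is around 0.95; a crossover variant would add period effects and within-cluster reuse to the same skeleton.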
A discussion of different sample size calculation methods (样本含量估算方法探讨)
Institute of Scientific and Technical Information of China (English)
喻宁芳
2014-01-01
Objective: To introduce and compare different methods of sample size estimation in medical experimental design. Methods: Using an experimental study of the effect of a PI3K inhibitor on airway inflammation in mice as an example, the sample size was calculated by different methods. Results: (1) the formula method required 12 animals; (2) the Simple method in the PASS software required 10; (3) Stata required 8. The power was verified to satisfy 1-β > 0.9. Conclusion: The sample sizes estimated by the three methods are all reasonable and valid. Researchers can use multiple calculation results as a basis, analyse the nature of the study, and weigh the effects of study cost, feasibility and ethical requirements on sample size to determine the most appropriate number of samples.
Directory of Open Access Journals (Sweden)
Finch Stephen J
2005-04-01
Background: Phenotype error causes a reduction in power to detect genetic association. We present a quantification of the effect of phenotype error, also known as diagnostic error, on power and sample size calculations for case-control genetic association studies between a marker locus and a disease phenotype. We consider the classic Pearson chi-square test for independence as our test of genetic association. To determine asymptotic power analytically, we compute the distribution's non-centrality parameter, which is a function of the case and control sample sizes, genotype frequencies, disease prevalence, and phenotype misclassification probabilities. We derive the non-centrality parameter in the presence of phenotype errors, together with equivalent formulas for misclassification cost (the percentage increase in the minimum sample size needed to maintain constant asymptotic power at a fixed significance level for each percentage increase in a given misclassification parameter). We use a linear Taylor series approximation for the cost of phenotype misclassification to determine lower bounds for the relative costs of misclassifying a true affected (respectively, unaffected) individual as a control (respectively, case). Power is verified by computer simulation. Results: Our major findings are that (i) the median absolute difference between analytic power with our method and simulation power was 0.001, and the absolute difference was no larger than 0.011; and (ii) as the disease prevalence approaches 0, the cost of misclassifying an unaffected individual as a case becomes infinitely large, while the cost of misclassifying an affected individual as a control approaches 0. Conclusion: Our work enables researchers to quantify power loss and minimum sample size requirements in the presence of phenotype errors, thereby allowing for more realistic study design. For most diseases of current interest, verifying that cases are correctly classified is of paramount importance.
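The mechanism (misclassification attenuating the noncentrality parameter, and hence power) can be sketched in a simplified 1-df setting. This is our own toy version using a two-proportion allele-frequency test, not the paper's full genotype-based derivation; the mixing of group frequencies by misclassification probabilities is the shared idea.

```python
from math import sqrt, erf

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_1df(ncp, alpha=0.05):
    """Power of a 1-df chi-square test from its noncentrality parameter
    (a 1-df noncentral chi-square is the square of a shifted normal)."""
    z_a = 1.959963984540054
    return norm_cdf(sqrt(ncp) - z_a) + norm_cdf(-sqrt(ncp) - z_a)

def allele_test_ncp(p_case, p_ctrl, n_case, n_ctrl,
                    miss_case=0.0, miss_ctrl=0.0):
    """Approximate noncentrality of the 2x2 allele-frequency chi-square
    test. Phenotype misclassification mixes the two groups' allele
    frequencies: miss_case / miss_ctrl are the probabilities of
    mislabelling each group (our simplified assumption)."""
    q_case = (1.0 - miss_case) * p_case + miss_case * p_ctrl
    q_ctrl = (1.0 - miss_ctrl) * p_ctrl + miss_ctrl * p_case
    pbar = (n_case * q_case + n_ctrl * q_ctrl) / (n_case + n_ctrl)
    return (q_case - q_ctrl) ** 2 / (
        pbar * (1.0 - pbar) * (1.0 / n_case + 1.0 / n_ctrl))
```

Even 10% misclassification of cases visibly shrinks the noncentrality parameter and hence the power, which is why larger samples are needed to compensate.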
Neumann, Christoph; Taub, Margaret A; Younkin, Samuel G; Beaty, Terri H; Ruczinski, Ingo; Schwender, Holger
2014-11-01
Case-parent trio studies considering genotype data from children affected by a disease and their parents are frequently used to detect single nucleotide polymorphisms (SNPs) associated with disease. The most popular statistical tests for this study design are transmission/disequilibrium tests (TDTs). Several types of these tests have been developed, for example, procedures based on alleles or genotypes. Therefore, it is of great interest to examine which of these tests have the highest statistical power to detect SNPs associated with disease. Comparisons of the allelic and the genotypic TDT for individual SNPs have so far been conducted based on simulation studies, since the test statistic of the genotypic TDT was determined numerically. Recently, however, it has been shown that this test statistic can be presented in closed form. In this article, we employ this analytic solution to derive equations for calculating the statistical power and the required sample size for different types of the genotypic TDT. The power of this test is then compared with that of the corresponding score test assuming the same mode of inheritance, as well as with the allelic TDT based on a multiplicative mode of inheritance, which is equivalent to the score test assuming an additive mode of inheritance. This is thus the first time the power of these tests has been compared based on equations, yielding instant results and omitting the need for time-consuming simulation studies. This comparison reveals that these tests have almost the same power, with the score test being slightly more powerful.
Dong, Nianbo; Maynard, Rebecca
2013-01-01
This paper and the accompanying tool are intended to complement existing supports for conducting power analysis by offering a tool based on the framework of Minimum Detectable Effect Size (MDES) formulae that can be used to determine sample size requirements and to estimate minimum detectable effect sizes for a range of individual- and…
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Directory of Open Access Journals (Sweden)
Rakesh R. Pathak
2012-02-01
Based on the law of large numbers, which is derived from probability theory, we tend to increase the sample size to the maximum. The central limit theorem is another inference from the same probability theory, which favours the largest possible sample size for better validity when measuring central tendencies such as the mean and median. Sometimes, however, an increase in sample size yields only negligible improvement, or no gain at all in statistical relevance, because of strong dependence or systematic error. If we can afford a slightly larger sample, with a statistical power of 0.90 taken as acceptable and a medium Cohen's d (< 0.5), a sample size of 175 can be taken very safely, and, allowing for attrition, 200 samples would suffice. [Int J Basic Clin Pharmacol 2012; 1(1): 43-44]
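The figure of roughly 175 can be checked against the standard two-sample formula; at d = 0.5, 90% power and two-sided α = 0.05 it gives 85 per group, i.e. 170 in total, close to the editorial's 175 (and its 200 after allowing for attrition). This check is ours.

```python
from math import ceil

def n_per_group(d, z_alpha=1.959963984540054, z_beta=1.2815515655446004):
    """Per-group n for a two-sample comparison at standardized effect size
    d (Cohen's d), two-sided alpha = 0.05, power = 0.90:
    n = 2 * (z_alpha + z_beta)^2 / d^2, rounded up."""
    return ceil(2.0 * (z_alpha + z_beta) ** 2 / d ** 2)

total = 2 * n_per_group(0.5)  # total across both groups
```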
Calculating Optimal Inventory Size
Directory of Open Access Journals (Sweden)
Ruby Perez
2010-01-01
The purpose of the project is to find the optimal value for the Economic Order Quantity model and then use a lean-manufacturing Kanban equation to find a numeric value that minimizes the total cost and the inventory size.
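The Economic Order Quantity model mentioned above has a well-known closed form, sketched below; the example parameter values are ours.

```python
from math import sqrt

def eoq(annual_demand, order_cost, holding_cost):
    """Classical Economic Order Quantity: the order size minimizing the
    sum of ordering and holding costs, Q* = sqrt(2 * D * S / H), where
    D is annual demand, S the cost per order and H the per-unit annual
    holding cost."""
    return sqrt(2.0 * annual_demand * order_cost / holding_cost)
```

For example, 1,200 units of annual demand, a $100 ordering cost and a $6 per-unit holding cost give an optimal order size of 200 units.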
Basic Statistical Concepts for Sample Size Estimation
Directory of Open Access Journals (Sweden)
Vithal K Dhulkhed
2008-01-01
For grant proposals, the investigator has to include an estimation of sample size. The sample should be large enough that there is sufficient data to reliably answer the research question being addressed by the study. At the very planning stage of the study, the investigator has to involve the statistician, and to have a meaningful dialogue with the statistician every research worker should be familiar with the basic concepts of statistics. This paper is concerned with simple principles of sample size calculation. Concepts are explained based on logic rather than rigorous mathematical calculations, to help the reader assimilate the fundamentals.
How sample size influences research outcomes
Directory of Open Access Journals (Sweden)
Jorge Faber
2014-08-01
Sample size calculation is part of the early stages of conducting an epidemiological, clinical or lab study. In preparing a scientific paper, there are ethical and methodological indications for its use. Two investigations conducted with the same methodology and achieving equivalent results, but differing only in sample size, may point the researcher in different directions when it comes to making clinical decisions. Therefore, ideally, samples should not be small and, contrary to what one might think, should not be excessive either. The aim of this paper is to discuss, in clinical language, the main implications of sample size when interpreting a study.
Size definitions for particle sampling
Energy Technology Data Exchange (ETDEWEB)
1981-05-01
The recommendations of an ad hoc working group appointed by Committee TC 146 of the International Standards Organization on size definitions for particle sampling are reported. The task of the group was to collect the various definitions of 'respirable dust' and to propose a practical definition on recommendations for handling standardization on this matter. One of two proposed cut-sizes in regard to division at the larynx will be adopted after a ballot.
Biostatistics Series Module 5: Determining Sample Size.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Determining the appropriate sample size for a study, whatever its type, is a fundamental aspect of biomedical research. An adequate sample ensures that the study will yield reliable information, regardless of whether the data ultimately suggest a clinically important difference between the interventions or elements being studied. The probabilities of Type 1 and Type 2 errors, the expected variance in the sample and the effect size are the essential determinants of sample size in interventional studies. Any method for deriving a conclusion from experimental data carries with it some risk of drawing a false conclusion. Two types of false conclusion may occur, called Type 1 and Type 2 errors, whose probabilities are denoted by the symbols α and β. A Type 1 error occurs when one concludes that a difference exists between the groups being compared when, in reality, it does not. This is akin to a false positive result. A Type 2 error occurs when one concludes that a difference does not exist when, in reality, a difference does exist and is equal to or larger than the effect size defined by the alternative to the null hypothesis. This may be viewed as a false negative result. When considering the risk of Type 2 error, it is more intuitive to think in terms of the power of the study, or (1 - β). Power denotes the probability of detecting a difference when a difference does exist between the groups being compared. A smaller α or larger power will increase the sample size. Conventional acceptable values for power and α are 80% or above and 5% or below, respectively, when calculating sample size. Increasing variance in the sample tends to increase the sample size required to achieve a given power level. The effect size is the smallest clinically important difference that is sought to be detected and, rather than a matter of statistical convention, is a matter of past experience and clinical judgment. Larger samples are required if smaller differences are to be detected. ...
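The determinants listed in this module (α, power, expected variance, and effect size) combine in the standard normal-approximation formula for two independent groups, n per group = 2((z₁₋α/₂ + z₁₋β)·σ/δ)². A minimal sketch (the function name and example numbers are illustrative, not from the module itself):

```python
from statistics import NormalDist
from math import ceil

def n_per_group(delta, sd, alpha=0.05, power=0.80):
    """Per-group n for a two-sided comparison of two means (normal approximation).
    delta is the smallest clinically important difference; sd the common SD."""
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) * sd / delta) ** 2)

# Detecting a 5-unit difference with sd = 10 at alpha = 0.05 and 80% power:
print(n_per_group(5, 10))  # 63 per group
```

As the abstract notes, halving the detectable difference quadruples the required n: `n_per_group(2.5, 10)` gives roughly four times as many participants per group.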
[Clinical research V. Sample size].
Talavera, Juan O; Rivas-Ruiz, Rodolfo; Bernal-Rosales, Laura Paola
2011-01-01
In clinical research it is impossible and inefficient to study all patients with a specific pathology, so it is necessary to study a sample of them. Estimating the sample size before starting a study guarantees the stability of the results and allows us to foresee the feasibility of the study depending on the availability of patients and cost. The basic structure of sample size estimation rests on the premise of demonstrating, among other things, that the observed difference between two or more maneuvers in the subsequent state is real. Initially, it requires knowing the value of the expected difference (δ) and its variation (standard deviation). These data are usually obtained from previous studies. Then, other components must be considered: α (alpha), the accepted probability of wrongly asserting that the difference between means is real, usually 5%; and β, the accepted probability of wrongly asserting that the absence of a difference between the means is real, usually ranging from 15 to 20%. Finally, these values are substituted into the formula, or into an electronic program, for estimating sample size. While summary and dispersion measures vary with the type of outcome variable, the basic structure is the same.
Institute of Scientific and Technical Information of China (English)
赵健; 龚婷婷; 范肖肖; 姚科; 朱彩蓉
2013-01-01
OBJECTIVE To explore solutions for sample size calculation in research design when the required information is incomplete. METHODS Practical examples were used to illustrate different situations with uncertain parameters, and the sample size calculations were carried out with the PASS 11 software. RESULTS Setting a reasonable range of values for uncertain parameters solved the problem of incomplete information in sample size calculation; Hsieh's method was used to deal with missing covariate information in sample size calculation for logistic regression; and the Lakatos method was used to deal with the unknown distribution of survival times in sample size calculation for survival analysis. CONCLUSION At the design stage of scientific research, flexible use of PASS 11 can solve some of the problems of sample size estimation with incomplete information, but other problems remain to be studied in depth.
Sample size for morphological traits of pigeonpea
Directory of Open Access Journals (Sweden)
Giovani Facco
2015-12-01
Full Text Available The objectives of this study were to determine the sample size (i.e., the number of plants) required to accurately estimate the average of morphological traits of pigeonpea (Cajanus cajan L.) and to check for variability in sample size between evaluation periods and seasons. Two uniformity trials (i.e., experiments without treatments) were conducted over two growing seasons. In the first season (2011/2012), the seeds were sown by broadcast seeding, and in the second season (2012/2013), the seeds were sown in rows spaced 0.50 m apart. The ground area in each experiment was 1,848 m2, and 360 plants were marked in the central area, in a 2 m × 2 m grid. Three morphological traits (number of nodes, plant height and stem diameter) were evaluated 13 times during the first season and 22 times in the second season. Measurements for all three morphological traits were normally distributed, as confirmed by the Kolmogorov-Smirnov test. Randomness was confirmed using the run test, and descriptive statistics were calculated. For each trait, the sample size (n) was calculated for semiamplitudes of the confidence interval (i.e., estimation errors) equal to 2, 4, 6, ..., 20% of the estimated mean, with a confidence coefficient (1-α) of 95%. Subsequently, n was fixed at 360 plants, and the estimation error of the estimated percentage of the average for each trait was calculated. Variability of the sample size for the pigeonpea culture was observed between the morphological traits evaluated, among the evaluation periods and between seasons. Therefore, to estimate with an accuracy of 6% of the estimated average, at least 136 plants must be evaluated throughout the pigeonpea crop cycle for the traits evaluated (number of nodes, plant height and stem diameter) across the different evaluation periods and seasons.
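For estimation studies of this kind, the required n follows from n = (z·CV/e)², where CV is the coefficient of variation and e the semiamplitude of the confidence interval as a percentage of the mean. A hedged sketch (the CV value below is hypothetical and not taken from the pigeonpea data):

```python
from statistics import NormalDist
from math import ceil

def n_for_mean(cv_percent, error_percent, conf=0.95):
    """Plants needed so the CI semiamplitude equals error_percent of the mean.
    cv_percent is the coefficient of variation (sd as a percentage of the mean)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil((z * cv_percent / error_percent) ** 2)

# A trait with an assumed CV of 36%, estimated to within 6% of the mean:
print(n_for_mean(36, 6))  # 139 plants
```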
Institute of Scientific and Technical Information of China (English)
林洁; 孙志明
2015-01-01
Objective To analyze the differences among SAS, PASS and Stata in sample size calculation for tests of two means (rates), and to recommend appropriate software for sample size calculation. Methods By setting different parameters, sample sizes were calculated using the three kinds of software and compared with the formula results. Results In the two-sample means test, Stata and PASS gave the most accurate results, while the SAS results were affected by the parameter settings. In the two-sample rates test, SAS gave the best results of the three, the accuracy of PASS depended on the sample size, and the Stata results were larger than the others and affected by the parameter settings. Conclusion The results from different software are not consistent; on balance, SAS is recommended for sample size calculation when comparing two sample means (rates).
Heckmann, Tobias; Gegg, Katharina; Becht, Michael
2013-04-01
Statistical approaches to landslide susceptibility modelling on the catchment and regional scale are used much more frequently than heuristic and physically based approaches. In the present study, we deal with the problem of the optimal sample size for a logistic regression model. More specifically, a stepwise approach was chosen in order to select those independent variables (from a number of derivatives of a digital elevation model and landcover data) that best explain the spatial distribution of debris flow initiation zones in two neighbouring central alpine catchments in Austria (used mutually for model calculation and validation). In order to minimise problems arising from spatial autocorrelation, we sample a single raster cell from each debris flow initiation zone within an inventory. In addition, as suggested by previous work using the "rare events logistic regression" approach, we take a sample of the remaining "non-event" raster cells. The recommendations given in the literature on the size of this sample appear to be motivated by practical considerations, e.g. the time and cost of acquiring data for non-event cases, which do not apply to the case of spatial data. In our study, we aim at finding empirically an "optimal" sample size in order to avoid two problems. First, too large a sample will violate the independent-sample assumption because the independent variables are spatially autocorrelated; hence, a variogram analysis yields a sample size threshold above which the average distance between sampled cells falls below the autocorrelation range of the independent variables. Second, if the sample is too small, repeated sampling will lead to very different results, i.e. the selected independent variables and hence the result of a single model calculation will be extremely dependent on the choice of non-event cells. Using a Monte-Carlo analysis with stepwise logistic regression, 1000 models are calculated for a wide range of sample sizes. For each sample size...
Sample size determination in clinical trials with multiple endpoints
Sozu, Takashi; Hamasaki, Toshimitsu; Evans, Scott R
2015-01-01
This book integrates recent methodological developments for calculating the sample size and power in trials with more than one endpoint considered as multiple primary or co-primary, offering an important reference work for statisticians working in this area. The determination of sample size and the evaluation of power are fundamental and critical elements in the design of clinical trials. If the sample size is too small, important effects may go unnoticed; if the sample size is too large, it represents a waste of resources and unethically puts more participants at risk than necessary. Recently many clinical trials have been designed with more than one endpoint considered as multiple primary or co-primary, creating a need for new approaches to the design and analysis of these clinical trials. The book focuses on the evaluation of power and sample size determination when comparing the effects of two interventions in superiority clinical trials with multiple endpoints. Methods for sample size calculation in clin...
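As a rough illustration of the co-primary-endpoint problem (a simple conservative device assuming independent endpoints, not the method developed in the book): if all K endpoints must be significant and the target joint power is 1-β, sizing each endpoint for power (1-β)^(1/K) and taking the largest resulting n guarantees at least the target joint power under independence.

```python
from statistics import NormalDist
from math import ceil

def n_coprimary(deltas, sds, alpha=0.05, power=0.80):
    """Per-group n so that ALL K co-primary endpoints reach significance.
    Conservative: assumes independent endpoints, sizes each for power**(1/K)."""
    z = NormalDist().inv_cdf
    per_endpoint_power = power ** (1 / len(deltas))
    return max(ceil(2 * ((z(1 - alpha / 2) + z(per_endpoint_power)) * sd / d) ** 2)
               for d, sd in zip(deltas, sds))

# Two co-primary endpoints with differences 5 and 4 (common sd 10):
print(n_coprimary([5, 4], [10, 10]))  # 129 per group
```

Correlated endpoints, the realistic case the book addresses, need less than this conservative bound.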
Sample size in qualitative interview studies
DEFF Research Database (Denmark)
Malterud, Kirsti; Siersma, Volkert Dirk; Guassora, Ann Dorrit Kristiane
2016-01-01
Sample sizes must be ascertained in qualitative studies, as in quantitative studies, but not by the same means. The prevailing concept for sample size in qualitative studies is “saturation.” Saturation is closely tied to a specific methodology, and the term is inconsistently applied. We propose...... the concept “information power” to guide adequate sample size for qualitative studies. Information power indicates that the more information the sample holds, relevant for the actual study, the fewer participants are needed. We suggest that the size of a sample with sufficient information power...... and during data collection of a qualitative study is discussed....
Cutoff sample size estimation for survival data: a simulation study
2014-01-01
This thesis demonstrates the possible cutoff sample size point that balances goodness of estimation and study expenditure using a practical cancer case. As it is crucial to determine the sample size when designing an experiment, researchers attempt to find a sample size that achieves the desired power and budget efficiency at the same time. The thesis shows how simulation can be used for sample size and precision calculations with survival data. The presentation concentrates on the simula...
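The simulation idea can be illustrated in a much simpler setting than survival data: estimate power by repeatedly generating data and counting rejections. A toy sketch with a two-sample z-test on normal outcomes (all numbers are illustrative, not from the thesis):

```python
import math
import random
from statistics import NormalDist

def simulated_power(n, delta, sd, alpha=0.05, reps=2000, seed=1):
    """Monte-Carlo power of a two-sided two-sample z-test with known sd.
    A toy stand-in for the survival-data simulations described above."""
    random.seed(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        a = [random.gauss(0, sd) for _ in range(n)]
        b = [random.gauss(delta, sd) for _ in range(n)]
        z = (sum(b) / n - sum(a) / n) / (sd * math.sqrt(2 / n))
        hits += abs(z) > z_crit
    return hits / reps

# Power at n = 63 per group for delta = 5, sd = 10 (theory gives about 0.80):
print(simulated_power(63, 5, 10))
```

Scanning n and locating where the simulated power curve flattens is exactly the "cutoff" search the thesis describes, with survival models in place of the normal draws.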
Institute of Scientific and Technical Information of China (English)
李印; 张美晶; 沈朝建; 孙向东; 康京丽; 黄保续; 王幼明
2016-01-01
In order to clarify how to decide the sample size and interpret the test results when conducting antibody surveillance, and thereby provide a reference for practitioners, the relevant epidemiological principles are introduced, and the debates about sample size and result interpretation are systematically analyzed and demonstrated. Through the necessary calculations, different ways of evaluating the immune qualification rate for livestock and poultry farms of different scales are put forward. The results indicate that deficiencies still exist in post-vaccination antibody surveillance in China, and evidence is given as to why the "70% coverage rate" should not be treated as a fixed criterion. This discussion and calculation offer a reference for further improving post-vaccination antibody surveillance in China.
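One standard surveillance calculation that recurs in such debates is the number of animals to sample in order to detect, with a given confidence, at least one animal below the protective antibody threshold. A sketch assuming a large herd and a perfect test (a textbook formula, not necessarily the authors' method):

```python
from math import ceil, log

def n_to_detect(design_prev, conf=0.95):
    """Animals to sample to find at least one 'positive' (e.g. unprotected)
    animal with the given confidence, assuming a large herd and perfect test."""
    return ceil(log(1 - conf) / log(1 - design_prev))

# If at least 10% of a large herd were unprotected, sampling 29 animals
# would detect that with 95% confidence:
print(n_to_detect(0.10))  # 29
```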
[Effect sizes, statistical power and sample sizes in "the Japanese Journal of Psychology"].
Suzukawa, Yumi; Toyoda, Hideki
2012-04-01
This study analyzed the statistical power of research studies published in the "Japanese Journal of Psychology" in 2008 and 2009. Sample effect sizes and sample statistical powers were calculated for each statistical test and analyzed with respect to the analytical methods and the fields of the studies. The results show that in fields such as perception, cognition or learning, the effect sizes were relatively large, although the sample sizes were small. At the same time, because of the small sample sizes, some meaningful effects could not be detected. In the other fields, because of the large sample sizes, meaningless effects could be detected. This implies that researchers who cannot obtain large enough effect sizes tend to use larger samples to obtain significant results.
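Sample statistical powers in surveys like this one are typically obtained by plugging the observed effect size into the power function. For a two-sample comparison with Cohen's d, a normal-approximation sketch (illustrative, not the authors' exact procedure):

```python
from statistics import NormalDist

def posthoc_power(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample test, given Cohen's d
    and the per-group sample size (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)
    return nd.cdf(abs(d) * (n_per_group / 2) ** 0.5 - z_crit)

# A medium effect (d = 0.5) with 64 participants per group:
print(round(posthoc_power(0.5, 64), 3))  # ~0.807
```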
Experimental determination of size distributions: analyzing proper sample sizes
Buffo, A.; Alopaeus, V.
2016-04-01
The measurement of various particle size distributions is a crucial aspect for many applications in the process industry. Size distribution is often related to the final product quality, as in crystallization or polymerization. In other cases it is related to the correct evaluation of heat and mass transfer, as well as reaction rates, depending on the interfacial area between the different phases or to the assessment of yield stresses of polycrystalline metals/alloys samples. The experimental determination of such distributions often involves laborious sampling procedures and the statistical significance of the outcome is rarely investigated. In this work, we propose a novel rigorous tool, based on inferential statistics, to determine the number of samples needed to obtain reliable measurements of size distribution, according to specific requirements defined a priori. Such methodology can be adopted regardless of the measurement technique used.
Sample size in orthodontic randomized controlled trials: are numbers justified?
Koletsi, Despina; Pandis, Nikolaos; Fleming, Padhraig S
2014-02-01
Sample size calculations are advocated by the Consolidated Standards of Reporting Trials (CONSORT) group to justify sample sizes in randomized controlled trials (RCTs). This study aimed to analyse the reporting of sample size calculations in trials published as RCTs in orthodontic speciality journals. The performance of sample size calculations was assessed and calculations were verified where possible. Related aspects, including number of authors; parallel, split-mouth, or other design; single- or multi-centre study; region of publication; type of data analysis (intention-to-treat or per-protocol basis); and number of participants recruited and lost to follow-up, were considered. Of 139 RCTs identified, complete sample size calculations were reported in 41 studies (29.5 per cent). Parallel designs were typically adopted (n = 113; 81 per cent), with 80 per cent (n = 111) involving two arms and 16 per cent having three arms. Data analysis was conducted on an intention-to-treat (ITT) basis in a small minority of studies (n = 18; 13 per cent). According to the calculations presented, overall, a median of 46 participants were required to demonstrate sufficient power to highlight meaningful differences (typically at a power of 80 per cent). The median number of participants recruited was 60, with a median of 4 participants lost to follow-up. Our findings indicate good agreement between projected numbers required and those verified (median discrepancy: 5.3 per cent), although only a minority of trials (29.5 per cent) could be examined. Although sample size calculations are often reported in trials published as RCTs in orthodontic speciality journals, their presentation is suboptimal and in need of significant improvement.
Heidel, R. Eric
2016-01-01
Statistical power is the ability to detect a significant effect, given that the effect actually exists in a population. Like most statistical concepts, statistical power tends to induce cognitive dissonance in hepatology researchers. However, planning for statistical power by an a priori sample size calculation is of paramount importance when designing a research study. There are five specific empirical components that make up an a priori sample size calculation: the scale of measurement of the outcome, the research design, the magnitude of the effect size, the variance of the effect size, and the sample size. A framework grounded in the phenomenon of isomorphism, or interdependencies amongst different constructs with similar forms, will be presented to understand the isomorphic effects of decisions made on each of the five aforementioned components of statistical power. PMID:27073717
Sample size of the reference sample in a case-augmented study.
Ghosh, Palash; Dewanji, Anup
2017-03-13
The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariate information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, a hospital database or case registry); however, the reference sample must be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.
Sample size considerations for clinical research studies in nuclear cardiology.
Chiuzan, Cody; West, Erin A; Duong, Jimmy; Cheung, Ken Y K; Einstein, Andrew J
2015-12-01
Sample size calculation is an important element of research design that investigators need to consider in the planning stage of the study. Funding agencies and research review panels request a power analysis, for example, to determine the minimum number of subjects needed for an experiment to be informative. Calculating the right sample size is crucial to gaining accurate information and ensures that research resources are used efficiently and ethically. The simple question "How many subjects do I need?" does not always have a simple answer. Before calculating the sample size requirements, a researcher must address several aspects, such as purpose of the research (descriptive or comparative), type of samples (one or more groups), and data being collected (continuous or categorical). In this article, we describe some of the most frequent methods for calculating the sample size with examples from nuclear cardiology research, including for t tests, analysis of variance (ANOVA), non-parametric tests, correlation, Chi-squared tests, and survival analysis. For the ease of implementation, several examples are also illustrated via user-friendly free statistical software.
Predicting sample size required for classification performance
Directory of Open Access Journals (Sweden)
Figueroa Rosa L
2012-02-01
Full Text Available Abstract Background Supervised learning methods need annotated data in order to generate efficient models. Annotated data, however, is a relatively scarce resource and can be expensive to obtain. For both passive and active learning methods, there is a need to estimate the size of the annotated sample required to reach a performance target. Methods We designed and implemented a method that fits an inverse power law model to points of a given learning curve created using a small annotated training set. Fitting is carried out using nonlinear weighted least squares optimization. The fitted model is then used to predict the classifier's performance and confidence interval for larger sample sizes. For evaluation, the nonlinear weighted curve fitting method was applied to a set of learning curves generated using clinical text and waveform classification tasks with active and passive sampling methods, and predictions were validated using standard goodness-of-fit measures. As a control we used an un-weighted fitting method. Results A total of 568 models were fitted and the model predictions were compared with the observed performances. Depending on the data set and sampling method, it took between 80 and 560 annotated samples to achieve mean average and root mean squared error below 0.01. Results also show that our weighted fitting method significantly outperformed the baseline un-weighted method. Conclusions This paper describes a simple and effective sample size prediction algorithm that conducts weighted fitting of learning curves. The algorithm outperformed an un-weighted algorithm described in previous literature. It can help researchers determine annotation sample size for supervised machine learning.
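An inverse power law learning curve has the form acc(n) = a - b·n^(-c). If the plateau a is fixed, the remaining parameters can even be fitted by ordinary least squares on a log scale; the paper's weighted nonlinear fit is more general, but this hedged sketch (with made-up curve points and an assumed plateau) shows the shape of the idea:

```python
import math

# Hypothetical learning-curve points (training-set size, accuracy) and an
# assumed plateau; none of these numbers come from the paper.
points = [(50, 0.70), (100, 0.76), (200, 0.81), (400, 0.85)]
a = 0.92  # assumed asymptotic accuracy

# Inverse power law: acc(n) = a - b*n**(-c). With a fixed,
# log(a - acc) = log(b) - c*log(n), so a straight-line fit suffices.
xs = [math.log(n) for n, _ in points]
ys = [math.log(a - acc) for _, acc in points]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
c = -sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = math.exp(my + c * mx)

def predict(n):
    """Predicted classifier accuracy at annotated-sample size n."""
    return a - b * n ** (-c)

print(round(predict(1000), 3))  # extrapolated accuracy at 1000 samples
```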
Sample size estimation and sampling techniques for selecting a representative sample
Directory of Open Access Journals (Sweden)
Aamir Omair
2014-01-01
Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider in estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The greater the precision required, the larger the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can then be generalized to the target population.
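For the categorical case described above, the usual formula is n = z²p(1-p)/d², with p the expected proportion and d the margin of accuracy. A minimal sketch (the example values are illustrative):

```python
from statistics import NormalDist
from math import ceil

def n_proportion(p, margin, conf=0.95):
    """Sample size to estimate a population proportion p to within +/- margin."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * p * (1 - p) / margin ** 2)

# Expected proportion 30%, precision +/- 5 percentage points, 95% confidence:
print(n_proportion(0.30, 0.05))  # 323
```

Halving the margin to 2.5 points roughly quadruples n, which is the precision trade-off the article describes.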
Hand calculations for transport of radioactive aerosols through sampling systems.
Hogue, Mark; Thompson, Martha; Farfan, Eduardo; Hadlock, Dennis
2014-05-01
Workplace air monitoring programs for sampling radioactive aerosols in nuclear facilities sometimes must rely on sampling systems to move the air to a sample filter in a safe and convenient location. These systems may consist of probes, straight tubing, bends, contractions and other components. Evaluation of these systems for potential loss of radioactive aerosols is important because significant losses can occur. However, it can be very difficult to find fully described equations to model a system manually for a single particle size and even more difficult to evaluate total system efficiency for a polydispersed particle distribution. Some software methods are available, but they may not be directly applicable to the components being evaluated and they may not be completely documented or validated per current software quality assurance requirements. This paper offers a method to model radioactive aerosol transport in sampling systems that is transparent and easily updated with the most applicable models. Calculations are shown with the R Programming Language, but the method is adaptable to other scripting languages. The method has the advantage of transparency and easy verifiability. This paper shows how a set of equations from published aerosol science models may be applied to aspiration and transport efficiency of aerosols in common air sampling system components. An example application using R calculation scripts is demonstrated. The R scripts are provided as electronic attachments.
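The paper's scripts are written in R; as a language-neutral illustration of the kind of published aerosol-science relation such a model builds on, here is the Stokes terminal settling velocity with Cunningham slip correction, one standard ingredient of transport-loss calculations (the constants are assumed values for air at room conditions, and this is not code from the paper):

```python
import math

def settling_velocity(d_um, rho_p=1000.0, mu=1.81e-5, mfp=0.066):
    """Stokes terminal settling velocity (m/s) of a sphere in air, with the
    Cunningham slip correction. d_um and mfp in micrometres; rho_p in kg/m3.
    Default constants are assumed values for air at room conditions."""
    d = d_um * 1e-6
    kn = 2 * (mfp * 1e-6) / d  # Knudsen number
    cc = 1 + kn * (1.257 + 0.4 * math.exp(-1.1 / kn))  # slip correction factor
    return rho_p * d ** 2 * 9.81 * cc / (18 * mu)

# A 10-micrometre unit-density particle settles at roughly 3 mm/s, which is
# why gravitational losses matter in long horizontal sampling lines:
print(settling_velocity(10))
```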
Estimation of individual reference intervals in small sample sizes
DEFF Research Database (Denmark)
Hansen, Ase Marie; Garde, Anne Helene; Eller, Nanna Hurwitz
2007-01-01
of that order of magnitude for all topics in question. Therefore, new methods to estimate reference intervals for small sample sizes are needed. We present an alternative method based on variance component models. The models are based on data from 37 men and 84 women taking into account biological variation...... presented in this study. The presented method enables occupational health researchers to calculate reference intervals for specific groups, i.e. smokers versus non-smokers, etc. In conclusion, the variance component models provide an appropriate tool to estimate reference intervals based on small sample...
Effect size estimates: current use, calculations, and interpretation.
Fritz, Catherine O; Morris, Peter E; Richler, Jennifer J
2012-02-01
The Publication Manual of the American Psychological Association (American Psychological Association, 2001, American Psychological Association, 2010) calls for the reporting of effect sizes and their confidence intervals. Estimates of effect size are useful for determining the practical or theoretical importance of an effect, the relative contributions of factors, and the power of an analysis. We surveyed articles published in 2009 and 2010 in the Journal of Experimental Psychology: General, noting the statistical analyses reported and the associated reporting of effect size estimates. Effect sizes were reported for fewer than half of the analyses; no article reported a confidence interval for an effect size. The most often reported analysis was analysis of variance, and almost half of these reports were not accompanied by effect sizes. Partial η2 was the most commonly reported effect size estimate for analysis of variance. For t tests, 2/3 of the articles did not report an associated effect size estimate; Cohen's d was the most often reported. We provide a straightforward guide to understanding, selecting, calculating, and interpreting effect sizes for many types of data and to methods for calculating effect size confidence intervals and power analysis.
Calculation of size for bound-state constituents
Glazek, Stanislaw D
2014-01-01
Elements are given of a calculation that identifies the size of a proton in the Schroedinger equation for lepton-proton bound states, using the renormalization group procedure for effective particles (RGPEP) in quantum field theory, executed only up to the second order of expansion in powers of the coupling constant. Already in this crude approximation, the extraction of size of a proton from bound-state observables is found to depend on the lepton mass, so that the smaller the lepton mass the larger the proton size extracted from the same observable bound-state energy splitting. In comparison of Hydrogen and muon-proton bound-state dynamics, the crude calculation suggests that the difference between extracted proton sizes in these two cases can be a few percent. Such values would match the order of magnitude of currently discussed proton-size differences in leptonic atoms. Calculations using the RGPEP of higher order than second are required for a precise interpretation of the energy splittings in terms of t...
Directory of Open Access Journals (Sweden)
A C Bouman
Full Text Available Non-inferiority trials are performed when the main therapeutic effect of the new therapy is expected to be not unacceptably worse than that of the standard therapy, and the new therapy is expected to have advantages over the standard therapy in costs or other (health) consequences. These advantages, however, are not included in the classic frequentist approach to sample size calculation for non-inferiority trials. In contrast, the decision theory approach to sample size calculation does include these factors. The objective of this study is to compare the conceptual and practical aspects of the frequentist approach and the decision theory approach to sample size calculation for non-inferiority trials, thereby demonstrating that the decision theory approach is more appropriate for sample size calculation of non-inferiority trials. The two approaches are compared and applied to a case of a non-inferiority trial on individually tailored duration of elastic compression stocking therapy, compared to two years of elastic compression stocking therapy, for the prevention of post-thrombotic syndrome after deep vein thrombosis. The two approaches differ substantially in conceptual background, analytical approach, and input requirements. The sample size calculated according to the frequentist approach was 788 patients, using a power of 80% and a one-sided significance level of 5%. The decision theory approach indicated that the optimal sample size was 500 patients, with a net value of €92 million. This study demonstrates and explains the differences between the classic frequentist approach and the decision theory approach to sample size calculation for non-inferiority trials. We argue that the decision theory approach is most suitable for sample size calculation of non-inferiority trials.
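The classic frequentist calculation that the decision-theoretic approach is being compared against can be sketched for two proportions as follows (the input values are illustrative; the trial's actual inputs are not given in the abstract):

```python
from statistics import NormalDist
from math import ceil

def n_noninferiority(p_std, p_new, margin, alpha=0.05, power=0.80):
    """Per-group sample size for a non-inferiority comparison of two
    proportions (normal approximation, one-sided alpha, margin > 0)."""
    z = NormalDist().inv_cdf
    variance = p_std * (1 - p_std) + p_new * (1 - p_new)
    effect = margin - (p_std - p_new)  # distance from true difference to margin
    return ceil((z(1 - alpha) + z(power)) ** 2 * variance / effect ** 2)

# Equal true event rates of 25% and a 10-point non-inferiority margin:
print(n_noninferiority(0.25, 0.25, 0.10))  # 232 per group
```

Note the one-sided z quantile, matching the one-sided 5% significance level used in the case study.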
IBAR: Interacting boson model calculations for large system sizes
Casperson, R. J.
2012-04-01
Scaling the system size of the interacting boson model-1 (IBM-1) into the realm of hundreds of bosons has many interesting applications in the field of nuclear structure, most notably quantum phase transitions in nuclei. We introduce IBAR, a new software package for calculating the eigenvalues and eigenvectors of the IBM-1 Hamiltonian, for large numbers of bosons. Energies and wavefunctions of the nuclear states, as well as transition strengths between them, are calculated using these values. Numerical errors in the recursive calculation of reduced matrix elements of the d-boson creation operator are reduced by using an arbitrary precision mathematical library. This software has been tested for up to 1000 bosons using comparisons to analytic expressions. Comparisons have also been made to the code PHINT for smaller system sizes. Catalogue identifier: AELI_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AELI_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 28 734 No. of bytes in distributed program, including test data, etc.: 4 104 467 Distribution format: tar.gz Programming language: C++ Computer: Any computer system with a C++ compiler Operating system: Tested under Linux RAM: 150 MB for 1000 boson calculations with angular momenta of up to L=4 Classification: 17.18, 17.20 External routines: ARPACK (http://www.caam.rice.edu/software/ARPACK/) Nature of problem: Construction and diagonalization of large Hamiltonian matrices, using reduced matrix elements of the d-boson creation operator. Solution method: Reduced matrix elements of the d-boson creation operator have been stored in data files at machine precision, after being recursively calculated with higher than machine precision. The Hamiltonian matrix is calculated and diagonalized, and the requested transition strengths are calculated
Are sample sizes clear and justified in RCTs published in dental journals?
Koletsi, Despina; Fleming, Padhraig S; Seehra, Jadbinder; Bagos, Pantelis G; Pandis, Nikolaos
2014-01-01
Sample size calculations are advocated by the CONSORT group to justify sample sizes in randomized controlled trials (RCTs). The aim of this study was primarily to evaluate the reporting of sample size calculations, to establish the accuracy of these calculations in dental RCTs and to explore potential predictors associated with adequate reporting. Electronic searching was undertaken in eight leading specific and general dental journals. Replication of sample size calculations was undertaken where possible. Assumed variances or odds for control and intervention groups were also compared against those observed. The relationship between parameters including journal type, number of authors, trial design, involvement of methodologist, single-/multi-center study and region and year of publication, and the accuracy of sample size reporting was assessed using univariable and multivariable logistic regression. Of 413 RCTs identified, sufficient information to allow replication of sample size calculations was provided in only 121 studies (29.3%). Recalculations demonstrated an overall median overestimation of sample size of 15.2% after provisions for losses to follow-up. There was evidence that journal, methodologist involvement (OR = 1.97, CI: 1.10, 3.53), multi-center settings (OR = 1.86, CI: 1.01, 3.43) and time since publication (OR = 1.24, CI: 1.12, 1.38) were significant predictors of adequate description of sample size assumptions. Among journals JCP had the highest odds of adequately reporting sufficient data to permit sample size recalculation, followed by AJODO and JDR, with 61% (OR = 0.39, CI: 0.19, 0.80) and 66% (OR = 0.34, CI: 0.15, 0.75) lower odds, respectively. Both assumed variances and odds were found to underestimate the observed values. Presentation of sample size calculations in the dental literature is suboptimal; incorrect assumptions may have a bearing on the power of RCTs.
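Replicating a reported sample size calculation, as done above for the 121 replicable trials, typically reduces to re-evaluating a standard formula from the stated assumptions. A minimal sketch for a two-arm continuous outcome follows; the planning values are hypothetical, not taken from any of the reviewed RCTs.

```python
import math
from statistics import NormalDist

def n_per_arm_two_means(sigma, delta, alpha=0.05, power=0.80, dropout=0.0):
    """Per-arm sample size for a two-sided, two-sample comparison of means
    (equal variances, normal approximation), inflated for expected dropout."""
    z = NormalDist().inv_cdf
    n = 2 * (sigma / delta) ** 2 * (z(1 - alpha / 2) + z(power)) ** 2
    return math.ceil(n / (1 - dropout))

# Hypothetical assumptions: SD of 1.2 units, detectable difference of 0.5 units.
print(n_per_arm_two_means(sigma=1.2, delta=0.5))               # 91 per arm
print(n_per_arm_two_means(sigma=1.2, delta=0.5, dropout=0.1))  # 101 per arm
```

Comparing a recomputed n against the reported one, after the stated loss-to-follow-up inflation, is how an overestimate like the 15.2% median figure above would be quantified.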
Dose Rate Calculations for Rotary Mode Core Sampling Exhauster
Foust, D J
2000-01-01
This document provides the calculated estimated dose rates for three external locations on the Rotary Mode Core Sampling (RMCS) exhauster HEPA filter housing, per the request of Characterization Field Engineering.
How Small Is Big: Sample Size and Skewness.
Piovesana, Adina; Senior, Graeme
2016-09-21
Sample sizes of 50 have been cited as sufficient to obtain stable means and standard deviations in normative test data. The influence of skewness on this minimum number, however, has not been evaluated. Normative test data with varying levels of skewness were compiled for 12 measures from 7 tests collected as part of ongoing normative studies in Brisbane, Australia. Means and standard deviations were computed from sample sizes of 10 to 100 drawn with replacement from larger samples of 272 to 973 cases. The minimum sample size was determined by the number at which both mean and standard deviation estimates remained within the 90% confidence intervals surrounding the population estimates. Sample sizes of greater than 85 were found to generate stable means and standard deviations regardless of the level of skewness, with smaller samples required in skewed distributions. A formula was derived to compute recommended sample size at differing levels of skewness.
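The resampling procedure described (means and SDs from samples of increasing size, checked against bands around the population estimates) can be sketched as follows. The stability criterion here, a fraction of resampled estimates falling inside user-supplied bands, is a simplified stand-in for the published 90%-confidence-interval rule.

```python
import random
import statistics

def smallest_stable_n(population, mean_band, sd_band, n_grid,
                      reps=2000, coverage=0.90, seed=1):
    """Smallest n in n_grid for which at least `coverage` of `reps` samples
    drawn with replacement have both mean and SD inside the given bands."""
    rng = random.Random(seed)
    for n in sorted(n_grid):
        inside = 0
        for _ in range(reps):
            s = [rng.choice(population) for _ in range(n)]
            m, sd = statistics.fmean(s), statistics.stdev(s)
            inside += (mean_band[0] <= m <= mean_band[1]
                       and sd_band[0] <= sd <= sd_band[1])
        if inside / reps >= coverage:
            return n
    return None  # no candidate size was stable enough
```

Running this over a grid of skewness levels would trace out the kind of sample-size recommendation table the study derives.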
Campolina, Daniel de A. M.; Lima, Claubia P. B.; Veloso, Maria Auxiliadora F.
2014-06-01
Every physical component of a nuclear system carries an uncertainty. Assessing the impact of uncertainties in the simulation of fissionable material systems is essential for best-estimate calculations, which have been replacing conservative model calculations as computational power increases. Propagating uncertainty through a Monte Carlo code by sampling the input parameters is a recent practice because of the huge computational effort required. In this work, a sample space of MCNPX calculations was used to propagate the uncertainty. The sample size was optimized using the Wilks formula for a 95th percentile and a two-sided statistical tolerance interval of 95%. The input-parameter uncertainties considered for the reactor included geometry dimensions and densities. The results demonstrate the capability of the sampling-based method for burnup calculations when the sample size is optimized and many parameter uncertainties are investigated together in the same input.
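The Wilks-type run count used above (95th percentile, two-sided 95% tolerance interval) can be reproduced with a first-order search. This is the textbook formula, not necessarily the authors' exact implementation.

```python
def wilks_runs(gamma=0.95, beta=0.95, two_sided=True):
    """Smallest number of code runs N such that the sample extreme(s) bound
    a fraction `gamma` of the output distribution with confidence `beta`
    (first-order Wilks formula)."""
    n = 1
    while True:
        if two_sided:
            # both extremes of the sample bracket the central gamma fraction
            confidence = 1 - gamma ** n - n * (1 - gamma) * gamma ** (n - 1)
        else:
            # the sample maximum bounds the gamma-quantile
            confidence = 1 - gamma ** n
        if confidence >= beta:
            return n
        n += 1

print(wilks_runs(two_sided=False))  # 59 runs for one-sided 95/95
print(wilks_runs())                 # 93 runs for two-sided 95/95
```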
7 CFR 52.775 - Sample unit size.
2010-01-01
... United States Standards for Grades of Canned Red Tart Pitted Cherries 1 Sample Unit Size § 52.775 Sample... drained cherries. (b) Defects (other than harmless extraneous material)—100 cherries. (c)...
40 CFR 80.127 - Sample size guidelines.
2010-07-01
... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Sample size guidelines. 80.127 Section 80.127 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) REGULATION OF FUELS AND FUEL ADDITIVES Attest Engagements § 80.127 Sample size guidelines. In performing...
Sample Size Requirements for Estimating Pearson, Spearman and Kendall Correlations.
Bonett, Douglas G.; Wright, Thomas A.
2000-01-01
Reviews interval estimates of the Pearson, Kendall tau-alpha, and Spearman correlations and proposes an improved standard error for the Spearman correlation. Examines the sample size required to yield a confidence interval having the desired width. Findings show accurate results from a two-stage approximation to the sample size. (SLD)
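The "sample size required to yield a confidence interval having the desired width" can be illustrated for the Pearson case with the Fisher-z interval. This sketch searches for n directly rather than using Bonett and Wright's closed-form approximation, and assumes the observed r lands near the planning value.

```python
import math
from statistics import NormalDist

def n_for_pearson_ci_width(r, width, alpha=0.05):
    """Smallest n whose approximate 100(1-alpha)% Fisher-z confidence
    interval for a Pearson correlation is no wider than `width`."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    zr = math.atanh(r)  # Fisher z-transform of the planning correlation
    n = 4
    while True:
        half = z_crit / math.sqrt(n - 3)  # half-width on the z scale
        if math.tanh(zr + half) - math.tanh(zr - half) <= width:
            return n
        n += 1

print(n_for_pearson_ci_width(r=0.5, width=0.2))  # 219
```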
Approaches to sample size determination for multivariate data
Saccenti, Edoardo; Timmerman, Marieke E.
2016-01-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical
Hickson, Kevin J; O'Keefe, Graeme J
2014-09-01
The scalable XCAT voxelised phantom was used with the GATE Monte Carlo toolkit to investigate the effect of voxel size on dosimetry estimates of internally distributed radionuclides calculated using direct Monte Carlo simulation. A uniformly distributed Fluorine-18 source was simulated in the kidneys of the XCAT phantom, with the organ self dose (kidney ← kidney) and organ cross dose (liver ← kidney) being calculated for a number of organ and voxel sizes. Patient-specific dose factors (DF) from a clinically acquired FDG PET/CT study have also been calculated for kidney self dose and liver ← kidney cross dose. Using the XCAT phantom it was found that sufficiently small voxel sizes are required to achieve accurate calculation of organ self dose. It was also shown that a voxel size of 2 mm or less is suitable for accurate calculations of organ cross dose. To compensate for insufficient voxel sampling, a correction factor is proposed. This correction factor is applied to the patient-specific dose factors calculated with the native voxel size of the PET/CT study.
Power Analysis and Sample Size Determination in Metabolic Phenotyping.
Blaise, Benjamin J; Correia, Gonçalo; Tin, Adrienne; Young, J Hunter; Vergnaud, Anne-Claire; Lewis, Matthew; Pearce, Jake T M; Elliott, Paul; Nicholson, Jeremy K; Holmes, Elaine; Ebbels, Timothy M D
2016-05-17
Estimation of statistical power and sample size is a key aspect of experimental design. However, in metabolic phenotyping, there is currently no accepted approach for these tasks, in large part due to the unknown nature of the expected effect. In such hypothesis-free science, neither the number or class of important analytes nor the effect size is known a priori. We introduce a new approach, based on multivariate simulation, which deals effectively with the highly correlated structure and high-dimensionality of metabolic phenotyping data. First, a large data set is simulated based on the characteristics of a pilot study investigating a given biomedical issue. An effect of a given size, corresponding either to a discrete (classification) or continuous (regression) outcome, is then added. Different sample sizes are modeled by randomly selecting data sets of various sizes from the simulated data. We investigate different methods for effect detection, including univariate and multivariate techniques. Our framework allows us to investigate the complex relationship between sample size, power, and effect size for real multivariate data sets. For instance, we demonstrate for an example pilot data set that certain features achieve a power of 0.8 for a sample size of 20 samples, or that a cross-validated predictivity Q²Y of 0.8 is reached with an effect size of 0.2 and 200 samples. We exemplify the approach for both nuclear magnetic resonance and liquid chromatography-mass spectrometry data from humans and the model organism C. elegans.
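A heavily simplified version of the simulate-then-subsample power estimate can be written in a few lines. Unlike the authors' framework it uses independent features and a plain z-test with Bonferroni correction, so it only illustrates the mechanics, not their multivariate method.

```python
import math
import random
import statistics
from statistics import NormalDist

def simulated_power(effect, n_per_group, n_features=50, n_sims=200,
                    alpha=0.05, seed=0):
    """Monte Carlo power to detect a mean shift of `effect` (in SD units) in
    one feature of a panel, Bonferroni-corrected for n_features tests."""
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(1 - alpha / (2 * n_features))
    hits = 0
    for _ in range(n_sims):
        a = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        b = [rng.gauss(effect, 1.0) for _ in range(n_per_group)]
        se = math.sqrt(statistics.variance(a) / n_per_group
                       + statistics.variance(b) / n_per_group)
        hits += abs(statistics.fmean(b) - statistics.fmean(a)) / se > z_crit
    return hits / n_sims
```

Scanning `effect` and `n_per_group` traces the kind of sample-size/power/effect-size surface the paper explores for real correlated data.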
SNS Sample Activation Calculator Flux Recommendations and Validation
Energy Technology Data Exchange (ETDEWEB)
McClanahan, Tucker C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Gallmeier, Franz X. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Iverson, Erik B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS); Lu, Wei [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Spallation Neutron Source (SNS)
2015-02-01
The Spallation Neutron Source (SNS) at Oak Ridge National Laboratory (ORNL) uses the Sample Activation Calculator (SAC) to calculate the activation of a sample after the sample has been exposed to the neutron beam in one of the SNS beamlines. The SAC webpage takes user inputs (choice of beamline, the mass, composition and area of the sample, irradiation time, decay time, etc.) and calculates the activation for the sample. In recent years, the SAC has been incorporated into the user proposal and sample handling process, and instrument teams and users have noticed discrepancies in the predicted activation of their samples. The Neutronics Analysis Team validated SAC by performing measurements on select beamlines and confirmed the discrepancies seen by the instrument teams and users. The conclusions were that the discrepancies were a result of a combination of faulty neutron flux spectra for the instruments, improper inputs supplied by SAC (1.12), and a mishandling of cross section data in the Sample Activation Program for Easy Use (SAPEU) (1.1.2). This report focuses on the conclusion that the SAPEU (1.1.2) beamline neutron flux spectra have errors and are a significant contributor to the activation discrepancies. The results of the analysis of the SAPEU (1.1.2) flux spectra for all beamlines will be discussed in detail. The recommendations for the implementation of improved neutron flux spectra in SAPEU (1.1.3) are also discussed.
EMPIRICAL MODEL FOR HYDROCYCLONES CORRECTED CUT SIZE CALCULATION
Directory of Open Access Journals (Sweden)
André Carlos Silva
2012-12-01
Hydrocyclones are devices used worldwide in mineral processing for desliming, classification, selective classification, thickening and pre-concentration. A hydrocyclone is composed of a cylindrical and a conical section joined together, without any moving parts, and it is capable of performing the separation of granular material in pulp. The mineral particle separation mechanism acting in a hydrocyclone is complex and its mathematical modelling is usually empirical. The most used model for hydrocyclone corrected cut size was proposed by Plitt. Over the years many revisions and corrections to Plitt's model were proposed. The present paper shows a modification of the constant in Plitt's model, obtained by exponential regression of simulated data for three different hydrocyclone geometries: Rietema, Bradley and Krebs. To validate the proposed model, literature data obtained from phosphate ore using fifteen different hydrocyclone geometries are used. The proposed model shows a correlation equal to 88.2% between experimental and calculated corrected cut size, while the correlation obtained using Plitt's model is 11.5%.
Hellman-Feynman operator sampling in diffusion Monte Carlo calculations.
Gaudoin, R; Pitarke, J M
2007-09-21
Diffusion Monte Carlo (DMC) calculations typically yield highly accurate results in solid-state and quantum-chemical calculations. However, operators that do not commute with the Hamiltonian are at best sampled correctly up to second order in the error of the underlying trial wave function once simple corrections have been applied. This error is of the same order as that for the energy in variational calculations. Operators that suffer from these problems include potential energies and the density. This Letter presents a new method, based on the Hellman-Feynman theorem, for the correct DMC sampling of all operators diagonal in real space. Our method is easy to implement in any standard DMC code.
Sample size calculations for 3-level cluster randomized trials
Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.
2008-01-01
BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health ca
Size selective sampling using mobile, 3D nanoporous membranes.
Randall, Christina L; Gillespie, Aubri; Singh, Siddarth; Leong, Timothy G; Gracias, David H
2009-02-01
We describe the fabrication of 3D membranes with precisely patterned surface nanoporosity and their utilization in size selective sampling. The membranes were self-assembled as porous cubes from lithographically fabricated 2D templates (Leong et al., Langmuir 23:8747-8751, 2007) with face dimensions of 200 microm, volumes of 8 nL, and monodisperse pores ranging in size from approximately 10 microm to 100 nm. As opposed to conventional sampling and filtration schemes where fluid is moved across a static membrane, we demonstrate sampling by instead moving the 3D nanoporous membrane through the fluid. This new scheme allows for straightforward sampling in small volumes, with little to no loss. Membranes with five porous faces and one open face were moved through fluids to sample and retain nanoscale beads and cells based on pore size. Additionally, cells retained within the membranes were subsequently cultured and multiplied using standard cell culture protocols upon retrieval.
CT dose survey in adults: what sample size for what precision?
Energy Technology Data Exchange (ETDEWEB)
Taylor, Stephen [Hopital Ambroise Pare, Department of Radiology, Mons (Belgium); Muylem, Alain van [Hopital Erasme, Department of Pneumology, Brussels (Belgium); Howarth, Nigel [Clinique des Grangettes, Department of Radiology, Chene-Bougeries (Switzerland); Gevenois, Pierre Alain [Hopital Erasme, Department of Radiology, Brussels (Belgium); Tack, Denis [EpiCURA, Clinique Louis Caty, Department of Radiology, Baudour (Belgium)
2017-01-15
To determine the variability of volume computed tomographic dose index (CTDIvol) and dose-length product (DLP) data, and to propose a minimum sample size to achieve an expected precision. CTDIvol and DLP values of 19,875 consecutive CT acquisitions of abdomen (7268), thorax (3805), lumbar spine (3161), cervical spine (1515) and head (4106) were collected in two centers. Their variabilities were investigated according to sample size (10 to 1000 acquisitions) and patient body weight categories (no weight selection, 67-73 kg and 60-80 kg). The 95% confidence interval as a percentage of the median value (CI95/med) was calculated for increasing sample sizes. We deduced the sample size that sets a 95% CI lower than 10% of the median (CI95/med ≤ 10%). The sample size ensuring CI95/med ≤ 10% ranged from 15 to 900 depending on the body region and the dose descriptor considered. In sample sizes recommended by regulatory authorities (i.e., 10-20 patients), the mean CTDIvol and DLP of one sample ranged from 0.50 to 2.00 times the actual value extracted from 2000 samples. The sampling error in CTDIvol and DLP means is high in dose surveys based on small samples of patients. Sample size should be increased at least tenfold to decrease this variability. (orig.)
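The precision-versus-sample-size relationship the authors tabulate can be explored by resampling a dose registry. The synthetic lognormal "doses" below are purely illustrative, not modeled on the surveyed centers.

```python
import random
import statistics

def ci95_over_median(doses, n, reps=2000, seed=7):
    """95% spread of the sample mean (as a % of the cohort median) when
    samples of size n are drawn from the dose registry."""
    rng = random.Random(seed)
    med = statistics.median(doses)
    means = sorted(statistics.fmean(rng.sample(doses, n)) for _ in range(reps))
    lo, hi = means[int(0.025 * reps)], means[int(0.975 * reps)]
    return 100 * (hi - lo) / med

# Illustrative synthetic registry (lognormal, arbitrary parameters).
gen = random.Random(0)
registry = [gen.lognormvariate(3.0, 0.5) for _ in range(5000)]
print(ci95_over_median(registry, 15))   # small samples: wide spread
print(ci95_over_median(registry, 500))  # large samples: markedly narrower
```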
Sample Size Requirements for Traditional and Regression-Based Norms.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-04-01
Test norms enable determining the position of an individual test taker in the group. The most frequently used approach to obtain test norms is traditional norming. Regression-based norming may be more efficient than traditional norming and is rapidly growing in popularity, but little is known about its technical properties. A simulation study was conducted to compare the sample size requirements for traditional and regression-based norming by examining the 95% interpercentile ranges for percentile estimates as a function of sample size, norming method, size of covariate effects on the test score, test length, and number of answer categories in an item. Provided the assumptions of the linear regression model hold in the data, for a subdivision of the total group into eight equal-size subgroups, we found that regression-based norming requires samples 2.5 to 5.5 times smaller than traditional norming. Sample size requirements are presented for each norming method, test length, and number of answer categories. We emphasize that additional research is needed to establish sample size requirements when the assumptions of the linear regression model are violated.
Size variation in samples of fossil and recent murid teeth
Freudenthal, M.; Martín Suárez, E.
1990-01-01
The variability coefficient proposed by Freudenthal & Cuenca Bescós (1984) for samples of fossil cricetid teeth is calculated for about 200 samples of fossil and recent murid teeth. The results are discussed and compared with those obtained for the Cricetidae.
Estimating hidden population size using Respondent-Driven Sampling data.
Handcock, Mark S; Gile, Krista J; Mar, Corinne M
Respondent-Driven Sampling (RDS) is an approach to sampling design and inference in hard-to-reach human populations. It is often used in situations where the target population is rare and/or stigmatized in the larger population, so that it is prohibitively expensive to contact them through the available frames. Common examples include injecting drug users, men who have sex with men, and female sex workers. Most analysis of RDS data has focused on estimating aggregate characteristics, such as disease prevalence. However, RDS is often conducted in settings where the population size is unknown and of great independent interest. This paper presents an approach to estimating the size of a target population based on data collected through RDS. The proposed approach uses a successive sampling approximation to RDS to leverage information in the ordered sequence of observed personal network sizes. The inference uses the Bayesian framework, allowing for the incorporation of prior knowledge. A flexible class of priors for the population size is used that aids elicitation. An extensive simulation study provides insight into the performance of the method for estimating population size under a broad range of conditions. A further study shows the approach also improves estimation of aggregate characteristics. Finally, the method demonstrates sensible results when used to estimate the size of known networked populations from the National Longitudinal Study of Adolescent Health, and when used to estimate the size of a hard-to-reach population at high risk for HIV.
Institute of Scientific and Technical Information of China (English)
文世梅; 陈云飞; 刘华
2013-01-01
AIM: To calculate sample size and power for measurement data in non-inferiority trials using SAS programming. METHODS: Two calculation methods, a formula-based approach and SAS programming, were compared using examples, and a SAS macro for computing power is given as well. RESULTS: The results from SAS programming were consistent with those from the formula, and SAS programming gives the result directly, which is more convenient because no table look-up is needed to obtain the relevant parameters. CONCLUSION: The programs provided in this paper can help investigators better understand and perform sample size and power estimation for such non-inferiority trials of new drugs.
Sample size considerations for historical control studies with survival outcomes
Zhu, Hong; Zhang, Song; Ahn, Chul
2015-01-01
Historical control trials (HCTs) are frequently conducted to compare an experimental treatment with a control treatment from a previous study, when they are applicable and favored over a randomized clinical trial (RCT) due to feasibility, ethics and cost concerns. Makuch and Simon developed a sample size formula for historical control (HC) studies with binary outcomes, assuming that the observed response rate in the HC group is the true response rate. This method was extended by Dixon and Simon to specify sample size for HC studies comparing survival outcomes. For HC studies with binary and continuous outcomes, many researchers have shown that the popular Makuch and Simon method does not preserve the nominal power and type I error, and suggested alternative approaches. For HC studies with survival outcomes, we reveal through simulation that the conditional power and type I error over all the random realizations of the HC data have highly skewed distributions. Therefore, the sampling variability of the HC data needs to be appropriately accounted for in determining sample size. A flexible sample size formula that controls arbitrary percentiles, instead of means, of the conditional power and type I error, is derived. Although an explicit sample size formula with survival outcomes is not available, the computation is straightforward. Simulations demonstrate that the proposed method preserves the operational characteristics in a more realistic scenario where the true hazard rate of the HC group is unknown. A real data application of an advanced non-small cell lung cancer (NSCLC) clinical trial is presented to illustrate sample size considerations for HC studies in comparison of survival outcomes. PMID:26098200
Sample size in psychological research over the past 30 years.
Marszalek, Jacob M; Barber, Carolyn; Kohlhart, Julie; Holmes, Cooper B
2011-04-01
The American Psychological Association (APA) Task Force on Statistical Inference was formed in 1996 in response to a growing body of research demonstrating methodological issues that threatened the credibility of psychological research, and made recommendations to address them. One issue was the small, even dramatically inadequate, size of samples used in studies published by leading journals. The present study assessed the progress made since the Task Force's final report in 1999. Sample sizes reported in four leading APA journals in 1955, 1977, 1995, and 2006 were compared using nonparametric statistics, while data from the last two waves were fit to a hierarchical generalized linear growth model for more in-depth analysis. Overall, results indicate that the recommendations for increasing sample sizes have not been integrated in core psychological research, although results slightly vary by field. This and other implications are discussed in the context of current methodological critique and practice.
On an Approach to Bayesian Sample Sizing in Clinical Trials
Muirhead, Robb J
2012-01-01
This paper explores an approach to Bayesian sample size determination in clinical trials. The approach falls into the category of what is often called "proper Bayesian", in that it does not mix frequentist concepts with Bayesian ones. A criterion for a "successful trial" is defined in terms of a posterior probability; the probability of success is assessed using the marginal distribution of the data, and this probability forms the basis for choosing sample sizes. We illustrate with a standard problem in clinical trials, that of establishing the superiority of a new drug over a control.
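The ingredients named above, a posterior success criterion evaluated against the marginal (prior-predictive) distribution of the data, can be sketched by Monte Carlo for a normal mean with a conjugate prior. This is a generic illustration of the "proper Bayesian" recipe, not the paper's specific derivation.

```python
import math
import random
from statistics import NormalDist

def prob_success(n, prior_mean, prior_sd, sigma=1.0,
                 post_threshold=0.975, n_sims=4000, seed=3):
    """P(trial is 'successful') under the prior-predictive distribution:
    success means the posterior P(delta > 0) exceeds post_threshold.
    Normal likelihood with known sigma, conjugate normal prior on delta."""
    rng = random.Random(seed)
    cdf = NormalDist().cdf
    successes = 0
    for _ in range(n_sims):
        delta = rng.gauss(prior_mean, prior_sd)        # true effect from prior
        xbar = rng.gauss(delta, sigma / math.sqrt(n))  # observed mean effect
        post_prec = 1 / prior_sd**2 + n / sigma**2
        post_mean = (prior_mean / prior_sd**2 + xbar * n / sigma**2) / post_prec
        # posterior P(delta > 0) = Phi(post_mean / post_sd)
        successes += cdf(post_mean * math.sqrt(post_prec)) > post_threshold
    return successes / n_sims

for n in (10, 50, 200):
    print(n, prob_success(n, prior_mean=0.3, prior_sd=0.2))
```

Choosing n is then a matter of scanning this success probability until it reaches the sponsor's target; the prior parameters shown are arbitrary.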
Using electron microscopy to calculate optical properties of biological samples.
Wu, Wenli; Radosevich, Andrew J; Eshein, Adam; Nguyen, The-Quyen; Yi, Ji; Cherkezyan, Lusik; Roy, Hemant K; Szleifer, Igal; Backman, Vadim
2016-11-01
The microscopic structural origins of optical properties in biological media are still not fully understood. Better understanding these origins can serve to improve the utility of existing techniques and facilitate the discovery of other novel techniques. We propose a novel analysis technique using electron microscopy (EM) to calculate optical properties of specific biological structures. This method is demonstrated with images of human epithelial colon cell nuclei. The spectrum of anisotropy factor g, the phase function and the shape factor D of the nuclei are calculated. The results show strong agreement with an independent study. This method provides a new way to extract the true phase function of biological samples and provides an independent validation for optical property measurement techniques.
The PowerAtlas: a power and sample size atlas for microarray experimental design and research
Directory of Open Access Journals (Sweden)
Wang Jelai
2006-02-01
Background: Microarrays permit biologists to simultaneously measure the mRNA abundance of thousands of genes. An important issue facing investigators planning microarray experiments is how to estimate the sample size required for good statistical power. What is the projected sample size or number of replicate chips needed to address the multiple hypotheses with acceptable accuracy? Statistical methods exist for calculating power based upon a single hypothesis, using estimates of the variability in data from pilot studies. There is, however, a need for methods to estimate power and/or required sample sizes in situations where multiple hypotheses are being tested, such as in microarray experiments. In addition, investigators frequently do not have pilot data to estimate the sample sizes required for microarray studies. Results: To address this challenge, we have developed a Microarray PowerAtlas [1]. The atlas enables estimation of statistical power by allowing investigators to appropriately plan studies by building upon previous studies that have similar experimental characteristics. Currently, there are sample sizes and power estimates based on 632 experiments from Gene Expression Omnibus (GEO). The PowerAtlas also permits investigators to upload their own pilot data and derive power and sample size estimates from these data. This resource will be updated regularly with new datasets from GEO and other databases such as The Nottingham Arabidopsis Stock Center (NASC). Conclusion: This resource provides a valuable tool for investigators who are planning efficient microarray studies and estimating required sample sizes.
Rock sampling. [method for controlling particle size distribution
Blum, P. (Inventor)
1971-01-01
A method for sampling rock and other brittle materials and for controlling resultant particle sizes is described. The method involves cutting grooves in the rock surface to provide a grouping of parallel ridges and subsequently machining the ridges to provide a powder specimen. The machining step may comprise milling, drilling, lathe cutting or the like; but a planing step is advantageous. Control of the particle size distribution is effected primarily by changing the height and width of these ridges. This control exceeds that obtainable by conventional grinding.
Load calculations of radiant cooling systems for sizing the plant
DEFF Research Database (Denmark)
Bourdakis, Eleftherios; Kazanci, Ongun Berk; Olesen, Bjarne W.
2015-01-01
The aim of this study was, by using a building simulation software, to prove that a radiant cooling system should not be sized based on the maximum cooling load but at a lower value. For that reason six radiant cooling models were simulated with two control principles using 100%, 70% and 50% of t...
Sample size cognizant detection of signals in white noise
Rao, N Raj
2007-01-01
The detection and estimation of signals in noisy, limited data is a problem of interest to many scientific and engineering communities. We present a computationally simple, sample eigenvalue based procedure for estimating the number of high-dimensional signals in white noise when there are relatively few samples. We highlight a fundamental asymptotic limit of sample eigenvalue based detection of weak high-dimensional signals from a limited sample size and discuss its implication for the detection of two closely spaced signals. This motivates our heuristic definition of the 'effective number of identifiable signals.' Numerical simulations are used to demonstrate the consistency of the algorithm with respect to the effective number of signals and the superior performance of the algorithm with respect to Wax and Kailath's "asymptotically consistent" MDL based estimator.
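The sample-eigenvalue idea can be illustrated with a simplified threshold rule: count sample covariance eigenvalues above the Marchenko-Pastur noise edge. This is only a sketch of the general approach; the paper's estimator uses a more refined random-matrix-theoretic criterion:

```python
import numpy as np

def count_signals(X, sigma2=1.0):
    """Count eigenvalues of the sample covariance of X (n samples x p dims,
    zero-mean) above the Marchenko-Pastur noise edge sigma2*(1+sqrt(p/n))^2.
    A simplified threshold rule, not the paper's refined estimator."""
    n, p = X.shape
    evals = np.linalg.eigvalsh(X.T @ X / n)
    edge = sigma2 * (1.0 + np.sqrt(p / n)) ** 2
    return int(np.sum(evals > edge))

rng = np.random.default_rng(0)
n, p = 200, 50
noise = rng.standard_normal((n, p))
# one strong rank-one signal in the first coordinate
signal = np.outer(rng.standard_normal(n), np.r_[5.0, np.zeros(p - 1)])
# the strong signal eigenvalue separates clearly from the noise bulk
print(count_signals(noise + signal))
```

For weak signals close to the noise edge, such a naive threshold fails; that regime is exactly where the paper's "effective number of identifiable signals" becomes relevant.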
A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation.
Li, Haisen S; Romeijn, H Edwin; Dempsey, James F
2006-09-01
We developed an analytical method for determining the maximum acceptable grid size for discrete dose calculation in proton therapy treatment plan optimization, so that the accuracy of the optimized dose distribution is guaranteed in the phase of dose sampling and superfluous computational work is avoided. The accuracy of dose sampling was judged by the criterion that the continuous dose distribution could be reconstructed from the discrete dose within a 2% error limit. To keep the error caused by discrete dose sampling under the 2% limit, the dose grid size cannot exceed a maximum acceptable value. The method was based on Fourier analysis and the Shannon-Nyquist sampling theorem, as an extension of our previous analysis for photon beam intensity modulated radiation therapy [J. F. Dempsey, H. E. Romeijn, J. G. Li, D. A. Low, and J. R. Palta, Med. Phys. 32, 380-388 (2005)]. The proton beam model used for the analysis was a nearly monoenergetic (width about 1% of the incident energy) and monodirectional infinitesimal (nonintegrated) pencil beam in a water medium. By monodirectional, we mean that the proton particles travel in the same direction before entering the water medium; scattering prior to entering the water is not taken into account. In intensity modulated proton therapy, the elementary intensity modulation entity is either an infinitesimal or a finite sized beamlet. Since a finite sized beamlet is a superposition of infinitesimal pencil beams, the maximum acceptable grid size obtained for the infinitesimal pencil beam also applies to finite sized beamlets. The analytic Bragg curve function proposed by Bortfeld [T. Bortfeld, Med. Phys. 24, 2024-2033 (1997)] was employed. The lateral profile was approximated by a depth dependent Gaussian distribution. The model included the spreads of the Bragg peak and the lateral profiles due to multiple Coulomb scattering. The dependence of the maximum acceptable dose grid size on the
Hydrophobicity of soil samples and soil size fractions
Energy Technology Data Exchange (ETDEWEB)
Lowen, H.A.; Dudas, M.J. [Alberta Univ., Edmonton, AB (Canada). Dept. of Renewable Resources; Roy, J.L. [Imperial Oil Resources Canada, Calgary, AB (Canada); Johnson, R.L. [Alberta Research Council, Vegreville, AB (Canada); McGill, W.B. [Alberta Univ., Edmonton, AB (Canada). Dept. of Renewable Resources
2001-07-01
The inability of dry soil to absorb water droplets within 10 seconds or less is defined as soil hydrophobicity. Its severity, persistence and causes vary greatly. There is a possibility that hydrophobicity in Alberta is a symptom of crude oil spills. In this study, the authors investigated the severity of soil hydrophobicity, as determined by the molarity of ethanol droplet test (MED), and dichloromethane extractable organic (DEO) concentration. The soil samples were collected from pedons within 12 hydrophobic soil sites, located northeast from Calgary to Cold Lake, Alberta. All sites were at elevations ranging from 450 to 990 metres above sea level. The samples came from soils of the Chernozemic, Gleysolic, Luvisolic and Solonetzic orders. The results indicated that MED and DEO were positively correlated in whole soil samples. No relationship was found between MED and DEO in samples divided into size fractions. Clay- and silt-sized particles in the less-than-53-micrometre fraction exhibited more severe hydrophobicity and lower DEO concentrations than particles in the 53-2000 micrometre fraction. It was concluded that hydrophobicity is not restricted to a particular soil particle size class. 5 refs., 4 figs.
Power and sample size in cost-effectiveness analysis.
Laska, E M; Meisner, M; Siegel, C
1999-01-01
For resource allocation under a constrained budget, optimal decision rules for mutually exclusive programs require that the treatment with the highest incremental cost-effectiveness ratio (ICER) below a willingness-to-pay (WTP) criterion be funded. This is equivalent to determining the treatment with the smallest net health cost. The designer of a cost-effectiveness study needs to select a sample size so that the power to reject the null hypothesis, the equality of the net health costs of two treatments, is high. A recently published formula derived under normal distribution theory overstates sample-size requirements. Using net health costs, the authors present simple methods for power analysis based on conventional normal and on nonparametric statistical theory.
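The net-health-cost comparison can be sketched with the equivalent net monetary benefit formulation, NMB = λ·E − C, under normal theory (a generic normal-theory sketch with illustrative parameters, not the authors' exact methods):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm_nmb(delta_e, delta_c, wtp, sd_e, sd_c, rho,
                  alpha=0.05, power=0.80):
    """Per-arm sample size to detect a difference in net monetary benefit
    NMB = wtp * E - C between two treatments, assuming normality;
    rho is the within-patient cost-effect correlation."""
    z = NormalDist().inv_cdf
    var_nmb = (wtp * sd_e) ** 2 + sd_c ** 2 - 2 * rho * wtp * sd_e * sd_c
    delta_nmb = wtp * delta_e - delta_c
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * (za + zb) ** 2 * var_nmb / delta_nmb ** 2)

# gain of 0.05 QALYs for $1000 extra cost at a $50,000/QALY threshold
print(n_per_arm_nmb(0.05, 1000, 50000, 0.2, 2000, 0.0))  # 726 per arm
```

Testing the difference in net benefits this way avoids the distributional difficulties of the ICER itself, which is the point of the net-health-cost framing in the abstract.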
Directory of Open Access Journals (Sweden)
David Normando
2011-12-01
INTRODUCTION: Adequate sample size and an appropriate analysis of method error are important steps in validating the data obtained in a scientific study, in addition to the ethical and economic issues. OBJECTIVE: To evaluate, quantitatively, how often researchers in orthodontic science have employed sample size calculation and method error analysis in research published in Brazil and in the United States. METHODS: Two leading journals according to Capes (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) were examined: the Revista Dental Press de Ortodontia e Ortopedia Facial (Dental Press) and the American Journal of Orthodontics and Dentofacial Orthopedics (AJO-DO). Only articles published between 2005 and 2008 were analyzed. RESULTS: Most studies published in both journals employ some form of method error analysis where that methodology is applicable. However, only a very small number of the articles published in these journals give any description of how the studied samples were sized. This proportion, already small (21.1%) in the journal edited in the United States (AJO-DO), is significantly lower (p = 0.008) in the journal edited in Brazil (Dental Press) (3.9%). CONCLUSION: Researchers and the editorial boards of both journals should devote greater attention to the errors inherent in the absence of such analyses in scientific research, especially those arising from inadequately sized samples.
Calculating ensemble averaged descriptions of protein rigidity without sampling.
Directory of Open Access Journals (Sweden)
Luis C González
Previous works have demonstrated that protein rigidity is related to thermodynamic stability, especially under conditions that favor formation of native structure. Mechanical network rigidity properties of a single conformation are efficiently calculated using the integer body-bar Pebble Game (PG) algorithm. However, thermodynamic properties require averaging over many samples from the ensemble of accessible conformations to accurately account for fluctuations in network topology. We have developed a mean field Virtual Pebble Game (VPG) that represents the ensemble of networks by a single effective network. That is, each possible number of distance constraints (or bars) that can form between a pair of rigid bodies is replaced by the average number. The resulting effective network is viewed as having weighted edges, where the weight of an edge quantifies its capacity to absorb degrees of freedom. The VPG is interpreted as a flow problem on this effective network, which eliminates the need to sample. Across a nonredundant dataset of 272 protein structures, we apply the VPG to proteins for the first time. Our results show numerically and visually that the rigidity characterizations of the VPG accurately reflect the ensemble averaged [Formula: see text] properties. This result positions the VPG as an efficient alternative for understanding the mechanical role that chemical interactions play in maintaining protein stability.
A simple nomogram for sample size for estimating sensitivity and specificity of medical tests
Directory of Open Access Journals (Sweden)
Malhotra Rajeev
2010-01-01
Sensitivity and specificity measure the inherent validity of a diagnostic test against a gold standard. Researchers develop new diagnostic methods to reduce cost, risk, invasiveness, and time. An adequate sample size is a must to precisely estimate the validity of a diagnostic test. In practice, researchers generally decide on the sample size arbitrarily, either at their convenience or from previous literature. We have devised a simple nomogram that yields statistically valid sample sizes for anticipated sensitivity or anticipated specificity. MS Excel version 2007 was used to derive the values required to plot the nomogram using varying absolute precision, known prevalence of disease, and a 95% confidence level, using the formula already available in the literature. The nomogram plot was obtained by suitably arranging the lines and distances to conform to this formula. This nomogram can be easily used to determine the sample size for estimating the sensitivity or specificity of a diagnostic test with the required precision at the 95% confidence level. Sample sizes at the 90% and 99% confidence levels can be obtained by multiplying the number obtained for the 95% confidence level by 0.70 and 1.75, respectively. A nomogram instantly provides the required number of subjects by just moving a ruler and can be used repeatedly without redoing the calculations; it can also be applied for reverse calculations. This nomogram is not applicable to hypothesis-testing set-ups and applies only when both the diagnostic test and the gold standard yield dichotomous results.
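The precision-based formula underlying such a nomogram is standard in the literature the abstract cites; a minimal sketch for the sensitivity case (the specificity case swaps prevalence for 1 − prevalence):

```python
from math import ceil
from statistics import NormalDist

def n_for_sensitivity(se, precision, prevalence, conf=0.95):
    """Total sample size to estimate sensitivity `se` within +/- `precision`
    at the given confidence, when a fraction `prevalence` of subjects
    is diseased: n = z^2 * se * (1 - se) / (precision^2 * prevalence)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil(z ** 2 * se * (1 - se) / (precision ** 2 * prevalence))

# anticipated sensitivity 0.80, precision 0.05, prevalence 10%
print(n_for_sensitivity(0.80, 0.05, 0.10))  # 2459 subjects
```

The 0.70 and 1.75 multipliers quoted in the abstract are consistent with the squared z-ratios (1.645/1.960)² ≈ 0.70 and (2.576/1.960)² ≈ 1.73.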
Directory of Open Access Journals (Sweden)
Stefanović Milena
2013-01-01
In studies of population variability, particular attention has to be paid to the selection of a representative sample. The aim of this study was to assess the size of a new representative sample on the basis of the variability of the chemical content of the initial sample, using a whitebark pine population as an example. The statistical analysis covered the content of 19 characteristics (terpene hydrocarbons and their derivatives) of the initial sample of 10 elements (trees). It was determined that the new sample should contain 20 trees so that the mean value calculated from it represents the basic set with a probability higher than 95%. Determining the lower limit of the representative sample size that guarantees satisfactory reliability of generalization proved to be very important for achieving cost efficiency of the research. [Projekat Ministarstva nauke Republike Srbije, br. OI-173011, br. TR-37002 i br. III-43007]
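A sample size derived from the variability of a pilot sample, as in this study, can be sketched with the usual precision-based formula (the values below are illustrative; the study's actual computation across 19 terpene characteristics is not reproduced here):

```python
from math import ceil
from statistics import NormalDist

def n_for_mean(sd, margin, conf=0.95):
    """Smallest n so that the (conf) confidence interval for a mean,
    given a pilot SD estimate, has half-width at most `margin`
    (normal approximation; a t-based version would iterate on n)."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    return ceil((z * sd / margin) ** 2)

# e.g. hypothetical pilot SD 4.0 for one terpene trait, half-width 1.8
print(n_for_mean(4.0, 1.8))  # 19 trees
```

In a multivariate setting like this study's, the binding constraint is the most variable characteristic, so the final n is the maximum over all traits.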
Directory of Open Access Journals (Sweden)
Ismet DOGAN
2015-10-01
Objective: Choosing the most efficient statistical test is one of the essential problems of statistics. Asymptotic relative efficiency is a notion which enables a quantitative large-sample comparison of two different tests of the same statistical hypothesis. The notion of the asymptotic efficiency of tests is more complicated than that of the asymptotic efficiency of estimates. This paper discusses the effect of sample size on the expected values and variances of non-parametric tests for two independent samples and determines the most efficient test for different sample sizes using the Fraser efficiency value. Material and Methods: Since calculating power for a comparison of tests is often impractical, using the asymptotic relative efficiency value is favorable. Asymptotic relative efficiency is an indispensable technique for comparing and ordering statistical tests in large samples. It is especially useful in nonparametric statistics, where there exist numerous heuristic tests such as the linear rank tests. In this study, the sample size is set to 2 ≤ n ≤ 50. Results: In both balanced and unbalanced cases, it is found that, as the sample size increases, the expected values and variances of all the tests discussed in this paper increase as well. Additionally, considering the Fraser efficiency, the Mann-Whitney U test is found to be the most efficient of the non-parametric tests used for comparing two independent samples, regardless of their sizes. Conclusion: According to Fraser efficiency, the Mann-Whitney U test is the most efficient test.
Sample size for monitoring sirex populations and their natural enemies
Directory of Open Access Journals (Sweden)
Susete do Rocio Chiarello Penteado
2016-09-01
The woodwasp Sirex noctilio Fabricius (Hymenoptera: Siricidae) was introduced in Brazil in 1988 and became the main pest of pine plantations. It has spread over about 1,000,000 ha, at different population levels, in the states of Rio Grande do Sul, Santa Catarina, Paraná, São Paulo and Minas Gerais. Control is done mainly by using a nematode, Deladenus siricidicola Bedding (Nematoda: Neotylenchidae). The evaluation of the efficiency of natural enemies has been difficult because there are no appropriate sampling systems. This study tested a hierarchical sampling system to define the sample size needed to monitor the S. noctilio population and the efficiency of its natural enemies, and the system was found to be perfectly adequate.
Voss, Sebastian; Zimmermann, Beate; Zimmermann, Alexander
2016-09-01
In the last decades, an increasing number of studies analyzed spatial patterns in throughfall by means of variograms. The estimation of the variogram from sample data requires an appropriate sampling scheme: most importantly, a large sample and a layout of sampling locations that often has to serve both variogram estimation and geostatistical prediction. While some recommendations on these aspects exist, they focus on Gaussian data and high ratios of the variogram range to the extent of the study area. However, many hydrological data, and throughfall data in particular, do not follow a Gaussian distribution. In this study, we examined the effect of extent, sample size, sampling design, and calculation method on variogram estimation of throughfall data. For our investigation, we first generated non-Gaussian random fields based on throughfall data with large outliers. Subsequently, we sampled the fields with three extents (plots with edge lengths of 25 m, 50 m, and 100 m), four common sampling designs (two grid-based layouts, transect and random sampling) and five sample sizes (50, 100, 150, 200, 400). We then estimated the variogram parameters by method-of-moments (non-robust and robust estimators) and residual maximum likelihood. Our key findings are threefold. First, the choice of the extent has a substantial influence on the estimation of the variogram. A comparatively small ratio of the extent to the correlation length is beneficial for variogram estimation. Second, a combination of a minimum sample size of 150, a design that ensures the sampling of small distances and variogram estimation by residual maximum likelihood offers a good compromise between accuracy and efficiency. Third, studies relying on method-of-moments based variogram estimation may have to employ at least 200 sampling points for reliable variogram estimates. These suggested sample sizes exceed the number recommended by studies dealing with Gaussian data by up to 100 %. Given that most previous
On the repeated measures designs and sample sizes for randomized controlled trials.
Tango, Toshiro
2016-04-01
For the analysis of longitudinal or repeated measures data, generalized linear mixed-effects models provide a flexible and powerful tool to deal with heterogeneity among subject response profiles. However, the typical statistical design adopted in usual randomized controlled trials is an analysis of covariance type analysis using a pre-defined pair of "pre-post" data, in which pre-(baseline) data are used as a covariate for adjustment together with other covariates. Then, the major design issue is to calculate the sample size or the number of subjects allocated to each treatment group. In this paper, we propose a new repeated measures design and sample size calculations combined with generalized linear mixed-effects models that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for the analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size, compared with the simple pre-post design. The proposed designs and the sample size calculations are illustrated with real data arising from randomized controlled trials.
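The sample size reduction from exploiting baseline measurements can be illustrated with the simplest special case, a Gaussian pre-post ANCOVA, where adjusting for a baseline with correlation ρ to the outcome shrinks the variance by (1 − ρ²); the paper generalizes far beyond this, to repeated measures and generalized linear mixed-effects models:

```python
from math import ceil
from statistics import NormalDist

def n_ancova(delta, sigma, rho, alpha=0.05, power=0.80):
    """Per-group sample size for a pre-post ANCOVA comparison: adjusting
    for a baseline correlated rho with the outcome shrinks the residual
    variance by a factor (1 - rho^2)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    return ceil(2 * (za + zb) ** 2 * sigma ** 2 * (1 - rho ** 2) / delta ** 2)

# a pre-post correlation of 0.6 cuts the required n from 63 to 41 per group
print(n_ancova(0.5, 1.0, 0.0), n_ancova(0.5, 1.0, 0.6))
```

Adding repeated pre- and post-randomization measurements per subject, as the proposed designs do, reduces the effective variance further, which is the source of the sample size savings the abstract describes.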
A preliminary model to avoid the overestimation of sample size in bioequivalence studies.
Ramírez, E; Abraira, V; Guerra, P; Borobia, A M; Duque, B; López, J L; Mosquera, B; Lubomirov, R; Carcas, A J; Frías, J
2013-02-01
Often the only data available in the literature for sample size estimation in bioequivalence studies is intersubject variability, which tends to result in overestimation of the sample size. In this paper, we propose a preliminary model of intrasubject variability based on intersubject variability for Cmax and AUC data from randomized, crossover bioequivalence (BE) studies. From 93 Cmax and 121 AUC data from test-reference comparisons that fulfilled BE criteria, we calculated intersubject variability for the reference formulation and intrasubject variability from ANOVA. Linear and exponential models (y = a(1 - e^(-bx))) were fitted, weighted by the inverse of the variance, to predict the intrasubject variability from the intersubject variability. To validate the model we calculated the coefficient of cross-validation on data from 30 new BE studies. The models fit very well (R² = 0.997 and 0.990 for Cmax and AUC, respectively) and the cross-validation correlations were 0.847 for Cmax and 0.572 for AUC. This preliminary model allows us to estimate the intrasubject variability from the intersubject variability for sample size calculation purposes in BE studies. This approximation provides an opportunity for sample size reduction, avoiding unnecessary exposure of healthy volunteers. Further modelling studies are desirable to confirm these results, especially for the higher intersubject variability range.
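Fitting the exponential model y = a(1 − e^(−bx)) of intrasubject on intersubject variability can be sketched with scipy (the CV pairs below are made up for illustration; the paper fits 93 Cmax and 121 AUC pairs, weighted by inverse variance, which the sketch omits):

```python
import numpy as np
from scipy.optimize import curve_fit

def expo(x, a, b):
    """Exponential model y = a * (1 - exp(-b * x)) relating intrasubject
    variability to intersubject variability."""
    return a * (1.0 - np.exp(-b * x))

# illustrative intersubject vs intrasubject CV (%) pairs, NOT the paper's data
inter = np.array([10, 15, 20, 25, 30, 40, 50, 60], dtype=float)
intra = np.array([6.0, 8.5, 10.5, 12.0, 13.0, 14.5, 15.2, 15.6])

(a, b), _ = curve_fit(expo, inter, intra, p0=(16.0, 0.05))
pred = expo(35.0, a, b)  # predicted intrasubject CV at intersubject CV = 35%
```

The predicted intrasubject CV then feeds a standard crossover BE sample size formula in place of the (larger) intersubject CV, which is where the sample size saving comes from.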
Luchko, Tyler; Blinov, Nikolay; Limon, Garrett C.; Joyce, Kevin P.; Kovalenko, Andriy
2016-11-01
Implicit solvent methods for classical molecular modeling are frequently used to provide fast, physics-based hydration free energies of macromolecules. Less commonly considered is the transferability of these methods to other solvents. The Statistical Assessment of Modeling of Proteins and Ligands 5 (SAMPL5) distribution coefficient dataset and the accompanying explicit solvent partition coefficient reference calculations provide a direct test of solvent model transferability. Here we use the 3D reference interaction site model (3D-RISM) statistical-mechanical solvation theory, with a well tested water model and a new united atom cyclohexane model, to calculate partition coefficients for the SAMPL5 dataset. The cyclohexane model performed well in training and testing (R=0.98 for amino acid neutral side chain analogues) but only if a parameterized solvation free energy correction was used. In contrast, the same protocol, using single solute conformations, performed poorly on the SAMPL5 dataset, obtaining R=0.73 compared to the reference partition coefficients, likely due to the much larger solute sizes. Including solute conformational sampling through molecular dynamics coupled with 3D-RISM (MD/3D-RISM) improved agreement with the reference calculation to R=0.93. Since our initial calculations only considered partition coefficients and not distribution coefficients, solute sampling provided little benefit when comparing against experiment, where ionized and tautomer states are more important. Applying a simple pKa correction improved agreement with experiment from R=0.54 to R=0.66, despite a small number of outliers. Better agreement is possible by accounting for tautomers and improving the ionization correction.
Sample size and precision in NIH peer review.
Directory of Open Access Journals (Sweden)
David Kaplan
The Working Group on Peer Review of the Advisory Committee to the Director of NIH has recommended that at least 4 reviewers should be used to assess each grant application. A sample size analysis of the number of reviewers needed to evaluate grant applications reveals that a substantially larger number of evaluators are required to provide the level of precision that is currently mandated. NIH should adjust their peer review system to account for the number of reviewers needed to provide adequate precision in their evaluations.
GUIDE TO CALCULATING TRANSPORT EFFICIENCY OF AEROSOLS IN OCCUPATIONAL AIR SAMPLING SYSTEMS
Energy Technology Data Exchange (ETDEWEB)
Hogue, M.; Hadlock, D.; Thompson, M.; Farfan, E.
2013-11-12
This report will present hand calculations for transport efficiency based on aspiration efficiency and particle deposition losses. Because the hand calculations become long and tedious, especially for lognormal distributions of aerosols, an R script (R 2011) will be provided for each element examined. Calculations are provided for the most common elements in a remote air sampling system, including a thin-walled probe in ambient air, straight tubing, bends and a sample housing. One popular alternative approach would be to put such calculations in a spreadsheet, a thorough version of which is shared by Paul Baron via the Aerocalc spreadsheet (Baron 2012). To provide greater transparency and to avoid common spreadsheet vulnerabilities to errors (Burns 2012), this report uses R. The particle size is based on the concept of activity median aerodynamic diameter (AMAD). The AMAD is a particle size in an aerosol where fifty percent of the activity in the aerosol is associated with particles of aerodynamic diameter greater than the AMAD. This concept allows for the simplification of transport efficiency calculations where all particles are treated as spheres with the density of water (1g cm-3). In reality, particle densities depend on the actual material involved. Particle geometries can be very complicated. Dynamic shape factors are provided by Hinds (Hinds 1999). Some example factors are: 1.00 for a sphere, 1.08 for a cube, 1.68 for a long cylinder (10 times as long as it is wide), 1.05 to 1.11 for bituminous coal, 1.57 for sand and 1.88 for talc. Revision 1 is made to correct an error in the original version of this report. The particle distributions are based on activity weighting of particles rather than based on the number of particles of each size. Therefore, the mass correction made in the original version is removed from the text and the calculations. Results affected by the change are updated.
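One building block of the deposition-loss calculations the report describes is the Stokes terminal settling velocity of an AMAD-sized (unit-density) sphere; a minimal sketch, neglecting the Cunningham slip correction that matters for particles below about 1 µm (the report's R scripts are not reproduced here):

```python
def stokes_settling_velocity(d_um, rho_p=1000.0, mu=1.81e-5, g=9.81):
    """Terminal settling velocity (m/s) in air of a sphere of aerodynamic
    diameter d_um (micrometres): the AMAD convention treats particles as
    unit-density (water) spheres. Stokes regime, slip correction neglected."""
    d = d_um * 1e-6                           # diameter in metres
    return rho_p * d ** 2 * g / (18.0 * mu)   # Stokes law: rho*d^2*g/(18*mu)

print(stokes_settling_velocity(5.0))  # about 7.5e-4 m/s for a 5 um particle
```

This velocity feeds the gravitational deposition efficiency of horizontal tube runs; aspiration and bend losses require the additional formulas the report presents.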
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating the • number of samples required to achieve a specified confidence in characterization and clearance decisions, and the • confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, which is commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
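The flavor of such confidence/number-of-samples relationships can be sketched for the simplest case of random sampling of a homogeneously contaminated fraction (a standard acceptance-sampling result, not necessarily the report's exact hotspot or CJR formulas):

```python
from math import ceil, log

def n_detect(fraction, confidence=0.95, fnr=0.0):
    """Number of random samples needed so that, with probability
    `confidence`, at least one sample detects contamination covering
    a given `fraction` of the area, with per-sample false negative
    rate `fnr` (samples assumed independent)."""
    p_hit = fraction * (1.0 - fnr)
    return ceil(log(1.0 - confidence) / log(1.0 - p_hit))

print(n_detect(0.05))           # 59 samples when FNR = 0
print(n_detect(0.05, fnr=0.2))  # 74 samples when FNR = 0.2
```

Inverting the same relation for a fixed n gives the confidence that an area is uncontaminated when all results are negative, which is the quantity the report computes for the GAO question.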
Tsai, Chen-An; Huang, Chih-Yang; Liu, Jen-Pei
2014-08-30
The approval of generic drugs requires evidence of average bioequivalence (ABE) on both the area under the concentration-time curve and the peak concentration Cmax. The bioequivalence (BE) hypothesis can be decomposed into non-inferiority (NI) and non-superiority (NS) hypotheses. Most regulatory agencies employ the two one-sided tests (TOST) procedure to test ABE between two formulations. As it is based on the intersection-union principle, the TOST procedure is conservative in terms of the type I error rate. However, the type II error rate is the sum of the type II error rates with respect to each null hypothesis of the NI and NS hypotheses. When the difference in population means between two treatments is not 0, no closed-form solution for the sample size for the BE hypothesis is available. Current methods provide sample sizes with either insufficient power or unnecessarily excessive power. We suggest an approximate method for sample size determination, which can also provide the type II error rate for each of the NI and NS hypotheses. In addition, the proposed method is flexible enough to extend from one pharmacokinetic (PK) response to determination of the sample size required for multiple PK responses. We report the results of a numerical study. An R code is provided to calculate the sample size for BE testing based on the proposed methods.
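The TOST power on the log scale, and a simple search for the smallest n, can be sketched with a normal approximation (a generic approximation for a 2×2 crossover, not the authors' proposed method; exact t-based calculations give slightly larger n):

```python
from math import log, sqrt
from statistics import NormalDist

norm = NormalDist()

def tost_power(n, cv, gmr, theta=1.25, alpha=0.05):
    """Approximate power of TOST for a 2x2 crossover ABE study: n total
    subjects, intrasubject CV, true geometric mean ratio gmr, BE limits
    (1/theta, theta); normal approximation on the log scale."""
    sigma_w = sqrt(log(1.0 + cv ** 2))   # intrasubject SD on the log scale
    se = sigma_w * sqrt(2.0 / n)         # SE of the estimated log-ratio
    za = norm.inv_cdf(1 - alpha)
    power = (norm.cdf((log(theta) - log(gmr)) / se - za)
             + norm.cdf((log(theta) + log(gmr)) / se - za) - 1.0)
    return max(power, 0.0)

def tost_n(cv, gmr, power=0.80):
    """Smallest even total n achieving the target power."""
    n = 4
    while tost_power(n, cv, gmr) < power:
        n += 2
    return n

# CV = 30%, GMR = 0.95: the normal approximation gives n = 38;
# exact t-based tables give a slightly larger n (about 40)
print(tost_n(0.30, 0.95))
```

Note how the power expression is one minus the sum of the NI and NS type II error terms, mirroring the decomposition described in the abstract.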
Enhancing sampling design in mist-net bat surveys by accounting for sample size optimization.
Trevelin, Leonardo Carreira; Novaes, Roberto Leonan Morim; Colas-Rosas, Paul François; Benathar, Thayse Cristhina Melo; Peres, Carlos A
2017-01-01
The advantages of mist-netting, the main technique used in Neotropical bat community studies to date, include logistical implementation, standardization and sampling representativeness. Nonetheless, study designs still have to deal with issues of detectability related to how different species behave and use the environment. Yet there is considerable sampling heterogeneity across available studies in the literature. Here, we approach the problem of sample size optimization. We evaluated the common sense hypothesis that the first six hours comprise the period of peak night activity for several species, thereby resulting in a representative sample for the whole night. To this end, we combined re-sampling techniques, species accumulation curves, threshold analysis, and community concordance of species compositional data, and applied them to datasets of three different Neotropical biomes (Amazonia, Atlantic Forest and Cerrado). We show that the strategy of restricting sampling to only six hours of the night frequently results in incomplete sampling representation of the entire bat community investigated. From a quantitative standpoint, results corroborated the existence of a major Sample Area effect in all datasets, although for the Amazonia dataset the six-hour strategy was significantly less species-rich after extrapolation, and for the Cerrado dataset it was more efficient. From the qualitative standpoint, however, results demonstrated that, for all three datasets, the identity of species that are effectively sampled will be inherently impacted by choices of sub-sampling schedule. We also propose an alternative six-hour sampling strategy (at the beginning and the end of a sample night) which performed better when resampling Amazonian and Atlantic Forest datasets on bat assemblages. Given the observed magnitude of our results, we propose that sample representativeness has to be carefully weighed against study objectives, and recommend that the trade-off between
Son, Dae-Soon; Lee, DongHyuk; Lee, Kyusang; Jung, Sin-Ho; Ahn, Taejin; Lee, Eunjin; Sohn, Insuk; Chung, Jongsuk; Park, Woongyang; Huh, Nam; Lee, Jae Won
2015-02-01
An empirical method of sample size determination for building prediction models was proposed recently. The permutation method used in this procedure is commonly applied to address the problem of overfitting during cross-validation while evaluating the performance of prediction models constructed from microarray data. But a major drawback of such methods, which include bootstrapping and full permutation, is the prohibitively high cost of computation required for calculating the sample size. In this paper, we propose that a single representative null distribution can be used instead of a full permutation, using both simulated and real data sets. During simulation, we used a dataset with zero effect size and confirmed that the empirical type I error approaches 0.05. Hence this method can be confidently applied to reduce the overfitting problem during cross-validation. We observed that a pilot data set generated by random sampling from real data could be successfully used for sample size determination. We present results from an experiment that was repeated 300 times, producing results comparable to those of the full permutation method. Since we eliminate the full permutation, sample size estimation time is not a function of pilot data size; in our experiment this process takes around 30 minutes. With the increasing number of clinical studies, developing efficient sample size determination methods for building prediction models is critical. But empirical methods using bootstrap and permutation usually involve high computing costs. In this study, we propose a method that reduces the required computing time drastically by using a representative null distribution of permutations. We use data from pilot experiments to apply this method for designing clinical studies efficiently for high-throughput data.
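The idea of a single representative null distribution can be illustrated on simulated null microarray data: one random relabeling, pooled across genes, stands in for the full permutation distribution (an illustrative sketch of the idea, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(42)

def tstats(X, labels):
    """Welch two-sample t statistic for every row (gene) of X."""
    a, b = X[:, labels == 0], X[:, labels == 1]
    va = a.var(axis=1, ddof=1) / a.shape[1]
    vb = b.var(axis=1, ddof=1) / b.shape[1]
    return (a.mean(axis=1) - b.mean(axis=1)) / np.sqrt(va + vb)

# null data: 2000 genes x 20 samples, zero effect size
X = rng.standard_normal((2000, 20))
labels = np.repeat([0, 1], 10)

# one random relabeling, pooled across genes, serves as the
# representative null distribution instead of a full permutation
null = tstats(X, rng.permutation(labels))
crit = np.quantile(np.abs(null), 0.95)

type1 = np.mean(np.abs(tstats(X, labels)) > crit)
# under zero effect size, the empirical type I error is near 0.05
```

Because only one relabeling is computed regardless of pilot data size, the cost no longer scales with the number of permutations, which is the source of the speedup the abstract reports.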
General Conformity Training Modules: Appendix A Sample Emissions Calculations
Appendix A of the training modules gives example calculations for external and internal combustion sources, construction, fuel storage and transfer, on-road vehicles, aircraft operations, storage piles, and paved roads.
Lachin, John M
2006-10-15
Various methods have been described for re-estimating the final sample size in a clinical trial based on an interim assessment of the treatment effect. Many re-weight the observations after re-sizing so as to control the pursuant inflation in the type I error probability alpha. Lan and Trost (Estimation of parameters and sample size re-estimation. Proceedings of the American Statistical Association Biopharmaceutical Section 1997; 48-51) proposed a simple procedure based on conditional power calculated under the current trend in the data (CPT): the study is terminated for futility if CPT <= CL, continued unchanged if CPT >= CU, or re-sized by a factor m to yield CPT = CU if CL < CPT < CU. The deflation in type I error due to stopping for futility can balance the inflation due to sample size re-estimation, thus permitting any form of final analysis with no re-weighting. Herein the statistical properties of this approach are described, including an evaluation of the probabilities of stopping for futility or re-sizing, the distribution of the re-sizing factor m, and the unconditional type I and II error probabilities alpha and beta. Since futility stopping does not allow a type I error but commits a type II error, then as the probability of stopping for futility increases, alpha decreases and beta increases. An iterative procedure is described for the choice of the critical test value and the futility stopping boundary so as to ensure that the specified alpha and beta are obtained. However, inflation in beta is controlled by reducing the probability of futility stopping, which in turn dramatically increases the possible re-sizing factor m. The procedure is also generalized to limit the maximum sample size inflation factor, such as at m max = 4. However, doing so then allows a non-trivial fraction of studies to be re-sized at this level while still having low conditional power. These properties also apply to other methods for sample size re-estimation with a provision for stopping for futility. Sample size re-estimation procedures should be used with caution
40 CFR 91.419 - Raw emission sampling calculations.
2010-07-01
Gives the raw emission sampling equations, including: the exhaust mass flow rate; the molecular weight of hydrocarbons in the exhaust, MHCexh = 12.01 + 1.008 × α, where α is the hydrocarbon/carbon atomic ratio of the fuel; Mexh, the molecular weight of the exhaust; and WCO, the mass rate of CO in the exhaust, calculated from MCO, the molecular weight of CO.
40 CFR 91.426 - Dilute emission sampling calculations.
2010-07-01
Gives the dilute emission sampling calculations, including the effective molecular weight of the exhaust hydrocarbons (i.e., the molecular weight of the hydrocarbon molecule divided by the number of carbon atoms), using MC = molecular weight of carbon = 12.01, MH = molecular weight of hydrogen = 1.008, and α = hydrogen-to-carbon ratio of the fuel; the calculation is based on the assumption that the fuel used has a known carbon-to-hydrogen ratio.
Threshold-dependent sample sizes for selenium assessment with stream fish tissue
Hitt, Nathaniel P.; Smith, David R.
2015-01-01
Natural resource managers are developing assessments of selenium (Se) contamination in freshwater ecosystems based on fish tissue concentrations. We evaluated the effects of sample size (i.e., number of fish per site) on the probability of correctly detecting mean whole-body Se values above a range of potential management thresholds. We modeled Se concentrations as gamma distributions with shape and scale parameters fitting an empirical mean-to-variance relationship in data from southwestern West Virginia, USA (63 collections, 382 individuals). We used parametric bootstrapping techniques to calculate statistical power as the probability of detecting true mean concentrations up to 3 mg Se/kg above management thresholds ranging from 4 to 8 mg Se/kg. Sample sizes required to achieve 80% power varied as a function of the management threshold and the Type I error tolerance (α). Higher thresholds required more samples than lower thresholds because populations were more heterogeneous at higher mean Se levels. For instance, to assess a management threshold of 4 mg Se/kg, a sample of eight fish could detect an increase of approximately 1 mg Se/kg with 80% power (given α = 0.05), but this sample size would be unable to detect such an increase from a management threshold of 8 mg Se/kg with more than a coin-flip probability. Increasing α decreased the sample size required to detect above-threshold mean Se concentrations with 80% power. For instance, at an α-level of 0.05, an 8-fish sample could detect an increase of approximately 2 units above a threshold of 8 mg Se/kg with 80% power, but when α was relaxed to 0.2, this sample size was more sensitive to increasing mean Se concentrations, allowing detection of an increase of approximately 1.2 units with equivalent power. Combining individuals into 2- and 4-fish composite samples for laboratory analysis did not decrease power because the reduced number of laboratory samples was compensated for by increased
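The parametric bootstrap power calculation described above can be sketched roughly as follows. The variance value, bootstrap count, and use of a one-sided t-test are assumptions for illustration, not the authors' exact procedure (which fit an empirical mean-to-variance relationship to field data).

```python
import numpy as np
from scipy import stats

def power_above_threshold(true_mean, threshold, n_fish, var,
                          alpha=0.05, n_boot=2000, seed=1):
    """Parametric-bootstrap power: probability that a one-sided t-test on a
    sample of n_fish gamma-distributed Se concentrations detects
    mean > threshold."""
    rng = np.random.default_rng(seed)
    # Gamma parameterised by mean and variance: shape k, scale theta
    k = true_mean ** 2 / var
    theta = var / true_mean
    rejections = 0
    for _ in range(n_boot):
        sample = rng.gamma(k, theta, size=n_fish)
        t, p = stats.ttest_1samp(sample, threshold, alternative='greater')
        if p < alpha:
            rejections += 1
    return rejections / n_boot

# Hypothetical case: true mean 1 mg/kg above a 4 mg/kg threshold, 8 fish
print(power_above_threshold(5.0, 4.0, 8, var=1.0))
```

With a larger sample the simulated power increases, which is the trade-off the paper quantifies across thresholds.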
7 CFR 51.308 - Methods of sampling and calculation of percentages.
2010-01-01
Sets out the methods of sampling and calculation of percentages for the United States Standards for Grades of Apples (7 CFR 51, under Inspection, Certification, and Standards), § 51.308.
40 CFR Appendix II to Part 600 - Sample Fuel Economy Calculations
2010-07-01
Appendix II to 40 CFR Part 600 (Fuel Economy and Carbon-Related Exhaust Emissions of Motor Vehicles, under Protection of Environment) presents sample fuel economy calculations.
Using electron microscopy to calculate optical properties of biological samples
Wu, Wenli; Radosevich, Andrew J.; Eshein, Adam; Nguyen, The-Quyen; Yi, Ji; Cherkezyan, Lusik; Roy, Hemant K.; Szleifer, Igal; Backman, Vadim
2016-01-01
The microscopic structural origins of optical properties in biological media are still not fully understood. Better understanding these origins can serve to improve the utility of existing techniques and facilitate the discovery of other novel techniques. We propose a novel analysis technique using electron microscopy (EM) to calculate optical properties of specific biological structures. This method is demonstrated with images of human epithelial colon cell nuclei. The spectrum of anisotropy...
A simulation study of sample size for multilevel logistic regression models
Directory of Open Access Journals (Sweden)
Moineddin Rahim
2007-07-01
Background: Many studies conducted in health and social sciences collect individual-level data as outcome measures. Usually, such data have a hierarchical structure, with patients clustered within physicians, and physicians clustered within practices. Large survey data, including national surveys, have a hierarchical or clustered structure; respondents are naturally clustered in geographical units (e.g., health regions) and may be grouped into smaller units. Outcomes of interest in many fields reflect not only continuous measures, but also binary outcomes such as depression, presence or absence of a disease, and self-reported general health. In the framework of multilevel studies an important problem is calculating an adequate sample size that generates unbiased and accurate estimates.

Methods: In this paper simulation studies are used to assess the effect of varying sample size at both the individual and group level on the accuracy of the estimates of the parameters and variance components of multilevel logistic regression models. In addition, the influence of the prevalence of the outcome and the intra-class correlation coefficient (ICC) is examined.

Results: The results show that the estimates of the fixed effect parameters are unbiased for 100 groups with a group size of 50 or higher. The estimates of the variance covariance components are slightly biased even with 100 groups and a group size of 50. The biases for both fixed and random effects are severe for a group size of 5. The standard errors for fixed effect parameters are unbiased, while those for variance covariance components are underestimated. Results suggest that low-prevalence events require larger sample sizes, with at least a minimum of 100 groups and 50 individuals per group.

Conclusion: We recommend using a minimum group size of 50 with at least 50 groups to produce valid estimates for multilevel logistic regression models. Group size should be adjusted under conditions where the prevalence
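A minimal sketch of the data-generating process for such a simulation study — two-level binary data with group random intercepts, with the ICC computed on the latent logistic scale — assuming hypothetical parameter values:

```python
import numpy as np

def simulate_multilevel_binary(n_groups, group_size, beta0, beta1,
                               sigma_u, seed=0):
    """Simulate two-level binary data: logit(p_ij) = beta0 + beta1*x_ij + u_j,
    with group random intercepts u_j ~ N(0, sigma_u^2)."""
    rng = np.random.default_rng(seed)
    u = rng.normal(0.0, sigma_u, size=n_groups)      # group effects
    x = rng.standard_normal((n_groups, group_size))  # individual covariate
    eta = beta0 + beta1 * x + u[:, None]
    p = 1.0 / (1.0 + np.exp(-eta))
    y = rng.random((n_groups, group_size)) < p
    return x, y.astype(int)

def latent_icc(sigma_u):
    """Intra-class correlation on the latent logistic scale:
    sigma_u^2 / (sigma_u^2 + pi^2 / 3)."""
    return sigma_u ** 2 / (sigma_u ** 2 + np.pi ** 2 / 3.0)

# 100 groups of size 50, as in the paper's recommended design
x, y = simulate_multilevel_binary(100, 50, beta0=-1.0, beta1=0.5, sigma_u=1.0)
print(y.mean(), latent_icc(1.0))
```

A full replication would fit a multilevel logistic model to each simulated data set and compare estimates with the generating values.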
Space resection model calculation based on Random Sample Consensus algorithm
Liu, Xinzhu; Kang, Zhizhong
2016-03-01
Resection has long been one of the most important topics in photogrammetry; it aims to recover the position and attitude of the camera at the moment of exposure. In some cases, however, the observations used in the calculation contain gross errors. This paper presents a robust algorithm that uses the RANSAC method with the direct linear transformation (DLT) model, which effectively avoids the difficulty of determining initial values when using the collinearity equations. The results also show that this strategy can exclude gross errors and leads to an accurate and efficient way to obtain the elements of exterior orientation.
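The RANSAC loop at the heart of the method can be illustrated on a toy problem. The example below fits a 2-D line rather than the DLT resection model, and the tolerance and iteration count are hypothetical.

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, seed=0):
    """Fit y = a*x + b robustly: sample minimal 2-point subsets, count
    inliers, then refit on the best consensus set (the core RANSAC loop)."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        resid = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = resid < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Least-squares refit on the consensus set excludes the gross errors
    xs, ys = points[best_inliers, 0], points[best_inliers, 1]
    a, b = np.polyfit(xs, ys, 1)
    return a, b, best_inliers

# Hypothetical data: 40 points near y = 2x + 1 plus 10 gross errors
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.02, 50)
y[:10] += rng.uniform(5, 20, 10)  # gross errors
a, b, mask = ransac_line(np.column_stack([x, y]))
```

In the paper the minimal subset solves the DLT for camera pose instead of a 2-point line, but the sample/score/refit structure is the same.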
DEFF Research Database (Denmark)
Kristensen, Philip Trøst; Lodahl, Peter; Mørk, Jesper
2009-01-01
We present a multipole solution to the Lippmann-Schwinger equation for electromagnetic scattering in inhomogeneous geometries. The method is illustrated by calculating the Green’s function for a finite sized two-dimensional photonic crystal waveguide.
Institute of Scientific and Technical Information of China (English)
Hong Tang; Xiaogang Sun; Guibin Yuan
2007-01-01
In the total light scattering particle sizing technique, the relationship among the Sauter mean diameter D32, the mean extinction efficiency Q, and the particle size distribution function is studied in order to invert the mean diameter and particle size distribution simply. We propose a method that uses the ratio of mean extinction efficiencies at only two selected wavelengths to solve for D32 and then inverts the particle size distribution associated with Q and D32. Numerical simulation results show that the particle size distribution is inverted accurately with this method, and that the number of wavelengths used is reduced as far as possible within the measurement range. The calculation method has the advantages of simplicity and speed.
Hauschke, D; Steinijans, W V; Diletti, E; Schall, R; Luus, H G; Elze, M; Blume, H
1994-07-01
Bioequivalence studies are generally performed as crossover studies and, therefore, information on the intrasubject coefficient of variation is needed for sample size planning. Unfortunately, this information is usually not presented in publications on bioequivalence studies, and only the pooled inter- and intrasubject coefficient of variation for either test or reference formulation is reported. Thus, the essential information for sample size planning of future studies is not made available to other researchers. In order to overcome such shortcomings, the presentation of results from bioequivalence studies should routinely include the intrasubject coefficient of variation. For the relevant coefficients of variation, theoretical background together with modes of calculation and presentation are given in this communication with particular emphasis on the multiplicative model.
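For the multiplicative model discussed above, the intrasubject CV is obtained from the residual (within-subject) variance of the ANOVA on log-transformed data as CV = sqrt(exp(s_w^2) − 1). A minimal sketch with a hypothetical residual mean square:

```python
import math

def intrasubject_cv(mse_log):
    """Intrasubject CV from the residual (within-subject) mean square of the
    ANOVA on log-transformed data, multiplicative model:
    CV = sqrt(exp(MSE) - 1)."""
    return math.sqrt(math.exp(mse_log) - 1.0)

# Hypothetical residual mean square from a 2x2 crossover on ln(AUC)
print(round(100 * intrasubject_cv(0.0434), 1))  # CV in percent, ~21.1
```

Reporting this quantity, rather than only the pooled inter- plus intrasubject CV, is exactly what the authors recommend for future sample size planning.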
IMAGE PROFILE AREA CALCULATION BASED ON CIRCULAR SAMPLE MEASUREMENT CALIBRATION
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
A practical measurement calibration approach is presented for obtaining the true area of photographed objects projected in a 2-D image scene. The calibration is performed using three circular samples with given diameters. The process first obtains the mm/pixel ratio in two orthogonal directions, and then uses the obtained ratios, together with the total number of pixels within the projected area of the object of interest, to compute the desired area. Comparing the optically measured areas with their corresponding true areas shows that the proposed method is quite encouraging, and the relevant application also proves the approach adequately accurate.
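The calibration arithmetic can be sketched as follows; the disc diameter and pixel counts are hypothetical, and a real implementation would average the ratios over the three circular samples.

```python
def mm_per_pixel(diameter_mm, diameter_px_x, diameter_px_y):
    """Ratios in the two orthogonal image directions from one circular
    calibration sample of known diameter."""
    return diameter_mm / diameter_px_x, diameter_mm / diameter_px_y

def object_area_mm2(pixel_count, rx, ry):
    """True area from the pixel count inside the projected region: each
    pixel covers rx * ry square millimetres."""
    return pixel_count * rx * ry

# Hypothetical 20 mm calibration disc measuring 400 x 410 pixels
rx, ry = mm_per_pixel(20.0, 400.0, 410.0)
print(object_area_mm2(10000, rx, ry))
```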
Probing finite size effects in $(\lambda \Phi^{4})_4$ Monte Carlo calculations
Agodi, A
1999-01-01
The Constrained Effective Potential (CEP) is known to be equivalent to the usual Effective Potential (EP) in the infinite volume limit. We have carried out Monte Carlo calculations based on the two different definitions to obtain information on finite size effects. We also compared these calculations with those based on an Improved CEP (ICEP), which takes into account the finite size of the lattice. It turns out that ICEP actually reduces the finite size effects, which are more visible near the vanishing of the external source.
Communication: Finite size correction in periodic coupled cluster theory calculations of solids
Liao, Ke; Grüneis, Andreas
2016-10-01
We present a method to correct for finite size errors in coupled cluster theory calculations of solids. The outlined technique shares similarities with electronic structure factor interpolation methods used in quantum Monte Carlo calculations. However, our approach does not require the calculation of density matrices. Furthermore we show that the proposed finite size corrections achieve chemical accuracy in the convergence of second-order Møller-Plesset perturbation and coupled cluster singles and doubles correlation energies per atom for insulating solids with two atomic unit cells using 2 × 2 × 2 and 3 × 3 × 3 k-point meshes only.
Institute of Scientific and Technical Information of China (English)
R. A. KUHNLE; D. G. WREN; J. P. CHAMBERS
2007-01-01
Collection of samples of suspended sediment transported by streams and rivers is difficult and expensive. Emerging technologies, such as acoustic backscatter, have promise to decrease costs and allow more thorough sampling of transported sediment in streams and rivers. Acoustic backscatter information may be used to calculate the concentration of suspended sand-sized sediment given the vertical distribution of sediment size. Therefore, procedures to accurately compute suspended sediment size distributions from easily obtained river data are badly needed. In this study, techniques to predict the size of suspended sand are examined and their application to measuring concentrations using acoustic backscatter data are explored. Three methods to predict the size of sediment in suspension using bed sediment, flow criteria, and a modified form of the Rouse equation yielded mean suspended sediment sizes that differed from means of measured data by 7 to 50 percent. When one sample near the bed was used as a reference, mean error was reduced to about 5 percent. These errors in size determination translate into errors of 7 to 156 percent in the prediction of sediment concentration using backscatter data from 1 MHz single frequency acoustics.
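For reference, the (unmodified) Rouse equation referred to above predicts the vertical distribution of suspended-sediment concentration; a sketch with hypothetical flow parameters:

```python
import math

def rouse_concentration(z, h, a, c_a, ws, u_star, kappa=0.41):
    """Rouse profile: C(z) = C_a * [((h - z)/z) * (a/(h - a))]**P,
    with Rouse number P = ws / (kappa * u_star), reference level a,
    flow depth h, and reference concentration C_a."""
    P = ws / (kappa * u_star)
    return c_a * (((h - z) / z) * (a / (h - a))) ** P

# Hypothetical flow: depth 2 m, reference level 0.1 m,
# settling velocity 0.01 m/s, shear velocity 0.05 m/s
heights = [0.2, 0.5, 1.0, 1.5]
profile = [rouse_concentration(z, 2.0, 0.1, 1.0, 0.01, 0.05) for z in heights]
print(profile)
```

Concentration decays with height above the bed, which is why a near-bed reference sample (as the authors found) sharply improves size and concentration estimates.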
Directory of Open Access Journals (Sweden)
Tudor DRUGAN
2003-08-01
The aim of this paper is to present the usefulness of the binomial distribution in studying contingency tables, and the problems of approximating the binomial distribution to normality (its limits, advantages, and disadvantages). Classifying the key medical parameters reported in the medical literature and expressing them in terms of contingency table cells, based on their mathematical expressions, reduces the discussion of confidence intervals from 34 parameters to 9 mathematical expressions. The problem of obtaining different information starting from the computed confidence interval for a specified method — the confidence interval boundaries, the percentages of experimental errors, the standard deviation of the experimental errors, and the deviation relative to the significance level — was solved by implementing original algorithms in the PHP programming language. Expressions that contain two binomial variables were treated separately. An original method of computing the confidence interval for two-variable expressions was proposed and implemented. The graphical representation of expressions of two binomial variables, in which the variation domain of one variable depends on the other, was a real problem, because most software uses interpolation in graphical representation and the resulting surface maps were quadratic instead of triangular. Based on an original algorithm, a module was implemented in PHP to represent triangular surface plots graphically. All of the implementations described above were used in computing confidence intervals and estimating their performance for binomial distributions over a range of sample sizes and variables.
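The paper's PHP implementation is not reproduced here; as an illustrative stand-in, the Wilson score interval is one standard way to compute a binomial-proportion confidence interval that behaves better than the normal (Wald) approximation near 0 and 1:

```python
import math

def wilson_interval(successes, n, z=1.959964):
    """Wilson score confidence interval for a binomial proportion
    (95% two-sided by default)."""
    p = successes / n
    denom = 1.0 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half, center + half

print(wilson_interval(50, 100))
```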
Energy Technology Data Exchange (ETDEWEB)
Atkins, T J; Duck, F A; Tooley, M A [Department of Medical Physics and Bioengineering, Royal United Hospital, Combe Park, Bath BA1 3NG (United Kingdom); Humphrey, V F, E-mail: timothy.atkins@nhs.net [Institute of Sound and Vibration Research, University of Southampton, Southampton SO17 1BJ (United Kingdom)
2011-02-01
The response of two coaxially aligned weakly focused ultrasonic transducers, typical of those employed for measuring the attenuation of small samples using the immersion method, has been investigated. The effects of the sample size on transmission measurements have been analyzed by integrating the sound pressure distribution functions of the radiator and receiver over different limits to determine the size of the region that contributes to the system response. The results enable the errors introduced into measurements of attenuation to be estimated as a function of sample size. A theoretical expression has been used to examine how the transducer separation affects the receiver output. The calculations are compared with an experimental study of the axial response of three unpaired transducers in water. The separation of each transducer pair giving the maximum response was determined, and compared with the field characteristics of the individual transducers. The optimum transducer separation, for accurate estimation of sample properties, was found to fall between the sum of the focal distances and the sum of the geometric focal lengths as this reduced diffraction errors.
10 CFR Appendix to Part 474 - Sample Petroleum-Equivalent Fuel Economy Calculations
2010-01-01
The Appendix to 10 CFR Part 474 (Petroleum-Equivalent Fuel Economy Calculation, under the Electric and Hybrid Vehicle Research, Development, and Demonstration Program) presents sample petroleum-equivalent fuel economy calculations; Example 1 concerns an electric vehicle.
Comparing Server Energy Use and Efficiency Using Small Sample Sizes
Energy Technology Data Exchange (ETDEWEB)
Coles, Henry C.; Qin, Yong; Price, Phillip N.
2014-11-01
This report documents a demonstration that compared the energy consumption and efficiency of a limited sample size of server-type IT equipment from different manufacturers by measuring power at the server power supply power cords. The results are specific to the equipment and methods used; however, it is hoped that those responsible for IT equipment selection can use the methods described to choose models that optimize energy use efficiency. The demonstration was conducted in a data center at Lawrence Berkeley National Laboratory in Berkeley, California. It was performed with five servers of similar mechanical and electronic specifications: three from Intel and one each from Dell and Supermicro. Server IT equipment is constructed using commodity components, server manufacturer-designed assemblies, and control systems. Server compute efficiency is constrained by the commodity component specifications and integration requirements. The design freedom outside of the commodity component constraints provides room for a manufacturer to offer a product with competitive efficiency that meets market needs at a compelling price. A goal of the demonstration was to compare and quantify server efficiency for three different brands, where efficiency is defined as the average compute rate (computations per unit of time) divided by the average energy consumption rate. The research team used an industry standard benchmark software package to provide a repeatable software load to obtain the compute rate and provide a variety of power consumption levels. Energy use when the servers were in an idle state (not providing computing work) was also measured. At high server compute loads, all brands, using the same key components (processors and memory), had similar results; therefore, from these results, it could not be concluded that one brand is more efficient than the others. The test results show that the power consumption variability caused by the key components as a
Sandlin, Doral R.; Swanson, Stephen Mark
1990-01-01
The creation of a computer module used to calculate the size of the horizontal control surfaces of a conceptual aircraft design is discussed. The control surface size is determined by first calculating the size needed to rotate the aircraft during takeoff, and, second, by determining if the calculated size is large enough to maintain stability of the aircraft throughout any specified mission. The tail size needed to rotate during takeoff is calculated from a summation of forces about the main landing gear of the aircraft. The stability of the aircraft is determined from a summation of forces about the center of gravity during different phases of the aircraft's flight. Included in the horizontal control surface analysis are: downwash effects on an aft tail, upwash effects on a forward canard, and effects due to flight in close proximity to the ground. Comparisons of production aircraft with numerical models show good accuracy for control surface sizing. A modified canard design verified the accuracy of the module for canard configurations. Added to this stability and control module is a subroutine that determines one of the three design variables, for a stable vectored thrust aircraft. These include forward thrust nozzle position, aft thrust nozzle angle, and forward thrust split.
Sample Size Determination for Regression Models Using Monte Carlo Methods in R
Beaujean, A. Alexander
2014-01-01
A common question asked by researchers using regression models is, What sample size is needed for my study? While there are formulae to estimate sample sizes, their assumptions are often not met in the collected data. A more realistic approach to sample size determination requires more information such as the model of interest, strength of the…
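A Monte Carlo approach of the kind described can be sketched as follows (in Python rather than R, with a simple slope t-test and a hypothetical effect size): simulated power is computed at each candidate n, and the smallest n reaching the target is found by bisection.

```python
import numpy as np
from scipy import stats

def slope_power(n, beta, sigma=1.0, alpha=0.05, n_sim=1000, seed=0):
    """Monte Carlo power of the t-test for the slope in y = beta*x + e."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        x = rng.standard_normal(n)
        y = beta * x + rng.normal(0, sigma, n)
        if stats.linregress(x, y).pvalue < alpha:
            hits += 1
    return hits / n_sim

def smallest_n(beta, target=0.80, lo=5, hi=400):
    """Bisection for the smallest n whose simulated power reaches target."""
    while lo < hi:
        mid = (lo + hi) // 2
        if slope_power(mid, beta) >= target:
            hi = mid
        else:
            lo = mid + 1
    return lo

n_needed = smallest_n(0.5)  # hypothetical standardized slope of 0.5
print(n_needed)
```

The attraction of the simulation route is that the data-generating step can be replaced by whatever model (non-normal errors, heteroscedasticity, clustered data) the researcher actually expects.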
Limitations of mRNA amplification from small-size cell samples
Directory of Open Access Journals (Sweden)
Myklebost Ola
2005-10-01
Background: Global mRNA amplification has become a widely used approach to obtain gene expression profiles from limited material. An important concern is whether the results reliably reflect the starting material. This is especially important with extremely low quantities of input RNA, where stochastic effects due to template dilution may be present. This aspect remains under-documented in the literature, as quantitative measures of data reliability are most often lacking. To address this issue, we examined the sensitivity levels of each transcript in 3 different cell sample sizes. ANOVA was used to estimate the overall effects of reduced input RNA in our experimental design. In order to estimate the validity of decreasing sample sizes, we examined the sensitivity levels of each transcript by applying a novel model-based method, TransCount.

Results: From expression data, TransCount provided estimates of absolute transcript concentrations in each examined sample. These estimates were used to calculate the Pearson correlation coefficient between transcript concentrations for different sample sizes. The correlations were clearly transcript copy number dependent, and a critical level was observed where stochastic fluctuations became significant. The analysis allowed us to pinpoint the gene-specific number of transcript templates that defined the limit of reliability with respect to the number of cells from that particular source. In the sample amplified from 1000 cells, transcripts expressed with at least 121 transcripts/cell were statistically reliable; for 250 cells, the limit was 1806 transcripts/cell. Above these thresholds, the correlation between our data sets was at acceptable values for reliable interpretation.

Conclusion: These results imply that the reliability of any amplification experiment must be validated empirically to justify that any gene exists in sufficient quantity in the input material. This
Non-uniform sampled scalar diffraction calculation using non-uniform fast Fourier transform
Shimobaba, Tomoyoshi; Oikawa, Minoru; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi
2013-01-01
Scalar diffraction calculations, such as the angular spectrum method (ASM) and Fresnel diffraction, are widely used in the research fields of optics, X-rays, electron beams, and ultrasonics. The calculation can be accelerated using the fast Fourier transform (FFT); unfortunately, acceleration for non-uniformly sampled planes is limited by the FFT's requirement of uniform sampling. In addition, uniform sampling is wasteful when calculating a plane that has locally low and high spatial frequencies. In this paper, we develop non-uniformly sampled ASM and Fresnel diffraction to address these problems using the non-uniform FFT.
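For reference, the uniformly sampled ASM that this work generalizes can be written in a few lines. The grid size, wavelength, and propagation distance below are hypothetical, and since every sampled spatial frequency in this example is propagating, the transfer function is unitary and the field energy is conserved.

```python
import numpy as np

def angular_spectrum(field, dx, wavelength, z):
    """Propagate a complex field by distance z with the angular spectrum
    method on a uniform grid (the FFT-based baseline)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Hypothetical Gaussian beam: 256-point grid, 1 um pixels, 500 nm light
n, dx, wl = 256, 1e-6, 0.5e-6
x = (np.arange(n) - n / 2) * dx
X, Y = np.meshgrid(x, x)
u0 = np.exp(-(X ** 2 + Y ** 2) / (20e-6) ** 2)
u1 = angular_spectrum(u0, dx, wl, z=1e-3)
```

The paper's contribution is to relax the uniform `dx` grid that this baseline requires.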
Bill, Anthony; Henderson, Sally; Penman, John
2010-01-01
Two test items were developed that examined high school students' beliefs about sample size for large populations, using the context of opinion polls conducted prior to national and state elections. A trial of the two items with 21 male and 33 female Year 9 students examined their naive understanding of sample size: over half of students chose a…
Theory of Finite Size Effects for Electronic Quantum Monte Carlo Calculations of Liquids and Solids
Holzmann, Markus; Morales, Miguel A; Tubmann, Norm M; Ceperley, David M; Pierleoni, Carlo
2016-01-01
Concentrating on zero temperature Quantum Monte Carlo calculations of electronic systems, we give a general description of the theory of finite size extrapolations of energies to the thermodynamic limit based on one and two-body correlation functions. We introduce new effective procedures, such as using the potential and wavefunction split-up into long and short range functions to simplify the method and we discuss how to treat backflow wavefunctions. Then we explicitly test the accuracy of our method to correct finite size errors on example hydrogen and helium many-body systems and show that the finite size bias can be drastically reduced for even small systems.
Influence of macroinvertebrate sample size on bioassessment of streams
Vlek, H.E.; Sporka, F.; Krno, I.
2006-01-01
In order to standardise biological assessment of surface waters in Europe, a standardised method for sampling, sorting and identification of benthic macroinvertebrates in running waters was developed during the AQEM project. The AQEM method has proved to be relatively time-consuming. Hence, this stu
Finite-sample-size effects on convection in mushy layers
Zhong, Jin-Qiang; Wells, Andrew J; Wettlaufer, John S
2012-01-01
We report theoretical and experimental investigations of the flow instability responsible for the mushy-layer mode of convection and the formation of chimneys, drainage channels devoid of solid, during steady-state solidification of aqueous ammonium chloride. Under certain growth conditions a state of steady mushy-layer growth with no flow is unstable to the onset of convection, resulting in the formation of chimneys. We present regime diagrams to quantify the state of the flow as a function of the initial liquid concentration, the porous-medium Rayleigh number, and the sample width. For a given liquid concentration, increasing both the porous-medium Rayleigh number and the sample width caused the system to change from a stable state of no flow to a different state with the formation of chimneys. Decreasing the concentration ratio destabilized the system and promoted the formation of chimneys. As the initial liquid concentration increased, onset of convection and formation of chimneys occurred at larger value...
Sample size reduction in groundwater surveys via sparse data assimilation
Hussain, Z.
2013-04-01
In this paper, we focus on sparse signal recovery methods for data assimilation in groundwater models. The objective of this work is to exploit the commonly understood spatial sparsity in hydrodynamic models and thereby reduce the number of measurements needed to image a dynamic groundwater profile. To achieve this we employ a Bayesian compressive sensing framework that lets us adaptively select the next measurement to reduce the estimation error. An extension to the Bayesian compressive sensing framework is also proposed which incorporates additional model information to estimate system states from even fewer measurements. Instead of using cumulative imaging-like measurements, such as those used in standard compressive sensing, we use sparse binary matrices. This choice of measurements can be interpreted as randomly sampling only a small subset of dug wells at each time step, instead of sampling the entire grid. Therefore, this framework offers groundwater surveyors a significant reduction in surveying effort without compromising the quality of the survey. © 2013 IEEE.
Enhanced Z-LDA for Small Sample Size Training in Brain-Computer Interface Systems
Directory of Open Access Journals (Sweden)
Dongrui Gao
2015-01-01
Background. The training set of an online brain-computer interface (BCI) experiment is usually small. A small training set lacks sufficient information to train the classifier thoroughly, resulting in poor classification performance during online testing. Methods. In this paper, building on Z-LDA, we further calculate the classification probability of Z-LDA and use it to select reliable samples from the testing set to enlarge the training set, aiming to mine additional information from the testing set to adjust the biased classification boundary obtained from the small training set. The proposed approach is an extension of the previous Z-LDA and is named enhanced Z-LDA (EZ-LDA). Results. We evaluated the classification performance of LDA, Z-LDA, and EZ-LDA on simulated and real BCI datasets with different training sample sizes, and the classification results showed that EZ-LDA achieved the best performance. Conclusions. EZ-LDA is a promising way to deal with the small-sample-size training problem that commonly exists in online BCI systems.
Williams Test Required Sample Size For Determining The Minimum Effective Dose
Directory of Open Access Journals (Sweden)
Mustafa Agah TEKINDAL
2016-04-01
of groups has quite a big influence on sample size. Researchers may calculate the test’s power based on the recommended sample sizes prior to experimental designs.
Presentation of coefficient of variation for bioequivalence sample-size calculation.
Lee, Yi Lin; Mak, Wen Yao; Looi, Irene; Wong, Jia Woei; Yuen, Kah Hay
2017-03-03
The current study aimed to further contribute information on the intrasubject coefficient of variation (CV) from 43 bioequivalence studies conducted by our center. Consistent with Yuen et al. (2001), the current work also evaluated the effect of the different parameters (AUC0-t, AUC0-∞, and Cmax) used in the estimation of study power. Furthermore, we estimated the number of subjects required for each study from the intrasubject CV of AUC0-∞, taking into consideration the minimum sample-size requirement set by the US FDA. A total of 37 immediate-release and 6 extended-release formulations from 28 different active pharmaceutical ingredients (APIs) were evaluated. Of all the studies conducted, 10 did not achieve satisfactory statistical power on two or more parameters; 4 scored poorly across all three parameters. In general, intrasubject CV values calculated from Cmax were more variable than those from either AUC0-t or AUC0-∞. 20 out of 43 studies did not achieve more than 80% power when power was calculated from Cmax, compared with only 11 (AUC0-∞) and 8 (AUC0-t) studies. This finding is consistent with Steinijans et al. (1995) [2] and Yuen et al. (2001) [3]. In conclusion, the CV values obtained from AUC0-t and AUC0-∞ were similar, while those derived from Cmax were consistently more variable. Hence, CV derived from AUC rather than Cmax should be used in sample-size calculation to achieve a sufficient, yet practical, test power.
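To illustrate how the intrasubject CV drives the sample-size calculation, here is a minimal sketch using the common large-sample z-approximation to the two one-sided tests (TOST) procedure with 0.80-1.25 limits. Exact planning uses the noncentral t distribution; the function and its defaults are illustrative assumptions, not the authors' procedure.

```python
import math
from statistics import NormalDist

def be_sample_size(cv, gmr=0.95, alpha=0.05, power=0.80, limit=1.25):
    """Total sample size for a 2x2 crossover bioequivalence study (TOST),
    using the common large-sample z-approximation on the log scale."""
    z = NormalDist().inv_cdf
    s2 = math.log(1.0 + cv * cv)        # intrasubject log-scale variance from CV
    # with a true ratio of exactly 1, the type-II error splits between the tests
    zb = z(1 - (1 - power) / 2) if abs(gmr - 1.0) < 1e-12 else z(power)
    delta = math.log(limit) - abs(math.log(gmr))
    n = 2.0 * s2 * (z(1 - alpha) + zb) ** 2 / delta ** 2
    return max(2 * math.ceil(n / 2), 4)  # round up to an even total

n_total = be_sample_size(cv=0.30)        # e.g. a 30% intrasubject CV
```

Because the CV from Cmax tends to be larger, it inflates the returned total relative to a CV from AUC, which is the practical force of the recommendation above.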
Progressive prediction method for failure data with small sample size
Institute of Scientific and Technical Information of China (English)
WANG Zhi-hua; FU Hui-min; LIU Cheng-rui
2011-01-01
The small sample prediction problem, which commonly exists in reliability analysis, was addressed with the progressive prediction method in this paper. The modeling and estimation procedure, as well as the forecast and confidence limit formulas, of the progressive auto-regressive (PAR) method are discussed in detail. The PAR model not only inherits the simple linear features of the auto-regressive (AR) model, but is also applicable to nonlinear systems. An application to predicting future fatigue failures of tantalum electrolytic capacitors is illustrated. Forecasting results of the PAR model were compared with those of the auto-regressive moving average (ARMA) model, showing that the PAR method performs well and holds promise for future applications.
A Note on Sample Size and Solution Propriety for Confirmatory Factor Analytic Models
Jackson, Dennis L.; Voth, Jennifer; Frey, Marc P.
2013-01-01
Determining an appropriate sample size for use in latent variable modeling techniques has presented ongoing challenges to researchers. In particular, small sample sizes are known to present concerns over sampling error for the variances and covariances on which model estimation is based, as well as for fit indexes and convergence failures. The…
Evaluation of design flood estimates with respect to sample size
Kobierska, Florian; Engeland, Kolbjorn
2016-04-01
Estimation of design floods forms the basis for hazard management related to flood risk and is a legal obligation when building infrastructure such as dams, bridges and roads close to water bodies. Flood inundation maps used for land use planning are also produced based on design flood estimates. In Norway, the current guidelines for design flood estimates give recommendations on which data, probability distribution, and method to use dependent on length of the local record. If less than 30 years of local data is available, an index flood approach is recommended where the local observations are used for estimating the index flood and regional data are used for estimating the growth curve. For 30-50 years of data, a 2-parameter distribution is recommended, and for more than 50 years of data, a 3-parameter distribution should be used. Many countries have national guidelines for flood frequency estimation, and recommended distributions include the log-Pearson type III, generalized logistic and generalized extreme value distributions. For estimating distribution parameters, ordinary and linear moments, maximum likelihood and Bayesian methods are used. The aim of this study is to re-evaluate the guidelines for local flood frequency estimation. In particular, we wanted to answer the following questions: (i) Which distribution gives the best fit to the data? (ii) Which estimation method provides the best fit to the data? (iii) Does the answer to (i) and (ii) depend on local data availability? To answer these questions we set up a test bench for local flood frequency analysis using data-based cross-validation methods. The criteria were based on indices describing stability and reliability of design flood estimates. Stability is used as a criterion since design flood estimates should not excessively depend on the data sample. The reliability indices describe to which degree design flood predictions can be trusted.
Draxler, Clemens; Alexandrowicz, Rainer W
2015-12-01
This paper refers to the exponential family of probability distributions and the conditional maximum likelihood (CML) theory. It is concerned with the determination of the sample size for three groups of tests of linear hypotheses, known as the fundamental trinity of Wald, score, and likelihood ratio tests. The main practical purpose refers to the special case of tests of the class of Rasch models. The theoretical background is discussed and the formal framework for sample size calculations is provided, given a predetermined deviation from the model to be tested and the probabilities of the errors of the first and second kinds.
Nitschke, Naomi; Atkovska, Kalina; Hub, Jochen S.
2016-09-01
Molecular dynamics simulations are capable of predicting the permeability of lipid membranes for drug-like solutes, but the calculations have remained prohibitively expensive for high-throughput studies. Here, we analyze simple measures for accelerating potential of mean force (PMF) calculations of membrane permeation, namely, (i) using smaller simulation systems, (ii) simulating multiple solutes per system, and (iii) using shorter cutoffs for the Lennard-Jones interactions. We find that PMFs for membrane permeation are remarkably robust against alterations of such parameters, suggesting that accurate PMF calculations are possible at strongly reduced computational cost. In addition, we evaluated the influence of the definition of the membrane center of mass (COM), used to define the transmembrane reaction coordinate. Membrane-COM definitions based on all lipid atoms lead to artifacts due to undulations and, consequently, to PMFs dependent on membrane size. In contrast, COM definitions based on a cylinder around the solute lead to size-independent PMFs, down to systems of only 16 lipids per monolayer. In summary, compared to popular setups that simulate a single solute in a membrane of 128 lipids with a Lennard-Jones cutoff of 1.2 nm, the measures applied here yield a speedup in sampling by a factor of ~40, without reducing the accuracy of the calculated PMF.
Sampling bee communities using pan traps: alternative methods increase sample size
Monitoring of the status of bee populations and inventories of bee faunas require systematic sampling. Efficiency and ease of implementation has encouraged the use of pan traps to sample bees. Efforts to find an optimal standardized sampling method for pan traps have focused on pan trap color. Th...
Distribution of the two-sample t-test statistic following blinded sample size re-estimation.
Lu, Kaifeng
2016-05-01
We consider the blinded sample size re-estimation based on the simple one-sample variance estimator at an interim analysis. We characterize the exact distribution of the standard two-sample t-test statistic at the final analysis. We describe a simulation algorithm for the evaluation of the probability of rejecting the null hypothesis at a given treatment effect. We compare the blinded sample size re-estimation method with two unblinded methods with respect to the empirical type I error, the empirical power, and the empirical distribution of the standard deviation estimator and final sample size. We characterize the type I error inflation across the range of standardized non-inferiority margins for non-inferiority trials, and derive the adjusted significance level to ensure type I error control for a given sample size of the internal pilot study. We show that the adjusted significance level increases as the sample size of the internal pilot study increases. Copyright © 2016 John Wiley & Sons, Ltd.
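As a minimal sketch of the blinded step, assuming a normal endpoint and a z-approximation for the final test (the function name, data, and defaults are invented for illustration): the lumped variance is simply the sample variance of all interim observations with treatment labels ignored.

```python
import math
from statistics import NormalDist

def blinded_reestimate(pooled_interim, delta, alpha=0.05, power=0.90):
    """Per-arm sample size from blinded interim data, using the simple
    one-sample (lumped) variance of all observations with labels ignored."""
    m = len(pooled_interim)
    mean = sum(pooled_interim) / m
    s2 = sum((x - mean) ** 2 for x in pooled_interim) / (m - 1)
    z = NormalDist().inv_cdf
    return math.ceil(2.0 * s2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2)

# interim observations pooled over both arms, treatment labels unknown
interim = [4.1, 5.3, 3.8, 6.0, 4.9, 5.7, 4.4, 5.1]
n_per_arm = blinded_reestimate(interim, delta=1.0)
```

The lumped estimator overestimates the within-group variance when a treatment effect is present, which is one source of the operating-characteristic questions the abstract studies.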
XAFSmass: a program for calculating the optimal mass of XAFS samples
Klementiev, K.; Chernikov, R.
2016-05-01
We present a new implementation of the XAFSmass program that calculates the optimal mass of XAFS samples. It has several improvements as compared to the old Windows-based program XAFSmass: (1) it is truly platform-independent, being written in Python; (2) it has an improved parser of chemical formulas that enables parentheses and nested inclusion-to-matrix weight percentages. The program calculates the absorption edge height given the total optical thickness, operates with differently determined sample amounts (mass, pressure, density or sample area) depending on the aggregate state of the sample, and solves the inverse problem of finding the elemental composition given the experimental absorption edge jump and the chemical formula.
DEFF Research Database (Denmark)
Gerke, Oke; Poulsen, Mads Hvid; Bouchelouche, Kirsten
2009-01-01
PURPOSE: For certain cancer indications, the current patient evaluation strategy is a perfect but locally restricted gold standard procedure. If positron emission tomography/computed tomography (PET/CT) can be shown to be reliable within the gold standard region and if it can be argued that PET/CT also performs well in adjacent areas, then sample sizes in accuracy studies can be reduced. PROCEDURES: Traditional standard power calculations for demonstrating sensitivities of both 80% and 90% are shown. The argument is then described in general terms and demonstrated by an ongoing study of metastasized prostate cancer. RESULTS: An added value in accuracy of PET/CT in adjacent areas can outweigh a downsized target level of accuracy in the gold standard region, justifying smaller sample sizes. CONCLUSIONS: If PET/CT provides an accuracy benefit in adjacent regions, then sample sizes can be reduced.
Energy Technology Data Exchange (ETDEWEB)
Nasrabadi, M.N. [Department of Nuclear Engineering, Faculty of Modern Sciences and Technologies, University of Isfahan, Isfahan 81746-73441 (Iran, Islamic Republic of)], E-mail: mnnasrabadi@ast.ui.ac.ir; Mohammadi, A. [Department of Physics, Payame Noor University (PNU), Kohandej, Isfahan (Iran, Islamic Republic of); Jalali, M. [Isfahan Nuclear Science and Technology Research Institute (NSTRT), Reactor and Accelerators Research and Development School, Atomic Energy Organization of Iran (Iran, Islamic Republic of)
2009-07-15
In this paper bulk sample prompt gamma neutron activation analysis (BSPGNAA) was applied to aqueous sample analysis using a relative method. For elemental analysis of an unknown bulk sample, gamma self-shielding coefficient was required. Gamma self-shielding coefficient of unknown samples was estimated by an experimental method and also by MCNP code calculation. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the gamma self-shielding within the sample volume is required.
Calculation of the mean circle size does not circumvent the bottleneck of crowding.
Banno, Hayaki; Saiki, Jun
2012-10-22
Visually, we can extract a statistical summary of sets of elements efficiently. However, our visual system has a severe limitation: the ability to recognize an object is markedly impaired when it is surrounded by other objects. The goal of this study was to investigate whether this crowding effect obstructs the calculation of the mean size of objects. First, we verified that the crowding effect occurs when comparing the sizes of circles (Experiment 1). Next, we manipulated the distances between circles and measured sensitivity when circles were inside or outside the spatial limit of crowding (Experiment 2). Participants were asked to compare the mean sizes of the circles in the left and right visual fields and to judge which was larger. Participants' sensitivity to the mean size difference was lower when the circles were located at the nearer distance. Finally, we confirmed that crowding is responsible for the observed results by showing that displays without a crowded object eliminated the effect (Experiment 3). Our results indicate that the statistical information of size does not circumvent the bottleneck of crowding.
Purity calculation method for event samples with two identical particles
Kuzmin, Valentin
2016-01-01
We present a method of two-dimensional background calculation for the analysis of events containing two identical particles observed by a high energy physics detector. The usual two-dimensional integration is replaced by an approximation based on a specially constructed one-dimensional function. The number of signal events is found by subtracting the background from the total number of selected events, which allows the purity of the selected event sample to be calculated. The procedure does not require hypotheses about the background and signal shapes. The good performance of the purity calculation method is shown on Monte Carlo examples of double J/psi samples.
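The paper's construction is two-dimensional; the sketch below is a hedged one-dimensional analogue that estimates the background under a signal window from sideband bins and subtracts it to form a purity. All bin choices and counts are invented for illustration.

```python
def purity_estimate(counts, signal_bins, sideband_bins):
    """Estimate sample purity: the background level per bin is taken from
    sideband bins, scaled to the width of the signal window, and subtracted
    from the selected (signal-window) count."""
    selected = sum(counts[i] for i in signal_bins)
    bkg_per_bin = sum(counts[i] for i in sideband_bins) / len(sideband_bins)
    background = bkg_per_bin * len(signal_bins)
    signal = selected - background
    return signal / selected

# toy spectrum: flat background of about 10 per bin plus a peak in bins 4-6
counts = [10, 9, 11, 10, 40, 55, 38, 10, 11, 9]
p = purity_estimate(counts, signal_bins=[4, 5, 6],
                    sideband_bins=[0, 1, 2, 3, 7, 8, 9])
```

The appeal of this family of methods, as the abstract notes, is that no parametric shape is assumed for either the signal or the background.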
Thermomagnetic behavior of magnetic susceptibility – heating rate and sample size effects
Directory of Open Access Journals (Sweden)
Diana eJordanova
2016-01-01
Thermomagnetic analysis of magnetic susceptibility k(T) was carried out for a number of natural powder materials from soils, baked clay and anthropogenic dust samples using the fast (11°C/min) and slow (6.5°C/min) heating rates available in the furnace of the Kappabridge KLY2 (Agico). Based on additional data for the mineralogy, grain size and magnetic properties of the studied samples, the behaviour of the k(T) cycles and the observed differences between the curves for the fast and slow heating rates are interpreted in terms of mineralogical transformations and Curie temperatures (Tc). The effect of sample size is also explored, using large and small volumes of powder material. It is found that soil samples show enhanced information on mineralogical transformations and the appearance of new strongly magnetic phases when using the fast heating rate and large sample size. This approach shifts the transformations to higher temperature, but enhances the amplitude of the signal of the newly created phase. A large sample size gives prevalence to the local micro-environment created by gases released during transformations. The example from an archaeological brick reveals the effect of different sample sizes on the Curie temperatures observed on heating and cooling curves, when the magnetic carrier is substituted magnetite (Mn0.2Fe2.70O4). A large sample size leads to bigger differences between the Tc values on heating and cooling, while a small sample size results in similar Tc values for both heating rates.
Heo, Yongju; Park, Jiyeon; Lim, Sung-Il; Hur, Hor-Gil; Kim, Daesung; Park, Kihong
2010-08-01
Size-resolved bacterial concentrations in atmospheric aerosols sampled with a six-stage viable impactor at rice field, sanitary landfill, and waste incinerator sites were determined. Culture-based and polymerase chain reaction (PCR) methods were used to identify the airborne bacteria. The culturable bacteria concentration in total suspended particles (TSP) was found to be highest (848 colony forming units (CFU)/m(3)) at the sanitary landfill sampling site, while the rice field sampling site had the lowest (125 CFU/m(3)). The closed landfill would be the main source of the observed bacteria concentration at the sanitary landfill. The rice field sampling site was fully covered by rice grain under wet conditions before harvest and made no significant contribution to the airborne bacteria concentration, likely because wet conditions suppress the suspension of soil particles and the area had limited personnel and vehicle traffic. The respirable fraction, calculated from particles less than 3.3 μm, was highest (26%) at the sanitary landfill sampling site, followed by the waste incinerator (19%) and rice field (10%) sites, a lower level of respirable fraction than previous literature values. We identified 58 species in 23 genera of culturable bacteria; Microbacterium, Staphylococcus, and Micrococcus were the most abundant genera at the sanitary landfill, waste incinerator, and rice field sites, respectively. An antibiotic resistance test for the above bacteria (Micrococcus sp., Microbacterium sp., and Staphylococcus sp.) showed that Staphylococcus sp. had the strongest resistance to both antibiotics (25.0% resistance to 32 μg/ml of chloramphenicol and 62.5% resistance to 4 μg/ml of gentamicin).
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. In this paper, the results of empirical tests intended to assess the accuracy of acceptance sampling plan calculators implemented for six variable distributions are presented.
Additive scales in degenerative disease - calculation of effect sizes and clinical judgment
Directory of Open Access Journals (Sweden)
Riepe Matthias W
2011-12-01
Abstract Background The therapeutic efficacy of an intervention is often assessed in clinical trials by scales measuring multiple diverse activities that are added to produce a cumulative global score. Medical communities and health care systems subsequently use these data to calculate pooled effect sizes to compare treatments. This is done because major doubt has been cast over the clinical relevance of statistically significant findings that rely on p values, with their potential to report chance findings. Hence, pooling the results of clinical studies into a meta-analysis with a statistical calculus has been assumed to be a more definitive way of deciding on efficacy. Methods We simulate the therapeutic effects as measured with additive scales in patient cohorts with different disease severity and assess the limitations of effect size calculations for additive scales, which are proven mathematically. Results We demonstrate that the major problem, which cannot be overcome by current numerical methods, is the complex nature and neurobiological foundation of clinical psychiatric endpoints in particular and additive scales in general. This is particularly relevant for endpoints used in dementia research. 'Cognition' is composed of functions such as memory, attention, orientation and many more. These individual functions decline in varied and non-linear ways. Here we demonstrate that with progressive diseases, cumulative values from multidimensional scales are subject to distortion by the limitations of the additive scale. The non-linearity of the decline of function impedes the calculation of effect sizes based on cumulative values from these multidimensional scales. Conclusions Statistical analysis needs to be guided by the boundaries of the biological condition. Alternatively, we suggest a different approach that avoids the error imposed by over-analysis of cumulative global scores from additive scales.
PIXE-PIGE analysis of size-segregated aerosol samples from remote areas
Calzolai, G.; Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F.; Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R.
2014-01-01
The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and environment. The sampling with multi-stage cascade impactors (e.g., Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification.
Light propagation in tissues: effect of finite size of tissue sample
Melnik, Ivan S.; Dets, Sergiy M.; Rusina, Tatyana V.
1995-12-01
Laser beam propagation inside tissues with different lateral dimensions has been considered. The scattering and anisotropic properties of tissue critically determine the spatial fluence distribution and the specimen sizes above which deviations of this distribution can be neglected. Along the axis of the incident beam the fluence rate depends only weakly on sample size, whereas it increases relatively (by more than 20%) towards the lateral boundaries. Finite-size effects were found to be substantial only for samples with dimensions comparable to the diameter of the laser beam. Interstitial irradiance patterns simulated by the Monte Carlo method were compared with direct measurements in human brain specimens.
König, Gerhard; Miller, Benjamin T; Boresch, Stefan; Wu, Xiongwu; Brooks, Bernard R
2012-10-09
One of the key requirements for the accurate calculation of free energy differences is proper sampling of conformational space. Especially in biological applications, molecular dynamics simulations are often confronted with rugged energy surfaces and high energy barriers, leading to insufficient sampling and, in turn, poor convergence of the free energy results. In this work, we address this problem by employing enhanced sampling methods. We explore the possibility of using self-guided Langevin dynamics (SGLD) to speed up the exploration process in free energy simulations. To obtain improved free energy differences from such simulations, it is necessary to account for the effects of the bias due to the guiding forces. We demonstrate how this can be accomplished for the Bennett's acceptance ratio (BAR) and the enveloping distribution sampling (EDS) methods. While BAR is considered among the most efficient methods available for free energy calculations, the EDS method developed by Christ and van Gunsteren is a promising development that reduces the computational costs of free energy calculations by simulating a single reference state. To evaluate the accuracy of both approaches in connection with enhanced sampling, EDS was implemented in CHARMM. For testing, we employ benchmark systems with analytical reference results and the mutation of alanine to serine. We find that SGLD with reweighting can provide accurate results for BAR and EDS where conventional molecular dynamics simulations fail. In addition, we compare the performance of EDS with other free energy methods. We briefly discuss the implications of our results and provide practical guidelines for conducting free energy simulations with SGLD.
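To make the BAR step concrete, here is a self-contained sketch (not the CHARMM implementation) that solves the BAR self-consistent equation by bisection, in units of kT and with equal sample sizes, on synthetic Gaussian work distributions chosen to satisfy the Crooks relation for a true free energy difference of 2 kT.

```python
import math
import random

def bar_delta_f(w_forward, w_reverse, lo=-50.0, hi=50.0, tol=1e-8):
    """Bennett acceptance ratio for equal sample sizes (beta = 1): find dF
    such that sum f(wF - dF) = sum f(wR + dF), f being the Fermi function."""
    def fermi(x):
        return 1.0 / (1.0 + math.exp(min(x, 700.0)))  # guard against overflow
    def imbalance(df):
        fwd = sum(fermi(w - df) for w in w_forward)   # increases with df
        rev = sum(fermi(w + df) for w in w_reverse)   # decreases with df
        return fwd - rev                              # monotone in df
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if imbalance(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# synthetic Crooks-consistent work values for dF = 2 kT, sigma = 1 kT:
# forward work ~ N(dF + sigma^2/2, sigma^2), reverse ~ N(-dF + sigma^2/2, sigma^2)
random.seed(1)
w_f = [random.gauss(2.5, 1.0) for _ in range(20000)]
w_r = [random.gauss(-1.5, 1.0) for _ in range(20000)]
df_est = bar_delta_f(w_f, w_r)  # close to the true value of 2
```

Bisection is safe here because the imbalance function is strictly monotone in the trial free energy; in practice the reweighting for SGLD bias described above would be applied to the work values before this step.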
A NONPARAMETRIC PROCEDURE OF THE SAMPLE SIZE DETERMINATION FOR SURVIVAL RATE TEST
Institute of Scientific and Technical Information of China (English)
Anonymous
2000-01-01
Objective This paper proposes a nonparametric procedure for determining the sample size of a survival rate test. Methods The classical asymptotic normal procedure yields the required homogeneous effective sample size; applying the inverse operation with the prespecified value of the survival function of the censoring times then yields the required sample size. Results The procedure is matched with the rate test for censored data, does not involve survival distributions, and reduces to its classical counterpart when there is no censoring. The observed power of the test coincides with the prescribed power under usual clinical conditions. Conclusion It can be used for planning survival studies of chronic diseases.
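The two-step recipe above (classical asymptotic-normal effective sample size, then inflation by the prespecified survival function of the censoring times) can be sketched as follows; the rates and censoring value are invented planning inputs, not taken from the paper.

```python
import math
from statistics import NormalDist

def survival_rate_sample_size(p0, p1, censor_surv_t, alpha=0.05, power=0.80):
    """Effective sample size from the classical asymptotic-normal test of a
    survival rate, inflated by the survival function of the censoring times
    at the landmark time t (censor_surv_t, a prespecified planning value)."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    n_eff = (za * math.sqrt(p0 * (1 - p0))
             + zb * math.sqrt(p1 * (1 - p1))) ** 2 / (p1 - p0) ** 2
    return math.ceil(n_eff / censor_surv_t)  # less censoring -> smaller n

# e.g. H0: S(t) = 0.60 vs the alternative S(t) = 0.75, with 80% uncensored at t
n = survival_rate_sample_size(0.60, 0.75, 0.80)
```

With no censoring (censor_surv_t = 1) the formula reduces to the classical two-rate sample size, mirroring the reduction property stated in the abstract.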
Bice, K.; Clement, S. C.
1981-01-01
X-ray diffraction and spectroscopy were used to investigate the mineralogical and chemical properties of the Calvert, Ball Old Mine, Ball Martin, and Jordan Sediments. The particle size distribution and index of refraction of each sample were determined. The samples are composed primarily of quartz, kaolinite, and illite. The clay minerals are most abundant in the finer particle size fractions. The chemical properties of the four samples are similar. The Calvert sample is most notably different in that it contains a relatively high amount of iron. The dominant particle size fraction in each sample is silt, with lesser amounts of clay and sand. The indices of refraction of the sediments are the same with the exception of the Calvert sample which has a slightly higher value.
Frictional behaviour of sandstone: A sample-size dependent triaxial investigation
Roshan, Hamid; Masoumi, Hossein; Regenauer-Lieb, Klaus
2017-01-01
Frictional behaviour of rocks from the initial stage of loading to final shear displacement along the formed shear plane has been widely investigated in the past. However the effect of sample size on such frictional behaviour has not attracted much attention. This is mainly related to the limitations in rock testing facilities as well as the complex mechanisms involved in sample-size dependent frictional behaviour of rocks. In this study, a suite of advanced triaxial experiments was performed on Gosford sandstone samples at different sizes and confining pressures. The post-peak response of the rock along the formed shear plane has been captured for the analysis with particular interest in sample-size dependency. Several important phenomena have been observed from the results of this study: a) the rate of transition from brittleness to ductility in rock is sample-size dependent where the relatively smaller samples showed faster transition toward ductility at any confining pressure; b) the sample size influences the angle of formed shear band and c) the friction coefficient of the formed shear plane is sample-size dependent where the relatively smaller sample exhibits lower friction coefficient compared to larger samples. We interpret our results in terms of a thermodynamics approach in which the frictional properties for finite deformation are viewed as encompassing a multitude of ephemeral slipping surfaces prior to the formation of the through going fracture. The final fracture itself is seen as a result of the self-organisation of a sufficiently large ensemble of micro-slip surfaces and therefore consistent in terms of the theory of thermodynamics. This assumption vindicates the use of classical rock mechanics experiments to constrain failure of pressure sensitive rocks and the future imaging of these micro-slips opens an exciting path for research in rock failure mechanisms.
Monotonicity in the Sample Size of the Length of Classical Confidence Intervals
Kagan, Abram M
2012-01-01
It is proved that the average length of standard confidence intervals for the parameters of gamma and normal distributions decreases monotonically with the sample size. The proofs are based on fine properties of the classical gamma function.
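For the normal-mean case the claim can be checked numerically: the expected length of the classical 95% interval is 2 t_{0.975,n-1} E[S]/sqrt(n), with E[S] available in closed form through the gamma function. This is an illustrative check, not the paper's proof; the t quantiles below are standard table values.

```python
import math

# t_{0.975, n-1} for the sample sizes checked, from standard tables
T_975 = {3: 4.303, 5: 2.776, 10: 2.262, 20: 2.093, 50: 2.010}

def expected_ci_length(n, sigma=1.0):
    """Expected length of the classical 95% CI for a normal mean:
    2 * t * E[S] / sqrt(n), with E[S] = sigma*sqrt(2/(n-1))*G(n/2)/G((n-1)/2)."""
    e_s = sigma * math.sqrt(2.0 / (n - 1)) * math.gamma(n / 2) / math.gamma((n - 1) / 2)
    return 2.0 * T_975[n] * e_s / math.sqrt(n)

lengths = [expected_ci_length(n) for n in sorted(T_975)]
# lengths decreases monotonically, consistent with the stated result
```

Both factors shrink with n: the t quantile falls toward the normal quantile, and E[S]/sqrt(n) decreases, so the product is monotone over these sample sizes.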
Guo, Jiin-Huarng; Chen, Hubert J; Luh, Wei-Ming
2011-11-01
The allocation of sufficient participants into different experimental groups for various research purposes under given constraints is an important practical problem faced by researchers. We address the problem of sample size determination between two independent groups for unequal and/or unknown variances when both the power and the differential cost are taken into consideration. We apply the well-known Welch approximate test to derive various sample size allocation ratios by minimizing the total cost or, equivalently, maximizing the statistical power. Two types of hypotheses including superiority/non-inferiority and equivalence of two means are each considered in the process of sample size planning. A simulation study is carried out and the proposed method is validated in terms of Type I error rate and statistical power. As a result, the simulation study reveals that the proposed sample size formulas are very satisfactory under various variances and sample size allocation ratios. Finally, a flowchart, tables, and figures of several sample size allocations are presented for practical reference.
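A sketch of the cost-aware allocation idea, assuming a z-approximation to the Welch setting: the classical cost-optimal ratio sends more subjects to the noisier or cheaper arm, and the total is then scaled to hit the target power. This is the textbook optimum under those assumptions, not necessarily the authors' exact derivation.

```python
import math
from statistics import NormalDist

def welch_cost_allocation(sd1, sd2, cost1, cost2, delta, alpha=0.05, power=0.80):
    """Cost-optimal allocation for the two-sample z-approximation:
    n1/n2 = (sd1/sd2) * sqrt(cost2/cost1), then scale the total for power."""
    r = (sd1 / sd2) * math.sqrt(cost2 / cost1)
    z = NormalDist().inv_cdf
    zsum = z(1 - alpha / 2) + z(power)
    # require var(diff) = sd1^2/n1 + sd2^2/n2 <= (delta/zsum)^2 with n1 = r*n2
    n2 = zsum ** 2 * (sd1 ** 2 / r + sd2 ** 2) / delta ** 2
    return math.ceil(r * n2), math.ceil(n2)

# the noisier (or cheaper to sample) group receives more subjects
n1, n2 = welch_cost_allocation(sd1=2.0, sd2=1.0, cost1=1.0, cost2=1.0, delta=1.0)
```

With equal variances and equal costs the ratio collapses to 1:1, recovering the familiar balanced design.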
A margin based approach to determining sample sizes via tolerance bounds.
Energy Technology Data Exchange (ETDEWEB)
Newcomer, Justin T.; Freeland, Katherine Elizabeth
2013-09-01
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
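The search implied above can be sketched with a classical closed-form approximation to the one-sided normal tolerance factor (exact factors come from the noncentral t distribution). The planning standard deviation and margin here are assumed inputs: the smallest qualifying n is the one whose tolerance-bound uncertainty k*s drops below the margin.

```python
import math
from statistics import NormalDist

def k_one_sided(n, coverage=0.95, confidence=0.95):
    """Approximate one-sided normal tolerance factor (classical closed-form
    approximation; exact values require the noncentral t distribution)."""
    z = NormalDist().inv_cdf
    zp, zg = z(coverage), z(confidence)
    a = 1.0 - zg ** 2 / (2.0 * (n - 1))
    b = zp ** 2 - zg ** 2 / n
    return (zp + math.sqrt(zp ** 2 - a * b)) / a

def smallest_n(margin, s_planning, n_max=1000):
    """Smallest n for which the tolerance-bound uncertainty k*s is below
    the margin; s_planning is an assumed planning standard deviation."""
    for n in range(3, n_max + 1):
        if k_one_sided(n) * s_planning < margin:
            return n
    return None
```

Since k decreases with n, the loop finds the crossover where estimated uncertainty first drops below estimated margin, which is exactly the tolerance-ratio-greater-than-one condition described above.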
Sample size choices for XRCT scanning of highly unsaturated soil mixtures
Directory of Open Access Journals (Sweden)
Smith Jonathan C.
2016-01-01
Highly unsaturated soil mixtures (clay, sand and gravel) are used as building materials in many parts of the world, and there is increasing interest in understanding their mechanical and hydraulic behaviour. In the laboratory, x-ray computed tomography (XRCT) is becoming more widely used to investigate the microstructures of soils; however, a crucial issue for such investigations is the choice of sample size, especially concerning the scanning of soil mixtures where there will be a range of particle and void sizes. In this paper we present a discussion, centred around a new set of XRCT scans, on sample sizing for the scanning of soil mixtures, where a balance has to be made between realistic representation of the soil components and the desire for high-resolution scanning. We also comment on the appropriateness of differing sample sizes in comparison to those used for other geotechnical testing. Void size distributions for the samples are presented, and from these some hypotheses are made as to the roles of inter- and intra-aggregate voids in the mechanical behaviour of highly unsaturated soils.
Shrinkage anisotropy characteristics from soil structure and initial sample/layer size
Chertkov, V Y
2014-01-01
The objective of this work is a physical prediction of such soil shrinkage anisotropy characteristics as variation with drying of (i) different sample/layer sizes and (ii) the shrinkage geometry factor. With that, a new presentation of the shrinkage anisotropy concept is suggested through the sample/layer size ratios. The work objective is reached in two steps. First, the relations are derived between the indicated soil shrinkage anisotropy characteristics and three different shrinkage curves of a soil relating to: small samples (without cracking at shrinkage), sufficiently large samples (with internal cracking), and layers of similar thickness. Then, the results of a recent work with respect to the physical prediction of the three shrinkage curves are used. These results connect the shrinkage curves with the initial sample size/layer thickness as well as characteristics of soil texture and structure (both inter- and intra-aggregate) as physical parameters. The parameters determining the reference shrinkage c...
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted critical value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure, and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one.
González-Vacarezza, N; Abad-Santos, F; Carcas-Sansuan, A; Dorado, P; Peñas-Lledó, E; Estévez-Carrizo, F; Llerena, A
2013-10-01
In bioequivalence studies, intra-individual variability (CV(w)) is critical in determining sample size. In particular, highly variable drugs may require enrollment of a greater number of subjects. We hypothesize that a strategy to reduce pharmacokinetic CV(w), and hence sample size and costs, would be to include subjects with decreased metabolic enzyme capacity for the drug under study. Therefore, two mirtazapine studies with a two-way, two-period crossover design (n=68) were re-analysed to calculate the total CV(w) and the CV(w)s in three different CYP2D6 genotype groups (0, 1 and ≥ 2 active genes). The results showed that a 29.2% or 15.3% reduction in sample size would have been possible had recruitment been restricted to individuals carrying 0, or 0 plus 1, active CYP2D6 genes, owing to the lower CV(w). This suggests that there may be a role for pharmacogenetics in the design of bioequivalence studies to reduce sample size and costs, thus introducing a new paradigm for the biopharmaceutical evaluation of drug products.
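For intuition on how CV(w) drives sample size, the standard large-sample TOST approximation for a 2x2 crossover average-bioequivalence study can be sketched as follows. This is a generic textbook approximation, not the authors' calculation; the function name, the assumed true ratio of 1, and the 0.80-1.25 acceptance limits are illustrative assumptions.

```python
from math import ceil, log
from statistics import NormalDist

def be_crossover_n(cv_w, alpha=0.05, power=0.80, limit=1.25):
    """Approximate total subjects for a 2x2 crossover average-bioequivalence
    study (TOST, true ratio assumed 1; normal approximation).
    Hypothetical helper, not the re-analysis method used in the paper."""
    z = NormalDist().inv_cdf
    sigma_w2 = log(1.0 + cv_w**2)                 # within-subject log-scale variance
    za, zb = z(1 - alpha), z(1 - (1 - power) / 2)
    return ceil(2.0 * sigma_w2 * (za + zb)**2 / log(limit)**2)

# Lower CV(w) in a genotype-restricted cohort shrinks the required sample.
for cv in (0.15, 0.25, 0.35):
    print(cv, be_crossover_n(cv))
```

The quadratic dependence on CV(w) is what makes genotype-based recruitment attractive for highly variable drugs.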
A normative inference approach for optimal sample sizes in decisions from experience.
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
"Decisions from experience" (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the "sampling paradigm," which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide which distribution they would prefer to draw from in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the "optimal" sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE.
Scheuerell, Mark D
2016-01-01
Stock-recruitment models have been used for decades in fisheries management as a means of formalizing the expected number of offspring that recruit to a fishery based on the number of parents. In particular, Ricker's stock-recruitment model is widely used due to its flexibility and the ease with which its parameters can be estimated. After model fitting, the spawning stock size that produces the maximum sustainable yield (S_MSY) to a fishery, and the harvest rate corresponding to it (U_MSY), are two of the most common biological reference points of interest to fisheries managers. However, to date there has been no explicit solution for either reference point because of the transcendental nature of the equation needed to solve for them. Therefore, numerical or statistical approximations have been used for more than 30 years. Here I provide explicit formulae for calculating both S_MSY and U_MSY in terms of the productivity and density-dependent parameters of Ricker's model.
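The explicit solution can be written with the Lambert W function. Below is a sketch under the common parameterization R = S·exp(a − bS), where a is the log productivity and b the density-dependence parameter; the helper name and the example values are mine, not from the paper.

```python
import numpy as np
from scipy.special import lambertw

def ricker_refs(a, b):
    """Explicit MSY reference points for the Ricker model R = S * exp(a - b*S),
    in the spirit of Scheuerell (2016): S_MSY and U_MSY via Lambert W."""
    w = lambertw(np.exp(1.0 - a)).real     # principal branch; real-valued here
    return (1.0 - w) / b, 1.0 - w          # (S_MSY, U_MSY)

# Illustrative parameter values only.
s_msy, u_msy = ricker_refs(a=1.5, b=0.001)
```

A quick check against a brute-force maximization of the yield curve Y(S) = R(S) − S confirms the closed form.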
Finch, W. Holmes; Finch, Maria E. Hernandez
2016-01-01
Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size; i.e., high-dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates…
Page sample size in web accessibility testing: how many pages is enough?
Velleman, Eric; Geest, van der Thea
2013-01-01
Various countries and organizations use a different sampling approach and sample size of web pages in accessibility conformance tests. We are conducting a systematic analysis to determine how many pages is enough for testing whether a website is compliant with standard accessibility guidelines. This
Norm Block Sample Sizes: A Review of 17 Individually Administered Intelligence Tests
Norfolk, Philip A.; Farmer, Ryan L.; Floyd, Randy G.; Woods, Isaac L.; Hawkins, Haley K.; Irby, Sarah M.
2015-01-01
The representativeness, recency, and size of norm samples strongly influence the accuracy of inferences drawn from their scores. Inadequate norm samples may lead to inflated or deflated scores for individuals and poorer prediction of developmental and academic outcomes. The purpose of this study was to apply Kranzler and Floyd's method for…
Ifoulis, A A; Savopoulou-Soultani, M
2006-10-01
The purpose of this research was to quantify the spatial pattern and develop a sampling program for larvae of Lobesia botrana Denis and Schiffermüller (Lepidoptera: Tortricidae), an important vineyard pest in northern Greece. Taylor's power law and Iwao's patchiness regression were used to model the relationship between the mean and the variance of larval counts. Analysis of covariance was carried out, separately for infestation and injury, with combined second and third generation data, for vine and half-vine sample units. Common regression coefficients were estimated to permit use of the sampling plan over a wide range of conditions. Optimum sample sizes for infestation and injury, at three levels of precision, were developed. An investigation of a multistage sampling plan with a nested analysis of variance showed that if the goal of sampling is focusing on larval infestation, three grape clusters should be sampled in a half-vine; if the goal of sampling is focusing on injury, then two grape clusters per half-vine are recommended.
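The fixed-precision optimum sample size implied by Taylor's power law can be sketched generically: with variance = a·mean^b, the number of sample units needed to estimate the mean with relative precision D is n = (z/D)²·a·mean^(b−2). The coefficients below are illustrative, not the values fitted for L. botrana.

```python
from math import ceil

def taylor_optimum_n(mean, a, b, precision, z=1.96):
    """Sample size for estimating a mean with fixed relative precision D,
    given Taylor's power law variance = a * mean**b.
    Illustrative coefficients; not the study's fitted parameters."""
    return ceil((z / precision)**2 * a * mean**(b - 2))

# Aggregated populations (b > 1) need more sample units at low densities;
# three levels of precision, as in the sampling plans discussed above.
for d in (0.10, 0.20, 0.30):
    print(d, taylor_optimum_n(mean=2.0, a=1.5, b=1.4, precision=d))
```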
Energy Technology Data Exchange (ETDEWEB)
Garcia-Arribas, A. [Departamento de Electricidad y Electronica, Universidad del Pais Vasco, Apartado 644, 48080 Bilbao (Spain)], E-mail: alf@we.lc.ehu.es; Barandiaran, J.M.; Cos, D. de [Departamento de Electricidad y Electronica, Universidad del Pais Vasco, Apartado 644, 48080 Bilbao (Spain)
2008-07-15
The impedance values of magnetic thin films and magnetic/conductor/magnetic sandwiched structures with different widths are computed using the finite element method (FEM). The giant magneto-impedance (GMI) is calculated from the difference of the impedance values obtained with high and low permeability of the magnetic material. The results depend considerably on the width of the sample, demonstrating that edge effects are decisive for the GMI performance. It is shown that, besides the usual skin effect that is responsible for GMI, an 'unexpected' increase of the current density takes place at the lateral edge of the sample. In magnetic thin films this effect is dominant when the permeability is low. In the trilayers, it is combined with the lack of shielding of the central conductor at the edge. The resulting effects on GMI are shown to be large for both kinds of samples. The conclusions of this study are of great importance for the successful design of miniaturized GMI devices.
Constrained statistical inference: sample-size tables for ANOVA and regression.
Vanbrabant, Leonard; Van De Schoot, Rens; Rosseel, Yves
2014-01-01
Researchers in the social and behavioral sciences often have clear expectations about the order/direction of the parameters in their statistical model. For example, a researcher might expect that regression coefficient β1 is larger than β2 and β3. The corresponding hypothesis is H: β1 > {β2, β3} and this is known as an (order) constrained hypothesis. A major advantage of testing such a hypothesis is that power can be gained and inherently a smaller sample size is needed. This article discusses this gain in sample size reduction when an increasing number of constraints is included in the hypothesis. The main goal is to present sample-size tables for constrained hypotheses. A sample-size table contains the necessary sample size at a pre-specified power (say, 0.80) for an increasing number of constraints. To obtain sample-size tables, two Monte Carlo simulations were performed, one for ANOVA and one for multiple regression. Three results are salient. First, in an ANOVA the needed sample size decreases by 30-50% when complete ordering of the parameters is taken into account. Second, small deviations from the imposed order have only a minor impact on the power. Third, at the maximum number of constraints, the linear regression results are comparable with the ANOVA results. However, in the case of fewer constraints, ordering the parameters (e.g., β1 > β2) results in a higher power than assigning a positive or a negative sign to the parameters (e.g., β1 > 0).
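The power gain from imposing a direction constraint can be illustrated with a toy Monte Carlo in the same spirit. This is a two-group z-test sketch, not the article's ANOVA/regression simulation design; all numbers are illustrative.

```python
import random
import statistics

def mc_power(n, delta=0.5, sided=1, alpha=0.05, reps=2000, seed=1):
    """Monte Carlo power of a two-sample z-test (unit variances) for a mean
    difference delta; sided=1 imposes the order constraint mu1 > mu2.
    Toy sketch, not the simulation design used in the article."""
    rng = random.Random(seed)
    crit = 1.645 if sided == 1 else 1.96      # 5% critical values
    hits = 0
    for _ in range(reps):
        x = [rng.gauss(delta, 1.0) for _ in range(n)]
        y = [rng.gauss(0.0, 1.0) for _ in range(n)]
        z = (statistics.fmean(x) - statistics.fmean(y)) / (2.0 / n) ** 0.5
        hits += (z > crit) if sided == 1 else (abs(z) > crit)
    return hits / reps

# Same n, same data-generating process: the constrained test is more powerful.
p_constrained, p_unconstrained = mc_power(40, sided=1), mc_power(40, sided=2)
```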
Sample size for collecting germplasms – a polyploid model with mixed mating system
Indian Academy of Sciences (India)
R L Sapra; Prem Narain; S V S Chauhan; S K Lal; B B Singh
2003-03-01
The present paper discusses a general expression for determining the minimum sample size (plants) for a given number of seeds or vice versa for capturing multiple allelic diversity. The model considers sampling from a large 2 k-ploid population under a broad range of mating systems. Numerous expressions/results developed for germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate combination of number of plants and seeds per plant. When genotypic multiplicity of seeds is taken into consideration, a sample size of even less than 172 plants can conserve diversity of 20 alleles from 50,000 polymorphic loci with a very large probability of conservation (0.9999) in most of the cases.
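A stripped-down version of the underlying capture-probability calculation might look like this. It is a simple binomial sketch that ignores the paper's mating-system and seed-multiplicity corrections; the function name and the example values are mine.

```python
from math import ceil, log

def plants_needed(p, ploidy_k, n_alleles, prob=0.9999):
    """Plants needed so that each of n_alleles independent alleles of
    frequency p is sampled at least once with overall probability `prob`,
    assuming each 2k-ploid plant contributes 2k random alleles.
    Simplified sketch; the paper's general expression also accounts for
    mating system and genotypic multiplicity of seeds."""
    per_allele = prob ** (1.0 / n_alleles)     # success prob needed per allele
    return ceil(log(1.0 - per_allele) / (2 * ploidy_k * log(1.0 - p)))
```

Higher ploidy reduces the number of plants needed, since each plant carries more allele copies; raising the number of target alleles raises it.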
The impact of different sampling rates and calculation time intervals on ROTI values
Directory of Open Access Journals (Sweden)
Jacobsen Knut Stanley
2014-01-01
The ROTI (Rate of TEC index) is a commonly used measure of the ionospheric irregularity level. The algorithm to calculate ROTI is easily implemented, and is the same from paper to paper. However, the sample rate of the GNSS data used, and the time interval over which a value of ROTI is calculated, vary from paper to paper. When comparing ROTI values from different studies, this must be taken into account. This paper aims to show what these differences are, to increase awareness of this issue. We have investigated the effect of different parameters for the calculation of ROTI values, using one year of data from 8 receivers at latitudes ranging from 59° N to 79° N. We have found that the ROTI values calculated using different parameter choices are strongly positively correlated. However, the ROTI values are quite different. The effect of a lower sample rate is to lower the ROTI value, due to the loss of high-frequency parts of the ROT spectrum, while the effect of a longer calculation time interval is to remove or reduce short-lived peaks due to the inherent smoothing effect. The ratio of ROTI values based on data of different sampling rate is examined in relation to the ROT power spectrum. Of relevance to statistical studies, we find that the median level of ROTI depends strongly on sample rate, strongly on latitude at auroral latitudes, and weakly on time interval. Thus, a baseline "quiet" or "noisy" level for one location or choice of parameters may not be valid for another location or choice of parameters.
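The sample-rate effect is easy to reproduce on synthetic data with a minimal ROTI implementation (ROTI as the standard deviation of the rate of TEC change per calculation interval). This is a sketch only; real processing must also handle satellite arcs, cycle slips and receiver biases.

```python
import numpy as np

def roti(tec, dt_s, interval_s=300):
    """ROTI: standard deviation of the rate of TEC change (TECU/min),
    computed over consecutive intervals of interval_s seconds.
    Minimal sketch; arcs, cycle slips and biases are not handled."""
    rot = np.diff(tec) / (dt_s / 60.0)          # rate of TEC change, TECU/min
    per = interval_s // dt_s                    # samples per calculation interval
    rot = rot[:(len(rot) // per) * per].reshape(-1, per)
    return np.sqrt(np.mean(rot**2, axis=1) - np.mean(rot, axis=1)**2)

rng = np.random.default_rng(0)
tec = np.cumsum(rng.normal(0.0, 0.05, 3600))    # synthetic 1 Hz TEC random walk
roti_1s = roti(tec, dt_s=1)                     # full 1 s sampling
roti_30s = roti(tec[::30], dt_s=30)             # same data decimated to 30 s
```

On this synthetic series the decimated ROTI is markedly lower, mirroring the paper's point that sample rate sets the portion of the ROT spectrum that survives.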
Calculating of river water quality sampling frequency by the analytic hierarchy process (AHP).
Do, Huu Tuan; Lo, Shang-Lien; Phan Thi, Lan Anh
2013-01-01
River water quality sampling frequency is an important aspect of the river water quality monitoring network. A suitable sampling frequency for each station as well as for the whole network will provide a measure of the real water quality status for the water quality managers as well as the decision makers. The analytic hierarchy process (AHP) is an effective method for decision analysis and calculation of weighting factors based on multiple criteria to solve complicated problems. This study introduces a new procedure to design river water quality sampling frequency by applying the AHP. We introduce and combine weighting factors of variables with the relative weights of stations to select the sampling frequency for each station, monthly and yearly. The new procedure was applied for Jingmei and Xindian rivers, Taipei, Taiwan. The results showed that sampling frequency should be increased at high weighted stations while decreased at low weighted stations. In addition, a detailed monitoring plan for each station and each month could be scheduled from the output results. Finally, the study showed that the AHP is a suitable method to design a system for sampling frequency as it could combine multiple weights and multiple levels for stations and variables to calculate a final weight for stations, variables, and months.
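The eigenvector step at the core of AHP weighting can be sketched as follows. This is generic AHP, not the paper's full station/variable hierarchy; the 3x3 comparison matrix is hypothetical.

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from an AHP pairwise-comparison matrix via the
    principal eigenvector; also returns Saaty's consistency index CI."""
    A = np.asarray(pairwise, dtype=float)
    vals, vecs = np.linalg.eig(A)
    k = np.argmax(vals.real)                  # principal eigenvalue
    w = np.abs(vecs[:, k].real)
    w /= w.sum()                              # normalize to priorities
    ci = (vals[k].real - len(A)) / (len(A) - 1)
    return w, ci

# Hypothetical importance comparisons among three water-quality variables.
A = [[1.0, 3.0, 5.0],
     [1/3, 1.0, 3.0],
     [1/5, 1/3, 1.0]]
w, ci = ahp_weights(A)
```

In the paper's procedure, weights like these are computed at each level (variables, stations) and combined into a final weight per station and month.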
Unconstrained Enhanced Sampling for Free Energy Calculations of Biomolecules: A Review.
Miao, Yinglong; McCammon, J Andrew
Free energy calculations are central to understanding the structure, dynamics and function of biomolecules. Yet insufficient sampling of biomolecular configurations is often regarded as one of the main sources of error. Many enhanced sampling techniques have been developed to address this issue. Notably, enhanced sampling methods based on biasing collective variables (CVs), including the widely used umbrella sampling, adaptive biasing force and metadynamics, have been discussed in a recent excellent review (Abrams and Bussi, Entropy, 2014). Here, we aim to review enhanced sampling methods that do not require predefined system-dependent CVs for biomolecular simulations and as such do not suffer from the hidden energy barrier problem as encountered in the CV-biasing methods. These methods include, but are not limited to, replica exchange/parallel tempering, self-guided molecular/Langevin dynamics, essential energy space random walk and accelerated molecular dynamics. While it is overwhelming to describe all details of each method, we provide a summary of the methods along with the applications and offer our perspectives. We conclude with challenges and prospects of the unconstrained enhanced sampling methods for accurate biomolecular free energy calculations.
Sleeth, Darrah K
2013-05-01
In 2010, the American Conference of Governmental Industrial Hygienists (ACGIH) formally changed its Threshold Limit Value (TLV) for beryllium from a 'total' particulate sample to an inhalable particulate sample. This change may have important implications for workplace air sampling of beryllium. A history of particle size-selective sampling methods, with a special focus on beryllium, will be provided. The current state of the science on inhalable sampling will also be presented, including a look to the future at what new methods or technology may be on the horizon. This includes new sampling criteria focused on particle deposition in the lung, proposed changes to the existing inhalable convention, as well as how the issues facing beryllium sampling may help drive other changes in sampling technology.
Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.
Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A
2015-10-26
Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange.
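The thermodynamic-integration bookkeeping that sesTI feeds into can be sketched generically: dG is the integral over lambda of the ensemble average of dU/dlambda, here approximated by the trapezoidal rule over per-window sample means. Only the TI quadrature is shown; the swarm-enhanced sampling itself is not reproduced, and the synthetic dU/dlambda profile is illustrative.

```python
import numpy as np

def ti_free_energy(lambdas, dudl_samples):
    """Thermodynamic integration: dG = integral over lambda of <dU/dlambda>,
    estimated by the trapezoidal rule over per-window sample means.
    Generic TI quadrature, not the sesMD coupling scheme."""
    x = np.asarray(lambdas, dtype=float)
    m = np.array([np.mean(s) for s in dudl_samples])
    return float(0.5 * np.sum((m[1:] + m[:-1]) * (x[1:] - x[:-1])))

lam = np.linspace(0.0, 1.0, 11)
rng = np.random.default_rng(1)
# Synthetic <dU/dlambda> = 2*lambda plus sampling noise, so dG should be near 1.
windows = [2.0 * l + rng.normal(0.0, 0.05, 500) for l in lam]
dg = ti_free_energy(lam, windows)
```

In real alchemical calculations the per-window samples come from (ses)MD trajectories, and poor sampling of slow substates biases the window means rather than the quadrature.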
Efficient calculation of SAMPL4 hydration free energies using OMEGA, SZYBKI, QUACPAC, and Zap TK.
Ellingson, Benjamin A; Geballe, Matthew T; Wlodek, Stanislaw; Bayly, Christopher I; Skillman, A Geoffrey; Nicholls, Anthony
2014-03-01
Several submissions for the SAMPL4 hydration free energy set were calculated using OpenEye tools, including many that were among the top performing submissions. All of our best submissions used AM1BCC charges and Poisson-Boltzmann solvation. Three submissions used a single conformer for calculating the hydration free energy and all performed very well with mean unsigned errors ranging from 0.94 to 1.08 kcal/mol. These calculations were very fast, only requiring 0.5-2.0 s per molecule. We observed that our two single-conformer methodologies have different types of failure cases and that these differences could be exploited for determining when the methods are likely to have substantial errors.
Study on Calculation Methods for Sampling Frequency of Acceleration Signals in Gear System
Directory of Open Access Journals (Sweden)
Feibin Zhang
2013-01-01
The mechanisms of vibration acceleration signals in normal and defective gears are studied. An improved bending-torsion vibration model is established, in which the effects of time-varying meshing stiffness and damping, torsional stiffness of the transmission shaft, elastic bearing support, the driving motor, and external load are taken into consideration. Then, vibration signals are simulated based on the model under diverse sampling frequencies. The influences of the input shaft's rotating frequency and the teeth number and module of the gears are investigated by analysis of the simulated signals. Finally, formulas are proposed to calculate the acceleration signal bandwidth and the critical and recommended sampling frequencies of the gear system. The compatibility of the formulas is discussed for the case of a crack in the tooth root. The calculation results agree well with the experiments.
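A back-of-envelope version of such a sampling-frequency calculation can be sketched as follows. The constants are illustrative only, not the formulas derived in the paper; the 2.56 oversampling margin is a common rule of thumb in vibration analysis.

```python
def gear_sampling(f_shaft_hz, teeth, harmonics=3, sideband_orders=5, margin=2.56):
    """Rough acceleration-signal bandwidth and sampling rate for a gear pair:
    retain `harmonics` mesh harmonics plus shaft-frequency sidebands.
    Illustrative constants, not the formulas derived in the paper."""
    f_mesh = teeth * f_shaft_hz                     # gear meshing frequency (Hz)
    bandwidth = harmonics * f_mesh + sideband_orders * f_shaft_hz
    return bandwidth, margin * bandwidth            # (bandwidth, recommended fs)

# Hypothetical gear: 25 Hz shaft speed, 32 teeth.
bw, fs = gear_sampling(f_shaft_hz=25.0, teeth=32)
```

The point the paper quantifies rigorously is the same one this sketch illustrates: the mesh frequency scales with tooth count and shaft speed, so the required sampling rate does too.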
Information-based sample size re-estimation in group sequential design for longitudinal trials.
Zhou, Jing; Adewale, Adeniyi; Shentu, Yue; Liu, Jiajun; Anderson, Keaven
2014-09-28
Group sequential design has become more popular in clinical trials because it allows for trials to stop early for futility or efficacy to save time and resources. However, this approach is less well-known for longitudinal analysis. We have observed repeated cases of studies with longitudinal data where there is an interest in early stopping for a lack of treatment effect or in adapting sample size to correct for inappropriate variance assumptions. We propose an information-based group sequential design as a method to deal with both of these issues. Updating the sample size at each interim analysis makes it possible to maintain the target power while controlling the type I error rate. We will illustrate our strategy with examples and simulations and compare the results with those obtained using fixed design and group sequential design without sample size re-estimation.
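The information-based update rule can be sketched for a simple two-arm comparison of means: fix the maximum information I_max needed for the target power, then at each interim recompute the sample size that attains I_max given the current variance estimate. This is a generic maximum-information calculation, not the authors' longitudinal model; the function name and numbers are assumptions.

```python
from math import ceil
from statistics import NormalDist

def reestimate_n(delta, sigma_hat, alpha=0.05, power=0.9):
    """Information-based sample size for a two-arm mean comparison:
    I_max = ((z_{1-a/2} + z_{1-b}) / delta)^2, then the per-arm n that
    attains it given the interim variance estimate sigma_hat^2.
    Generic sketch, not the paper's longitudinal design."""
    z = NormalDist().inv_cdf
    i_max = ((z(1 - alpha / 2) + z(power)) / delta) ** 2
    return ceil(2.0 * sigma_hat**2 * i_max)         # per-arm sample size

# An interim variance larger than planned inflates the required n,
# which is exactly what the re-estimation step corrects for.
n_planned = reestimate_n(delta=0.4, sigma_hat=1.0)
n_updated = reestimate_n(delta=0.4, sigma_hat=1.3)
```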
Sample size requirements for indirect association studies of gene-environment interactions (G x E).
Hein, Rebecca; Beckmann, Lars; Chang-Claude, Jenny
2008-04-01
Association studies accounting for gene-environment interactions (G x E) may be useful for detecting genetic effects. Although current technology enables very dense marker spacing in genetic association studies, the true disease variants may not be genotyped. Thus, causal genes are searched for by indirect association using genetic markers in linkage disequilibrium (LD) with the true disease variants. Sample sizes needed to detect G x E effects in indirect case-control association studies depend on the true genetic main effects, disease allele frequencies, whether marker and disease allele frequencies match, LD between loci, main effects and prevalence of environmental exposures, and the magnitude of interactions. We explored variables influencing sample sizes needed to detect G x E, compared these sample sizes with those required to detect genetic marginal effects, and provide an algorithm for power and sample size estimations. Required sample sizes may be heavily inflated if LD between marker and disease loci decreases. More than 10,000 case-control pairs may be required to detect G x E. However, given weak true genetic main effects, moderate prevalence of environmental exposures, as well as strong interactions, G x E effects may be detected with smaller sample sizes than those needed for the detection of genetic marginal effects. Moreover, in this scenario, rare disease variants may only be detectable when G x E is included in the analyses. Thus, the analysis of G x E appears to be an attractive option for the detection of weak genetic main effects of rare variants that may not be detectable in the analysis of genetic marginal effects only.
Directory of Open Access Journals (Sweden)
Su Huaizhi
2012-01-01
The flexibility coefficient is popularly used to implement the macro-evaluation of shape, safety, and economy for arch dams. However, the definition of the flexibility coefficient has never reached a wide consensus. Based on a large number of relevant instance data, the relationship between influencing factors and the flexibility coefficient is analyzed by means of partial least-squares regression. The partial least-squares regression equation of the flexibility coefficient in the height range between 30 m and 70 m is established. Regression precision and equation stability are further investigated. An analytical model of the statistical flexibility coefficient is provided. A flexibility coefficient criterion is determined preliminarily to evaluate the shape of low- and medium-sized arch dams. A case study is finally presented to illustrate the potential engineering application. According to the partial least-squares regression analysis, there is a strong relationship between the flexibility coefficient and the average thickness of the dam, the thickness-height ratio of the crown cantilever, the arc-height ratio, and the dam height, while the effect of the rise-span ratio is relatively small. The factors considered in the proposed model are more comprehensive, and its scope of application is clearer, than those of the traditional calculation methods. It is more suitable for analogy analysis in engineering design and for the safety evaluation of arch dams.
Speckle-suppression in hologram calculation using ray-sampling plane.
Utsugi, Takeru; Yamaguchi, Masahiro
2014-07-14
Speckle noise is an important issue in electro-holographic displays. We propose a new method for suppressing speckle noise in a computer-generated hologram (CGH) for 3D display. In our previous research, we proposed a method for CGH calculation using a ray-sampling plane (RS-plane), which enables the application of advanced ray-based rendering techniques to the calculation of holograms that can reconstruct a deep 3D scene in high resolution. Conventional techniques for effective speckle suppression, which utilize time-multiplexing of sparse object points, can suppress speckle noise with high resolution, but they cannot be applied to CGH calculation using an RS-plane because a CGH calculated using an RS-plane does not utilize point sources on an object surface. We therefore propose a method to define point sources from light-ray information and apply the speckle-suppression technique using sparse point sources to CGH calculation using an RS-plane. The validity of the proposed method was verified by numerical simulations.
Estimation of grain size in asphalt samples using digital image analysis
Källén, Hanna; Heyden, Anders; Lindh, Per
2014-09-01
Asphalt is made of a mixture of stones of different sizes and a binder called bitumen; the size distribution of the stones is determined by the recipe of the asphalt. One quality check of asphalt is to see whether the real size distribution of asphalt samples is consistent with the recipe. This is usually done by first extracting the binder using methylene chloride and then sieving the stones to see how much passes each sieve size. Methylene chloride is highly toxic, and it is desirable to find the size distribution in some other way. In this paper we find the size distribution by slicing up the asphalt sample and using image analysis techniques to analyze the cross-sections. First the stones are segmented from the background (bitumen), and then rectangles are fitted to the detected stones. We then estimate the sizes of the stones using the widths of the rectangles. The result is compared with both the recipe for the asphalt and the result from the standard analysis method, and our method agrees well with both.
Hoyle, Rick H; Gottfredson, Nisha C
2015-10-01
When the goal of prevention research is to capture in statistical models some measure of the dynamic complexity in structures and processes implicated in problem behavior and its prevention, approaches such as multilevel modeling (MLM) and structural equation modeling (SEM) are indicated. Yet the assumptions that must be satisfied if these approaches are to be used responsibly raise concerns regarding their use in prevention research involving smaller samples. In this article, we discuss in nontechnical terms the role of sample size in MLM and SEM and present findings from the latest simulation work on the performance of each approach at sample sizes typical of prevention research. For each statistical approach, we draw from extant simulation studies to establish lower bounds for sample size (e.g., MLM can be applied with as few as ten groups comprising ten members with normally distributed data, restricted maximum likelihood estimation, and a focus on fixed effects; sample sizes as small as N = 50 can produce reliable SEM results with normally distributed data and at least three reliable indicators per factor) and suggest strategies for making the best use of the modeling approach when N is near the lower bound.
Nagaya, Yasunobu
2014-06-01
The methods to calculate the kinetic parameters βeff and Λ with differential operator sampling have been reviewed. A comparison of the results obtained with the differential operator sampling and iterated fission probability approaches has been performed. It is shown that the differential operator sampling approach gives the same results as the iterated fission probability approach within the statistical uncertainty. In addition, the prediction accuracy of the evaluated nuclear data library JENDL-4.0 for the measured βeff/Λ and βeff values is also examined. It is shown that JENDL-4.0 gives good predictions except for the uranium-233 systems. The present results imply the need for revisiting the uranium-233 nuclear data evaluation and performing a detailed sensitivity analysis.
Sheehan, Sara; Harris, Kelley; Song, Yun S
2013-07-01
Throughout history, the population size of modern humans has varied considerably due to changes in environment, culture, and technology. More accurate estimates of population size changes, and when they occurred, should provide a clearer picture of human colonization history and help remove confounding effects from natural selection inference. Demography influences the pattern of genetic variation in a population, and thus genomic data of multiple individuals sampled from one or more present-day populations contain valuable information about the past demographic history. Recently, Li and Durbin developed a coalescent-based hidden Markov model, called the pairwise sequentially Markovian coalescent (PSMC), for a pair of chromosomes (or one diploid individual) to estimate past population sizes. This is an efficient, useful approach, but its accuracy in the very recent past is hampered by the fact that, because of the small sample size, only few coalescence events occur in that period. Multiple genomes from the same population contain more information about the recent past, but are also more computationally challenging to study jointly in a coalescent framework. Here, we present a new coalescent-based method that can efficiently infer population size changes from multiple genomes, providing access to a new store of information about the recent past. Our work generalizes the recently developed sequentially Markov conditional sampling distribution framework, which provides an accurate approximation of the probability of observing a newly sampled haplotype given a set of previously sampled haplotypes. Simulation results demonstrate that we can accurately reconstruct the true population histories, with a significant improvement over the PSMC in the recent past. We apply our method, called diCal, to the genomes of multiple human individuals of European and African ancestry to obtain a detailed population size change history during recent times.
Got Power? A Systematic Review of Sample Size Adequacy in Health Professions Education Research
Cook, David A.; Hatala, Rose
2015-01-01
Many education research studies employ small samples, which in turn lowers statistical power. We re-analyzed the results of a meta-analysis of simulation-based education to determine study power across a range of effect sizes, and the smallest effect that could be plausibly excluded. We systematically searched multiple databases through May 2011,…
Fan, Xitao; Wang, Lin; Thompson, Bruce
1999-01-01
A Monte Carlo simulation study investigated the effects on 10 structural equation modeling fit indexes of sample size, estimation method, and model specification. Some fit indexes did not appear to be comparable, and it was apparent that estimation method strongly influenced almost all fit indexes examined, especially for misspecified models. (SLD)
Sideridis, Georgios; Simos, Panagiotis; Papanicolaou, Andrew; Fletcher, Jack
2014-01-01
The present study assessed the impact of sample size on the power and fit of structural equation modeling applied to functional brain connectivity hypotheses. The data consisted of time-constrained minimum norm estimates of regional brain activity during performance of a reading task obtained with magnetoencephalography. Power analysis was first…
Analysis of variograms with various sample sizes from a multispectral image
The variogram plays a crucial role in remote sensing applications and geostatistics, so it is important to estimate it reliably from sufficient data. In this study, variograms were analysed for various sample sizes of remotely sensed data. A 100x100-pixel subset was chosen from ...
The Influence of Virtual Sample Size on Confidence and Causal-Strength Judgments
Liljeholm, Mimi; Cheng, Patricia W.
2009-01-01
The authors investigated whether confidence in causal judgments varies with virtual sample size--the frequency of cases in which the outcome is (a) absent before the introduction of a generative cause or (b) present before the introduction of a preventive cause. Participants were asked to evaluate the influence of various candidate causes on an…
Size Distributions and Characterization of Native and Ground Samples for Toxicology Studies
McKay, David S.; Cooper, Bonnie L.; Taylor, Larry A.
2010-01-01
This slide presentation shows charts and graphs that review the particle size distribution and characterization of natural and ground samples for toxicology studies. There are graphs which show the volume distribution versus the number distribution for natural occurring dust, jet mill ground dust, and ball mill ground dust.
Kelley, Ken
2007-11-01
The accuracy in parameter estimation approach to sample size planning is developed for the coefficient of variation, where the goal of the method is to obtain an accurate parameter estimate by achieving a sufficiently narrow confidence interval. The first method allows researchers to plan sample size so that the expected width of the confidence interval for the population coefficient of variation is sufficiently narrow. A modification allows a desired degree of assurance to be incorporated into the method, so that the obtained confidence interval will be sufficiently narrow with some specified probability (e.g., 85% assurance that the 95% confidence interval width will be no wider than ω units). Tables of necessary sample size are provided for a variety of scenarios that may help researchers planning a study where the coefficient of variation is of interest choose an appropriate sample size in order to have a sufficiently narrow confidence interval, optionally with some specified assurance of the confidence interval being sufficiently narrow. Freely available computer routines have been developed that allow researchers to easily implement all of the methods discussed in the article.
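Kelley's exact approach uses noncentral t distributions; purely to illustrate the planning logic, here is a rough normal-approximation sketch. The variance formula Var(CV) ≈ k²(0.5 + k²)/n for normal data is an assumption of this sketch, not the article's method, and ω denotes the target full interval width:

```python
import math

Z_975 = 1.959963984540054  # 97.5th percentile of the standard normal

def cv_ci_halfwidth(k, n, z=Z_975):
    # Normal-theory approximation to the SE of a sample CV k at size n:
    # Var(CV) ~ k^2 * (0.5 + k^2) / n   (assumption: normal data, moderate k)
    se = math.sqrt(k * k * (0.5 + k * k) / n)
    return z * se

def n_for_cv_width(k, omega, z=Z_975, n_max=10**6):
    """Smallest n whose expected full 95% CI width for the CV is <= omega."""
    for n in range(2, n_max):
        if 2 * cv_ci_halfwidth(k, n, z) <= omega:
            return n
    raise ValueError("no n found below n_max")

n = n_for_cv_width(k=0.25, omega=0.05)
```

The search is a simple linear scan; bisection would also work, since the expected width decreases monotonically in n.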
Enzymatic Kinetic Isotope Effects from First-Principles Path Sampling Calculations.
Varga, Matthew J; Schwartz, Steven D
2016-04-12
In this study, we develop and test a method to determine the rate of particle transfer and kinetic isotope effects in enzymatic reactions, specifically yeast alcohol dehydrogenase (YADH), from first principles. Transition path sampling (TPS) and normal mode centroid dynamics (CMD) are used to simulate these enzymatic reactions without knowledge of their reaction coordinates and with the inclusion of quantum effects, such as zero-point energy and tunneling, on the transferring particle. Though previous studies have used TPS to calculate reaction rate constants in various model and real systems, it has not been applied to a system as large as YADH. The calculated primary H/D kinetic isotope effect agrees with previously reported experimental results, within experimental error. The kinetic isotope effects calculated with this method correspond to the kinetic isotope effect of the transfer event itself. The results reported here show that kinetic isotope effects calculated from first principles, purely for barrier passage, can be used to predict experimental kinetic isotope effects in enzymatic systems.
Directory of Open Access Journals (Sweden)
John M Lachin
Preservation of β-cell function as measured by stimulated C-peptide has recently been accepted as a therapeutic target for subjects with newly diagnosed type 1 diabetes. In recently completed studies conducted by the Type 1 Diabetes Trial Network (TrialNet), repeated 2-hour Mixed Meal Tolerance Tests (MMTT) were obtained for up to 24 months from 156 subjects with up to 3 months duration of type 1 diabetes at the time of study enrollment. These data provide the information needed to more accurately determine the sample size needed for future studies of the effects of new agents on the 2-hour area under the curve (AUC) of the C-peptide values. The natural log(x), log(x+1) and square-root (√x) transformations of the AUC were assessed. In general, a transformation of the data is needed to better satisfy the normality assumptions for commonly used statistical tests. Statistical analyses of the raw and transformed data are provided to estimate the mean levels over time and the residual variation in untreated subjects, allowing sample size calculations for future studies at either 12 or 24 months of follow-up and among children 8-12 years of age, adolescents (13-17 years) and adults (18+ years). The sample size needed to detect a given relative (percentage) difference with treatment versus control is greater at 24 months than at 12 months of follow-up, and differs among age categories. Owing to greater residual variation among those 13-17 years of age, a larger sample size is required for this age group. Methods are also described for assessment of sample size for mixtures of subjects among the age categories. Statistical expressions are presented for the presentation of analyses of log(x+1) and √x transformed values in terms of the original units of measurement (pmol/ml). Analyses using different transformations are described for the TrialNet study of masked anti-CD20 (rituximab) versus masked placebo. These results provide the information needed to
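The sample-size logic described above can be illustrated with the standard two-sample normal approximation on the transformed scale; the sketch below is generic, and the residual SD σ and the 30% treatment effect are placeholder values, not TrialNet estimates:

```python
import math
from statistics import NormalDist

def n_per_arm(sigma, rel_reduction, alpha=0.05, power=0.80):
    """Standard formula n = 2 (z_{1-a/2} + z_{1-b})^2 sigma^2 / delta^2.
    If the treated mean is (1 - rel_reduction) x the control mean, then on
    the log scale the treatment difference is delta = -log(1 - rel_reduction)."""
    delta = -math.log(1.0 - rel_reduction)
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = NormalDist().inv_cdf(power)           # 0.84 for 80% power
    return math.ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)

# sigma here is a hypothetical residual SD of log(AUC + 1), for illustration only
n = n_per_arm(sigma=0.45, rel_reduction=0.30)  # -> 25 per arm
```

As the abstract notes, larger residual variation (e.g., in the 13-17 year age group) enters the formula through σ² and directly inflates the required n.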
Flux calculations in an inhomogeneous Universe: weighting a flux-limited galaxy sample
Koers, Hylke B J
2009-01-01
Many astrophysical problems arising within the context of ultra-high energy cosmic rays, very-high energy gamma rays or neutrinos, require calculation of the flux produced by sources tracing the distribution of galaxies in the Universe. We discuss a simple weighting scheme, an application of the method introduced by Lynden-Bell in 1971, that allows the calculation of the flux sky map directly from a flux-limited galaxy catalog without cutting a volume-limited subsample. Using this scheme, the galaxy distribution can be modeled up to large scales while representing the distribution in the nearby Universe with maximum accuracy. We consider fluctuations in the flux map arising from the finiteness of the galaxy sample. We show how these fluctuations are reduced by the weighting scheme and discuss how the remaining fluctuations limit the applicability of the method.
Factors Influencing Sample Size for Internal Audit Evidence Collection in the Public Sector in Kenya
Directory of Open Access Journals (Sweden)
Kamau Charles Guandaru
2017-01-01
The internal audit department has a role of providing objective assurance and consulting services designed to add value and improve an organization's operations. In performing this role, internal auditors are required to provide an auditor's opinion supported by sufficient and reliable audit evidence. Since auditors are not in a position to examine 100% of the records and transactions, they are required to sample a few and draw conclusions on the basis of the sample selected. The literature suggests several factors which affect the sample size for audit purposes of internal auditors in the public sector in Kenya. This research collected data from 32 public sector internal auditors and carried out simple regression and correlation analysis so as to test hypotheses and draw conclusions. The study found that materiality of the audit issue, type of information available, source of information, degree of risk of misstatement, and auditor skills and independence are among the factors influencing sample size determination for internal audit evidence collection in the public sector in Kenya.
DEFF Research Database (Denmark)
Andreasen, Jo Bønding; Pistor-Riebold, Thea Unger; Knudsen, Ingrid Hell;
2014-01-01
Background: To minimise the volume of blood used for diagnostic procedures, especially in children, we investigated whether the size of sample tubes affected whole blood coagulation analyses. Methods: We included 20 healthy individuals for rotational thromboelastometry (RoTEM®) analyses… Results: …count remained stable using a 3.6 mL tube during the entire observation period of 120 min (p=0.74), but decreased significantly after 60 min when using tubes smaller than 3.6 mL (p… blood sampling tubes. Therefore, 1.8 mL tubes should be preferred for RoTEM® analyses in order to minimise the volume of blood drawn. With regard to platelet aggregation analysed by impedance aggregometry, tubes of different size cannot be used interchangeably. If platelet count is determined later than 10 min after blood sampling using tubes containing citrate…
Size selective isocyanate aerosols personal air sampling using porous plastic foams
Energy Technology Data Exchange (ETDEWEB)
Cong Khanh Huynh; Trinh Vu Duc, E-mail: chuynh@hospvd.c [Institut Universitaire Romand de Sante au Travail (IST), 21 rue du Bugnon - CH-1011 Lausanne (Switzerland)
2009-02-01
As part of a European project (SMT4-CT96-2137), various European institutions specialized in occupational hygiene (BGIA, HSL, IOM, INRS, IST, Ambiente e Lavoro) established a program of scientific collaboration to develop one or more prototypes of European personal samplers for the simultaneous collection of three dust fractions: inhalable, thoracic and respirable. These samplers, based on existing sampling heads (IOM, GSP and cassettes), use polyurethane foam (PUF) plugs whose porosity provides both the sampling substrate and the size separation of particles. In this study, the authors present an original application of size-selective personal air sampling using chemically impregnated PUF to capture and derivatize isocyanate aerosols in industrial spray-painting shops.
¹⁰Be measurements at MALT using reduced-size samples of bulk sediments
Energy Technology Data Exchange (ETDEWEB)
Horiuchi, Kazuho, E-mail: kh@cc.hirosaki-u.ac.jp [Graduate School of Science and Technology, Hirosaki University, 3, Bunkyo-chou, Hirosaki, Aomori 036-8561 (Japan); Oniyanagi, Itsumi [Graduate School of Science and Technology, Hirosaki University, 3, Bunkyo-chou, Hirosaki, Aomori 036-8561 (Japan); Wasada, Hiroshi [Institute of Geology and Paleontology, Graduate school of Science, Tohoku University, 6-3, Aramaki Aza-Aoba, Aoba-ku, Sendai 980-8578 (Japan); Matsuzaki, Hiroyuki [MALT, School of Engineering, University of Tokyo, 2-11-16, Yayoi, Bunkyo-ku, Tokyo 113-0032 (Japan)
2013-01-15
In order to establish ¹⁰Be measurements on reduced-size (1-10 mg) samples of bulk sediments, we investigated four different pretreatment designs using lacustrine and marginal-sea sediments and the AMS system of the Micro Analysis Laboratory, Tandem accelerator (MALT) at the University of Tokyo. The ¹⁰Be concentrations obtained from the samples of 1-10 mg agreed within a precision of 3-5% with the values previously determined using corresponding ordinary-size (~200 mg) samples and the same AMS system. This fact demonstrates reliable determinations of ¹⁰Be with milligram levels of recent bulk sediments at MALT. On the other hand, a clear decline of the BeO⁻ beam with tens of micrograms of ⁹Be carrier suggests that the combination of ten milligrams of sediments and a few hundred micrograms of the ⁹Be carrier is more convenient at this stage.
Energy Technology Data Exchange (ETDEWEB)
Nagy, Tibor; Vikár, Anna; Lendvay, György, E-mail: lendvay.gyorgy@ttk.mta.hu [Institute of Materials and Environmental Chemistry, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok körútja 2., H-1117 Budapest (Hungary)
2016-01-07
The quasiclassical trajectory (QCT) method is an efficient and important tool for studying the dynamics of bimolecular reactions. In this method, the motion of the atoms is simulated classically, and the only quantum effect considered is that the initial vibrational states of reactant molecules are semiclassically quantized. A sensible expectation is that the initial ensemble of classical molecular states generated this way should be stationary, similarly to the quantum state it is supposed to represent. The most widely used method for sampling the vibrational phase space of polyatomic molecules is based on the normal mode approximation. In the present work, it is demonstrated that normal mode sampling provides a nonstationary ensemble even for a simple molecule like methane, because real potential energy surfaces are anharmonic in the reactant domain. The consequences were investigated for the reaction CH₄ + H → CH₃ + H₂ and its various isotopologs and were found to be dramatic. Reaction probabilities and cross sections obtained from QCT calculations oscillate periodically as a function of the initial distance of the colliding partners, and the excitation functions are erratic. The reason is that in the nonstationary ensemble of initial states, the mean bond length of the breaking C–H bond oscillates in time with the frequency of the symmetric stretch mode. We propose a simple method, one-period averaging, in which reactivity parameters are calculated by averaging over an entire period of the mean C–H bond length oscillation, which removes the observed artifacts and provides the physically most reasonable reaction probabilities and cross sections when the initial conditions for QCT calculations are generated by normal mode sampling.
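The proposed one-period averaging can be illustrated with a toy model in which a QCT reaction probability oscillates with the phase of the breaking C–H bond at launch time; averaging over one full oscillation period recovers the artifact-free value. All numbers here are illustrative assumptions, not values from the paper:

```python
import math

def reaction_probability(t0, p0=0.2, amp=0.05, period=9.4):
    """Toy model: reaction probability oscillating with the phase of the
    symmetric-stretch mode at the instant the trajectory is launched.
    (p0, amp, and the period are illustrative assumptions.)"""
    return p0 + amp * math.sin(2 * math.pi * t0 / period)

def one_period_average(f, period, n=1000):
    # Average the reactivity parameter over one full oscillation period,
    # as in the proposed one-period averaging scheme.
    ts = [period * i / n for i in range(n)]
    return sum(f(t) for t in ts) / n

p_avg = one_period_average(reaction_probability, period=9.4)
# the oscillatory artifact cancels, leaving the baseline value p0 = 0.2
```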
PIXE–PIGE analysis of size-segregated aerosol samples from remote areas
Energy Technology Data Exchange (ETDEWEB)
Calzolai, G., E-mail: calzolai@fi.infn.it [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Chiari, M.; Lucarelli, F.; Nava, S.; Taccetti, F. [Department of Physics and Astronomy, University of Florence and National Institute of Nuclear Physics (INFN), Via G. Sansone 1, 50019 Sesto Fiorentino (Italy); Becagli, S.; Frosini, D.; Traversi, R.; Udisti, R. [Department of Chemistry, University of Florence, Via della Lastruccia 3, 50019 Sesto Fiorentino (Italy)
2014-01-01
The chemical characterization of size-segregated samples is helpful to study the aerosol effects on both human health and the environment. Sampling with multi-stage cascade impactors (e.g., the Small Deposit area Impactor, SDI) produces inhomogeneous samples, with a multi-spot geometry and a non-negligible particle stratification. At LABEC (Laboratory of nuclear techniques for the Environment and the Cultural Heritage), an external beam line is fully dedicated to PIXE–PIGE analysis of aerosol samples. PIGE is routinely used as a sidekick of PIXE to correct the underestimation of PIXE in quantifying the concentration of the lightest detectable elements, like Na or Al, due to X-ray absorption inside the individual aerosol particles. In this work PIGE has been used to derive proper attenuation correction factors for SDI samples: relevant attenuation effects were observed even for stages collecting smaller particles, and the consequent implications for the retrieved aerosol modal structure are highlighted.
Profit based phase II sample size determination when adaptation by design is adopted
Martini, D.
2014-01-01
Background. Adaptation by design consists in conservatively estimating the phase III sample size on the basis of phase II data, and can be applied in almost all therapeutic areas. It is based on the assumption that the effect size of the drug is the same in phase II and phase III trials, which is a very common scenario in product development. Adaptation by design reduces the probability of underpowered experiments and can improve the overall success probability of phase II and III trials…
The role of the upper sample size limit in two-stage bioequivalence designs.
Karalis, Vangelis
2013-11-01
Two-stage designs (TSDs) are currently recommended by the regulatory authorities for bioequivalence (BE) assessment. The TSDs presented until now rely on an assumed geometric mean ratio (GMR) value of the BE metric in stage I in order to avoid inflation of type I error. In contrast, this work proposes a more realistic TSD design where sample re-estimation relies not only on the variability of stage I, but also on the observed GMR. In these cases, an upper sample size limit (UL) is introduced in order to prevent inflation of type I error. The aim of this study is to unveil the impact of UL on two TSD bioequivalence approaches which are based entirely on the interim results. Monte Carlo simulations were used to investigate several different scenarios of UL levels, within-subject variability, different starting number of subjects, and GMR. The use of UL leads to no inflation of type I error. As UL values increase, the % probability of declaring BE becomes higher. The starting sample size and the variability of the study affect type I error. Increased UL levels result in higher total sample sizes of the TSD which are more pronounced for highly variable drugs.
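For context, the BE decision itself is usually a TOST-type check that the 90% CI of the log geometric mean ratio lies within 0.80-1.25. Below is a minimal single-stage Monte Carlo sketch of the probability of declaring BE; it is not the paper's two-stage design, and it assumes a simplified per-subject log-ratio model with normal quantiles in place of Student t:

```python
import math
import random
from statistics import NormalDist, mean, stdev

def prob_declare_be(n, gmr, cv_w, n_sim=2000, seed=1):
    """Monte Carlo probability of declaring bioequivalence via TOST.
    Assumptions of this sketch: per-subject log-ratios ~ Normal(log gmr, sigma_w)
    with sigma_w = sqrt(log(1 + cv_w^2)); BE limits 0.80-1.25; 90% CI built with
    a normal quantile rather than Student t."""
    rng = random.Random(seed)
    sigma = math.sqrt(math.log(1 + cv_w ** 2))
    z = NormalDist().inv_cdf(0.95)
    lo, hi = math.log(0.80), math.log(1.25)
    wins = 0
    for _ in range(n_sim):
        x = [rng.gauss(math.log(gmr), sigma) for _ in range(n)]
        m, se = mean(x), stdev(x) / math.sqrt(n)
        if m - z * se > lo and m + z * se < hi:
            wins += 1
    return wins / n_sim

p_null = prob_declare_be(n=24, gmr=0.80, cv_w=0.25)  # on the BE boundary: ~type I error
p_alt  = prob_declare_be(n=24, gmr=1.00, cv_w=0.25)  # true equivalence: power
```

In a two-stage design with an upper limit UL, the re-estimated stage II size would simply be capped at UL before the second batch of subjects is simulated.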
Resampling: An improvement of importance sampling in varying population size models.
Merle, C; Leblois, R; Rousset, F; Pudlo, P
2017-04-01
Sequential importance sampling algorithms have been defined to estimate likelihoods in models of ancestral population processes. However, these algorithms are based on features of the models with constant population size, and become inefficient when the population size varies in time, making likelihood-based inferences difficult in many demographic situations. In this work, we modify a previous sequential importance sampling algorithm to improve the efficiency of the likelihood estimation. Our procedure is still based on features of the model with constant size, but uses a resampling technique with a new resampling probability distribution depending on the pairwise composite likelihood. We tested our algorithm, called sequential importance sampling with resampling (SISR) on simulated data sets under different demographic cases. In most cases, we divided the computational cost by two for the same accuracy of inference, in some cases even by one hundred. This study provides the first assessment of the impact of such resampling techniques on parameter inference using sequential importance sampling, and extends the range of situations where likelihood inferences can be easily performed.
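The generic shape of a resampling step in sequential importance sampling is sketched below; note that the paper's SISR draws from a composite-likelihood-based resampling distribution, whereas this sketch uses the raw importance weights (plain multinomial resampling):

```python
import random

def resample(particles, weights, seed=0):
    """Draw an equally weighted particle set proportionally to the weights
    (multinomial resampling). Particles here are opaque labels standing in
    for partially reconstructed ancestral histories."""
    rng = random.Random(seed)
    total = sum(weights)
    probs = [w / total for w in weights]
    new = rng.choices(particles, weights=probs, k=len(particles))
    # after resampling, every particle carries the same weight again
    return new, [1.0 / len(particles)] * len(particles)

particles = ["hist_a", "hist_b", "hist_c", "hist_d"]
weights = [0.7, 0.2, 0.05, 0.05]
new_particles, new_weights = resample(particles, weights)
```

Resampling concentrates computational effort on the histories with high weight, which is the mechanism behind the reported cost reductions.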
Effect of sample size on the fluid flow through a single fractured granitoid
Institute of Scientific and Technical Information of China (English)
Kunal Kumar Singh; Devendra Narain Singh; Ranjith Pathegama Gamage
2016-01-01
Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, and multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures. Quantification of fractures, along with the behavior of the fluid flow through them at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress–permeability relationship, and it has been demonstrated that permeability of the fractured rock mass decreases with an increase in sample size. However, most researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameter ranging between 38 mm and 45 cm and a diameter-to-length ratio of 1:2, using different experimental methods. Also, the confining stress, σ₃, has been considered to be less than 30 MPa and the effect of fracture roughness has been ignored. In the present study, an extension of the previous studies on "laboratory simulation of flow through single fractured granite" was conducted, in which consistent fluid flow experiments were performed on cylindrical samples of granitoids of two different sizes (38 mm and 54 mm in diameter), containing a "rough walled single fracture". These experiments were performed under varied confining pressure (σ₃ = 5-40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff, and Q decreases with an increase in σeff. Also, the effects of sample size and fracture roughness do not persist when σeff ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory scale investigations to in-situ scale and
Institute of Scientific and Technical Information of China (English)
SONG; Ming-zhe; WEI; Ke-xin; HOU; Jin-bing; WANG; Hong-yu; GAO; Fei; NI; Ning
2015-01-01
The Bragg-Gray cavity theory (B-G theory) provided a theoretical basis for the analytical calculation of the energy response of ionization chambers. It was widely used in the theoretical calculation of ionization chamber detectors and tissue-equivalent detectors. However, the B-G
Optimizing stream water mercury sampling for calculation of fish bioaccumulation factors
Riva-Murray, Karen; Bradley, Paul M.; Journey, Celeste A.; Brigham, Mark E.; Scudder Eikenberry, Barbara C.; Knightes, Christopher; Button, Daniel T.
2013-01-01
Mercury (Hg) bioaccumulation factors (BAFs) for game fishes are widely employed for monitoring, assessment, and regulatory purposes. Mercury BAFs are calculated as the fish Hg concentration (Hgfish) divided by the water Hg concentration (Hgwater) and, consequently, are sensitive to sampling and analysis artifacts for fish and water. We evaluated the influence of water sample timing, filtration, and mercury species on the modeled relation between game fish and water mercury concentrations across 11 streams and rivers in five states in order to identify optimum Hgwater sampling approaches. Each model included fish trophic position, to account for a wide range of species collected among sites, and flow-weighted Hgwater estimates. Models were evaluated for parsimony, using Akaike’s Information Criterion. Better models included filtered water methylmercury (FMeHg) or unfiltered water methylmercury (UMeHg), whereas filtered total mercury did not meet parsimony requirements. Models including mean annual FMeHg were superior to those with mean FMeHg calculated over shorter time periods throughout the year. FMeHg models including metrics of high concentrations (80th percentile and above) observed during the year performed better, in general. These higher concentrations occurred most often during the growing season at all sites. Streamflow was significantly related to the probability of achieving higher concentrations during the growing season at six sites, but the direction of influence varied among sites. These findings indicate that streamwater Hg collection can be optimized by evaluating site-specific FMeHg - UMeHg relations, intra-annual temporal variation in their concentrations, and streamflow-Hg dynamics.
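The two quantities in the BAF definition can be sketched directly, with the water term taken as a flow-weighted mean as in the models above; the units and example concentrations are illustrative assumptions:

```python
def flow_weighted_mean(concs, flows):
    """Flow-weighted mean water concentration: sum(C_i * Q_i) / sum(Q_i)."""
    return sum(c * q for c, q in zip(concs, flows)) / sum(flows)

def bioaccumulation_factor(hg_fish, hg_water):
    # BAF = fish Hg concentration / water Hg concentration.
    # Units are an assumption: fish in ng/g, water in ng/L -> BAF in L/g.
    return hg_fish / hg_water

# Hypothetical monthly filtered methylmercury (FMeHg, ng/L) and streamflow (m^3/s);
# note the higher concentrations during the growing season, as observed in the study.
fmehg = [0.05, 0.06, 0.12, 0.20, 0.18, 0.10]
flow = [2.0, 2.5, 4.0, 5.0, 3.0, 2.0]
hg_water = flow_weighted_mean(fmehg, flow)
baf = bioaccumulation_factor(hg_fish=250.0, hg_water=hg_water)
```

Because the flow weighting emphasizes high-discharge periods, the choice of sampling times relative to the growing season directly shifts hg_water and hence the BAF.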
Energy Technology Data Exchange (ETDEWEB)
Kim, Chung Ho; O, Joo Hyun; Chung, Yong An; Yoo, Le Ryung; Sohn, Hyung Sun; Kim, Sung Hoon; Chung, Soo Kyo; Lee, Hyoung Koo [Catholic University of Korea, Seoul (Korea, Republic of)
2006-02-15
To determine the appropriate sampling frequency and times for the multiple-blood-sample dual exponential method with ⁹⁹ᵐTc-DTPA for calculating glomerular filtration rate (GFR), thirty-four patients were included in this study. Three mCi of ⁹⁹ᵐTc-DTPA was intravenously injected and blood samples, 5 ml each, were drawn at 9 different times. Using the radioactivity of serum, measured by a gamma counter, the GFR was calculated using the dual exponential method and corrected for body surface area. Using 2 chosen data points of serum radioactivity, 15 combinations of 2-sample GFR were calculated, along with 10 combinations of 3-sample GFR and 12 combinations of 4-sample GFR. Using the 9-sample GFR as a reference value, the degree of agreement was analyzed with Kendall's τ correlation coefficients, mean difference and standard deviation. Although some of the 2-sample GFRs showed high correlation coefficients, over- or underestimation evolved as the renal function changed. The 10-120-240 min 3-sample GFR showed a high correlation coefficient (τ = 0.93), minimal difference (mean ± SD = -1.784 ± 3.972), and no over- or underestimation as the renal function changed. The 4-sample GFR showed no better accuracy than the 3-sample GFR. Over the wide spectrum of renal function, the 10-120-240 min 3-sample GFR could be the best choice for estimating the patients' renal function.
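A dual (bi-)exponential fit can be sketched with classic curve stripping: fit the terminal phase log-linearly, subtract it, fit the residual fast phase, then GFR = dose / AUC with AUC = A/α + B/β. This is a generic sketch under those textbook assumptions, not the authors' procedure (no body-surface-area correction, synthetic data):

```python
import math

def fit_loglinear(times, counts):
    """Least-squares line through (t, ln c); returns (coeff, rate) for c(t) = coeff * e^{-rate t}."""
    xs, ys = list(times), [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return math.exp(my - slope * mx), -slope

def gfr_dual_exponential(times, counts, dose, n_late=3):
    """Curve-stripping fit of c(t) = A e^{-a t} + B e^{-b t}; GFR = dose / AUC, AUC = A/a + B/b."""
    B, b = fit_loglinear(times[-n_late:], counts[-n_late:])        # slow (terminal) phase
    resid = [(t, c - B * math.exp(-b * t)) for t, c in zip(times[:-n_late], counts[:-n_late])]
    resid = [(t, r) for t, r in resid if r > 0]                    # strip the slow phase
    A, a = fit_loglinear([t for t, _ in resid], [r for _, r in resid])
    return dose / (A / a + B / b)

# Synthetic check: c(t) = 50 e^{-0.05 t} + 10 e^{-0.005 t}, so true AUC = 3000
times = [10, 20, 30, 60, 120, 180, 240]  # minutes after injection
counts = [50 * math.exp(-0.05 * t) + 10 * math.exp(-0.005 * t) for t in times]
gfr = gfr_dual_exponential(times, counts, dose=300000.0)  # recovers roughly dose/3000 = 100
```

The choice of which samples feed the terminal versus fast phase is exactly the sampling-time question the study addresses.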
Sample sizing of biological materials analyzed by energy dispersion X-ray fluorescence
Energy Technology Data Exchange (ETDEWEB)
Paiva, Jose D.S.; Franca, Elvis J.; Magalhaes, Marcelo R.L.; Almeida, Marcio E.S.; Hazin, Clovis A., E-mail: dan-paiva@hotmail.com, E-mail: ejfranca@cnen.gov.br, E-mail: marcelo_rlm@hotmail.com, E-mail: maensoal@yahoo.com.br, E-mail: chazin@cnen.gov.b [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2013-07-01
Analytical portions used in chemical analyses are usually less than 1 g. Errors resulting from sampling are rarely evaluated, since this type of study is a time-consuming procedure with high costs for the chemical analysis of a large number of samples. Energy dispersion X-ray fluorescence (EDXRF) is a non-destructive and fast analytical technique capable of determining several chemical elements. Therefore, the aim of this study was to provide information on the minimum analytical portion for quantification of chemical elements in biological matrices using EDXRF. Three species were sampled in mangroves of Pernambuco, Brazil. Tree leaves were washed with distilled water, oven-dried at 60 °C and milled to a 0.5 mm particle size. Ten test portions of approximately 500 mg for each species were transferred to vials sealed with polypropylene film. The quality of the analytical procedure was evaluated using the reference materials IAEA V10 Hay Powder and SRM 2976 Apple Leaves. After energy calibration, all samples were analyzed under vacuum for 100 seconds for each group of chemical elements. The voltage used was 15 kV for chemical elements of atomic number lower than 22 and 50 kV for the others. Under the best analytical conditions, EDXRF was capable of estimating the sample size uncertainty for further determination of chemical elements in leaves. (author)
A Complete Sample of Megaparsec Size Double Radio Sources from SUMSS
Saripalli, L; Subramanian, R; Boyce, E
2005-01-01
We present a complete sample of megaparsec-size double radio sources compiled from the Sydney University Molonglo Sky Survey (SUMSS). Almost complete redshift information has been obtained for the sample. The sample has the following defining criteria: Galactic latitude |b| > 12.5 deg, declination 5 arcmin. All the sources have projected linear size larger than 0.7 Mpc (assuming H_o = 71 km/s/Mpc). The sample is chosen from a region of the sky covering 2100 square degrees. In this paper, we present 843-MHz radio images of the extended radio morphologies made using the Molonglo Observatory Synthesis Telescope (MOST), higher resolution radio observations of any compact radio structures using the Australia Telescope Compact Array (ATCA), and low resolution optical spectra of the host galaxies from the 2.3-m Australian National University (ANU) telescope at Siding Spring Observatory. The sample presented here is the first in the southern hemisphere and significantly enhances the database of known giant radio sou...
Vanessa Colombo-Corbi; Maria José Dellamano-Oliveira; Armando Augusto Henriques Vieira
2011-01-01
Glycolytic activities of eight enzymes in size-fractionated water samples from a eutrophic tropical reservoir are presented in this study, including enzymes assayed for the first time in a freshwater environment. Among these enzymes, rhamnosidase, arabinosidase and fucosidase presented high activity in the free-living fraction, while glucosidase, mannosidase and galactosidase exhibited high activity in the attached fraction. The low activity registered for rhamnosidase, arabinosidase and fucosidase...
Jha, Anjani K.
Particulate materials are routinely handled in large quantities by industries such as agriculture, electronic, ceramic, chemical, cosmetic, fertilizer, food, nutraceutical, pharmaceutical, power, and powder metallurgy. These industries encounter segregation due to the difference in physical and mechanical properties of particulates. The general goal of this research was to study percolation segregation in multi-size and multi-component particulate mixtures, especially measurement, sampling, and modeling. A second generation primary segregation shear cell (PSSC-II), an industrial vibrator, a true cubical triaxial tester, and two samplers (triers) were used as primary test apparatuses for quantifying segregation and flowability, and for understanding and proposing strategies to mitigate segregation in particulates. Toward this end, percolation segregation in binary, ternary, and quaternary size mixtures for two particulate types, urea (spherical) and potash (angular), was studied. Three coarse size ranges 3,350-4,000 μm (mean size = 3,675 μm), 2,800-3,350 μm (3,075 μm), and 2,360-2,800 μm (2,580 μm) and three fines size ranges 2,000-2,360 μm (2,180 μm), 1,700-2,000 μm (1,850 μm), and 1,400-1,700 μm (1,550 μm) for angular-shaped and spherical-shaped particles were selected for tests. Since the fines size 1,550 μm of urea was not available in sufficient quantity, it was not included in tests. Percolation segregation in fertilizer bags was also tested at two vibration frequencies of 5 Hz and 7 Hz. The segregation and flowability of binary mixtures of urea under three equilibrium relative humidities (40%, 50%, and 60%) were also tested. Furthermore, solid fertilizer sampling was performed to compare samples obtained from triers of opening widths 12.7 mm and 19.1 mm and to determine size segregation in blend fertilizers. Based on experimental results, the normalized segregation rate (NSR) of binary mixtures was dependent on size ratio, mixing ratio
Kitao, Akio; Harada, Ryuhei; Nishihara, Yasutaka; Tran, Duy Phuoc
2016-12-01
Parallel Cascade Selection Molecular Dynamics (PaCS-MD) was proposed as an efficient conformational sampling method to investigate conformational transition pathways of proteins. In PaCS-MD, cycles of (i) selection of initial structures for multiple independent MD simulations and (ii) conformational sampling by those independent MD simulations are repeated until the sampling converges. The selection is conducted so that the protein conformation gradually approaches a target; this selection of snapshots is key to enhancing conformational changes, because it increases the probability of rare-event occurrence. Since the procedure of PaCS-MD is simple, no modification of MD programs is required: the selection of initial structures and the restart of the next cycle of MD simulations can be handled with relatively simple scripts and a straightforward implementation. Trajectories generated by PaCS-MD were further analyzed with the Markov state model (MSM), which enables calculation of the free energy landscape. The combination of PaCS-MD and MSM is reported in this work.
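The cycle structure described above is simple enough to sketch in a short script. The following is a minimal, hypothetical illustration (not the authors' implementation): the MD engine is replaced by a 1-D random walk, and snapshots are ranked by distance to a scalar target coordinate.

```python
import random

def run_md(x0, n_steps=100):
    """Stand-in for one short MD run: a 1-D random walk started at x0.
    Returns the list of sampled 'snapshots' (here, scalar coordinates)."""
    xs, x = [], x0
    for _ in range(n_steps):
        x += random.gauss(0.0, 0.1)
        xs.append(x)
    return xs

def pacs_md(starts, target, n_replicas=5, n_cycles=20, tol=1e-2):
    """Sketch of the PaCS-MD loop: run independent replicas, rank all
    snapshots by distance to the target, and reseed the next cycle from
    the closest ones, repeating until convergence."""
    for _ in range(n_cycles):
        snapshots = []
        for x0 in starts:
            snapshots.extend(run_md(x0))
        snapshots.sort(key=lambda x: abs(x - target))  # selection step
        starts = snapshots[:n_replicas]
        if abs(starts[0] - target) < tol:  # converged to the target
            break
    return starts

random.seed(0)
best = pacs_md(starts=[0.0] * 5, target=3.0)
```

Because each cycle restarts only from the snapshots nearest the target, the ensemble drifts toward rare configurations far faster than a single unbiased walk would.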
A contemporary decennial global Landsat sample of changing agricultural field sizes
White, Emma; Roy, David
2014-05-01
Agriculture has caused significant human-induced Land Cover Land Use (LCLU) change, with dramatic cropland expansion in the last century and significant increases in productivity over the past few decades. Satellite data have been used for agricultural applications including cropland distribution mapping, crop condition monitoring, crop production assessment and yield prediction. Satellite-based agricultural applications are less reliable when the sensor spatial resolution is coarse relative to the field size. However, to date, studies of agricultural field size distributions and their change have been limited, even though this information is needed to inform the design of agricultural satellite monitoring systems. Moreover, the size of agricultural fields is a fundamental description of rural landscapes and provides an insight into the drivers of rural LCLU change. In many parts of the world field sizes may have increased. Increasing field sizes cause a subsequent decrease in the number of fields and therefore decreased landscape spatial complexity, with impacts on biodiversity, habitat, soil erosion, plant-pollinator interactions, and the diffusion of herbicides, pesticides, disease pathogens, and pests. The Landsat series of satellites provides the longest record of global land observations, with 30 m observations available since 1982. Landsat data are used to examine contemporary field size changes in a period (1980 to 2010) when significant global agricultural changes have occurred. A multi-scale sampling approach is used to locate global hotspots of field size change by examination of a recent global agricultural yield map and a literature review. Nine hotspots are selected where significant field size change is apparent and where change has been driven by technological advancements (Argentina and U.S.), abrupt societal changes (Albania and Zimbabwe), government land use and agricultural policy changes (China, Malaysia, Brazil), and/or constrained by
Estimating the Size of a Large Network and its Communities from a Random Sample
Chen, Lin; Crawford, Forrest W
2016-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V;E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...
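A much simpler estimator than PULSE conveys the intuition of the problem. The sketch below is an assumption-laden method-of-moments illustration (not the authors' algorithm): under uniform sampling of m vertices, the expected in-sample degree of a sampled vertex is its total degree scaled by (m - 1)/(N - 1), which can be inverted to estimate N.

```python
import random

def estimate_population_size(sample, total_deg, insample_deg):
    """Method-of-moments estimate of the number of vertices N. Each sampled
    vertex v reports its total degree total_deg[v]; insample_deg[v] is its
    degree within the induced subgraph. Under uniform sampling of m vertices,
    E[insample_deg] ~= total_deg * (m - 1) / (N - 1)."""
    m = len(sample)
    return 1 + (m - 1) * sum(total_deg[v] for v in sample) / max(
        1, sum(insample_deg[v] for v in sample))

# toy check on an Erdos-Renyi graph with 400 vertices
random.seed(1)
N, p = 400, 0.05
adj = {v: set() for v in range(N)}
for u in range(N):
    for v in range(u + 1, N):
        if random.random() < p:
            adj[u].add(v)
            adj[v].add(u)
W = random.sample(range(N), 80)
Wset = set(W)
tot = {v: len(adj[v]) for v in W}
ins = {v: len(adj[v] & Wset) for v in W}
n_hat = estimate_population_size(W, tot, ins)
print(round(n_hat))  # close to the true N = 400
```

Unlike this toy estimator, PULSE also recovers per-community sizes by using the observed block memberships.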
A Web-based Simulator for Sample Size and Power Estimation in Animal Carcinogenicity Studies
Directory of Open Access Journals (Sweden)
Hojin Moon
2002-12-01
A Web-based statistical tool for sample size and power estimation in animal carcinogenicity studies is presented in this paper. It can be used to provide a design with sufficient power for detecting a dose-related trend in the occurrence of a tumor of interest when competing risks are present. The tumors of interest typically are occult tumors, for which the time to tumor onset is not directly observable. The tool is applicable to rodent tumorigenicity assays that have either a single terminal sacrifice or multiple (interval) sacrifices. The design is achieved by varying the sample size per group, the number of sacrifices, the number of animals sacrificed at each interval, if any, and the scheduled time points for sacrifice. Monte Carlo simulation is carried out in this tool to simulate experiments of rodent bioassays because no closed-form solution is available. The tool takes design parameters for sample size and power estimation as inputs through the World Wide Web. The core program is written in C and executed in the background; it communicates with the Web front end via a Component Object Model interface passing an Extensible Markup Language string. The proposed statistical tool is illustrated with an animal study in lung cancer prevention research.
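The core idea, estimating power as the fraction of simulated trials that reject the null hypothesis, can be illustrated without the occult-tumor machinery. The sketch below uses a simplified two-group comparison of tumor incidence with a pooled two-proportion z-test; the incidence rates and group sizes are hypothetical.

```python
import math
import random

def simulate_power(n_per_group, p_control, p_treated, n_sim=2000):
    """Monte Carlo power for a two-group comparison of tumor incidence using
    a pooled two-proportion z-test: the estimated power is the fraction of
    simulated experiments in which the null hypothesis is rejected."""
    z_crit = 1.959964  # two-sided 5% critical value
    rejections = 0
    for _ in range(n_sim):
        x1 = sum(random.random() < p_control for _ in range(n_per_group))
        x2 = sum(random.random() < p_treated for _ in range(n_per_group))
        p1, p2 = x1 / n_per_group, x2 / n_per_group
        pool = (x1 + x2) / (2 * n_per_group)
        se = math.sqrt(2 * pool * (1 - pool) / n_per_group)
        if se > 0 and abs(p1 - p2) / se > z_crit:
            rejections += 1
    return rejections / n_sim

random.seed(0)
power = simulate_power(50, p_control=0.10, p_treated=0.35)
print(power)  # roughly 0.85 for this scenario
```

In the Web tool the same loop is driven by design parameters (sacrifice schedule, group sizes) and a far richer event model, but the power estimate is still a rejection frequency.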
Directory of Open Access Journals (Sweden)
Shengyu Jiang
2016-02-01
Likert-type rating scales, in which a respondent chooses a response from an ordered set of response options, are used to measure a wide variety of psychological, educational, and medical outcome variables. The most appropriate item response theory model for analyzing and scoring these instruments when they provide scores on multiple scales is the multidimensional graded response model (MGRM). A simulation study was conducted to investigate the variables that might affect item parameter recovery for the MGRM. Data were generated based on different sample sizes, test lengths, and scale intercorrelations. Parameter estimates were obtained through the flexMIRT software. The quality of parameter recovery was assessed by the correlation between true and estimated parameters as well as by bias and root-mean-square error. Results indicated that for the vast majority of cases studied a sample size of N = 500 provided accurate parameter estimates, except for tests with 240 items, for which 1,000 examinees were necessary. Increasing sample size beyond N = 1,000 did not increase the accuracy of MGRM parameter estimates.
SAMPLE SIZE DETERMINATION IN NON-RANDOMIZED SURVIVAL STUDIES WITH NON-CENSORED AND CENSORED DATA
Directory of Open Access Journals (Sweden)
S FAGHIHZADEH
2003-06-01
Introduction: In survival analysis, determining a sample size sufficient to achieve suitable statistical power is important. In both parametric and non-parametric methods of classical statistics, random selection of samples is a basic condition; in practice, however, random allocation is impossible in most clinical trials and health surveys. Fixed-effect multiple linear regression analysis covers this need, and this feature can be extended to survival regression analysis. This paper presents sample size determination for non-randomized survival analysis with censored and non-censored data. Methods: In non-randomized survival studies, linear regression with a fixed-effect variable can be used; such a regression is the conditional expectation of the dependent variable given the independent variable. By constructing the likelihood function with an exponential hazard, using a binary variable for the allocation of each subject to one of the two comparison groups, and expressing the variance of the fixed-effect coefficient through the coefficient of determination, sample size formulas are obtained for both censored and non-censored data. Estimation of sample size is therefore not based on a single independent variable alone; the required power can be attained for a test adjusted for the effects of the other explanatory covariates. Since the asymptotic distribution of the likelihood estimator of the parameter is normal, we obtained the variance of the regression coefficient estimator and, by expressing the variance of the fixed-effect regression coefficient through the coefficient of determination, derived formulas for sample size determination with both censored and non-censored data. Results: In non-randomized survival analysis, to compare the hazard rates of two groups without censored data, we obtained estimates of the coefficient of determination, risk ratio, and proportion of membership in each group and their variances from
Paper coatings with multi-scale roughness evaluated at different sampling sizes
Energy Technology Data Exchange (ETDEWEB)
Samyn, Pieter, E-mail: Pieter.Samyn@UGent.be [Ghent University - Department of Textiles, Technologiepark 907, B-9052 Zwijnaarde (Belgium); Van Erps, Juergen; Thienpont, Hugo [Vrije Universiteit Brussels - Department of Applied Physics and Photonics, Pleinlaan 2, B-1050 Brussels (Belgium); Schoukens, Gustaaf [Ghent University - Department of Textiles, Technologiepark 907, B-9052 Zwijnaarde (Belgium)
2011-04-15
Papers have a complex hierarchical structure, and end-user functionalities such as hydrophobicity are controlled by a finishing layer. The application of an organic nanoparticle coating and drying of the aqueous dispersion results in a unique surface morphology with microscale domains that are internally patterned with nanoparticles. Better understanding of the multi-scale surface roughness patterns is obtained by monitoring the topography with non-contact profilometry (NCP) and atomic force microscopy (AFM) at different sampling areas ranging from 2000 μm x 2000 μm down to 0.5 μm x 0.5 μm. The statistical roughness parameters are uniquely related to each other across the different measuring techniques and sampling sizes, as they are purely statistically determined. However, they cannot be directly extrapolated over the different sampling areas, as they represent transitions at the nano-, micro-to-nano, and microscale levels. Therefore, the spatial roughness parameters, including the correlation length and the specific frequency bandwidth, should be taken into account for each measurement; both allow for direct correlation of roughness data at different sampling sizes.
Subspace Leakage Analysis and Improved DOA Estimation With Small Sample Size
Shaghaghi, Mahdi; Vorobyov, Sergiy A.
2015-06-01
Classical methods of DOA estimation such as the MUSIC algorithm are based on estimating the signal and noise subspaces from the sample covariance matrix. For a small number of samples, such methods are exposed to performance breakdown, as the sample covariance matrix can deviate substantially from the true covariance matrix. In this paper, the problem of DOA estimation performance breakdown is investigated. We consider the structure of the sample covariance matrix and the dynamics of the root-MUSIC algorithm. The performance breakdown in the threshold region is associated with subspace leakage, where some portion of the true signal subspace resides in the estimated noise subspace. In this paper, the subspace leakage is theoretically derived. We also propose a two-step method which improves performance by modifying the sample covariance matrix so that the amount of subspace leakage is reduced. Furthermore, we introduce a phenomenon termed root-swap, which occurs in the root-MUSIC algorithm in the low-sample-size region and degrades the performance of DOA estimation. A new method is then proposed to alleviate this problem. Numerical examples and simulation results are given for uncorrelated and correlated sources to illustrate the improvement achieved by the proposed methods. Moreover, the proposed algorithms are combined with the pseudo-noise resampling method to further improve performance.
Effect of sample size on the fluid flow through a single fractured granitoid
Directory of Open Access Journals (Sweden)
Kunal Kumar Singh
2016-06-01
Most deep geological engineered structures, such as rock caverns, nuclear waste disposal repositories, metro rail tunnels, and multi-layer underground parking, are constructed within hard crystalline rocks because of their high quality and low matrix permeability. In such rocks, fluid flows mainly through fractures, so quantification of fractures, along with the behavior of fluid flow through them at different scales, becomes quite important. Earlier studies have revealed the influence of sample size on the confining stress-permeability relationship and have demonstrated that the permeability of a fractured rock mass decreases with an increase in sample size. However, most researchers have employed numerical simulations to model fluid flow through the fracture/fracture network, or laboratory investigations on intact rock samples with diameters ranging between 38 mm and 45 cm and a diameter-to-length ratio of 1:2, using different experimental methods. Also, the confining stress, σ3, has been kept below 30 MPa and the effect of fracture roughness has been ignored. The present study extends the previous work on "laboratory simulation of flow through single fractured granite": consistent fluid flow experiments were performed on cylindrical granitoid samples of two different sizes (38 mm and 54 mm in diameter) containing a "rough-walled single fracture". These experiments were performed under varied confining pressure (σ3 = 5-40 MPa), fluid pressure (fp ≤ 25 MPa), and fracture roughness. The results indicate that a nonlinear relationship exists between the discharge, Q, and the effective confining pressure, σeff, and that Q decreases with an increase in σeff. Also, the effects of sample size and fracture roughness do not persist when σeff ≥ 20 MPa. It is expected that such a study will be quite useful in correlating and extrapolating the laboratory
Saccenti, Edoardo; Timmerman, Marieke E
2016-08-01
Sample size determination is a fundamental step in the design of experiments. Methods for sample size determination are abundant for univariate analysis methods, but scarce in the multivariate case. Omics data are multivariate in nature and are commonly investigated using multivariate statistical methods, such as principal component analysis (PCA) and partial least-squares discriminant analysis (PLS-DA). No simple approaches to sample size determination exist for PCA and PLS-DA. In this paper we will introduce important concepts and offer strategies for (minimally) required sample size estimation when planning experiments to be analyzed using PCA and/or PLS-DA.
Laczo, Roxanne M; Sackett, Paul R; Bobko, Philip; Cortina, José M
2005-07-01
The authors discuss potential confusion in conducting primary studies and meta-analyses on the basis of differences between groups. First, the authors show that a formula for the sampling error of the standardized mean difference (d) that is based on equal group sample sizes can produce substantially biased results if applied with markedly unequal group sizes. Second, the authors show that the same concerns are present when primary analyses or meta-analyses are conducted with point-biserial correlations, as the point-biserial correlation (r) is a transformation of d. Third, the authors examine the practice of correcting a point-biserial r for unequal sample sizes and note that such correction would also increase the sampling error of the corrected r. Correcting rs for unequal sample sizes, but using the standard formula for sampling error in uncorrected r, can result in bias. The authors offer a set of recommendations for conducting meta-analyses of group differences.
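The bias the authors describe is easy to reproduce numerically. Using the standard large-sample variance of d, the sketch below contrasts the correct unequal-n formula with the same formula evaluated as if the groups were equal; the group sizes and effect size are illustrative.

```python
def var_d_unequal(d, n1, n2):
    """Large-sample variance of the standardized mean difference d,
    valid for unequal group sizes n1 and n2."""
    return (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))

def var_d_equal_n(d, n_total):
    """The same formula evaluated as if both groups held n_total/2 cases,
    i.e., the equal-n shortcut applied to a total sample size."""
    return var_d_unequal(d, n_total / 2, n_total / 2)

# with a 180/20 split, the equal-n shortcut understates the sampling
# variance of d by a factor of almost three
d, n1, n2 = 0.5, 180, 20
print(var_d_unequal(d, n1, n2))   # ~0.0562
print(var_d_equal_n(d, n1 + n2))  # ~0.0206
```

The understated variance translates directly into overconfident meta-analytic weights for studies with markedly unbalanced groups.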
Adjustable virtual pore-size filter for automated sample preparation using acoustic radiation force
Energy Technology Data Exchange (ETDEWEB)
Jung, B; Fisher, K; Ness, K; Rose, K; Mariella, R
2008-05-22
We present a rapid and robust size-based separation method for high-throughput microfluidic devices using acoustic radiation force. We developed a finite element modeling tool to predict the two-dimensional acoustic radiation force field perpendicular to the flow direction in microfluidic devices. Here we compare the results from this model with experimental parametric studies, including variations of the PZT driving frequencies and voltages as well as various particle sizes, compressibilities, and densities. These experimental parametric studies also provide insight into the development of an adjustable 'virtual' pore-size filter as well as optimal operating conditions for various microparticle sizes. We demonstrated the separation of Saccharomyces cerevisiae and MS2 bacteriophage using acoustic focusing. The acoustic radiation force did not affect the MS2 viruses, and their concentration profile remained unchanged. With an optimized design of our microfluidic flow system we were able to achieve yields of > 90% for the MS2, with > 80% of the S. cerevisiae being removed in this continuous-flow sample preparation device.
Efficient adaptive designs with mid-course sample size adjustment in clinical trials
Bartroff, Jay
2011-01-01
Adaptive designs have been proposed for clinical trials in which the nuisance parameters or alternative of interest are unknown or likely to be misspecified before the trial. Whereas most previous works on adaptive designs and mid-course sample size re-estimation have focused on two-stage or group sequential designs in the normal case, we consider here a new approach that involves at most three stages and is developed in the general framework of multiparameter exponential families. Not only does this approach maintain the prescribed type I error probability, but it also provides a simple but asymptotically efficient sequential test whose finite-sample performance, measured in terms of the expected sample size and power functions, is shown to be comparable to the optimal sequential design, determined by dynamic programming, in the simplified normal mean case with known variance and prespecified alternative, and superior to the existing two-stage designs and also to adaptive group sequential designs when the al...
Autoregressive Prediction with Rolling Mechanism for Time Series Forecasting with Small Sample Size
Directory of Open Access Journals (Sweden)
Zhihua Wang
2014-01-01
Reasonable prediction is of significant practical value for stochastic and unstable time series analysis with small or limited sample size. Motivated by the rolling idea in grey theory and the practical relevance of very short-term forecasting, or 1-step-ahead prediction, a novel autoregressive (AR) prediction approach with a rolling mechanism is proposed. In the modeling procedure, a newly developed AR equation, which can be used to model nonstationary time series, is constructed in each prediction step. Meanwhile, the data window for the next step-ahead forecast rolls forward, adding the most recently derived prediction result while deleting the first value of the previously used sample data set. This rolling mechanism is an efficient technique, with the advantages of improved forecasting accuracy, applicability to limited and unstable data situations, and little computational effort. The general performance, the influence of sample size, the nonlinear dynamic mechanism, and the significance of the observed trends, as well as the innovation variance, are illustrated and verified with Monte Carlo simulations. The proposed methodology is then applied to several practical data sets, including multiple building settlement sequences and two economic series.
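A minimal sketch of the rolling mechanism, assuming an AR(1) model fitted by ordinary least squares (the paper's AR equation is more general), looks like this:

```python
def fit_ar1(series):
    """Ordinary least-squares AR(1) fit: x[t] = a + b * x[t-1] + e[t]."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def rolling_ar_forecast(series, window, horizon):
    """Rolling-mechanism multi-step forecasting: after each 1-step-ahead
    prediction, append the forecast and drop the oldest value, so the
    data window keeps a fixed, small size."""
    data = list(series[-window:])
    preds = []
    for _ in range(horizon):
        a, b = fit_ar1(data)
        nxt = a + b * data[-1]
        preds.append(nxt)
        data = data[1:] + [nxt]  # roll the window forward
    return preds

# a small, gently trending sample, loosely like a settlement sequence
series = [10.2, 10.8, 11.1, 11.9, 12.4, 13.0, 13.3, 14.1]
preds = rolling_ar_forecast(series, window=8, horizon=3)
print(preds)
```

Refitting inside every step is what lets the scheme track nonstationary behavior even with only a handful of observations.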
Evaluating the performance of species richness estimators: sensitivity to sample grain size
DEFF Research Database (Denmark)
Hortal, Joaquín; Borges, Paulo A. V.; Gaspar, Clara
2006-01-01
Fifteen species richness estimators (three asymptotic estimators based on species accumulation curves, 11 nonparametric estimators, and one based on the species-area relationship) were compared by examining their performance in estimating the total species richness of epigean arthropods in the Azorean Laurisilva forests … different sampling units on species richness estimations. 2. Estimated species richness scores depended both on the estimator considered and on the grain size used to aggregate data. However, several estimators (ACE, Chao1, Jackknife 1 and 2, and Bootstrap) were precise in spite of grain variations. Weibull … that species richness estimations coming from small grain sizes can be directly compared, and other estimators could give more precise results in those cases. We propose a decision framework, based on our results and on the literature, to assess which estimator should be used to compare species richness scores …
Energy Technology Data Exchange (ETDEWEB)
Sipola, Petri [Kuopio University Hospital, Department of Clinical Radiology, Kuopio (Finland); University of Eastern Finland, Institute of Clinical Medicine, Faculty of Health Sciences, Kuopio (Finland); Niemitukia, Lea H. [Kuopio University Hospital, Department of Clinical Radiology, Kuopio (Finland); Hyttinen, Mika M. [University of Eastern Finland, Institute of Biomedicine, Anatomy, Kuopio (Finland); Arokoski, Jari P.A. [Kuopio University Hospital, Department of Physical and Rehabilitation Medicine, Kuopio (Finland)
2011-04-15
To determine the number of participants required in controlled clinical trials investigating the progression of osteoarthritis (OA) of the hip as evaluated by the joint space width (JSW) on radiographs and to evaluate the reproducibility of the JSW measurement methods. Anteroposterior radiographs of hip were taken from 13 healthy volunteers and from 18 subjects with radiographic hip OA. The reproducibility of the JSW was determined from four segments using digital caliper measurements performed on film radiographs and using semiautomatic computerized image analysis of digitized images. Pearson correlation coefficient, coefficient of variability [CV (%)], and sample size values were calculated. It was found that 20 was a typical number of patients for a sufficiently powered study. The highest sample size was found in subjects with OA in the lateral segment. The reproducibility of the semiautomatic computerized method was not significantly better than the digital caliper method. The number of study subjects required to detect a significant joint space narrowing in follow-up studies is influenced by the baseline hip joint OA severity. The JSW measurements with computerized image analysis did not improve the reproducibility and thus performing JSW measurements with a digital caliper is acceptable. (orig.)
An In Situ Method for Sizing Insoluble Residues in Precipitation and Other Aqueous Samples.
Axson, Jessica L; Creamean, Jessie M; Bondy, Amy L; Capracotta, Sonja S; Warner, Katy Y; Ault, Andrew P
2015-01-01
Particles are frequently incorporated into clouds or precipitation, influencing climate by acting as cloud condensation or ice nuclei, taking up coatings during cloud processing, and removing species through wet deposition. Many of these particles, particularly ice nuclei, can remain suspended within cloud droplets/crystals as insoluble residues. While previous studies have measured the soluble or bulk mass of species within clouds and precipitation, no studies to date have determined the number concentration and size distribution of insoluble residues in precipitation or cloud water using in situ methods. Herein, for the first time we demonstrate that Nanoparticle Tracking Analysis (NTA) is a powerful in situ method for determining the total number concentration, number size distribution, and surface area distribution of insoluble residues in precipitation, both of rain and melted snow. The method uses 500 μL or less of liquid sample and does not require sample modification. Number concentrations for the insoluble residues in aqueous precipitation samples ranged from 2.0-3.0(±0.3)×10^8 particles cm^-3, while surface areas ranged from 1.8(±0.7)-3.2(±1.0)×10^7 μm^2 cm^-3. Number size distributions peaked between 133 and 150 nm, with both single- and multi-modal character, while surface area distributions peaked between 173 and 270 nm. Comparison with electron microscopy of particles up to 10 μm shows that, by number, > 97% of residues are < 1 μm in diameter, the upper limit of the NTA. The range of concentration and distribution properties indicates that insoluble residue properties vary with ambient aerosol concentrations, cloud microphysics, and meteorological dynamics. NTA has great potential for studying the role that insoluble residues play in critical atmospheric processes.
A Bayesian cost-benefit approach to the determination of sample size in clinical trials.
Kikuchi, Takashi; Pezeshk, Hamid; Gittins, John
2008-01-15
Current practice for sample size computations in clinical trials is largely based on frequentist or classical methods. These methods have the drawback of requiring a point estimate of the variance of the treatment effect and are based on arbitrary settings of type I and II errors. They also do not directly address the question of achieving the best balance between the cost of the trial and the possible benefits from using the new treatment, and fail to consider the important fact that the number of users depends on the evidence for improvement compared with the current treatment. Our approach, Behavioural Bayes (or BeBay for short), assumes that the number of patients switching to the new medical treatment depends on the strength of the evidence that is provided by clinical trials, and takes a value between zero and the number of potential patients. The better a new treatment, the more the number of patients who want to switch to it and the more the benefit is obtained. We define the optimal sample size to be the sample size that maximizes the expected net benefit resulting from a clinical trial. Gittins and Pezeshk (Drug Inf. Control 2000; 34:355-363; The Statistician 2000; 49(2):177-187) used a simple form of benefit function and assumed paired comparisons between two medical treatments and that the variance of the treatment effect is known. We generalize this setting, by introducing a logistic benefit function, and by extending the more usual unpaired case, without assuming the variance to be known.
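The idea of choosing the sample size that maximizes expected net benefit can be sketched with a toy model (not the authors' BeBay formulation): uptake of the new treatment is a logistic function of the expected evidence strength, trial cost is linear in the number of subjects, and all constants are illustrative.

```python
import math

def expected_net_benefit(n, delta=0.3, sigma=1.0, pool=10000,
                         benefit_per_adopter=100.0, cost_per_subject=500.0):
    """Toy expected net benefit of a trial with n subjects per arm: the
    number of adopters follows a logistic function of the expected
    z-statistic (evidence strength), and trial cost is linear in n."""
    z = delta / (sigma * math.sqrt(2.0 / n))        # expected z-statistic
    adopters = pool / (1.0 + math.exp(-(z - 2.0)))  # logistic uptake
    return adopters * benefit_per_adopter - 2 * n * cost_per_subject

# the optimal n balances stronger evidence against trial cost
best_n = max(range(10, 2001, 10), key=expected_net_benefit)
print(best_n)  # an interior optimum, not "as large as affordable"
```

Unlike fixed type I/II error targets, this criterion naturally stops increasing n once extra evidence no longer wins enough additional adopters to pay for itself.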
Zheng, Yi; Hu, Junqiang; Lin, Qiao
2012-01-01
Electrohydrodynamic (EHD) generation, a commonly used method in BioMEMS, has played a significant role in pulsed-release drug delivery systems for the past decade. In this paper, an EHD-based drug delivery system is designed that can generate a single drug droplet as small as 2.83 nL in 8.5 ms with a total device size of 2 x 2 x 3 mm^3 and an externally supplied voltage of 1500 V. Theoretically, we derive expressions for the size and formation time of a droplet generated by the EHD method, taking into account the drug supply rate, the properties of the liquid, the gap between electrodes, the nozzle size, and charged-droplet neutralization. This work demonstrates a repeatable, stable, and controllable droplet generation and delivery system based on the EHD method.
DEFF Research Database (Denmark)
Stevens, Thomas; Lu, HY
2009-01-01
Understanding loess sedimentation rates is crucial for constraining past atmospheric dust dynamics, regional climatic change and local depositional environments. However, the derivation of loess sedimentation rates is complicated by the lack of available methods for independent calculation … (i) the influences on sediment grain-size and accumulation; and (ii) their relationship through time and across the depositional region. This uncertainty has led to the widespread use of assumptions concerning the relationship between sedimentation rate and grain-size in order to derive age models and climate …
Origin of sample size effect: Stochastic dislocation formation in crystalline metals at small scales
Huang, Guan-Rong; Huang, J. C.; Tsai, W. Y.
2016-12-01
In crystalline metals at small scales, the dislocation density is increased by stochastic events in the dislocation network, leading to a universal power law across various material structures. In this work, we develop a model for the probability distribution of dislocation density that describes dislocation formation in terms of a chain reaction. The leading-order terms of the steady-state probability distribution give physical and quantitative insight into the scaling exponent n in the power law of the sample size effect. This approach is found to be consistent with experimental n values over a wide range.
Magnetic response and critical current properties of mesoscopic-size YBCO superconducting samples
Energy Technology Data Exchange (ETDEWEB)
Lisboa-Filho, P N [UNESP - Universidade Estadual Paulista, Grupo de Materiais Avancados, Departamento de Fisica, Bauru (Brazil); Deimling, C V; Ortiz, W A, E-mail: plisboa@fc.unesp.b [Grupo de Supercondutividade e Magnetismo, Departamento de Fisica, Universidade Federal de Sao Carlos, Sao Carlos (Brazil)
2010-01-15
In this contribution, superconducting specimens of YBa2Cu3O7-δ were synthesized by a modified polymeric precursor method, yielding a ceramic powder with particles of mesoscopic size. Samples of this powder were then pressed into pellets and sintered under different conditions. The critical current density was analyzed by isothermal AC-susceptibility measurements as a function of the excitation field, as well as with isothermal DC-magnetization runs at different values of the applied field. Relevant features of the magnetic response could be associated with the microstructure of the specimens and, in particular, with the superconducting intra- and intergranular critical current properties.
Pore size distribution calculation from 1H NMR signal and N2 adsorption-desorption techniques
Hassan, Jamal
2012-09-01
The pore size distribution (PSD) of the nano-material MCM-41 is determined using two different approaches: N2 adsorption-desorption and the 1H NMR signal of water confined in the silica nano-pores of MCM-41. The first approach is based on the recently modified Kelvin equation [J.V. Rocha, D. Barrera, K. Sapag, Top. Catal. 54 (2011) 121-134], which deals with the known underestimation of pore size distribution for mesoporous materials such as MCM-41 by introducing a correction factor into the classical Kelvin equation. The second method employs the Gibbs-Thomson equation, using NMR, for the melting point depression of a liquid in confined geometries. The results show that both approaches give broadly similar pore size distributions, and that the NMR technique can be considered an alternative direct method for obtaining quantitative results, especially for mesoporous materials. The pore diameter estimated for the nano-material used in this study was about 35 Å with the modified Kelvin method and 38 Å with the NMR method. A comparison between these methods and the classical Kelvin equation is also presented.
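The NMR approach rests on the Gibbs-Thomson relation, in which the melting-point depression of the confined liquid is inversely proportional to pore size. A minimal sketch, with an illustrative (not calibrated) Gibbs-Thomson constant:

```python
def pore_diameter_nm(delta_t_kelvin, k_gt=50.0):
    """Gibbs-Thomson melting-point depression of a confined liquid:
    delta_T = k_gt / d, so d = k_gt / delta_T. The constant k_gt (K*nm)
    is illustrative only; in practice it is calibrated for the particular
    liquid/pore-surface system."""
    return k_gt / delta_t_kelvin

# a 14 K depression maps to ~3.6 nm (36 A) pores with this k_gt
d_nm = pore_diameter_nm(14.0)
print(round(d_nm, 2))
```

Scanning the measured NMR signal across temperature and applying this relation point by point is what converts a melting curve into a pore size distribution.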
Garamszegi, László Z; Møller, Anders P
2010-11-01
Comparative analyses aim to explain interspecific variation in phenotype among taxa. In this context, phylogenetic approaches are generally applied to control for similarity due to common descent, because such phylogenetic relationships can produce spurious similarity in phenotypes (known as phylogenetic inertia or bias). On the other hand, these analyses largely ignore potential biases due to within-species variation. Phylogenetic comparative studies inherently assume that species-specific means from intraspecific samples of modest sample size are biologically meaningful. However, within-species variation is often significant, because measurement errors, within- and between-individual variation, seasonal fluctuations, and differences among populations can all reduce the repeatability of a trait. Although simulations revealed that low repeatability can increase the type I error in a phylogenetic study, researchers only exercise great care in accounting for similarity in phenotype due to common phylogenetic descent, while problems posed by intraspecific variation are usually neglected. A meta-analysis of 194 comparative analyses all adjusting for similarity due to common phylogenetic descent revealed that only a few studies reported intraspecific repeatabilities, and hardly any considered or partially dealt with errors arising from intraspecific variation. This is intriguing, because the meta-analytic data suggest that the effect of heterogeneous sampling can be as important as phylogenetic bias, and thus they should be equally controlled in comparative studies. We provide recommendations about how to handle such effects of heterogeneous sampling.
Bolton tooth size ratio among a Qatari population sample: An odontometric study
Hashim, Hayder A; AL-Sayed, Najah; AL-Hussain, Hashim
2017-01-01
Objectives: To establish the overall and anterior Bolton ratios in a sample of the Qatari population, to investigate whether there is a difference between males and females, and to compare the results with those obtained by Bolton. Materials and Methods: The current study included 100 orthodontic participants (50 males and 50 females) with different malocclusions and ages ranging between 15 and 20 years. An electronic digital caliper was used to measure the mesiodistal tooth width of all maxillary and mandibular permanent teeth except the second and third molars. Student's t-test was used to compare tooth-size ratios between males and females and between the results of the present study and Bolton's results. Results: The anterior and overall ratios in Qatari individuals were 78.6 ± 3.4 and 91.8 ± 3.1, respectively. The tooth size ratios were slightly greater in males than in females; however, the differences were not statistically significant (P > 0.05). There were no significant differences in the overall ratio between Qatari individuals and Bolton's results (P > 0.05), whereas statistically significant differences were observed in the anterior ratio (P = 0.007). Conclusions: Within the limitations of the present study, a definite conclusion was difficult to establish. Thus, a further study with a large sample in each malocclusion group is required. PMID:28197399
Johnson, David R; Bachan, Lauren K
2013-08-01
In a recent article, Regan, Lakhanpal, and Anguiano (2012) highlighted the lack of evidence for different relationship outcomes between arranged and love-based marriages. Yet the sample size (n = 58) used in the study is insufficient for making such inferences. This reply discusses and demonstrates how small sample sizes reduce the utility of this research.
Directory of Open Access Journals (Sweden)
Smedslund, Geir
2013-02-01
Background: Patient-reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome one to five times on the sample size required in a two-armed trial. Findings: In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the sample sizes necessary to obtain 80% power (α=0.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, from 96 to 71 (73) for fatigue, from 57 to 51 (48) for disease activity, from 59 to 44 (45) for self-care, and from 47 to 37 (33) for emotional wellbeing. Conclusions: Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
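The saving from averaging repeated measurements can be sketched with the standard variance-deflation argument: if a subject's k measurements have pairwise correlation ρ, the variance of their mean is scaled by (1 + (k-1)ρ)/k. The function below is an illustrative sketch of this textbook formula for a two-arm comparison of means, not the exact calculation used in the trial above; the effect size, SD, and ρ values in the usage note are hypothetical.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sd, rho, k, alpha=0.05, power=0.80):
    """Per-group sample size for a two-arm comparison of means when the
    endpoint is the average of k repeated measures with pairwise
    correlation rho (normal approximation)."""
    z = NormalDist().inv_cdf
    var_factor = (1 + (k - 1) * rho) / k  # variance deflation of the k-measure mean
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2 * var_factor
    return ceil(n)
```

For example, with a hypothetical effect of 0.5 SD, a single measure requires 63 subjects per group, while averaging five measures with ρ = 0.5 cuts this to 38, roughly a 40% saving; the 27% average saving reported above would correspond to a higher within-subject correlation.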
Sample size requirements and analysis of tag recoveries for paired releases of lake trout
Elrod, Joseph H.; Frank, Anthony
1990-01-01
A simple chi-square test can be used to analyze recoveries from a paired-release experiment to determine whether differential survival occurs between two groups of fish. The sample size required for analysis is a function of (1) the proportion of fish stocked, (2) the expected proportion at recovery, (3) the level of significance (α) at which the null hypothesis is tested, and (4) the power (1-β) of the statistical test. Detection of a 20% change from a stocking ratio of 50:50 requires a sample of 172 (α=0.10; 1-β=0.80) to 459 (α=0.01; 1-β=0.95) fish. Pooling samples from replicate pairs is sometimes an appropriate way to increase statistical precision without increasing numbers stocked or sampling intensity. Summing over time is appropriate if catchability or survival of the two groups of fish does not change relative to each other through time. Twelve pairs of identical groups of yearling lake trout Salvelinus namaycush were marked with coded wire tags and stocked into Lake Ontario. Recoveries of fish at ages 2-8 showed differences of 1-14% from the initial stocking ratios. Mean tag recovery rates were 0.217%, 0.156%, 0.128%, 0.121%, 0.093%, 0.042%, and 0.016% for ages 2-8, respectively. At these rates, stocking 12,100-29,700 fish per group would yield samples of 172-459 fish at ages 2-8 combined.
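The flavor of this calculation can be sketched with the standard normal-approximation sample size formula for testing whether a recovered proportion has shifted away from the stocked 50:50 ratio. This is an illustrative sketch, not the paper's exact chi-square formulation, and reading a "20% change" as a shift from 0.5 to 0.6 is an assumption.

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_for_proportion_shift(p0, p1, alpha=0.10, power=0.80):
    """Sample size for a two-sided one-sample test that the recovery
    proportion has shifted from p0 to p1 (normal approximation)."""
    z = NormalDist().inv_cdf
    za = z(1 - alpha / 2) * sqrt(p0 * (1 - p0))  # SD under the null
    zb = z(power) * sqrt(p1 * (1 - p1))          # SD under the alternative
    return ceil(((za + zb) / (p1 - p0)) ** 2)
```

Under this reading, n_for_proportion_shift(0.5, 0.6) gives a sample on the order of 150 fish, the same magnitude as the 172 quoted above; the difference reflects the chi-square formulation used in the paper.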
A cold finger cooling system for the efficient graphitisation of microgram-sized carbon samples
Yang, Bin; Smith, A. M.; Hua, Quan
2013-01-01
At ANSTO, we use the Bosch reaction to convert sample CO2 to graphite for production of our radiocarbon AMS targets. Key to the efficient graphitisation of ultra-small samples are the type of iron catalyst used and the effective trapping of water vapour during the reaction. Here we report a simple liquid nitrogen cooling system that enables us to rapidly adjust the temperature of the cold finger in our laser-heated microfurnace. This has led to an improvement in the graphitisation of microgram-sized carbon samples. This simple system uses modest amounts of liquid nitrogen (typically <200 mL/h during graphitisation) and is compact and reliable. We have used it to produce over 120 AMS targets containing between 5 and 20 μg of carbon, with conversion efficiencies for 5 μg targets ranging from 80% to 100%. In addition, this cooling system has been adapted for use with our conventional graphitisation reactors and has also improved their performance.
What about N? A methodological study of sample-size reporting in focus group studies
Directory of Open Access Journals (Sweden)
Glenton Claire
2011-03-01
Background: Focus group studies are increasingly published in health-related journals, but we know little about how researchers use this method, particularly how they determine the number of focus groups to conduct. The methodological literature commonly advises researchers to follow principles of data saturation, although practical advice on how to do this is lacking. Our objectives were, firstly, to describe the current status of sample size in focus group studies reported in health journals and, secondly, to assess whether and how researchers explain the number of focus groups they carry out. Methods: We searched PubMed for studies that had used focus groups and that had been published in open access journals during 2008, and extracted data on the number of focus groups and on any explanation authors gave for this number. We also made a qualitative assessment of the papers with regard to how the number of groups was explained and discussed. Results: We identified 220 papers published in 117 journals. In these papers, insufficient reporting of sample sizes was common. The number of focus groups conducted varied greatly (mean 8.4, median 5, range 1 to 96). Thirty-seven (17%) studies attempted to explain the number of groups. Six studies referred to rules of thumb in the literature, three stated that they were unable to organize more groups for practical reasons, while 28 studies stated that they had reached a point of saturation. Among those stating that they had reached a point of saturation, several appeared not to have followed principles from grounded theory, where data collection and analysis is an iterative process until saturation is reached. Studies with high numbers of focus groups did not offer explanations for the number of groups. Too much data as a study weakness was not an issue discussed in any of the reviewed papers. Conclusions: Based on these findings we suggest that journals adopt more stringent requirements for focus group method
Mori, Taizo; Hegmann, Torsten
2016-10-01
Size, shape, overall composition, and surface functionality largely determine the properties and applications of metal nanoparticles. Aside from well-defined metal clusters, their composition is often estimated assuming a quasi-spherical shape of the nanoparticle core. With decreasing diameter of the assumed circumscribed sphere, particularly in the range of only a few nanometers, the estimated nanoparticle composition increasingly deviates from the real composition, leading to significant discrepancies between anticipated and experimentally observed composition, properties, and characteristics. We here assembled a compendium of tables, models, and equations for thiol-protected gold nanoparticles that will allow experimental scientists to more accurately estimate the composition of their gold nanoparticles using TEM image analysis data. The estimates obtained from following the routines described here will then serve as a guide for further analytical characterization of as-synthesized gold nanoparticles by other bulk (thermal, structural, chemical, and compositional) and surface characterization techniques. While the tables, models, and equations are dedicated to gold nanoparticles, the composition of other metal nanoparticle cores with face-centered cubic lattices can easily be estimated simply by substituting the value for the radius of the metal atom of interest.
Sample Size Dependence of Second Magnetization Peak in Type-II Superconductors
Institute of Scientific and Technical Information of China (English)
无
2003-01-01
We show that the second magnetization peak (SMP), i.e., an increase in the magnetization hysteresis loop width in type-II superconductors, vanishes for samples smaller than a critical size. We argue that the SMP is not related to critical current enhancement but can be well explained within the framework of thermomagnetic flux-jump instability theory, where flux jumps reduce the absolute irreversible magnetization relative to the isothermal critical state value at low enough magnetic fields. The recovery of the isothermal critical state with increasing field leads to the SMP. The low-field SMP takes place in both low-Tc conventional and high-Tc unconventional superconductors. Our results show that the restoration of the isothermal critical state is responsible for the occurrence of the SMP in both cases.
Directory of Open Access Journals (Sweden)
Vanessa Colombo-Corbi
2011-06-01
Glycolytic activities of eight enzymes in size-fractionated water samples from a eutrophic tropical reservoir are presented in this study, including enzymes assayed for the first time in a freshwater environment. Among these enzymes, rhamnosidase, arabinosidase and fucosidase presented high activity in the free-living fraction, while glucosidase, mannosidase and galactosidase exhibited high activity in the attached fraction. The low activity registered for rhamnosidase, arabinosidase and fucosidase in the attached fraction seemed to contribute to the integrity of the aggregate, and on this basis a protective role for these structures was proposed. The enzyme profiles presented and the differences in relative activities probably reflect the organic matter composition as well as the metabolic requirements of the bacterial community, suggesting that bacteria attached to particulate matter have phenotypic traits distinct from those of free-living bacteria.
Analyzing insulin samples by size-exclusion chromatography: a column degradation study.
Teska, Brandon M; Kumar, Amit; Carpenter, John F; Wempe, Michael F
2015-04-01
Investigating insulin analogs and probing their intrinsic stability at physiological temperature, we observed significant degradation in the size-exclusion chromatography (SEC) signal over a moderate number of insulin sample injections, which generated concerns about the quality of the separations. Therefore, our research goal was to identify the cause(s) for the observed signal degradation and attempt to mitigate the degradation in order to extend SEC column lifespan. In these studies, we used multiangle light scattering, nuclear magnetic resonance, and gas chromatography-mass spectrometry methods to evaluate column degradation. The results from these studies illustrate: (1) that zinc ions introduced by the insulin product produced the observed column performance issues; and (2) that including ethylenediaminetetraacetic acid, a zinc chelator, in the mobile phase helped to maintain column performance.
Directory of Open Access Journals (Sweden)
Christian Damgaard
2011-12-01
Increasingly, survival rates in experimental ecology are presented using odds ratios or log response ratios, but the use of ratio metrics is problematic when all the individuals in a replicate have either died or survived. In the empirical ecological literature, the problem has often been ignored or circumvented by different, more or less ad hoc approaches. Here, it is argued that the best summary statistic for communicating ecological results of frequency data in studies with small unbalanced samples may be the mean of the posterior distribution of the survival rate. The developed approach may be particularly useful when effect size indexes, such as odds ratios, are needed to compare frequency data between treatments, sites or studies.
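The posterior-mean idea can be sketched with a Beta-binomial model: with a Beta(a, b) prior on the survival rate and x survivors out of n, the posterior is Beta(a + x, b + n - x), whose mean stays strictly between 0 and 1 even when x = 0 or x = n, unlike raw odds. A minimal sketch; the uniform-prior choice a = b = 1 is an assumption, not necessarily the paper's.

```python
def posterior_mean_survival(survived, n, a=1.0, b=1.0):
    """Mean of the Beta(a + survived, b + n - survived) posterior for a
    survival rate; a = b = 1 corresponds to a uniform prior."""
    return (a + survived) / (a + b + n)
```

With 5 survivors out of 5, the raw rate is 1.0 (infinite odds), while the posterior mean is 6/7 ≈ 0.857, which remains usable inside ratio-based effect sizes.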
Thrane, Susan; Cohen, Susan M
2014-12-01
The objective of this study was to calculate the effect of Reiki therapy for pain and anxiety in randomized clinical trials. A systematic search of PubMed, ProQuest, Cochrane, PsychInfo, CINAHL, Web of Science, Global Health, and Medline databases was conducted using the search terms pain, anxiety, and Reiki. The Center for Reiki Research also was examined for articles. Studies that used randomization and a control or usual care group, used Reiki therapy in one arm of the study, were published in 2000 or later in peer-reviewed journals in English, and measured pain or anxiety were included. After removing duplicates, 49 articles were examined and 12 articles received full review. Seven studies met the inclusion criteria: four articles studied cancer patients, one examined post-surgical patients, and two analyzed community dwelling older adults. Effect sizes were calculated for all studies using Cohen's d statistic. Effect sizes for within group differences ranged from d = 0.24 for decrease in anxiety in women undergoing breast biopsy to d = 2.08 for decreased pain in community dwelling adults. The between group differences ranged from d = 0.32 for decrease of pain in a Reiki versus rest intervention for cancer patients to d = 4.5 for decrease in pain in community dwelling adults. Although the number of studies is limited, based on the size Cohen's d statistics calculated in this review, there is evidence to suggest that Reiki therapy may be effective for pain and anxiety. Continued research using Reiki therapy with larger sample sizes, consistently randomized groups, and standardized treatment protocols is recommended.
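The within- and between-group effect sizes reported above follow Cohen's d, the mean difference scaled by the pooled standard deviation. A minimal sketch computed from summary statistics; the numbers in the usage line are hypothetical, not taken from the reviewed trials.

```python
from math import sqrt

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d for two groups from their means (m), standard
    deviations (s), and sizes (n), using the pooled SD."""
    pooled_sd = sqrt(((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd
```

For instance, cohens_d(10, 2, 20, 8, 2, 20) returns 1.0, conventionally read as a large effect; the 0.2/0.5/0.8 thresholds for small/medium/large effects are Cohen's conventions.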
Directory of Open Access Journals (Sweden)
W. Holmes Finch
2016-05-01
Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size, i.e., high-dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates exhibit very high variance and can therefore not be trusted, or because the statistical algorithm cannot converge on parameter estimates at all. There exists an alternative set of model estimation procedures, known collectively as regularization methods, which can be used in such circumstances and which have been shown through simulation research to yield accurate parameter estimates. The purpose of this paper is to describe, for those unfamiliar with them, the most popular of these regularization methods, the lasso, and to demonstrate its use on an actual high-dimensional dataset involving adults with autism, using the R software language. Results of analyses relating measures of executive functioning to a full-scale intelligence test score are presented, and implications of using these models are discussed.
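The lasso adds an L1 penalty to least squares, which drives many coefficients exactly to zero and so remains estimable when predictors outnumber observations. The paper works in R; as a language-neutral illustration, here is a small cyclic coordinate-descent sketch in Python (the toy data and penalty value below are assumptions for demonstration only):

```python
import numpy as np

def soft_threshold(z, gamma):
    # Soft-thresholding operator: the closed-form lasso update for one coordinate.
    return np.sign(z) * np.maximum(np.abs(z) - gamma, 0.0)

def lasso_cd(X, y, alpha, n_iter=200):
    """Lasso via cyclic coordinate descent, minimizing
    (1/2n)||y - X b||^2 + alpha * ||b||_1."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j.
            r_j = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r_j / n
            beta[j] = soft_threshold(rho, alpha) / col_sq[j]
    return beta
```

On a noiseless toy problem with one true predictor among 20, a moderate penalty recovers a sparse fit: the true coefficient is estimated with a small shrinkage bias and most other coefficients are exactly zero, which is the behavior that makes the lasso usable when p approaches or exceeds n.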
Weighted piecewise LDA for solving the small sample size problem in face verification.
Kyperountas, Marios; Tefas, Anastasios; Pitas, Ioannis
2007-03-01
A novel algorithm that can be used to boost the performance of face-verification methods that utilize Fisher's criterion is presented and evaluated. The algorithm is applied to similarity, or matching error, data and provides a general solution for overcoming the "small sample size" (SSS) problem, where the lack of sufficient training samples causes improper estimation of a linear separation hyperplane between the classes. Two independent phases constitute the proposed method. Initially, a set of weighted piecewise discriminant hyperplanes are used in order to provide a more accurate discriminant decision than the one produced by the traditional linear discriminant analysis (LDA) methodology. The expected classification ability of this method is investigated throughout a series of simulations. The second phase defines proper combinations for person-specific similarity scores and describes an outlier removal process that further enhances the classification ability. The proposed technique has been tested on the M2VTS and XM2VTS frontal face databases. Experimental results indicate that the proposed framework greatly improves the face-verification performance.
Energy Technology Data Exchange (ETDEWEB)
Christen, Hans M [ORNL; Okubo, Isao [ORNL; Rouleau, Christopher M [ORNL; Jellison Jr, Gerald Earle [ORNL; Puretzky, Alexander A [ORNL; Geohegan, David B [ORNL; Lowndes, Douglas H [ORNL
2005-01-01
Parallel (multi-sample) approaches, such as discrete combinatorial synthesis or continuous compositional-spread (CCS), can significantly increase the rate of materials discovery and process optimization. Here we review our generalized CCS method, based on pulsed-laser deposition, in which the synchronization between laser firing and substrate translation (behind a fixed slit aperture) yields the desired variations of composition and thickness. In situ alloying makes this approach applicable to the non-equilibrium synthesis of metastable phases. Deposition on a heater plate with a controlled spatial temperature variation can additionally be used for growth-temperature-dependence studies. Composition and temperature variations are controlled on length scales large enough to yield sample sizes sufficient for conventional characterization techniques (such as temperature-dependent measurements of resistivity or magnetic properties). This technique has been applied to various experimental studies, and we present here the results for the growth of electro-optic materials (SrxBa1-xNb2O6) and magnetic perovskites (Sr1-xCaxRuO3), and discuss the application to the understanding and optimization of catalysts used in the synthesis of dense forests of carbon nanotubes.
McCarthy, K.
2008-01-01
Semipermeable membrane devices (SPMDs) were deployed in the Columbia Slough, near Portland, Oregon, on three separate occasions to measure the spatial and seasonal distribution of dissolved polycyclic aromatic hydrocarbons (PAHs) and organochlorine compounds (OCs) in the slough. Concentrations of PAHs and OCs in SPMDs showed spatial and seasonal differences among sites and indicated that unusually high flows in the spring of 2006 diluted the concentrations of many of the target contaminants. However, the same PAHs - pyrene, fluoranthene, and the alkylated homologues of phenanthrene, anthracene, and fluorene - and OCs - polychlorinated biphenyls, pentachloroanisole, chlorpyrifos, dieldrin, and the metabolites of dichlorodiphenyltrichloroethane (DDT) - predominated throughout the system during all three deployment periods. The data suggest that storm washoff may be a predominant source of PAHs in the slough but that OCs are ubiquitous, entering the slough by a variety of pathways. Comparison of SPMDs deployed on the stream bed with SPMDs deployed in the overlying water column suggests that even for the very hydrophobic compounds investigated, bed sediments may not be a predominant source in this system. Perdeuterated phenanthrene (phenanthrene-d10), spiked at a rate of 2 μg per SPMD, was shown to be a reliable performance reference compound (PRC) under the conditions of these deployments. Post-deployment concentrations of the PRC revealed differences in sampling conditions among sites and between seasons, but indicate that for SPMDs deployed throughout the main slough channel, differences in sampling rates were small enough to make site-to-site comparisons of SPMD concentrations straightforward. © Springer Science+Business Media B.V. 2007.
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
Purpose: To demonstrate the potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post-lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous-geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D90, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm³ dose grid, efficiency gains were realized in all structures, with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic
2016-05-30
Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
Tsukakoshi, Yoshiki; Yasui, Akemi
2011-11-01
To give quantitative guidance on sample size allocation when developing sampling designs for a food composition survey, we discuss sampling strategies that consider the importance of each food, namely its consumption or production, the variability of its composition, and the restrictions imposed by the resources available for sample collection and analysis. Two strategies are considered: 'proportional' and 'Neyman' allocation. Both incorporate the consumed quantity of foods, and we review some available statistics for allocation issues. The Neyman optimal strategy allocates a smaller sample size to starch than the proportional strategy does, because the former incorporates variability in composition. Both strategies improved the accuracy of estimated dietary nutrient intake more than equal sample size allocation. These strategies will be useful, as we often face sample size allocation problems wherein we must decide whether to sample 'five white potatoes and five taros, or nine white potatoes and one taro'. Allocating sufficient sample size to important foodstuffs is essential for assuring data quality. Nevertheless, the food composition table should be as comprehensive as possible.
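Under Neyman allocation the sample size for stratum h is proportional to N_h·σ_h (stratum importance times compositional variability), whereas proportional allocation uses N_h alone. A minimal sketch; the importance weights and standard deviations in the usage note are illustrative, not the survey's.

```python
def neyman_allocation(n_total, importances, sds):
    """Allocate n_total analyses across strata with n_h proportional to
    importance_h * sd_h (Neyman allocation); setting all sds equal
    recovers proportional allocation."""
    weights = [imp * sd for imp, sd in zip(importances, sds)]
    total = sum(weights)
    return [n_total * w / total for w in weights]
```

For two foods of equal consumption where the second is three times as variable in composition, neyman_allocation(100, [1, 1], [1, 3]) gives [25.0, 75.0]: the low-variability food (like starch above) receives the smaller share.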
Yang, Mingjun; Yang, Lijiang; Gao, Yiqin; Hu, Hao
2014-07-28
Umbrella sampling is an efficient method for the calculation of free energy changes of a system along well-defined reaction coordinates. However, when there exist multiple parallel channels along the reaction coordinate or hidden barriers in directions perpendicular to the reaction coordinate, it is difficult for conventional umbrella sampling to reach convergent sampling within limited simulation time. Here, we propose an approach to combine umbrella sampling with the integrated tempering sampling method. The umbrella sampling method is applied to chemically more relevant degrees of freedom that possess significant barriers. The integrated tempering sampling method is used to facilitate the sampling of other degrees of freedom which may possess statistically non-negligible barriers. The combined method is applied to two model systems, butane and ACE-NME molecules, and shows significantly improved sampling efficiencies as compared to standalone conventional umbrella sampling or integrated tempering sampling approaches. Further analyses suggest that the enhanced performance of the new method comes from the complementary advantages of umbrella sampling with a well-defined reaction coordinate and integrated tempering sampling in orthogonal space. Therefore, the combined approach could be useful in the simulation of biomolecular processes, which often involve sampling of complex rugged energy landscapes.
Energy Technology Data Exchange (ETDEWEB)
Park, Justin C.; Li, Jonathan G.; Arhjoul, Lahcen; Yan, Guanghua; Lu, Bo; Fan, Qiyong; Liu, Chihray, E-mail: liucr@ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32610-0385 (United States)
2015-04-15
Purpose: The use of sophisticated dose calculation procedures in modern radiation therapy treatment planning is inevitable in order to account for complex treatment fields created by multileaf collimators (MLCs). As a consequence, independent volumetric dose verification is time consuming, which affects the efficiency of clinical workflow. In this study, the authors present an efficient adaptive beamlet-based finite-size pencil beam (AB-FSPB) dose calculation algorithm that minimizes the computational procedure while preserving the accuracy. Methods: The computational time of the finite-size pencil beam (FSPB) algorithm is proportional to the number of infinitesimal and identical beamlets that constitute an arbitrary field shape. In AB-FSPB, the dose distribution from each beamlet is mathematically modeled such that the sizes of the beamlets representing an arbitrary field shape no longer need to be infinitesimal nor identical. As a result, it is possible to represent an arbitrary field shape with combinations of differently sized and a minimal number of beamlets. In addition, the authors included model parameters to account for the rounded leaf edge and transmission of the MLC. Results: Root mean square error (RMSE) between the treatment planning system and conventional FSPB on a 10 × 10 cm² square field using 10 × 10, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 4.90%, 3.19%, and 2.87%, respectively, compared with RMSE of 1.10%, 1.11%, and 1.14% for AB-FSPB. This finding holds true for a larger square field size of 25 × 25 cm², where RMSE for 25 × 25, 2.5 × 2.5, and 0.5 × 0.5 cm² beamlet sizes was 5.41%, 4.76%, and 3.54% in FSPB, respectively, compared with RMSE of 0.86%, 0.83%, and 0.88% for AB-FSPB. It was found that AB-FSPB could successfully account for the MLC transmissions without major discrepancy. The algorithm was also graphical processing unit (GPU) compatible to maximize its computational speed. For an intensity modulated radiation therapy (
Cliff's Delta Calculator: A non-parametric effect size program for two groups of observations
Directory of Open Access Journals (Sweden)
Guillermo Macbeth
2011-05-01
The Cliff's Delta statistic is an effect size measure that quantifies the amount of difference between two non-parametric variables beyond p-value interpretation. This measure can be understood as a useful complementary analysis for the corresponding hypothesis test. During the last two decades the use of effect size measures has been strongly encouraged by methodologists and leading institutions of the behavioral sciences. The aim of this contribution is to introduce the Cliff's Delta Calculator software, which performs such analysis and offers some interpretation tips. Differences and similarities with the parametric case are analysed and illustrated. The implementation of this free program is fully described and compared with other calculators. Alternative algorithmic approaches are mathematically analysed, and a basic linear algebra proof of their equivalence is formally presented. Two worked examples in cognitive psychology are commented on. A visual interpretation of Cliff's Delta is suggested. Availability, installation and applications of the program are presented and discussed.
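Cliff's delta counts, over all cross-group pairs, how often values in one group exceed the other: δ = (#{x_i > y_j} − #{x_i < y_j}) / (n_x·n_y), ranging from −1 to 1 with 0 indicating stochastic equality. A minimal direct-count sketch (O(n_x·n_y); the sample data in the usage note are hypothetical):

```python
def cliffs_delta(x, y):
    """Cliff's delta: (number of pairs where x_i > y_j minus number
    where x_i < y_j) divided by the total number of cross-group pairs.
    Tied pairs contribute zero."""
    greater = sum(1 for a in x for b in y if a > b)
    less = sum(1 for a in x for b in y if a < b)
    return (greater - less) / (len(x) * len(y))
```

For example, cliffs_delta([3, 4, 5], [1, 2, 3]) returns 8/9 ≈ 0.89, since 8 of the 9 pairs favor the first group and one pair is tied.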
Energy Technology Data Exchange (ETDEWEB)
Federico Jimenez-Cruz; Georgina C. Laredo [Instituto Mexicano del Petroleo, Mexico (Mexico). Programa de Tratamiento de Crudo Maya
2004-11-01
A good approximation of the critical molecular dimensions of 35 linear and branched C5-C8 paraffins by DFT quantum chemical calculations at the B3LYP/6-31G** level of theory in the gas phase is described. In this context, we found that either the determined molecular width or the width-height average values can be used as critical measures in the analysis for the selection of molecular sieve materials, depending on their pore size and shape. The molecular width values for linear and monosubstituted paraffins are 4.2 and 5.5 Å, respectively. In the case of disubstituted paraffins, the values are 5.5 Å for 2,3-, 2,4-, 2,5- and 3,4-disubstituted paraffins and 6.7-7.1 Å for 2,2- and 3,3-disubstituted paraffins. The values for ethyl-substituted paraffins are 6.1-6.7 Å and for trisubstituted isoparaffins 6.7 Å. In order to select a porous material for the selective separation of isoparaffins and paraffins, the zeolite diffusivity can be correlated with the critical diameter of the paraffins according to the geometry-limited diffusion concept and the effective minimum dimensions of the molecules. The calculated values of the CPK molecular volume of the titled paraffins showed a good discrimination between the number of carbons and molecular size. 25 refs., 4 figs., 2 tabs.
Brus, D.J.; Nieuwenhuizen, W.; Koomen, A.J.M.
2006-01-01
Seventy-two squares of 100 ha were selected by stratified random sampling with probabilities proportional to size (pps) to survey landscape changes in the period 1996-2003. The area of the plots times the urbanization pressure was used as a size measure. The central question of this study is whether
Munk, Ole Lajord; Keiding, Susanne; Bass, Ludvik
2008-01-01
The authors developed a transmission-dispersion model to estimate dispersion in blood sampling systems and to calculate dispersion-free input functions needed for kinetic analysis. Transport of molecules through catheters was considered in two parts: a central part with convective transmission of molecules and a stagnant layer that molecules may enter and leave. The authors measured dispersion caused by automatic and manual blood sampling using three PET tracers that distribute differently in...
Gu, Xuejun; Li, Jinsheng; Jia, Xun; Jiang, Steve B
2011-01-01
Aiming at developing an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009]. Dosimetric evaluations against MCSIM Monte Carlo dose calculations are conducted on 10 IMRT treatment plans with heterogeneous treatment regions (5 head-and-neck cases and 5 lung cases). For head-and-neck cases, when cavities exist near the target, the improvement with the 3D-density correction over the conventional FSPB algorithm is significant. However, when there are high-density dental filling materials in the beam paths, the improvement is small and the accuracy of the new algorithm is still unsatisfactory. On the other hand, significant improvement of dose calculation accuracy is observed in all lung cases. Especially when the target is in the m...
Directory of Open Access Journals (Sweden)
Tamer Dawod
2015-01-01
Purpose: This work investigated the accuracy of the Prowess treatment planning system (TPS) in dose calculation in a homogeneous phantom for symmetric and asymmetric field sizes using the collapsed cone convolution/superposition (CCCS) algorithm. Methods: The measurements were carried out at a source-to-surface distance (SSD) of 100 cm for 6 and 10 MV photon beams. A full set of measurements for symmetric and asymmetric fields, including inplane and crossplane profiles at various depths and percentage depth doses (PDDs), was obtained on the linear accelerator. Results: The results showed that asymmetric collimation can lead to significant errors (up to approximately 7%) in dose calculations if changes in primary beam intensity and beam quality are not accounted for. The largest differences in the isodose curves were found in the buildup and penumbra regions. Conclusion: The results showed that dose calculation using the Prowess TPS based on the CCCS algorithm is generally in excellent agreement with measurements.
Clanton, U. S.; Fletcher, C. R.
1976-01-01
The paper describes a Monte Carlo model for simulation of two-dimensional representations of thin sections of some of the more common igneous rock textures. These representations are extrapolated to three dimensions to develop a volume of 'rock'. The model (here applied to a medium-grained high-Ti basalt) can be used to determine a statistically significant sample for a lunar rock or to predict the probable errors in the oxide contents that can occur during the analysis of a sample that is not representative of the parent rock.
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the identification of the correct RCs or requires high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height will exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD and the sampling of the rest of the DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD on three systems with hidden barriers has been examined. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved at least five times even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which the thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of necessary RCs can be reduced. Our work shows more potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
40 CFR 90.426 - Dilute emission sampling calculations-gasoline fueled engines.
2010-07-01
... hydrocarbons, i.e., the molecular weight of the hydrocarbon molecule divided by the number of carbon atoms in ... weight of carbon = 12.01; MH = molecular weight of hydrogen = 1.008; MO = molecular weight of oxygen = 16.00; α ... calculated based on the assumption that the fuel used has a hydrogen-to-carbon ratio of 1:1.85. ...
About 100 countries have established regulatory limits for aflatoxin in food and feeds. Because these limits vary widely among regulating countries, the Codex Committee on Food Additives and Contaminants (CCFAC) began work in 2004 to harmonize aflatoxin limits and sampling plans for aflatoxin in alm...
Mielke, Steven L; Truhlar, Donald G
2016-01-21
Using Feynman path integrals, a molecular partition function can be written as a double integral with the inner integral involving all closed paths centered at a given molecular configuration, and the outer integral involving all possible molecular configurations. In previous work employing Monte Carlo methods to evaluate such partition functions, we presented schemes for importance sampling and stratification in the molecular configurations that constitute the path centroids, but we relied on free-particle paths for sampling the path integrals. At low temperatures, the path sampling is expensive because the paths can travel far from the centroid configuration. We now present a scheme for importance sampling of whole Feynman paths based on harmonic information from an instantaneous normal mode calculation at the centroid configuration, which we refer to as harmonically guided whole-path importance sampling (WPIS). We obtain paths conforming to our chosen importance function by rejection sampling from a distribution of free-particle paths. Sample calculations on CH4 demonstrate that at a temperature of 200 K, about 99.9% of the free-particle paths can be rejected without integration, and at 300 K, about 98% can be rejected. We also show that it is typically possible to reduce the overhead associated with the WPIS scheme by sampling the paths using a significantly lower-order path discretization than that which is needed to converge the partition function.
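The accept/reject step at the heart of such a scheme can be illustrated with a generic rejection-sampling sketch; the toy one-dimensional harmonic target and uniform proposal below are illustrative stand-ins, not the paper's path-space distributions:

```python
import random
from math import exp

def rejection_sample(target, draw, proposal_pdf, M, n):
    """Draw proposals from `draw`, accept each with probability
    target(x) / (M * proposal_pdf(x)); valid whenever
    target(x) <= M * proposal_pdf(x) everywhere."""
    out = []
    while len(out) < n:
        x = draw()
        if random.random() < target(x) / (M * proposal_pdf(x)):
            out.append(x)
    return out

# Toy analogue: uniform proposals on [-3, 3], harmonic weight exp(-x^2/2).
# The envelope M * q(x) = 1 dominates the target, whose maximum is 1.
random.seed(0)
xs = rejection_sample(lambda x: exp(-x * x / 2),
                      lambda: random.uniform(-3, 3),
                      lambda x: 1 / 6, M=6.0, n=2000)
print(round(sum(xs) / len(xs), 2))  # sample mean near 0
```

The efficiency figures quoted in the abstract (about 98-99.9% of free-particle paths rejected) correspond to a deliberately mismatched proposal: rejection is cheap there because the harmonic importance function is evaluated without path integration.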
Tango, Toshiro
2017-02-13
Tango (Biostatistics 2016) proposed a new repeated measures design called the S:T repeated measures design, combined with generalized linear mixed-effects models and sample size calculations for a test of the average treatment effect that depend not only on the number of subjects but on the number of repeated measures before and after randomization per subject used for analysis. The main advantages of the proposed design combined with the generalized linear mixed-effects models are (1) it can easily handle missing data by applying the likelihood-based ignorable analyses under the missing at random assumption and (2) it may lead to a reduction in sample size compared with the simple pre-post design. In this article, we present formulas for calculating power and sample sizes for a test of the average treatment effect allowing for missing data within the framework of the S:T repeated measures design with a continuous response variable combined with a linear mixed-effects model. Examples are provided to illustrate the use of these formulas.
Energy Technology Data Exchange (ETDEWEB)
Han, C; Schultheiss, T [City of Hope National Medical Center, Duarte, CA (United States)
2015-06-15
Purpose: In this study, we aim to evaluate the effect of dose grid size on the accuracy of calculated dose for small lesions in intracranial stereotactic radiosurgery (SRS), and to verify dose calculation accuracy with radiochromic film dosimetry. Methods: 15 intracranial lesions from previous SRS patients were retrospectively selected for this study. The planning target volume (PTV) ranged from 0.17 to 2.3 cm³. A commercial treatment planning system was used to generate SRS plans using the volumetric modulated arc therapy (VMAT) technique with two arc fields. Two convolution-superposition-based dose calculation algorithms (Anisotropic Analytical Algorithm and Acuros XB) were used to calculate the volume dose distribution with dose grid sizes ranging from 1 mm to 3 mm in 0.5 mm steps. First, while the plan monitor units (MU) were kept constant, PTV dose variations were analyzed. Second, with 95% of the PTV covered by the prescription dose, variations of the plan MUs as a function of dose grid size were analyzed. Radiochromic films were used to compare the delivered dose and profile with the calculated dose distribution at different dose grid sizes. Results: The dose to the PTV, in terms of the mean, maximum, and minimum dose, showed a steady decrease with increasing dose grid size using both algorithms. With 95% of the PTV covered by the prescription dose, the total MU increased with increasing dose grid size in most of the plans. Radiochromic film measurements showed better agreement with dose distributions calculated with a 1-mm dose grid size. Conclusion: Dose grid size has a significant impact on the calculated dose distribution in intracranial SRS treatment planning with small target volumes. Using the default dose grid size could lead to underestimation of the delivered dose. A small dose grid size should be used to ensure calculation accuracy and agreement with QA measurements.
Hyperfine electric parameters calculation in Si samples implanted with ⁵⁷Mn→⁵⁷Fe
Energy Technology Data Exchange (ETDEWEB)
Abreu, Y., E-mail: yabreu@ceaden.edu.cu [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Calle 30 No. 502 e/5ta y 7ma Ave., 11300 Miramar, Playa, La Habana (Cuba); Cruz, C.M.; Piñera, I.; Leyva, A.; Cabal, A.E. [Centro de Aplicaciones Tecnológicas y Desarrollo Nuclear (CEADEN), Calle 30 No. 502 e/5ta y 7ma Ave., 11300 Miramar, Playa, La Habana (Cuba); Van Espen, P. [Departement Chemie, Universiteit Antwerpen, Middelheimcampus, G.V.130, Groenenborgerlaan 171, 2020 Antwerpen (Belgium); Van Remortel, N. [Departement Fysica, Universiteit Antwerpen, Middelheimcampus, G.U.236, Groenenborgerlaan 171, 2020 Antwerpen (Belgium)
2014-07-15
Nowadays, electronic structure calculations allow the study of complex systems by determining the hyperfine parameters measured at a probe atom, including in the presence of crystalline defects. Hyperfine electric parameters have been measured by Mössbauer spectroscopy in silicon materials implanted with ⁵⁷Mn→⁵⁷Fe ions, with four main contributions observed in the spectra. Nevertheless, some ambiguities remain in the interpretation of the ⁵⁷Fe Mössbauer spectra in this case, regarding the damage configurations and their evolution with annealing. In the present work, several implantation environments are evaluated and the ⁵⁷Fe hyperfine parameters are calculated. The observed correlation between the studied local environments and the experimental observations is presented, and a tentative microscopic description is proposed of the behavior and thermal evolution of the characteristic defect local environments of the probe atoms, concerning the location of vacancies and interstitial Si in the neighborhood of ⁵⁷Fe ions at substitutional and interstitial sites.
Tang, Yongqiang
2015-01-01
A sample size formula is derived for negative binomial regression for the analysis of recurrent events, in which subjects can have unequal follow-up times. We obtain sharp lower and upper bounds on the required size, which are easy to compute. The upper bound is generally only slightly larger than the required size, and hence can be used to approximate the sample size. The lower and upper size bounds can be decomposed into two terms. The first term relies on the mean number of events in each group, and the second term depends on two factors that measure, respectively, the extent of between-subject variability in event rates and in follow-up time. Simulation studies are conducted to assess the performance of the proposed method. An application of our formulae to a multiple sclerosis trial is provided.
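For intuition about the quantities involved, a textbook-style Wald approximation for the per-group sample size when comparing two negative binomial event rates can be sketched as follows. This is a generic approximation, not necessarily the exact formula derived in the paper, and the rates, dispersion, and exposure values are made-up inputs:

```python
from math import ceil, log
from statistics import NormalDist

def nb_sample_size(r0, r1, k, t, alpha=0.05, power=0.9):
    """Approximate per-group n for a two-sided Wald test of the rate
    ratio under a negative binomial model with dispersion k and common
    exposure t, using Var(log rate-hat) ~ (1/(t*r) + k) / n per group."""
    z = NormalDist().inv_cdf
    za, zb = z(1 - alpha / 2), z(power)
    var = (1 / (t * r0) + k) + (1 / (t * r1) + k)
    return ceil((za + zb) ** 2 * var / log(r1 / r0) ** 2)

# e.g. detect a 30% rate reduction with 90% power
print(nb_sample_size(r0=1.0, r1=0.7, k=0.8, t=1.0))
```

Note how the variance term splits exactly as the abstract describes: a Poisson-like part driven by the mean event count per subject (1/(t*r)) plus a dispersion part (k) capturing between-subject variability.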
Directory of Open Access Journals (Sweden)
A. Martín Andrés
2015-01-01
The Mantel-Haenszel test is the most frequently used asymptotic test for analyzing stratified 2 × 2 tables. Its exact alternative is the test of Birch, which has recently been reconsidered by Jung. Both tests have a conditional origin: Pearson's chi-squared test and Fisher's exact test, respectively. But both tests share the same drawback: the result of the global test (the stratified test) may not be compatible with the results of the individual tests (the test for each stratum). In this paper, we propose to carry out the global test using a multiple comparisons (MC) method, which does not have this disadvantage. By refining the method (the MCB method), an alternative to the Mantel-Haenszel and Birch tests may be obtained. The new MC and MCB methods have the advantage that they may be applied from an unconditional view, a methodology which until now has not been applied to this problem. We also propose some sample size calculation methods.
40 CFR Appendix III to Part 600 - Sample Fuel Economy Label Calculation
2010-07-01
... engine. These four car lines are: Ajax, Boredom III, Dodo, Castor (Station Wagon). A. A car line is defined ... A3: 0.3000 at 3,500 lb, 15.9020; 0.7000 at 4,000 lb, 13.8138. Dodo M4: 0.4000 at 3,500 lb, 16.1001; 0.6000 at ... type MPG is calculated as follows: [equation ER27DE06.085] Similarly, Ajax and Dodo 3.0 liter, 6 cylinder, ...
DEFF Research Database (Denmark)
Kostoulas, P.; Nielsen, Søren Saxmose; Browne, W. J.;
2013-01-01
SUMMARY: Disease cases are often clustered within herds or, generally, within groups that share common characteristics. Sample size formulae must adjust for the within-cluster correlation of the primary sampling units. Traditionally, the intra-cluster correlation coefficient (ICC), which is an average ... and power when applied to these groups. We propose the use of the variance partition coefficient (VPC), which measures the clustering of infection/disease for individuals with a common risk profile. Sample size estimates are obtained separately for those groups that exhibit markedly different heterogeneity ...
Esseiva, Pierre; Anglada, Frederic; Dujourdy, Laurence; Taroni, Franco; Margot, Pierre; Pasquier, Eric Du; Dawson, Michael; Roux, Claude; Doble, Philip
2005-08-15
Artificial neural networks (ANNs) were utilised to validate illicit drug classification in the profiling method used at "Institut de Police Scientifique" of the University of Lausanne (IPS). This method established links between samples using a combination of principal component analysis (PCA) and calculation of a correlation value between samples. Heroin seizures sent to the IPS laboratory were analysed using gas chromatography (GC) to separate the major alkaloids present in illicit heroin. Statistical analysis was then performed on 3371 samples. Initially, PCA was performed as a preliminary screen to identify samples of a similar chemical profile. A correlation value was then calculated for each sample previously identified with PCA. This correlation value was used to determine links between drug samples. These links were then recorded in an Ibase((R)) database. From this database the notion of "chemical class" arises, where samples with similar chemical profiles are grouped together. Currently, about 20 "chemical classes" have been identified. The normalised peak areas of six target compounds were then used to train an ANN to classify each sample into its appropriate class. Four hundred and sixty-eight samples were used as a training data set. Sixty samples were treated as blinds and 370 as non-linked samples. The results show that in 96% of cases the neural network attributed the seizure to the right "chemical class". The application of a neural network was found to be a useful tool to validate the classification of new drug seizures in existing chemical classes. This tool should be increasingly used in such situations involving profile comparisons and classifications.
Gibertini, Michael; Nations, Kari R; Whitaker, John A
2012-03-01
The high failure rate of antidepressant trials has spurred exploration of the factors that affect trial sensitivity. In the current analysis, Food and Drug Administration antidepressant drug registration trial data compiled by Turner et al. is extended to include the most recently approved antidepressants. The expanded dataset is examined to further establish the likely population effect size (ES) for monoaminergic antidepressants and to demonstrate the relationship between observed ES and sample size in trials on compounds with proven efficacy. Results indicate that the overall underlying ES for antidepressants is approximately 0.30, and that the variability in observed ES across trials is related to the sample size of the trial. The current data provide a unique real-world illustration of an often underappreciated statistical truism: that small N trials are more likely to mislead than to inform, and that by aligning sample size to the population ES, risks of both erroneously high and low effects are minimized. The results in the current study make this abstract concept concrete and will help drug developers arrive at informed gate decisions with greater confidence and fewer risks, improving the odds of success for future antidepressant trials.
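The relationship between the ~0.30 population effect size and required sample size can be made concrete with the standard normal-approximation formula for a two-arm comparison of means. This is a generic sketch of that textbook formula, not the authors' analysis:

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size per arm for detecting a
    standardized mean difference (Cohen's d) in a two-arm trial:
    n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / effect_size ** 2)

# At the ~0.30 population ES suggested for antidepressants:
print(n_per_arm(0.30))  # -> 175 per arm for 80% power at alpha = 0.05
```

A trial powered for an optimistic d of 0.5 would need only about a third as many patients per arm, which is exactly the small-N trap the authors describe: the observed ES in such a trial scatters widely around 0.30, misleading in both directions.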
Directory of Open Access Journals (Sweden)
Hiroshi Nishiura
BACKGROUND: Seroepidemiological studies before and after the epidemic wave of H1N1-2009 are useful for estimating population attack rates, with the potential to validate early estimates of the reproduction number, R, in modeling studies. METHODOLOGY/PRINCIPAL FINDINGS: Since the final epidemic size, the proportion of individuals in a population who become infected during an epidemic, is not the result of a binomial sampling process (because infection events are not independent of each other), we propose the use of an asymptotic distribution of the final size to compute approximate 95% confidence intervals of the observed final size. This allows the comparison of the observed final sizes against predictions based on the modeling study (R = 1.15, 1.40 and 1.90), and also yields simple formulae for determining sample sizes for future seroepidemiological studies. We examine a total of eleven published seroepidemiological studies of H1N1-2009 that took place after the peak incidence was observed in a number of countries. Observed seropositive proportions in six studies appear to be smaller than predicted from R = 1.40; four of the six studies sampled serum less than one month after the reported peak incidence. The comparison of the observed final sizes against R = 1.15 and 1.90 reveals that all eleven studies appear not to deviate significantly from the prediction with R = 1.15, but the final sizes in nine studies indicate overestimation if the value R = 1.90 is used. CONCLUSIONS: Sample sizes of published seroepidemiological studies were too small to assess the validity of model predictions except when R = 1.90 was used. We recommend the use of the proposed approach in determining the sample size of post-epidemic seroepidemiological studies, calculating the 95% confidence interval of the observed final size, and conducting relevant hypothesis testing instead of methods that rely on a binomial proportion.
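The predictions being tested come from the classical final-size relation for a homogeneously mixing population, z = 1 - exp(-R*z), which has no closed-form solution but is easily solved numerically. A minimal fixed-point sketch (illustrative only; the paper's asymptotic distribution machinery is separate) is:

```python
from math import exp

def final_size(R, tol=1e-12, max_iter=1000):
    """Solve z = 1 - exp(-R*z) for the final epidemic size (R > 1)
    by fixed-point iteration from an interior starting value."""
    z = 0.5
    for _ in range(max_iter):
        z_new = 1 - exp(-R * z)
        if abs(z_new - z) < tol:
            break
        z = z_new
    return z

for R in (1.15, 1.40, 1.90):
    print(R, round(final_size(R), 3))
```

The three R values span roughly a threefold range in predicted attack rate, which is why the observed seroprevalences discriminate against R = 1.90 much more sharply than between 1.15 and 1.40.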
Importance of Sample Size for the Estimation of Repeater F Waves in Amyotrophic Lateral Sclerosis
Directory of Open Access Journals (Sweden)
Jia Fang
2015-01-01
Background: In amyotrophic lateral sclerosis (ALS), repeater F waves are increased. Accurate assessment of repeater F waves requires an adequate sample size. Methods: We studied the F waves of the left ulnar nerves in ALS patients. Based on the presence or absence of pyramidal signs in the left upper limb, the ALS patients were divided into two groups: one group with pyramidal signs (the P group) and the other without pyramidal signs (the NP group). The index of repeating neurons (Index RN) and the index of repeater F waves (Index Freps) were compared among the P, NP and control groups following 20 and 100 stimuli, respectively. For each group, the Index RN and Index Freps obtained from 20 and 100 stimuli were compared. Results: In the P group, the Index RN (P = 0.004) and Index Freps (P = 0.001) obtained from 100 stimuli were significantly higher than from 20 stimuli. For F waves obtained from 20 stimuli, no significant differences were identified between the P and NP groups for Index RN (P = 0.052) and Index Freps (P = 0.079); the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than those of the control group; the Index RN (P = 0.002) of the NP group was significantly higher than that of the control group. For F waves obtained from 100 stimuli, the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than those of the NP group; the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P and NP groups were significantly higher than those of the control group. Conclusions: Increased repeater F waves reflect increased excitability of the motor neuron pool and indicate upper motor neuron dysfunction in ALS. For an accurate evaluation of repeater F waves in ALS patients, especially those with moderate to severe muscle atrophy, 100 stimuli would be required.
Energy Technology Data Exchange (ETDEWEB)
Leifer, R. Z. [Environmental Measurements Lab. (EML), New York, NY (United States); Jacob, E. M. [Environmental Measurements Lab. (EML), New York, NY (United States); Marschke, S. F. [Environmental Measurements Lab. (EML), New York, NY (United States); Pranitis, D. M. [Environmental Measurements Lab. (EML), New York, NY (United States); Jaw, H-R. Kristina [Environmental Measurements Lab. (EML), New York, NY (United States)
2000-03-01
A rotating drum impactor was co-located with a high volume air sampler for ~1 y at the fence line of the U.S. Department of Energy's Fernald Environmental Management Project site. Data on the size distribution of uranium-bearing atmospheric aerosols from 0.065 µm to 100 µm in diameter were obtained and used to compute dose using several different models. During most of the year, the mass of ²³⁸U above 15 µm exceeded 70% of the total uranium mass from all particulates. Above 4.3 µm, the ²³⁸U mass exceeded 80% of the total uranium mass from all particulates. During any sampling period the size distribution was bimodal. In the winter/spring period, the modes appeared at 0.29 µm and 3.2 µm. During the summer period, the lower mode shifted up to ~0.45 µm. In the fall/winter, the upper mode shifted to ~1.7 µm, while the lower mode stayed at 0.45 µm. These differences reflect the changes in site activities. Thorium concentrations were comparable to the uranium concentrations during the late spring and summer period and decreased to ~25% of the ²³⁸U concentration in the late summer. The thorium size distribution trend also differed from the uranium trend. The current calculational method used to demonstrate compliance with regulations assumes that the airborne particulates are characterized by an activity median diameter of 1 µm. This assumption results in an overestimate of the dose to offsite receptors by as much as a factor of seven relative to values derived using the latest ICRP 66 lung model with more appropriate particle sizes. Further evaluation of the size distribution for each radionuclide would substantially improve the dose estimates.
DEFF Research Database (Denmark)
Yao, Bing-Yin; Zhou, Rong-Can; Fan, Chang-Xin;
2010-01-01
The growth of Laves phase particles in three kinds of P92 steels was investigated. Laves phase particles can be easily separated and distinguished from the matrix and other particles by atomic number contrast, using comparisons of the backscatter electron (BSE) images and the secondary electron (SE) images in a scanning electron microscope (SEM). A smaller Laves phase particle size results in higher creep strength and longer creep exposure time under the same conditions. DICTRA software was used to model the growth and coarsening behavior of the Laves phase in the three P92 steels. Good agreement was attained between measurements in SEM and modeling by DICTRA. Ostwald ripening should be used for the coarsening calculation of the Laves phase in P92 steels for times longer than 20000 h and 50000 h at 650°C and 600°C, respectively. © 2010 Chin. Soc. for Elec. Eng.
Simple and efficient way of speeding up transmission calculations with k-point sampling
Directory of Open Access Journals (Sweden)
Jesper Toft Falkenberg
2015-07-01
The transmissions as functions of energy are central for electron or phonon transport in the Landauer transport picture. We suggest a simple and computationally "cheap" post-processing scheme to interpolate transmission functions over k-points to obtain smooth, well-converged average transmission functions. This is relevant for data obtained using typically "expensive" first-principles calculations where the leads/electrodes are described by periodic boundary conditions. We show examples of transport in graphene structures where a speed-up of an order of magnitude is easily obtained.
A guide for calculation of spot size to determine power density for free fiber irradiation of tissue
Tate, Lloyd P., Jr.; Blikslager, Anthony T.
2005-04-01
Transendoscopic laser treatment for upper airway disorders has been performed in the horse for over twenty years. Endoscopic laser transmission utilizing flexible fiber optics is limited to certain discrete wavelengths. Initially, the laser of choice was the Nd:YAG laser (1064 nm), but in the early 1990s diode lasers (810 nm, 980 nm) were introduced to veterinary medicine and are currently used in private practice and universities. Precise application of laser irradiation depends on the user knowing the laser's output as well as the amount of energy that is delivered to tissue. Knowledge of dosimetry is important to the veterinarian for keeping accurate medical records by being able to describe very specific treatment regimens. The applied energy is best described as power density or energy density. Calculation of this energy depends on the user's ability to determine the laser's spot size when irradiating tissue in a non-contact mode. The charts derived from this study provide the veterinarian the ability to estimate spot size for a number of commonly used lasers with the fiber positioned at various distances from the target.
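As a hypothetical worked example of the kind of calculation such charts encode (not a reproduction of the study's data), assume the beam leaves the bare fiber tip and diverges at the full angle set by the fiber's numerical aperture; the core diameter, NA, distance, and power below are made-up values:

```python
from math import pi, asin, tan

def spot_diameter(core_um, na, distance_mm):
    """Spot diameter (mm) at `distance_mm` from the fiber tip, assuming
    divergence at the half-angle theta = asin(NA) in air."""
    theta = asin(na)
    return core_um / 1000.0 + 2 * distance_mm * tan(theta)

def power_density(power_w, spot_mm):
    """Irradiance in W/cm^2 for a uniform circular spot."""
    r_cm = spot_mm / 20.0  # mm diameter -> cm radius
    return power_w / (pi * r_cm ** 2)

# Hypothetical 600-um core, NA 0.22 fiber, held 10 mm off the tissue:
d = spot_diameter(core_um=600, na=0.22, distance_mm=10)
print(round(d, 2), "mm spot;", round(power_density(10.0, d), 1), "W/cm^2 at 10 W")
```

Doubling the working distance roughly doubles the spot diameter and therefore cuts the power density by about a factor of four, which is why non-contact dosimetry is so sensitive to fiber-to-tissue distance.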
Evaluation of 1H NMR relaxometry for the assessment of pore size distribution in soil samples
Jaeger, F.; Bowe, S.; As, van H.; Schaumann, G.E.
2009-01-01
1H NMR relaxometry is used in earth science as a non-destructive and time-saving method to determine pore size distributions (PSD) in porous media with pore sizes ranging from nm to mm. This is a broader range than generally reported for results from X-ray computed tomography (X-ray CT) scanning, wh
Santos-Martins, Diogo; Fernandes, Pedro Alexandrino; Ramos, Maria João
2016-11-01
In the context of SAMPL5, we submitted blind predictions of the cyclohexane/water distribution coefficient (D) for a series of 53 drug-like molecules. Our method is purely empirical and based on the additive contribution of each solute atom to the free energy of solvation in water and in cyclohexane. The contribution of each atom depends on the atom type and on the exposed surface area. Compared to similar methods in the literature, we used a very small set of atomic parameters: only 10 for solvation in water and 1 for solvation in cyclohexane. As a result, the method is protected from overfitting and the error in the blind predictions could be reasonably estimated. Moreover, this approach is fast: it takes only 0.5 s to predict the distribution coefficient for all 53 SAMPL5 compounds, allowing its application in virtual screening campaigns. The performance of our approach (submission 49) is modest but satisfactory in view of its efficiency: the root mean square error (RMSE) was 3.3 log D units for the 53 compounds, while the RMSE of the best performing method (using COSMO-RS) was 2.1 (submission 16). Our method is implemented as a Python script available at https://github.com/diogomart/SAMPL5-DC-surface-empirical.
Directory of Open Access Journals (Sweden)
Badawi Mohamed S.
2015-01-01
When using gamma-ray spectrometry for radioactivity analysis of environmental samples (such as soil, sediment or the ash of a living organism), the relevant linear attenuation coefficients should be known in order to calculate self-absorption in the sample bulk. This parameter is additionally important since the unidentified samples are normally different in composition and density from the reference ones (the latter being, e.g., liquid sources, commonly used for detection efficiency calibration in radioactivity monitoring). This work aims at introducing a numerical simulation method for the calculation of linear attenuation coefficients without the use of a collimator. The method is primarily based on calculations of the effective solid angles - compound parameters accounting for the emission and detection probabilities, as well as for the source-to-detector geometrical configuration. The efficiency transfer principle and average path lengths through the samples themselves are employed, too. The results obtained are compared with those from the NIST-XCOM database; close agreement confirms the validity of the numerical simulation approach.
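For intuition about the size of such corrections, the classic slab result gives the average transmission of photons emitted uniformly throughout a source of thickness t with linear attenuation coefficient mu. This is a simplified textbook model, not the effective-solid-angle method of the paper, and the mu and thickness values are illustrative:

```python
from math import exp

def self_absorption_factor(mu, thickness):
    """Average transmission for a uniform slab source viewed along its
    thickness: f = (1 - exp(-mu*t)) / (mu*t); f -> 1 as mu*t -> 0."""
    x = mu * thickness
    return (1 - exp(-x)) / x if x > 0 else 1.0

# e.g. a 3-cm-thick soil sample with mu = 0.2 cm^-1 at the line energy:
print(round(self_absorption_factor(mu=0.2, thickness=3.0), 3))
```

A factor of ~0.75 here means roughly a quarter of the emitted photons are absorbed in the sample itself, which is exactly why density and composition mismatches between sample and calibration source matter.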
Directory of Open Access Journals (Sweden)
Qing Wang
2016-05-01
Full Text Available Free energy calculations of the potential of mean force (PMF), based on the combination of targeted molecular dynamics (TMD) simulations and umbrella sampling as a function of physical coordinates, have been applied to explore the detailed pathways and the corresponding free energy profiles for the conformational transition processes of the butane molecule and the 35-residue villin headpiece subdomain (HP35). Accurate PMF profiles describing the dihedral rotation of butane under both the dihedral rotation coordinate and the root mean square deviation (RMSD) variation coordinate were obtained from different umbrella samplings based on the same TMD simulations. The initial structures for the umbrella sampling windows can be conveniently selected from the TMD trajectories. When this computational method is applied to the unfolding process of the HP35 protein, the PMF calculation along the coordinate of the radius of gyration (Rg) shows a gradual increase in free energy of about 1 kcal/mol, with energy fluctuations. The conformational transition in the unfolding of HP35 shows that the spherical structure extends and the middle α-helix unfolds first, followed by the unfolding of the other α-helices. The computational method for PMF calculations based on the combination of TMD simulations and umbrella sampling provides a valuable strategy for investigating detailed conformational transition pathways of other allosteric processes.
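The umbrella-sampling-to-PMF machinery the abstract relies on can be illustrated on a one-dimensional model potential: harmonic biases restrain Metropolis sampling to overlapping windows, and a standard WHAM iteration recombines the biased histograms into an unbiased PMF. This is a generic sketch — the model potential, window spacing, and force constant are invented, and it is not the TMD-based protocol of the paper.

```python
import numpy as np

kT = 1.0
def U(x):                      # model double-well: minima at x = ±1, barrier ≈ 2 kT at x = 0
    return 2.0 * (x * x - 1.0) ** 2

def sample_window(center, k_spring, n=20000, step=0.25, seed=0):
    """Metropolis sampling of U(x) plus a harmonic umbrella bias about `center`."""
    rng = np.random.default_rng(seed)
    x, xs = center, np.empty(n)
    e = U(x) + 0.5 * k_spring * (x - center) ** 2
    for i in range(n):
        xn = x + rng.uniform(-step, step)
        en = U(xn) + 0.5 * k_spring * (xn - center) ** 2
        if en <= e or rng.random() < np.exp(-(en - e) / kT):
            x, e = xn, en
        xs[i] = x
    return xs

centers = np.linspace(-1.5, 1.5, 13)     # overlapping windows along the coordinate
k_spring = 20.0
windows = [sample_window(c, k_spring, seed=i) for i, c in enumerate(centers)]

# WHAM: iterate the window free energies f_i and the unbiased probabilities p(x)
edges = np.linspace(-2.0, 2.0, 81)
mid = 0.5 * (edges[:-1] + edges[1:])
counts = np.array([np.histogram(w, bins=edges)[0] for w in windows])
N = counts.sum(axis=1)
bias = 0.5 * k_spring * (mid[None, :] - centers[:, None]) ** 2

f = np.zeros(len(centers))
for _ in range(1000):
    denom = (N[:, None] * np.exp((f[:, None] - bias) / kT)).sum(axis=0)
    p = counts.sum(axis=0) / denom
    f_new = -kT * np.log((p[None, :] * np.exp(-bias / kT)).sum(axis=1))
    f_new -= f_new[0]
    if np.abs(f_new - f).max() < 1e-8:
        f = f_new
        break
    f = f_new

pmf = -kT * np.log(np.where(p > 0, p, np.nan))
pmf -= np.nanmin(pmf)   # PMF is defined up to an additive constant
```

For this potential the recovered barrier at x = 0 should come out close to the analytic 2 kT, which is the basic check that the window overlap and WHAM recombination are working.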
Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick
2015-04-17
Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can be applied also to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.
Importance of Sample Size for the Estimation of Repeater F Waves in Amyotrophic Lateral Sclerosis
Institute of Scientific and Technical Information of China (English)
Jia Fang; Ming-Sheng Liu; Yu-Zhou Guan; Bo Cui; Li-Ying Cui
2015-01-01
Background: In amyotrophic lateral sclerosis (ALS), repeater F waves are increased. Accurate assessment of repeater F waves requires an adequate sample size. Methods: We studied the F waves of left ulnar nerves in ALS patients. Based on the presence or absence of pyramidal signs in the left upper limb, the ALS patients were divided into two groups: one group with pyramidal signs designated as the P group and the other without pyramidal signs designated as the NP group. The Index repeating neurons (RN) and Index repeater F waves (Freps) were compared among the P, NP and control groups following 20 and 100 stimuli, respectively. For each group, the Index RN and Index Freps obtained from 20 and 100 stimuli were compared. Results: In the P group, the Index RN (P = 0.004) and Index Freps (P = 0.001) obtained from 100 stimuli were significantly higher than from 20 stimuli. For F waves obtained from 20 stimuli, no significant differences were identified between the P and NP groups for Index RN (P = 0.052) and Index Freps (P = 0.079); the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than those of the control group; the Index RN (P = 0.002) of the NP group was significantly higher than that of the control group. For F waves obtained from 100 stimuli, the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P group were significantly higher than those of the NP group; the Index RN (P < 0.001) and Index Freps (P < 0.001) of the P and NP groups were significantly higher than those of the control group. Conclusions: Increased repeater F waves reflect increased excitability of the motor neuron pool and indicate upper motor neuron dysfunction in ALS. For an accurate evaluation of repeater F waves in ALS patients, especially those with moderate to severe muscle atrophy, 100 stimuli would be required.
Multiscale sampling of plant diversity: Effects of minimum mapping unit size
Stohlgren, T.J.; Chong, G.W.; Kalkhan, M.A.; Schell, L.D.
1997-01-01
Only a small portion of any landscape can be sampled for vascular plant diversity because of constraints of cost (salaries, travel time between sites, etc.). Often, the investigator decides to reduce the cost of creating a vegetation map by increasing the minimum mapping unit (MMU), and/or by reducing the number of vegetation classes to be considered. Questions arise about what information is sacrificed when map resolution is decreased. We compared plant diversity patterns from vegetation maps made with 100-ha, 50-ha, 2-ha, and 0.02-ha MMUs in a 754-ha study area in Rocky Mountain National Park, Colorado, United States, using four 0.025-ha and 21 0.1-ha multiscale vegetation plots. We developed and tested species-log(area) curves, correcting the curves for within-vegetation type heterogeneity with Jaccard's coefficients. Total species richness in the study area was estimated from vegetation maps at each resolution (MMU), based on the corrected species-area curves, total area of the vegetation type, and species overlap among vegetation types. With the 0.02-ha MMU, six vegetation types were recovered, resulting in an estimated 552 species (95% CI = 520-583 species) in the 754-ha study area (330 plant species were observed in the 25 plots). With the 2-ha MMU, five vegetation types were recognized, resulting in an estimated 473 species for the study area. With the 50-ha MMU, 439 plant species were estimated for the four vegetation types recognized in the study area. With the 100-ha MMU, only three vegetation types were recognized, resulting in an estimated 341 plant species for the study area. Locally rare species and keystone ecosystems (areas of high or unique plant diversity) were missed at the 2-ha, 50-ha, and 100-ha scales. To evaluate the effects of minimum mapping unit size requires: (1) an initial stratification of homogeneous, heterogeneous, and rare habitat types; and (2) an evaluation of within-type and between-type heterogeneity generated by environmental
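The species-log(area) extrapolation underlying the richness estimates above can be sketched as a linear fit of richness against log10(area), projected to the mapped extent of a vegetation type. The plot areas and species counts below are invented, and the study's Jaccard within-type heterogeneity correction is omitted.

```python
import numpy as np

# Toy nested-plot richness data (areas in m^2; species counts are invented)
areas = np.array([1, 10, 100, 1000, 10000], dtype=float)
richness = np.array([8, 15, 24, 31, 40], dtype=float)

# Fit S = b0 + b1 * log10(A), the species-log(area) form used in the study
b1, b0 = np.polyfit(np.log10(areas), richness, 1)

def predict_richness(area_m2):
    return b0 + b1 * np.log10(area_m2)

# Extrapolate to the full mapped extent of a vegetation type (e.g. 50 ha)
print(round(predict_richness(50 * 10_000)))  # → 53
```

Summing such per-type estimates, corrected for species overlap among types, gives the landscape totals reported in the abstract; a coarser minimum mapping unit collapses types together, so fewer curves are summed and richness is underestimated.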
Core size effect on the dry and saturated ultrasonic pulse velocity of limestone samples.
Ercikdi, Bayram; Karaman, Kadir; Cihangir, Ferdi; Yılmaz, Tekin; Aliyazıcıoğlu, Şener; Kesimal, Ayhan
2016-12-01
This study presents the effect of core length on the saturated (UPVsat) and dry (UPVdry) P-wave velocities of four different biomicritic limestone samples, namely light grey (BL-LG), dark grey (BL-DG), reddish (BL-R) and yellow (BL-Y), using core samples of different lengths (25-125 mm) at a constant diameter (54.7 mm). The saturated P-wave velocity (UPVsat) of all core samples generally decreased with increasing sample length. However, the dry P-wave velocity (UPVdry) of samples obtained from the BL-LG and BL-Y limestones increased with increasing sample length. In contrast to the literature, the dry P-wave velocity (UPVdry) values of core samples with lengths of 75, 100 and 125 mm were consistently higher (2.8-46.2%) than the saturated (UPVsat) values. Chemical and mineralogical analyses showed that the P-wave velocity is very sensitive to calcite and clay minerals, which can lead to weakening/disintegration of rock samples in the presence of water. Severe fluctuations in UPV values occurred between sample lengths of 25 and 75 mm; thereafter, a trend of stabilization was observed. The maximum variation of UPV values between sample lengths of 75 mm and 125 mm was only 7.3%. Therefore, the threshold core sample length for UPV measurement in the biomicritic limestone samples used in this study was interpreted as 75 mm.
Kelley, Ken
2008-01-01
Methods of sample size planning are developed from the accuracy in parameter approach in the multiple regression context in order to obtain a sufficiently narrow confidence interval for the population squared multiple correlation coefficient when regressors are random. Approximate and exact methods are developed that provide necessary sample size so that the expected width of the confidence interval will be sufficiently narrow. Modifications of these methods are then developed so that necessary sample size will lead to sufficiently narrow confidence intervals with no less than some desired degree of assurance. Computer routines have been developed and are included within the MBESS R package so that the methods discussed in the article can be implemented. The methods and computer routines are demonstrated using an empirical example linking innovation in the health services industry with previous innovation, personality factors, and group climate characteristics.
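The accuracy-in-parameter-estimation logic — choose n so that the expected confidence interval for the population squared multiple correlation is sufficiently narrow — can be sketched with a crude large-sample Wald approximation, Var(R²) ≈ 4ρ²(1−ρ²)²/n, and a bisection search on n. This is an illustration of the planning idea only, not the exact noncentral-distribution methods of the article (which are implemented in the MBESS R package).

```python
import math

def approx_ci_width(rho2, n):
    """Crude large-sample 95% Wald CI width for R^2: Var ≈ 4*rho2*(1-rho2)^2/n."""
    z = 1.959963984540054  # 97.5th percentile of the standard normal
    se = math.sqrt(4.0 * rho2 * (1.0 - rho2) ** 2 / n)
    return 2.0 * z * se

def n_for_width(rho2, target_width, lo=10, hi=10_000_000):
    """Smallest n whose expected CI width is at or below the target (bisection;
    valid because the width is monotonically decreasing in n)."""
    while lo < hi:
        mid = (lo + hi) // 2
        if approx_ci_width(rho2, mid) <= target_width:
            hi = mid
        else:
            lo = mid + 1
    return lo

# Planning example: population R^2 of 0.25, desired full width of 0.10
print(n_for_width(0.25, 0.10))  # → 865
```

The "assurance" modifications discussed in the article replace the expected width with a quantile of the width distribution, which pushes the required n somewhat higher than this expected-width answer.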
DEFF Research Database (Denmark)
Gardi, Jonathan Eyal; Nyengaard, Jens Randel; Gundersen, Hans Jørgen Gottlieb
2008-01-01
The proportionator is a novel and radically different approach to sampling with microscopes based on well-known statistical theory (probability proportional to size - PPS sampling). It uses automatic image analysis, with a large range of options, to assign to every field of view in the section...... of its entirely different sampling strategy, based on known but non-uniform sampling probabilities, the proportionator for the first time allows the real CE at the section level to be automatically estimated (not just predicted), unbiased - for all estimators and at no extra cost to the user....
Energy Technology Data Exchange (ETDEWEB)
Shin, Y.W.; Wiedermann, A.H.
1984-02-01
A method was published, based on the integral method of characteristics, by which the junction and boundary conditions needed in the computation of flow in a piping network can be accurately formulated. The method for formulating the junction and boundary conditions, together with the two-step Lax-Wendroff scheme, is used in a computer program; the program, in turn, is used here to calculate sample problems related to the blowdown transient of a two-phase flow in the piping network downstream of a PWR pressurizer. Independent, nearly exact analytical solutions are also obtained for the sample problems. Comparison of the results obtained by the hybrid numerical technique with the analytical solutions showed generally good agreement. The good numerical accuracy shown by our scheme's results suggests that the hybrid numerical technique is suitable for both benchmark and design calculations of PWR pressurizer blowdown transients.
da Jornada, Felipe H.; Qiu, Diana Y.; Louie, Steven G.
2017-01-01
First-principles calculations based on many-electron perturbation theory methods, such as the ab initio GW and GW plus Bethe-Salpeter equation (GW-BSE) approach, are reliable ways to predict quasiparticle and optical properties of materials, respectively. However, these methods involve more care in treating the electron-electron interaction and are considerably more computationally demanding when applied to systems with reduced dimensionality, since the electronic confinement leads to a slower convergence of sums over the Brillouin zone due to a much more complicated screening environment that manifests in the "head" and "neck" elements of the dielectric matrix. Here we present two schemes to sample the Brillouin zone for GW and GW-BSE calculations: the nonuniform neck subsampling method and the clustered sampling interpolation method, which can respectively be used for a family of single-particle problems, such as GW calculations, and for problems involving the scattering of two-particle states, such as when solving the BSE. We tested these methods on several few-layer semiconductors and graphene and show that they perform a much more efficient sampling of the Brillouin zone and yield two to three orders of magnitude reduction in the computer time. These two methods can be readily incorporated into several ab initio packages that compute electronic and optical properties through the GW and GW-BSE approaches.
Homeopathy: statistical significance versus the sample size in experiments with Toxoplasma gondii
Directory of Open Access Journals (Sweden)
Ana Lúcia Falavigna Guilherme
2011-09-01
, examined in its full length. This study was approved by the Ethics Committee for animal experimentation of the UEM - Protocol 036/2009. The data were compared using the Mann-Whitney and bootstrap [7] tests with the statistical software BioStat 5.0. Results and discussion: There was no significant difference when analyzed with the Mann-Whitney test, even multiplying the "n" ten times (p = 0.0618). The number of cysts observed in the BIOT 200DH group was 4.5 ± 3.3 and 12.8 ± 9.7 in the CONTROL group. Table 1 shows the results obtained using the bootstrap analysis for each data set enlarged from 2n to 2n+5, with the respective p-values. With the inclusion of more elements in the different groups, tested one by one, randomly, gradually increasing the samples, we observed the sample size needed to statistically confirm the results seen experimentally. Using 17 mice in the BIOT 200DH group and 19 in the CONTROL group, we already observed statistical significance. This result suggests that experiments involving highly diluted substances and infection of mice with T. gondii should work with experimental groups of at least 17 animals. Despite the current and relevant ethical discussions about the number of animals used for experimental procedures, the number of animals involved in each experiment must meet the characteristics of each item to be studied. In the case of experiments involving highly diluted substances, experimental animal models are still rudimentary and the biological effects observed appear to be individualized, as described in the literature for homeopathy [8]. The fact that statistical significance was achieved by increasing the sample observed in this trial points to a rare event with strong individual behavior, difficult to demonstrate in a result set treated simply with a comparison of means or medians. Conclusion: Bootstrap seems to be an interesting methodology for the analysis of data obtained from experiments with highly diluted
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
Studies were conducted on specific core collections constructed on the basis of different traits and sample sizes by the method of stepwise clustering with three sampling strategies based on genotypic values of cotton. A total of 21 traits (11 agronomy traits, 5 fiber traits and 5 seed traits) were used to construct the main core collection. Specific core collections, as representatives of the initial collection, were constructed by agronomy, fiber or seed traits, respectively. Compared with the main core collection, the specific core collections tended to have similar properties for maintaining the genetic diversity of agronomy, seed or fiber traits. Core collections developed at sample sizes of about 17% (P2 = 0.17) and 24% (P1 = 0.24) with the three sampling strategies could be quite representative of the initial collection.
Directory of Open Access Journals (Sweden)
L. Luquot
2015-11-01
Full Text Available The aim of this study is to compare the structural, geometrical and transport parameters of a limestone rock sample determined by X-ray microtomography (XMT) images and laboratory experiments. Total and effective porosity, surface-to-volume ratio, pore size distribution, permeability, tortuosity and effective diffusion coefficient have been estimated. Sensitivity analyses of the segmentation parameters have been performed. The limestone rock sample studied here has been characterized using both approaches before and after a reactive percolation experiment. A strong dissolution process occurred during the percolation, promoting wormhole formation. The strong heterogeneity formed after the percolation step allows us to apply our methodology to two different samples and to weigh the use of experimental techniques or XMT images depending on rock heterogeneity. We established that, for most of the parameters calculated here, the values obtained by computing XMT images agree with the classical laboratory measurements. We demonstrated that the computational porosity is more informative than the laboratory one. We observed that the pore size distributions obtained by XMT images and laboratory experiments are slightly different but complementary. Regarding the effective diffusion coefficient, we concluded that both approaches are valuable and give similar results. Nevertheless, we conclude that computing XMT images to determine transport, geometrical and petrophysical parameters provides results similar to those measured in the laboratory, but within much shorter durations.
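Two of the image-derived parameters above — total porosity and the surface-to-volume ratio — can be computed directly from a segmented (binary) XMT volume. The sketch below uses a random toy volume and an assumed voxel size rather than real reconstructed data, and counts only interior pore-solid voxel faces.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy segmented XMT volume: True = pore voxel, False = solid.
# A real volume would come from thresholding reconstructed slices,
# which is exactly where the segmentation sensitivity analysis matters.
vol = rng.random((40, 40, 40)) < 0.2
voxel = 5.0e-6  # assumed 5 micrometre voxel edge, in metres

porosity = vol.mean()  # pore-voxel fraction of the whole volume

# Pore-solid interface: count face-adjacent (pore, solid) voxel pairs
# along each axis (boundary faces of the volume are ignored).
faces = 0
for axis in range(3):
    a = np.swapaxes(vol, 0, axis)
    faces += np.sum(a[:-1] != a[1:])

surface = faces * voxel ** 2
pore_volume = vol.sum() * voxel ** 3
surface_to_volume = surface / pore_volume

print(round(porosity, 3), surface_to_volume)
```

A tortuosity or effective-diffusion estimate would additionally need a transport solve (e.g. a random-walk or Laplace solver) on the same segmented grid, which is beyond this sketch.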
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
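The approximate Bayesian computation core of such a method — simulate under parameters drawn from the prior, then keep the draws whose summary statistics fall within a tolerance of the observed ones — can be sketched in toy form. This is not PopSizeABC: the stand-in model, the mutation factor, and the single summary statistic below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ABC rejection sketch: infer a constant effective population size N
# from one diversity-like summary, under an invented stand-in model in
# which the number of segregating sites ~ Poisson(MU_FACTOR * N).
MU_FACTOR = 4e-3          # invented scaling between N and the summary
observed_sites = 400      # pretend this was computed from real genomes

n_sims = 100_000
prior_N = rng.uniform(1e4, 5e5, n_sims)       # flat prior on N
sim_sites = rng.poisson(MU_FACTOR * prior_N)  # one simulation per prior draw

tolerance = 10
accepted = prior_N[np.abs(sim_sites - observed_sites) <= tolerance]
posterior_mean = accepted.mean()
print(int(posterior_mean))  # close to 400 / 4e-3 = 1e5
```

PopSizeABC applies the same accept/reject logic with a piecewise-constant size history and a vector of summaries (folded AFS and binned LD), so the posterior lives over a trajectory of sizes rather than a single N.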
Method to study sample object size limit of small-angle x-ray scattering computed tomography
Choi, Mina; Ghammraoui, Bahaa; Badal, Andreu; Badano, Aldo
2016-03-01
Small-angle x-ray scattering (SAXS) imaging is an emerging medical tool that can be used for detailed in vivo tissue characterization and has the potential to provide added contrast to conventional x-ray projection and CT imaging. We used the publicly available MC-GPU code to simulate x-ray trajectories in a SAXS-CT geometry for a target material embedded in a water background, with varying sample sizes (1, 3, 5, and 10 mm). Our target materials were a water solution of gold nanoparticle (GNP) spheres with a radius of 6 nm and a water solution of dissolved bovine serum albumin (BSA) proteins, chosen for their well-characterized small-angle scatter profiles and highly scattering properties. Our objective was to study how the reconstructed scatter profile degrades at larger target imaging depths and increasing sample sizes. We found that scatter profiles of the GNP in water can still be reconstructed at depths up to 5 mm when embedded at the center of a 10 mm sample. Scatter profiles of BSA in water were also reconstructed at depths up to 5 mm in a 10 mm sample, but with noticeable signal degradation compared to the GNP sample. This work presents a method to study the sample size limits of future SAXS-CT imaging systems.
Energy Technology Data Exchange (ETDEWEB)
Rusin, Tiago; Rebello, Wilson F.; Vellozo, Sergio O.; Gomes, Renato G., E-mail: tiagorusin@ime.eb.b, E-mail: rebello@ime.eb.b, E-mail: vellozo@cbpf.b, E-mail: renatoguedes@ime.eb.b [Instituto Militar de Engenharia (IME), Rio de Janeiro, RJ (Brazil). Dept. de Engenharia Nuclear; Vital, Helio C., E-mail: vital@ctex.eb.b [Centro Tecnologico do Exercito (CTEx), Rio de Janeiro, RJ (Brazil); Silva, Ademir X., E-mail: ademir@con.ufrj.b [Universidade Federal do Rio de Janeiro (PEN/COPPE/UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear
2011-07-01
A cavity-type cesium-137 research irradiating facility at CTEx has been modeled by using the Monte Carlo code MCNPX. The irradiator has been daily used in experiments to optimize the use of ionizing radiation for conservation of many kinds of food and to improve materials properties. In order to correlate the effects of the treatment, average doses have been calculated for each irradiated sample, accounting for the measured dose rate distribution in the irradiating chambers. However that approach is only approximate, being subject to significant systematic errors due to the heterogeneous internal structure of most samples that can lead to large anisotropy in attenuation and Compton scattering properties across the media. Thus this work is aimed at further investigating such uncertainties by calculating the dose rate distribution inside the items treated such that a more accurate and representative estimate of the total absorbed dose can be determined for later use in the effects-versus-dose correlation curves. Samples of different simplified geometries and densities (spheres, cylinders, and parallelepipeds), have been modeled to evaluate internal dose rate distributions within the volume of the samples and the overall effect on the average dose. (author)
The accuracy of instrumental neutron activation analysis of kilogram-size inhomogeneous samples.
Blaauw, M; Lakmaker, O; van Aller, P
1997-07-01
The feasibility of quantitative instrumental neutron activation analysis (INAA) of samples in the kilogram range without internal standardization has been demonstrated by Overwater et al. (Anal. Chem. 1996, 68, 341). In their studies, however, they demonstrated only the agreement between the "corrected" γ ray spectrum of homogeneous large samples and that of small samples of the same material. In this paper, the k(0) calibration of the IRI facilities for large samples is described, and, this time in terms of (trace) element concentrations, some of Overwater's results for homogeneous materials are presented again, as well as results obtained from inhomogeneous materials and subsamples thereof. It is concluded that large-sample INAA can be as accurate as ordinary INAA, even when applied to inhomogeneous materials.
Roeder, Felix; Wachtlin, Daniel; Schulze, Ralf
2012-06-01
The availability of cone beam computed tomography (CBCT) and the number of CBCT scans rise constantly, increasing the radiation burden to the patient. There is growing discussion of whether a CBCT scan prior to the surgical removal of wisdom teeth is indicated. We aimed to confirm non-inferiority with respect to damage of the inferior alveolar nerve in patients diagnosed by panoramic radiography compared to CBCT in a prospective randomized controlled multicentre trial. The sample size (number of required third molar removals) was calculated for the study and control groups as 183,474 for comparing temporary, and 649,036 for comparing permanent, neurosensory disturbances of the inferior alveolar nerve. Modifying parameter values resulted in sample sizes ranging from 39,584 to 245,724 and from 140,024 to 869,250, respectively. Conducting a clinical study to prove a potential benefit of CBCT scans prior to surgical removal of lower wisdom teeth with respect to the most important parameter, i.e., nerve damage, is almost impossible due to the very large sample sizes required. This fact conversely indicates that CBCT scans should only be performed for high-risk wisdom tooth removals.
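The kind of arithmetic that produces such enormous sample sizes can be illustrated with a textbook normal-approximation formula for a non-inferiority comparison of two proportions; this is not necessarily the exact calculation the authors used, and the event rates and margin below are invented. Rare outcomes and tight margins drive the denominator toward zero, so n explodes.

```python
import math

def ni_sample_size(p1, p2, margin, ):
    """Per-group n for a one-sided non-inferiority test of two proportions
    (normal approximation, alpha = 0.05 one-sided, 80% power)."""
    z_a = 1.6448536269514722   # z_{0.95}
    z_b = 0.8416212335729143   # z_{0.80}
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (margin - (p1 - p2)) ** 2)

# Invented example: permanent nerve damage at ~0.5% in both arms,
# non-inferiority margin of 0.25 percentage points
print(ni_sample_size(0.005, 0.005, 0.0025))  # → 9843 per group
```

Halving the margin quadruples n, and margins must be tight when the event itself is rare, which is the mechanism behind the six-figure totals quoted in the abstract.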
Jiang, Wei; Luo, Yun; Maragliano, Luca; Roux, Benoît
2012-11-13
An extremely scalable computational strategy is described for calculations of the potential of mean force (PMF) in multidimensions on massively distributed supercomputers. The approach involves coupling thousands of umbrella sampling (US) simulation windows distributed to cover the space of order parameters with a Hamiltonian molecular dynamics replica-exchange (H-REMD) algorithm to enhance the sampling of each simulation. In the present application, US/H-REMD is carried out in a two-dimensional (2D) space and exchanges are attempted alternatively along the two axes corresponding to the two order parameters. The US/H-REMD strategy is implemented on the basis of parallel/parallel multiple copy protocol at the MPI level, and therefore can fully exploit computing power of large-scale supercomputers. Here the novel technique is illustrated using the leadership supercomputer IBM Blue Gene/P with an application to a typical biomolecular calculation of general interest, namely the binding of calcium ions to the small protein Calbindin D9k. The free energy landscape associated with two order parameters, the distance between the ion and its binding pocket and the root-mean-square deviation (rmsd) of the binding pocket relative the crystal structure, was calculated using the US/H-REMD method. The results are then used to estimate the absolute binding free energy of calcium ion to Calbindin D9k. The tests demonstrate that the 2D US/H-REMD scheme greatly accelerates the configurational sampling of the binding pocket, thereby improving the convergence of the potential of mean force calculation.
Algina, James; Keselman, H. J.
2008-01-01
Applications of distribution theory for the squared multiple correlation coefficient and the squared cross-validation coefficient are reviewed, and computer programs for these applications are made available. The applications include confidence intervals, hypothesis testing, and sample size selection. (Contains 2 tables.)
van Rijnsoever, F.J.
2015-01-01
This paper explores the sample size in qualitative research that is required to reach theoretical saturation. I conceptualize a population as consisting of sub-populations that contain different types of information sources that hold a number of codes. Theoretical saturation is reached after all the
Meyer, J. Patrick; Seaman, Michael A.
2013-01-01
The authors generated exact probability distributions for sample sizes up to 35 in each of three groups ("n" less than or equal to 105) and up to 10 in each of four groups ("n" less than or equal to 40). They compared the exact distributions to the chi-square, gamma, and beta approximations. The beta approximation was best in…
Heymann, D.; Lakatos, S.; Walton, J. R.
1973-01-01
Review of the results of inert gas measurements performed on six grain-size fractions and two single particles from four samples of Luna 20 material. Presented and discussed data include the inert gas contents, element and isotope systematics, radiation ages, and Ar-36/Ar-40 systematics.
Study and Calculation of Feeding Size for R-pendulum Mill
Institute of Scientific and Technical Information of China (English)
谭开强
2013-01-01
This paper studies and derives a formula for calculating the feed size of the R-pendulum mill, calculates the feed size under working conditions, and reasonably determines the maximum feed size of the R-pendulum mill.
Does size matter? An investigation into the Rey Complex Figure in a pediatric clinical sample.
Loughan, Ashlee R; Perna, Robert B; Galbreath, Jennifer D
2014-01-01
The Rey Complex Figure Test (RCF) copy requires visuoconstructional skills and significant attentional, organizational, and problem-solving skills. Most scoring schemes codify a subset of the details involved in figure construction. Research is unclear regarding the meaning of figure size. The research hypothesis of our inquiry is that size of the RCF copy will have neuropsychological significance. Data from 95 children (43 girls, 52 boys; ages 6-18 years) with behavioral and academic issues revealed that larger figure drawings were associated with higher RCF total scores and significantly higher scores across many neuropsychological tests including the Wechsler Individual Achievement Test-Second Edition (WIAT-II) Word Reading (F = 5.448, p = .022), WIAT-II Math Reasoning (F = 6.365, p = .013), Children's Memory Scale Visual Delay (F = 4.015, p = .048), Trail-Making Test-Part A (F = 5.448, p = .022), and RCF Recognition (F = 4.862, p = .030). Results indicated that wider figures were associated with higher cognitive functioning, which may be part of an adaptive strategy in helping facilitate accurate and relative proportions of the complex details presented in the RCF. Overall, this study initiates the investigation of the RCF size and the relationship between size and a child's neuropsychological profile.
Energy Technology Data Exchange (ETDEWEB)
Mulet, R.; Diaz, O.; Altshuler, E. [Superconductivity Laboratory, IMRE-Physics Faculty, University of Havana, La Habana (Cuba)
1997-10-01
The percolative character of the current paths and the self-field effects were considered to estimate optimal sample dimensions for the transport current of a granular superconductor by means of a Monte Carlo algorithm and critical-state model calculations. We showed that, under certain conditions, self-field effects are negligible and the J_c dependence on sample dimensions is determined by the percolative character of the current. Optimal dimensions are demonstrated to be a function of the fraction of superconducting phase in the sample. (author)
Ellison, Laura E.; Lukacs, Paul M.
2014-01-01
Concern for migratory tree-roosting bats in North America has grown because of possible population declines from wind energy development. This concern has driven interest in estimating population-level changes. Mark-recapture methodology is one possible analytical framework for assessing bat population changes, but the sample sizes required to produce reliable estimates have not been determined. To illustrate the sample sizes necessary for a mark-recapture-based monitoring program, we conducted power analyses using a statistical model that allows reencounters of live and dead marked individuals. We ran 1,000 simulations for each of five broad sample size categories in a Burnham joint model, and then compared the proportion of simulations in which 95% confidence intervals overlapped between and among years for a 4-year study. Additionally, we conducted sensitivity analyses of sample size to various capture and recovery probabilities. More than 50,000 individuals per year would need to be captured and released to accurately detect 10% and 15% declines in annual survival. To detect more dramatic declines of 33% or 50% in survival over four years, sample sizes of 25,000 or 10,000 per year, respectively, would be sufficient. Sensitivity analyses revealed that increasing recovery of dead marked individuals may be more valuable than increasing the capture probability of marked individuals. Because of the extraordinary effort that would be required, we advise caution should such a mark-recapture effort be initiated, given the difficulty of attaining reliable estimates. We make recommendations for the techniques that show the most promise for mark-recapture studies of bats, because some techniques violate the assumptions of mark-recapture methodology when used to mark bats.
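The CI-overlap detection criterion used in such simulations can be sketched for a single pair of annual survival estimates. The marked-animal counts and survival rates below are invented, and a simple Wald interval on a binomial proportion stands in for the Burnham joint-model machinery.

```python
import numpy as np

rng = np.random.default_rng(7)

def ci_overlap_power(n_marked, s1, s2, n_sims=2000):
    """Fraction of simulations in which the 95% Wald CIs for two annual
    survival estimates do NOT overlap -- a crude 'decline detected'
    criterion echoing the CI-overlap comparison in the study."""
    detected = 0
    for _ in range(n_sims):
        intervals = []
        for s in (s1, s2):
            k = rng.binomial(n_marked, s)          # survivors among marked animals
            p = k / n_marked
            half = 1.96 * np.sqrt(p * (1 - p) / n_marked)
            intervals.append((p - half, p + half))
        (l1, h1), (l2, h2) = intervals
        if h2 < l1 or h1 < l2:                     # intervals disjoint
            detected += 1
    return detected / n_sims

# A 50% drop in survival (0.8 -> 0.4) versus a 10% drop (0.8 -> 0.72),
# each with 200 marked individuals per year:
print(ci_overlap_power(200, 0.8, 0.4), ci_overlap_power(200, 0.8, 0.72))
```

The steep power difference between the two scenarios, even at a fixed n, is the qualitative pattern behind the abstract's conclusion that small annual declines demand tens of thousands of marked individuals.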
Use of High-Frequency In-Home Monitoring Data May Reduce Sample Sizes Needed in Clinical Trials.
Directory of Open Access Journals (Sweden)
Hiroko H Dodge
Full Text Available Trials in Alzheimer's disease are increasingly focusing on prevention in asymptomatic individuals. This poses a challenge in examining treatment effects since currently available approaches are often unable to detect cognitive and functional changes among asymptomatic individuals. Resultant small effect sizes require large sample sizes using biomarkers or secondary measures for randomized controlled trials (RCTs). Better assessment approaches and outcomes capable of capturing subtle changes during asymptomatic disease stages are needed. We aimed to develop a new approach to track changes in functional outcomes by using individual-specific distributions (as opposed to group norms) of unobtrusive, continuously monitored in-home data. Our objective was to compare sample sizes required to achieve sufficient power to detect prevention trial effects in trajectories of outcomes in two scenarios: (1) annually assessed neuropsychological test scores (a conventional approach), and (2) the likelihood of having subject-specific low performance thresholds, both modeled as a function of time. One hundred nineteen cognitively intact subjects were enrolled and followed over 3 years in the Intelligent Systems for Assessing Aging Change (ISAAC) study. Using the difference in empirically identified time slopes between those who remained cognitively intact during follow-up (normal control, NC) and those who transitioned to mild cognitive impairment (MCI), we estimated comparative sample sizes required to achieve up to 80% statistical power over a range of effect sizes for detecting reductions in the difference in time slopes between NC and MCI incidence before transition. Sample size estimates indicated approximately 2000 subjects with a follow-up duration of 4 years would be needed to achieve a 30% effect size when the outcome is an annually assessed memory test score. When the outcome is likelihood of low walking speed defined using the individual-specific distributions of
RISK-ASSESSMENT PROCEDURES AND ESTABLISHING THE SIZE OF SAMPLES FOR AUDITING FINANCIAL STATEMENTS
Directory of Open Access Journals (Sweden)
Daniel Botez
2014-12-01
Full Text Available In auditing financial statements, the procedures for assessing risks and calculating materiality differ from one auditor to another, according to audit firm policy or the guidance of professional bodies. All, however, refer to the International Standards on Auditing: ISA 315, "Identifying and assessing the risks of material misstatement through understanding the entity and its environment", and ISA 320, "Materiality in planning and performing an audit". Drawing on the specific practices of auditors in Romania, the article elaborates on these aspects and provides examples. Considerations are presented on the evaluation of general inherent risk, specific inherent risk, and control risk, and on the calculation of materiality.
Sample Size Effect of Magnetomechanical Response for Magnetic Elastomers by Using Permanent Magnets
Directory of Open Access Journals (Sweden)
Tsubasa Oguro
2017-01-01
Full Text Available The size effect of the magnetomechanical response of chemically cross-linked, disk-shaped magnetic elastomers placed on a permanent magnet has been investigated by unidirectional compression tests. A cylindrical permanent magnet 35 mm in diameter and 15 mm in height was used to create the magnetic field. The magnetic field strength was approximately 420 mT at the center of the upper surface of the magnet. The diameter of the magnetoelastic polymer disks was varied from 14 mm to 35 mm, whereas the height was kept constant (5 mm in the undeformed state). We have studied the influence of the disk diameter on the stress-strain behavior of the magnetic elastomers in the presence and absence of the magnetic field. It was found that the smallest magnetic elastomer, with a 14 mm diameter, did not exhibit a measurable magnetomechanical response to the magnetic field. By contrast, the magnetic elastomers with diameters larger than 30 mm contracted in the direction parallel to the mechanical stress and elongated considerably in the perpendicular direction. An explanation is put forward to interpret this size-dependent behavior by taking into account the nonuniform distribution of the magnetic field produced by the permanent magnet.
Alexander, Louise; Snape, Joshua F.; Joy, Katherine H.; Downes, Hilary; Crawford, Ian A.
2016-09-01
Lunar mare basalts provide insights into the compositional diversity of the Moon's interior. Basalt fragments from the lunar regolith can potentially sample lava flows from regions of the Moon not previously visited, thus, increasing our understanding of lunar geological evolution. As part of a study of basaltic diversity at the Apollo 12 landing site, detailed petrological and geochemical data are provided here for 13 basaltic chips. In addition to bulk chemistry, we have analyzed the major, minor, and trace element chemistry of mineral phases which highlight differences between basalt groups. Where samples contain olivine, the equilibrium parent melt magnesium number (Mg#; atomic Mg/[Mg + Fe]) can be calculated to estimate parent melt composition. Ilmenite and plagioclase chemistry can also determine differences between basalt groups. We conclude that samples of approximately 1-2 mm in size can be categorized provided that appropriate mineral phases (olivine, plagioclase, and ilmenite) are present. Where samples are fine-grained (grain size lava flow diversity and petrological significance.
Energy Technology Data Exchange (ETDEWEB)
Pan, Bo; Shibutani, Yoji, E-mail: sibutani@mech.eng.osaka-u.ac.jp [Department of Mechanical Engineering, Osaka University, Suita 565-0871 (Japan); Zhang, Xu [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China); School of Mechanics and Engineering Science, Zhengzhou University, Zhengzhou 450001 (China); Shang, Fulin [State Key Laboratory for Strength and Vibration of Mechanical Structures, School of Aerospace, Xi' an Jiaotong University, Xi' an 710049 (China)
2015-07-07
Recent research has explained that the steeply increasing yield strength in metals depends on decreasing sample size. In this work, we derive a statistical physical model of the yield strength of finite single-crystal micro-pillars that depends on single-ended dislocation pile-up inside the micro-pillars. We show that this size effect can be explained almost completely by considering the stochastic lengths of the dislocation source and the dislocation pile-up length in the single-crystal micro-pillars. The Hall–Petch-type relation holds even in a microscale single-crystal, which is characterized by its dislocation source lengths. Our quantitative conclusions suggest that the number of dislocation sources and pile-ups are significant factors for the size effect. They also indicate that starvation of dislocation sources is another reason for the size effect. Moreover, we investigated the explicit relationship between the stacking fault energy and the dislocation “pile-up” effect inside the sample: materials with low stacking fault energy exhibit an obvious dislocation pile-up effect. Our proposed physical model predicts a sample strength that agrees well with experimental data, and our model can give a more precise prediction than the current single arm source model, especially for materials with low stacking fault energy.
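The role of stochastic source lengths can be caricatured with a weakest-link toy model (this is not the authors' statistical physical model): each pillar holds a few dislocation sources with random lengths bounded by the sample size, the operating stress scales inversely with the longest available source, and smaller samples therefore test stronger on average. All constants and names here are illustrative.

```python
import random

def expected_strength(sample_height, n_sources, ntrial=4000, seed=4):
    """Toy weakest-link picture of the micro-pillar size effect: each
    pillar contains n_sources dislocation sources with random lengths
    up to sample_height; the operating stress is set by the *longest*
    source (tau ~ 1/length), so smaller samples -> shorter sources ->
    higher strength. Units and constants are arbitrary."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(ntrial):
        longest = max(rng.uniform(0.0, sample_height)
                      for _ in range(n_sources))
        total += 1.0 / longest          # stress to operate easiest source
    return total / ntrial
```

Halving the sample size doubles the mean strength in this toy, while adding sources (easier nucleation) weakens the pillar, echoing the source-starvation argument in the abstract.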
Usami, Satoshi
2014-12-01
Recent years have shown increased awareness of the importance of sample size determination in experimental research. Yet effective and convenient methods for sample size determination, especially in longitudinal experimental design, are still under development, and the application of power analysis in applied research remains limited. This article presents a convenient method for sample size determination in longitudinal experimental research using a multilevel model. A fundamental idea of this method is the transformation of model parameters (level 1 error variance [σ(2)], level 2 error variances [τ 00, τ 11] and their covariance [τ 01, τ 10], and a parameter representing the experimental effect [δ]) into indices (reliability of measurement at the first time point [ρ 1], effect size at the last time point [Δ T ], proportion of variance of outcomes between the first and the last time points [k], and level 2 error correlation [r]) that are intuitively understandable and easily specified. To foster more convenient use of power analysis, numerical tables referring to ANOVA results are constructed to investigate the influence of the respective indices on statistical power.
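For orientation, the effect size at the last time point [Δ T ] can be tied to a familiar closed-form sample size. The crude two-sample z-approximation below ignores the multilevel structure entirely (so it is not the article's method), but it shows the kind of power arithmetic that the numerical tables replace.

```python
import math
from statistics import NormalDist

def n_per_group(delta, alpha=0.05, power=0.80):
    """Per-group n for a two-sample z-test to detect a standardized
    mean difference `delta` -- a cross-sectional approximation only,
    with no longitudinal or multilevel structure."""
    z = NormalDist().inv_cdf
    n = 2 * (z(1 - alpha / 2) + z(power)) ** 2 / delta ** 2
    return math.ceil(n)
```

A medium effect (delta = 0.5) needs about 63 subjects per group under this approximation; longitudinal designs with reliable repeated measures generally need fewer.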
Szymańska, Ewa; Brodrick, Emma; Williams, Mark; Davies, Antony N; van Manen, Henk-Jan; Buydens, Lutgarde M C
2015-01-20
Ion mobility spectrometry combined with multicapillary column separation (MCC-IMS) is a well-known technology for detecting volatile organic compounds (VOCs) in gaseous samples. Due to their large data size, processing of MCC-IMS spectra is still the main bottleneck of data analysis, and there is an increasing need for data analysis strategies in which the size of MCC-IMS data is reduced to enable further analysis. In our study, the first untargeted chemometric strategy is developed and employed in the analysis of MCC-IMS spectra from 264 breath and ambient air samples. This strategy does not comprise identification of compounds as a primary step but includes several preprocessing steps and a discriminant analysis. Data size is significantly reduced in three steps. Wavelet transform, mask construction, and sparse-partial least squares-discriminant analysis (s-PLS-DA) allow data size reduction with down to 50 variables relevant to the goal of analysis. The influence and compatibility of the data reduction tools are studied by applying different settings of the developed strategy. Loss of information after preprocessing is evaluated, e.g., by comparing the performance of classification models for different classes of samples. Finally, the interpretability of the classification models is evaluated, and regions of spectra that are related to the identification of potential analytical biomarkers are successfully determined. This work will greatly enable the standardization of analytical procedures across different instrumentation types promoting the adoption of MCC-IMS technology in a wide range of diverse application fields.
Tsai, Shan-Ho; Wang, Fugao; Landau, D P
2007-06-01
Using the Wang-Landau sampling method with a two-dimensional random walk we determine the density of states for an asymmetric Ising model with two- and three-body interactions on a triangular lattice, in the presence of an external field. With an accurate density of states we were able to map out the phase diagram accurately and perform quantitative finite-size analyses at, and away from, the critical endpoint. We observe a clear divergence of the curvature of the spectator phase boundary and of the magnetization coexistence diameter derivative at the critical endpoint, and the exponents for both divergences agree well with previous theoretical predictions.
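The flavour of Wang-Landau sampling can be conveyed with a miniature version. The sketch below estimates ln g(E) for a tiny ferromagnetic Ising model with periodic boundaries rather than the asymmetric two- and three-body model of the abstract, and it replaces the usual histogram-flatness test with a fixed number of updates per modification-factor stage; both simplifications are assumptions of this sketch.

```python
import math
import random

def wang_landau_ising(L=2, f_final=1e-4, updates_per_stage=8000, seed=2):
    """Wang-Landau estimate of the log density of states ln g(E) for an
    L x L ferromagnetic Ising model with periodic boundary conditions."""
    rng = random.Random(seed)
    N = L * L
    spins = [rng.choice((-1, 1)) for _ in range(N)]

    def site_energy(i):
        x, y = divmod(i, L)
        nb = (((x + 1) % L) * L + y, ((x - 1) % L) * L + y,
              x * L + (y + 1) % L, x * L + (y - 1) % L)
        return -spins[i] * sum(spins[j] for j in nb)

    E = sum(site_energy(i) for i in range(N)) // 2   # each bond counted twice
    lng = {E: 0.0}                                   # running ln g(E)
    lnf = 1.0                                        # ln(modification factor)
    while lnf > f_final:
        for _ in range(updates_per_stage):
            i = rng.randrange(N)
            Enew = E - 2 * site_energy(i)            # energy after flipping i
            # accept with probability min(1, g(E) / g(Enew))
            if math.log(rng.random()) < lng.get(E, 0.0) - lng.get(Enew, 0.0):
                spins[i] = -spins[i]
                E = Enew
            lng[E] = lng.get(E, 0.0) + lnf           # update visited level
        lnf /= 2.0                                   # shrink factor per stage
    return lng
```

For the 2 x 2 lattice the exact ratio g(0)/g(±8) is 12/2 = 6, which the estimate should reproduce approximately; the ratio is normalization-free, so no reference level is needed.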
Early detection of nonnative alleles in fish populations: When sample size actually matters
Croce, Patrick Della; Poole, Geoffrey C.; Payne, Robert A.; Gresswell, Bob
2017-01-01
Reliable detection of nonnative alleles is crucial for the conservation of sensitive native fish populations at risk of introgression. Typically, nonnative alleles in a population are detected through the analysis of genetic markers in a sample of individuals. Here we show that common assumptions associated with such analyses yield substantial overestimates of the likelihood of detecting nonnative alleles. We present a revised equation to estimate the likelihood of detecting nonnative alleles in a population with a given level of admixture. The new equation incorporates the effects of the genotypic structure of the sampled population and shows that conventional methods overestimate the likelihood of detection, especially when nonnative or F-1 hybrid individuals are present. Under such circumstances—which are typical of early stages of introgression and therefore most important for conservation efforts—our results show that improved detection of nonnative alleles arises primarily from increasing the number of individuals sampled rather than increasing the number of genetic markers analyzed. Using the revised equation, we describe a new approach to determining the number of individuals to sample and the number of diagnostic markers to analyze when attempting to monitor the arrival of nonnative alleles in native populations.
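Under the conventional (and, per the abstract, over-optimistic) assumption that every sampled allele copy is independently nonnative, the detection probability has a simple closed form. The sketch below implements only that conventional baseline, not the paper's revised equation, which conditions on genotypic structure and gives lower values when nonnative or F-1 individuals carry the nonnative alleles in blocks.

```python
def p_detect_conventional(n_individuals, n_markers, q):
    """P(detect >= 1 nonnative allele) when each of the 2 * n * m
    sampled allele copies is independently nonnative with frequency q.
    This independence assumption overstates detection in the early
    stages of introgression."""
    copies = 2 * n_individuals * n_markers   # diploid: 2 copies per locus
    return 1.0 - (1.0 - q) ** copies
```

With 30 fish, 5 diagnostic markers, and 1% admixture, the conventional formula already claims roughly 95% detection, which illustrates why it overestimates early-stage sensitivity.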
Amano, Ken-ich
2013-01-01
Recent frequency-modulated atomic force microscopy (FM-AFM) can measure the three-dimensional force distribution between a probe and a sample surface in liquid. This force distribution is, at present, assumed to represent the solvation structure on the sample surface, because the force distribution and the solvation structure have somewhat similar shapes. However, the force distribution is not exactly the solvation structure. If we wish to obtain the solvation structure using liquid AFM, a method for transforming the force distribution into the solvation structure is necessary. Therefore, in this letter, we briefly present such a transformation method. We refer to this method as the solution of an inverse problem, because in the usual calculation process the solvation structure is given first and the force distribution is obtained from it. The method is formulated (mainly) within the statistical mechanics of liquids.
In situ detection of small-size insect pests sampled on traps using multifractal analysis
Xia, Chunlei; Lee, Jang-Myung; Li, Yan; Chung, Bu-Keun; Chon, Tae-Soo
2012-02-01
We introduce a multifractal analysis for detecting small-size pests (e.g., whiteflies) in images of a sticky trap in situ. An automatic attraction system is utilized for collecting pests from greenhouse plants. We applied multifractal analysis to segment whitefly images based on local singularity and global image characteristics. According to the theory of multifractal dimension, candidate whitefly blobs are initially defined from the sticky-trap image. Two schemes, fixed thresholding and regional minima detection, were utilized for feature extraction of candidate whitefly image areas. The experiment was conducted with field images in a greenhouse. Detection results were compared with other adaptive segmentation algorithms. F-measure scores, combining precision and recall, were higher for the proposed multifractal analysis (96.5%) than for conventional methods such as Watershed (92.2%) and Otsu (73.1%). The true positive rate of the multifractal analysis was 94.3% and the false positive rate was a minimal 1.3%. Detection performance was further tested against human observation. Agreement between manual and automatic counting was markedly higher with multifractal analysis (R2=0.992) than with Watershed (R2=0.895) or Otsu (R2=0.353), confirming that overall detection of small-size pests is most feasible with multifractal analysis in field conditions.
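The F score quoted above is the usual harmonic mean of precision and recall; a minimal helper makes the metric explicit. The counts in the examples are illustrative, not the paper's raw data.

```python
def f_measure(tp, fp, fn):
    """F1 score: harmonic mean of precision and recall, computed from
    true positive, false positive, and false negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

A detector with 9 true positives, 1 false positive, and 1 missed target scores F1 = 0.9.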
Memory-Optimized Software Synthesis from Dataflow Program Graphs with Large Size Data Samples
Directory of Open Access Journals (Sweden)
Hyunok Oh
2003-05-01
Full Text Available In multimedia and graphics applications, data samples of nonprimitive type require a significant amount of buffer memory. This paper addresses the problem of minimizing the buffer memory requirement for such applications in embedded software synthesis from graphical dataflow programs based on the synchronous dataflow (SDF) model with a given execution order of nodes. We propose a memory minimization technique that separates global memory buffers from local pointer buffers: the global buffers store live data samples and the local buffers store pointers to the global buffer entries. The proposed algorithm reduces memory by 67% for a JPEG encoder and 40% for an H.263 encoder compared with unshared versions, and by 22% compared with the previous sharing algorithm for the H.263 encoder. Through extensive buffer sharing optimization, we believe that automatic software synthesis from dataflow program graphs achieves code quality comparable to manually optimized code in terms of memory requirement.
Second generation laser-heated microfurnace for the preparation of microgram-sized graphite samples
Yang, Bin; Smith, A. M.; Long, S.
2015-10-01
We present construction details and test results for two second-generation laser-heated microfurnaces (LHF-II) used to prepare graphite samples for Accelerator Mass Spectrometry (AMS) at ANSTO. Based on systematic studies aimed at optimising the performance of our prototype laser-heated microfurnace (LHF-I) (Smith et al., 2007 [1]; Smith et al., 2010 [2,3]; Yang et al., 2014 [4]), we have designed the LHF-II to have the following features: (i) it has a small reactor volume of 0.25 mL allowing us to completely graphitise carbon dioxide samples containing as little as 2 μg of C, (ii) it can operate over a large pressure range (0-3 bar) and so has the capacity to graphitise CO2 samples containing up to 100 μg of C; (iii) it is compact, with three valves integrated into the microfurnace body, (iv) it is compatible with our new miniaturised conventional graphitisation furnaces (MCF), also designed for small samples, and shares a common vacuum system. Early tests have shown that the extraneous carbon added during graphitisation in each LHF-II is of the order of 0.05 μg, assuming 100 pMC activity, similar to that of the prototype unit. We use a 'budget' fibre packaged array for the diode laser with custom built focusing optics. The use of a new infrared (IR) thermometer with a short focal length has allowed us to decrease the height of the light-proof safety enclosure. These innovations have produced a cheaper and more compact device. As with the LHF-I, feedback control of the catalyst temperature and logging of the reaction parameters is managed by a LabVIEW interface.
Basic distribution free identification tests for small size samples of environmental data
Energy Technology Data Exchange (ETDEWEB)
Federico, A.G.; Musmeci, F. [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente
1998-01-01
Testing two or more data sets for the hypothesis that they are sampled from the same population is often required in environmental data analysis. Typically the available samples contain a small number of data points, and the assumption of normal distributions is often not realistic. On the other hand, the diffusion of today's powerful personal computers opens new opportunities based on massive use of CPU resources. The paper reviews the problem and introduces two feasible nonparametric approaches based on the intrinsic equiprobability properties of the data samples. The first is based on full resampling, while the second is based on a bootstrap approach. An easy-to-use program is presented, together with a case study based on the Chernobyl children contamination data.
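A full-resampling (permutation) test of the kind described is short enough to sketch. The snippet below is a generic two-sample permutation test on the difference of means, not the ENEA program itself; it relies only on the equiprobability of relabelings of the pooled data under the null hypothesis.

```python
import random

def permutation_test(x, y, nperm=5000, seed=0):
    """Two-sample permutation test on the absolute difference of means.
    Distribution-free: under H0 (same population) every relabeling of
    the pooled data is equally probable."""
    rng = random.Random(seed)
    pooled = list(x) + list(y)
    nx = len(x)
    observed = abs(sum(x) / nx - sum(y) / len(y))
    hits = 0
    for _ in range(nperm):
        rng.shuffle(pooled)
        px, py = pooled[:nx], pooled[nx:]
        if abs(sum(px) / nx - sum(py) / len(py)) >= observed:
            hits += 1
    return (hits + 1) / (nperm + 1)
```

The returned p-value uses the standard (hits + 1)/(nperm + 1) correction, so it can never be exactly zero.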
Directory of Open Access Journals (Sweden)
Oum Keltoum Hakam
2015-09-01
Full Text Available Purpose: As a way of prevention, we have measured the activities of uranium and radium isotopes (234U, 238U, 226Ra, 228Ra) in 30 drinking water samples collected from 11 wells, 9 springs (6 hot and 3 cold), 3 commercialised mineral waters, and 7 tap water samples. Methods: Activities of the Ra isotopes were measured by ultra-gamma spectrometry using a low-background, high-efficiency well-type germanium detector. The U isotopes were counted in an alpha spectrometer. Results: The measured uranium and radium activities are similar to those published for other non-polluted regions of the world. Except in one commercialised gaseous water sample and in two hot spring water samples, the calculated effective doses over one year are below the reference level of 0.1 mSv/year recommended by the International Commission on Radiological Protection. Conclusion: These activities do not present any risk to public health in Morocco. The sparkling water of Oulmes is only occasionally consumed as table water, and the waters of warm springs are not used as main sources of drinking water.
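The annual committed effective dose behind the 0.1 mSv/year comparison is the product of activity concentration, annual water intake, and a nuclide-specific ingestion dose coefficient. In the sketch below the coefficients are quoted from memory as the ICRP Publication 72 adult values and the intake is the conventional 2 L/day; treat both as assumptions to verify before any real use.

```python
# Ingestion dose coefficients for adults, Sv/Bq (ICRP Publication 72
# values quoted from memory -- verify against the publication).
DCF = {"U-234": 4.9e-8, "U-238": 4.5e-8,
       "Ra-226": 2.8e-7, "Ra-228": 6.9e-7}

def annual_dose_mSv(conc_Bq_per_L, litres_per_year=730):
    """Committed effective dose (mSv per year) from drinking water with
    the given activity concentrations {nuclide: Bq/L}, assuming 2 L/day."""
    sv = sum(c * DCF[nuc] * litres_per_year
             for nuc, c in conc_Bq_per_L.items())
    return sv * 1000.0   # Sv -> mSv
```

For example, 0.1 Bq/L of 226Ra alone contributes about 0.02 mSv/year, well below the 0.1 mSv/year reference level cited in the abstract.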
Fang, J; Cui, L Y; Liu, M S; Guan, Y Z; Ding, Q Y; Du, H; Li, B H; Wu, S
2017-03-07
Objective: The study aimed to investigate whether sample sizes of F-wave study differed according to different nerves, different F-wave parameters, and amyotrophic lateral sclerosis(ALS) patients or healthy subjects. Methods: The F-waves in the median, ulnar, tibial, and deep peroneal nerves of 55 amyotrophic lateral sclerosis (ALS) patients and 52 healthy subjects were studied to assess the effect of sample size on the accuracy of measurements of the following F-wave parameters: F-wave minimum latency, maximum latency, mean latency, F-wave persistence, F-wave chronodispersion, mean and maximum F-wave amplitude. A hundred stimuli were used in F-wave study. The values obtained from 100 stimuli were considered "true" values and were compared with the corresponding values from smaller samples of 20, 40, 60 and 80 stimuli. F-wave parameters obtained from different sample sizes were compared between the ALS patients and the normal controls. Results: Significant differences were not detected with samples above 60 stimuli for chronodispersion in all four nerves in normal participants. Significant differences were not detected with samples above 40 stimuli for maximum F-wave amplitude in median, ulnar and tibial nerves in normal participants. When comparing ALS patients and normal controls, significant differences were detected in the maximum (median nerve, Z=-3.560, Pwave latency (median nerve, Z=-3.243, Pwave chronodispersion (Z=-3.152, Pwave persistence in the median (Z=6.139, Pwave amplitude in the tibial nerve(t=2.981, Pwave amplitude in the ulnar (Z=-2.134, Pwave persistence in tibial nerve (Z=2.119, Pwave amplitude in ulnar (Z=-2.552, Pwave amplitude in peroneal nerve (t=2.693, Pwave study differed according to different nerves, different F-wave parameters , and ALS patients or healthy subjects.
Sutor, Malinda M.; Dagg, Michael J.
2008-06-01
The effects of vertical sampling resolution on estimates of plankton biomass and grazing calculations were examined using data collected in two different areas with vertically stratified water columns. Data were collected from one site in the upwelling region off Oregon and from four sites in the Northern Gulf of Mexico, three within the Mississippi River plume and one in adjacent oceanic waters. Plankton were found to be concentrated in discrete layers with sharp vertical gradients at all the stations. Phytoplankton distributions were correlated with gradients in temperature and salinity, but microzooplankton and mesozooplankton distributions were not. Layers of zooplankton were sometimes collocated with layers of phytoplankton, but this was not always the case. Simulated calculations demonstrate that when averages are taken over the water column, or coarser scale vertical sampling resolution is used, biomass and mesozooplankton grazing and filtration rates can be greatly underestimated. This has important implications for understanding the ecological significance of discrete layers of plankton and for assessing rates of grazing and production in stratified water columns.
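The underestimation caused by coarse vertical averaging is easy to demonstrate numerically. The toy profile below (hypothetical numbers, not the cruise data) co-locates a thin phytoplankton layer with a zooplankton layer; a grazing-type index proportional to the local product phyto x zoo is several times larger when computed on the resolved profile than when built from water-column means.

```python
def grazing_index(phyto, zoo, dz=1.0):
    """Compare a depth-resolved product integral, sum(P_i * Z_i) * dz,
    with the same quantity computed from water-column mean
    concentrations (the coarse-resolution shortcut)."""
    n = len(phyto)
    resolved = sum(p * z for p, z in zip(phyto, zoo)) * dz
    averaged = (sum(phyto) / n) * (sum(zoo) / n) * n * dz
    return resolved, averaged

# Thin co-located layers in an otherwise empty 5-bin water column.
phyto = [0.0, 0.0, 10.0, 0.0, 0.0]
zoo = [0.0, 0.0, 8.0, 0.0, 0.0]
```

Here the resolved index is 80 while the averaged one is 16, a fivefold underestimate from averaging alone, which is the qualitative point of the abstract.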
DEFF Research Database (Denmark)
Picchini, Umberto; Forman, Julie Lyng
2016-01-01
In recent years, dynamical modelling has been provided with a range of breakthrough methods to perform exact Bayesian inference. However, it is often computationally unfeasible to apply exact statistical methodologies in the context of large data sets and complex models. This paper considers … a nonlinear stochastic differential equation model observed with correlated measurement errors and an application to protein folding modelling. An approximate Bayesian computation (ABC)-MCMC algorithm is suggested to allow inference for model parameters within reasonable time constraints. The ABC algorithm … applications. A simulation study is conducted to compare our strategy with exact Bayesian inference, the latter being two orders of magnitude slower than ABC-MCMC for the considered set-up. Finally, the ABC algorithm is applied to a large protein data set. The suggested methodology is fairly general …
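The core ABC idea, simulate from the prior and keep parameters whose synthetic data resemble the observations, fits in a few lines. The sketch below is a plain rejection sampler for the mean of a Gaussian toy model, far simpler than the paper's ABC-MCMC for stochastic differential equations with correlated measurement errors; the prior, tolerance, and summary statistic are all illustrative choices.

```python
import random

def abc_posterior_mean(data_mean, nobs=50, eps=0.1, ndraws=20000, seed=3):
    """ABC rejection sampler (the simple cousin of ABC-MCMC) for the
    mean theta of a N(theta, 1) model under a U(-5, 5) prior: accept
    theta when the simulated sample mean is within eps of the observed
    summary statistic."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(ndraws):
        theta = rng.uniform(-5.0, 5.0)               # draw from the prior
        sim_mean = sum(rng.gauss(theta, 1.0)
                       for _ in range(nobs)) / nobs  # synthetic summary
        if abs(sim_mean - data_mean) < eps:
            accepted.append(theta)
    return sum(accepted) / len(accepted)
```

Tightening eps trades acceptance rate for accuracy, which is exactly the bottleneck that ABC-MCMC is designed to ease on large models.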
Energy Technology Data Exchange (ETDEWEB)
Faye, C.B.; Amodeo, T.; Fréjafon, E. [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France); Delepine-Gilon, N. [Institut des Sciences Analytiques, 5 rue de la Doua, 69100 Villeurbanne (France); Dutouquet, C., E-mail: christophe.dutouquet@ineris.fr [Institut National de l' Environnement Industriel et des Risques (INERIS/DRC/CARA/NOVA), Parc Technologique Alata, BP 2, 60550 Verneuil-En-Halatte (France)
2014-01-01
Pollution of water is a matter of concern all over the earth. Particles are known to play an important role in the transport of pollutants in this medium. In addition, the emergence of new materials such as NOAA (Nano-Objects, their Aggregates and their Agglomerates) emphasizes the need to develop adapted instruments for their detection. Surveillance of pollutants in particulate form in the waste waters of industries involved in nanoparticle manufacturing and processing is a telling example of possible applications of such instrumental development. The LIBS (laser-induced breakdown spectroscopy) technique coupled with a liquid jet as the sampling mode for suspensions was deemed a potential candidate for on-line, real-time monitoring. With the final aim of obtaining the best detection limits, the interaction of nanosecond laser pulses with the liquid jet was examined. The evolution of the volume sampled by the laser pulses was estimated as a function of laser energy by applying conditional analysis to a suspension of micrometric-sized borosilicate glass particles, and an estimate of the sampled depth was made. Along with the estimation of the sampled volume, the evolution of the SNR (signal-to-noise ratio) as a function of laser energy was investigated as well. Finally, the laser energy and the corresponding fluence optimizing both the sampling volume and the SNR were determined. The obtained results highlight intrinsic limitations of the liquid jet sampling mode when using 532 nm nanosecond laser pulses with suspensions. - Highlights: • Micrometric-sized particles in suspensions are analyzed using LIBS and a liquid jet. • The evolution of the sampling volume is estimated as a function of laser energy. • The sampling volume happens to saturate beyond a certain laser fluence. • Its value was found to be much lower than the beam diameter times the jet thickness. • Particles proved not to be entirely vaporized.
Lugo, Jorge; Sosa, Victor
1999-10-01
The repulsion force between a cylindrical superconductor in the Meissner state and a small permanent magnet was calculated under the assumption that the superconductor was formed by a continuous array of dipoles distributed in the finite volume of the sample. After summing up the dipole-dipole interactions with the magnet, we obtained analytical expressions for the levitation force as a function of the superconductor-magnet distance, radius and thickness of the sample. We analyzed two configurations, with the magnet in a horizontal or vertical orientation.
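The dipole-summation idea can be sketched numerically before doing the analytical integrals. The code below discretizes the cylinder into axisymmetric rings, assigns each a uniform anti-parallel dipole density (a crude stand-in for the true Meissner response), and sums the vertical dipole-dipole force on a point magnet. The uniform-magnetization assumption and all parameter values are illustrative; the paper derives the full analytical expressions.

```python
import math

MU0 = 4 * math.pi * 1e-7  # vacuum permeability, T m/A

def levitation_force(m_magnet, M_density, radius, thickness, gap,
                     nr=120, nz=60):
    """Vertical force (N) on a point dipole m_magnet (A m^2, +z) at
    height `gap` above a cylinder of the given radius and thickness
    filled with a uniform anti-parallel dipole density M_density (A/m).
    Uses the z-force between two z-aligned dipoles separated by
    (rho, zsep): F_z = 3 k zsep (3 R^2 - 5 zsep^2) / R^7."""
    Fz = 0.0
    dr = radius / nr
    dz = thickness / nz
    for iz in range(nz):
        zsep = gap + (iz + 0.5) * dz           # vertical separation
        for ir in range(nr):
            rho = (ir + 0.5) * dr
            dV = 2 * math.pi * rho * dr * dz   # axisymmetric ring volume
            m2 = -M_density * dV               # anti-parallel element
            k = MU0 * m_magnet * m2 / (4 * math.pi)
            R2 = rho * rho + zsep * zsep
            Fz += 3 * k * zsep * (3 * R2 - 5 * zsep * zsep) / R2 ** 3.5
    return Fz
```

As expected for the Meissner state, the summed force is repulsive and falls off monotonically as the magnet-sample gap grows.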
Energy Technology Data Exchange (ETDEWEB)
Babeyko, A.Yu.; Sobolev, S.V. [Shmidt Institute of Physics of the Earth, Moscow (Russian Federation); Univ. of Karlsruhe (Germany)]; Sinelnikov, E.D. [Shmidt Institute of Physics of the Earth, Moscow (Russian Federation); State Univ. of New York, Stony Brook, NY (United States)]; Smirnov, Yu.P. [Scientific Center SG-3, Zapoliarniy (Russian Federation)]; Derevschikova, N.A. [Shmidt Institute of Physics of the Earth, Moscow (Russian Federation)]
1994-09-01
In-situ elastic properties in deep boreholes are controlled by several factors, mainly by lithology, petrofabric, and fluid-filled cracks and pores. In order to separate the effects of the different factors, it is useful to extract the lithology-controlled part from the observed in-situ velocities. For that purpose we calculated mineralogical composition and isotropic crack-free elastic properties in the lower part of the Kola borehole from the bulk chemical compositions of core samples. We use a new technique of petrophysical modeling based on a thermodynamic approach. The reasonable accuracy of the modeling is confirmed by comparison with observations of mineralogical composition and laboratory measurements of density and elastic wave velocities in upper crustal crystalline rocks at high confining pressure. Calculations were carried out for 896 core samples from the depth segment of 6840-10535 m. Using these results we estimate the density and crack-free isotropic elastic properties of 554 lithology-defined layers composing this depth segment. Average synthetic P-wave velocity appears to be 2.7% higher than the velocity from Vertical Seismic Profiling (VSP), and 5% higher than the sonic log velocity. Average synthetic S-wave velocity is 1.4% higher than that from VSP. These differences can be explained by the superposition of the effects of fabric-related anisotropy, cracks aligned parallel to the foliation plane, and randomly oriented cracks, with the effects of cracks being the predominant control. Low sonic log velocities are likely caused by drilling-induced cracking (hydrofractures) in the borehole walls. The calculated synthetic density and velocity cross-sections can be used for much more detailed interpretations, for which, however, new, more detailed and reliable seismic data are required.
Institute of Scientific and Technical Information of China (English)
LIU Yixing; CHENG Ping
2000-01-01
Cheng [1] gave the limit distribution of the weighted PP Cramér-von Mises test statistic when the dimension and the sample size tend to infinity simultaneously, under the underlying distribution being the uniform distribution on the unit sphere S^{p-1} = {a ∈ R^p : ‖a‖ = 1}; this limit distribution is the standard normal distribution N(0, 1). In this paper, we give the Berry-Esseen bound for the convergence of this statistic to the normal distribution, as well as the law of the iterated logarithm.
Directory of Open Access Journals (Sweden)
Shuaicheng Guo
2017-03-01
Entrained air voids can improve the freeze-thaw durability of concrete, and also affect its mechanical and transport properties. Therefore, it is important to measure the air void structure and understand its influence on concrete performance for quality control. This paper aims to measure air void structure evolution at both early-age and hardened stages with the ultrasonic technique, and evaluates its influence on concrete properties. Three samples with different air entrainment agent contents were specially prepared. The air void structure was determined with optimized inverse analysis by achieving the minimum error between experimental and theoretical attenuation. The early-age sample measurement showed that the air void content over the whole size range slightly decreases with curing time. The air void size distribution of hardened samples (at Day 28) was compared with American Society for Testing and Materials (ASTM) C457 test results. The air void size distributions with different amounts of air entrainment agent also compared favorably. In addition, the transport property, compressive strength, and dynamic modulus of the concrete samples were evaluated. The concrete transport decreased with curing age, in accordance with the air void shrinkage. The correlation of early-age strength development and hardened dynamic modulus with the ultrasonic parameters was also evaluated. The existence of clustered air voids in the Interfacial Transition Zone (ITZ) area was found to cause severe compressive strength loss. The results indicate that this ultrasonic technique has potential for air void size distribution measurement, and demonstrate the influence of air void structure evolution on concrete properties during both early-age and hardened stages.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R² = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m²), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy of vegetation biophysical parameters; however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and higher height threshold were required to obtain accurate corn LAI estimation than for height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
Fraley, R Chris; Vazire, Simine
2014-01-01
The authors evaluate the quality of research reported in major journals in social-personality psychology by ranking those journals with respect to their N-pact Factors (NF): the statistical power of the empirical studies they publish to detect typical effect sizes. Power is a particularly important attribute for evaluating research quality because, relative to studies that have low power, studies that have high power are more likely to (a) provide accurate estimates of effects, (b) produce literatures with low false positive rates, and (c) lead to replicable findings. The authors show that the average sample size in social-personality research is 104 and that the power to detect the typical effect size in the field is approximately 50%. Moreover, they show that there is considerable variation among journals in the sample sizes and power of the studies they publish, with some journals consistently publishing higher-power studies than others. The authors hope that these rankings will be of use to authors who are choosing where to submit their best work, provide hiring and promotion committees with a superior way of quantifying journal quality, and encourage competition among journals to improve their NF rankings.
A Rounding by Sampling Approach to the Minimum Size k-Arc Connected Subgraph Problem
Laekhanukit, Bundit; Singh, Mohit
2012-01-01
In the k-arc connected subgraph problem, we are given a directed graph G and an integer k, and the goal is to find a subgraph of minimum cost such that there are at least k arc-disjoint paths between any pair of vertices. We give a simple (1 + 1/k)-approximation to the unweighted variant of the problem, where all arcs of G have the same cost. This improves on the (1 + 2/k)-approximation of Gabow et al. [GGTW09]. Similar to the 2-approximation algorithm for this problem [FJ81], our algorithm simply takes the union of a k in-arborescence and a k out-arborescence. The main difference is in the selection of the two arborescences. Here, inspired by the recent applications of the rounding by sampling method (see e.g. [AGM+10, MOS11, OSS11, AKS12]), we select the arborescences randomly by sampling from a distribution on unions of k arborescences that is defined based on an extreme point solution of the linear programming relaxation of the problem. In the analysis, we crucially utilize the sparsity property of the ext…
RNA Profiling for Biomarker Discovery: Practical Considerations for Limiting Sample Sizes
Directory of Open Access Journals (Sweden)
Danny J. Kelly
2005-01-01
We have compared microarray data generated on Affymetrix™ chips from standard (8 micrograms) or low (100 nanograms) amounts of total RNA. We evaluated the gene signals and gene fold-change estimates obtained from the two methods and validated a subset of the results by real-time polymerase chain reaction assays. The correlation of low-RNA-derived gene signals to gene signals obtained from standard RNA was poor for genes of low to moderate abundance. Genes with high abundance showed better correlation in signals between the two methods. The signal correlation between the low RNA and standard RNA methods was improved by including a reference sample in the microarray analysis. In contrast, the fold-change estimates for genes were better correlated between the two methods regardless of the magnitude of gene signals. A reference-sample-based method is suggested for studies that would end up comparing gene signal data from a combination of low and standard RNA templates; no such referencing appears to be necessary when comparing fold-changes of gene expression between standard and low template reactions.
Shrinkage-based diagonal Hotelling’s tests for high-dimensional small sample size data
Dong, Kai
2015-09-16
DNA sequencing techniques bring novel tools and also statistical challenges to genetic research. In addition to detecting differentially expressed genes, testing the significance of gene sets or pathway analysis has been recognized as an equally important problem. Owing to the "large p, small n" paradigm, the traditional Hotelling's T² test suffers from the singularity problem and therefore is not valid in this setting. In this paper, we propose a shrinkage-based diagonal Hotelling's test for both one-sample and two-sample cases. We also suggest several different ways to derive the approximate null distribution under different scenarios of p and n for our proposed shrinkage-based test. Simulation studies show that the proposed method performs comparably to existing competitors when n is moderate or large, but it is better when n is small. In addition, we analyze four gene expression data sets and they demonstrate the advantage of our proposed shrinkage-based diagonal Hotelling's test.
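As a rough illustration of the idea behind a diagonal statistic, the sketch below computes a one-sample diagonal Hotelling's statistic with the per-feature variances shrunk toward their pooled mean. The fixed shrinkage intensity `lam` and the pooled-variance target are illustrative assumptions, not the estimator proposed in the paper:

```python
import numpy as np

def diag_hotelling_shrink(X, mu0, lam=0.5):
    """One-sample diagonal Hotelling-type statistic with variance shrinkage.

    X: (n, p) data matrix; mu0: (p,) hypothesized mean vector.
    lam: shrinkage intensity toward the pooled-variance target
    (a fixed illustrative value; data-driven choices exist).
    """
    n, p = X.shape
    xbar = X.mean(axis=0)
    s2 = X.var(axis=0, ddof=1)        # per-feature sample variances
    target = s2.mean()                # pooled-variance shrinkage target
    s2_shrunk = lam * target + (1 - lam) * s2
    # The diagonal statistic ignores off-diagonal covariances entirely,
    # so it stays well-defined even when p >> n and the full covariance
    # matrix is singular.
    return n * np.sum((xbar - mu0) ** 2 / s2_shrunk)

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 200))        # n = 10 << p = 200
t = diag_hotelling_shrink(X, np.zeros(200))
```

Because only the p per-feature variances are estimated (and stabilized by shrinkage), the statistic avoids the singularity that breaks the classical T² when n < p.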
Directory of Open Access Journals (Sweden)
V. Indira
2015-03-01
The hydraulic brake is considered one of the most important components in automobile engineering. Condition monitoring and fault diagnosis of such a component are essential for the safety of passengers and vehicles, and to minimize unexpected maintenance time. A vibration-based machine learning approach for condition monitoring of the hydraulic brake system is gaining momentum. Training and testing the classifier are two important activities in the process of feature classification. This study proposes a systematic statistical method called power analysis to find the minimum number of samples required to train the classifier with statistical stability so as to get good classification accuracy. Descriptive statistical features have been used, and the most contributing features have been selected using the C4.5 decision tree algorithm. The results of the power analysis have also been verified using the C4.5 decision tree algorithm.
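The power-analysis step can be sketched with the usual normal-approximation formula for a two-group comparison of means. The effect size, α and power values below are illustrative assumptions, since the abstract does not state which values were used:

```python
import math
from statistics import NormalDist

def min_samples(effect_size, alpha=0.05, power=0.90):
    """Per-group sample size for a two-sample comparison of means
    (normal approximation): n = 2 * ((z_{1-a/2} + z_{power}) / d)^2,
    where d is Cohen's d (standardized mean difference)."""
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# Larger effects need fewer training samples per class.
n_large = min_samples(1.0)   # d = 1.0 -> 22 samples per group
n_small = min_samples(0.5)   # d = 0.5 -> 85 samples per group
```

The quadratic dependence on 1/d is why weakly separated classes require far more training samples for statistically stable classifier accuracy.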
The effect of sample size on fresh plasma thromboplastin ISI determination
DEFF Research Database (Denmark)
Poller, L; Van Den Besselaar, A M; Jespersen, J
1999-01-01
The possibility of reducing the numbers of fresh coumarin and normal plasmas has been studied in a multicentre manual prothrombin time (PT) calibration of high international sensitivity index (ISI) rabbit and low ISI human reference thromboplastins at 14 laboratories. The number of calibrant plasmas… was reduced progressively by a computer program which generated random numbers to provide 1000 different selections for each reduced sample at each participant laboratory. Results were compared with those of the full set of 20 normal and 60 coumarin plasma calibrations. With the human reagent, 20 coumarins… and seven normals still achieved the W.H.O. precision limit (3% CV of the slope), but with the rabbit reagent, reduction of coumarins with 17 normal plasmas led to an unacceptable CV. Little reduction of numbers from the full set of 80 fresh plasmas appears advisable. For maximum confidence, when calibrating…
Graf, Michael M H; Maurer, Manuela; Oostenbrink, Chris
2016-11-01
Previous free-energy calculations have shown that the seemingly simple transformation of the tripeptide KXK to KGK in water holds some unobvious challenges concerning the convergence of the forward and backward thermodynamic integration processes (i.e., hysteresis). In the current study, the central residue X was either alanine, serine, glutamic acid, lysine, phenylalanine, or tyrosine. Interestingly, the transformation from alanine to glycine yielded the highest hysteresis in relation to the extent of the chemical change of the side chain. This could be attributed to poor sampling of the φ2/ψ2 dihedral angles along the transformation. Altering the nature of alanine's Cβ atom drastically improved the sampling and at the same time identified high energy barriers as the cause. Consequently, simple strategies to overcome these barriers are to increase simulation time (computationally expensive) or to use enhanced sampling techniques such as Hamiltonian replica exchange molecular dynamics and one-step perturbation. © 2016 The Authors. Journal of Computational Chemistry Published by Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Luís G. Dias
2014-09-01
In this work, the main organic acids (citric, malic and ascorbic acids) and sugars (glucose, fructose and sucrose) present in commercial fruit beverages (fruit carbonated soft drinks, fruit nectars and fruit juices) were determined. A novel size exclusion high performance liquid chromatography isocratic green method, with ultraviolet and refractive index detectors coupled in series, was developed. This methodology enabled the simultaneous quantification of sugars and organic acids without any sample pre-treatment, even when peak interferences occurred. The method was in-house validated, showing good linearity (R > 0.999), adequate detection and quantification limits (20 and 280 mg L−1, respectively), satisfactory instrumental and method precisions (relative standard deviations lower than 6%) and acceptable method accuracy (relative error lower than 5%). Sugar and organic acid profiles were used to calculate dose-over-threshold values, aiming to evaluate their individual sensory impact on beverage global taste perception. The results demonstrated that sucrose, fructose, ascorbic acid, citric acid and malic acid have the greatest individual sensory impact on the overall taste of a specific beverage. Furthermore, although organic acids were present in lower concentrations than sugars, their taste influence was significant and, in some cases, higher than the sugars' contribution towards global sensory perception.
Directory of Open Access Journals (Sweden)
Yanwei Li
2016-08-01
The quantum mechanics/molecular mechanics (QM/MM) method (e.g., density functional theory (DFT)/MM) is important in elucidating enzymatic mechanisms. It is indispensable to study "multiple" conformations of enzymes to get unbiased energetic and structural results. One challenging problem, however, is to determine the minimum number of conformations for DFT/MM calculations. Here, we propose two convergence criteria, namely the Boltzmann-weighted average barrier and the disproportionate effect, to tentatively address this issue. The criteria were tested on the defluorination reaction catalyzed by fluoroacetate dehalogenase. The results suggest that at least 20 conformations of enzymatic residues are required for convergence using DFT/MM calculations. We also tested the correlation of energy barriers between small QM regions and big QM regions. A roughly positive correlation was found. This kind of correlation has not been reported in the literature. The correlation inspires us to propose a protocol for more efficient sampling, which saves 50% of the computational cost in our current case.
Directory of Open Access Journals (Sweden)
Sebastian Wilhelm
2015-12-01
The production of silica is performed by mixing an inorganic, silicate-based precursor and an acid. Monomeric silicic acid forms and polymerizes to amorphous silica particles. Both further polymerization and agglomeration of the particles lead to a gel network. Since polymerization continues after gelation, the gel network consolidates. This rather slow process is known as "natural syneresis" and strongly influences the product properties (e.g., agglomerate size, porosity or internal surface). "Enforced syneresis" is the superposition of natural syneresis with a mechanical, external force. Enforced syneresis may be used either for analytical or preparative purposes. Hereby, two open key aspects are of particular interest. On the one hand, the question arises whether natural and enforced syneresis are analogous processes with respect to their dependence on the process parameters: pH, temperature and sample size. On the other hand, a method is desirable that allows for correlating natural and enforced syneresis behavior. We can show that the pH-, temperature- and sample size-dependency of natural and enforced syneresis are indeed analogous. It is possible to predict natural syneresis using a correlative model. We found that our model predicts maximum volume shrinkages between 19% and 30%, in comparison to measured values of 20% for natural syneresis.
DEFF Research Database (Denmark)
Henriksen, Jens Henrik Sahl
1983-01-01
…exchange of endogenous macromolecules. A significant 'sieving' is present in this barrier to the largest macromolecule (IgM). Calculations of the pore size equivalent to the observed permselectivity of macromolecules suggest microvascular gaps (or channels) with an average radius of about 300 Å…
Directory of Open Access Journals (Sweden)
Michael B.C. Khoo
2013-11-01
The double sampling (DS) X-bar chart, one of the most widely used charting methods, is superior for detecting small and moderate shifts in the process mean. In a right-skewed run length distribution, the median run length (MRL) provides a more credible representation of the central tendency than the average run length (ARL), as the mean is greater than the median. In this paper, therefore, the MRL is used as the performance criterion instead of the traditional ARL. Generally, the performance of the DS X-bar chart is investigated under the assumption of known process parameters. In practice, these parameters are usually estimated from an in-control reference Phase-I dataset. Since the performance of the DS X-bar chart is significantly affected by estimation errors, we study the effects of parameter estimation on the MRL-based DS X-bar chart when the in-control average sample size is minimised. This study reveals that more than 80 samples are required for the MRL-based DS X-bar chart with estimated parameters to perform more favourably than the corresponding chart with known parameters.
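The MRL-versus-ARL point is easy to see by simulation. The sketch below uses a plain Shewhart X-bar chart with 3-sigma limits rather than the DS chart studied in the paper (an assumption made to keep the example short); the right-skewed, roughly geometric run-length distribution makes the mean run length exceed the median:

```python
import numpy as np

rng = np.random.default_rng(1)

def run_length(limit=3.0, n=5, shift=0.0):
    """Number of subgroups sampled until a subgroup mean of size n
    falls outside the +/- limit-sigma control limits for the mean."""
    t = 0
    while True:
        t += 1
        xbar = rng.normal(shift, 1.0, size=n).mean()
        if abs(xbar) > limit / np.sqrt(n):
            return t

# In-control run lengths (process mean on target).
rl = np.array([run_length() for _ in range(500)])
arl = rl.mean()        # average run length
mrl = np.median(rl)    # median run length
# Right-skewed run-length distribution: the mean exceeds the median,
# which is why the MRL is the more representative summary.
```

For an in-control 3-sigma chart the signal probability per subgroup is about 0.0027, so the ARL is near 370 while the MRL sits noticeably lower, around 0.69 of the ARL for a geometric distribution.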
Arbab, A
2014-10-01
The rice stem borer, Chilo suppressalis (Walker), feeds almost exclusively in paddy fields in most regions of the world. The study of its spatial distribution is fundamental for designing correct control strategies, improving sampling procedures, and adopting precise agricultural techniques. Field experiments were conducted during 2011 and 2012 to estimate the spatial distribution pattern of the overwintering larvae. Data were analyzed using five distribution indices and two regression models (Taylor and Iwao). All of the indices and Taylor's model indicated a random spatial distribution pattern of the rice stem borer overwintering larvae. Iwao's patchiness regression was inappropriate for our data, as shown by the non-homogeneity of variance, whereas Taylor's power law fitted the data well. The coefficients of Taylor's power law for the combined 2 years of data were a = -0.1118, b = 0.9202 ± 0.02, and r² = 96.81%. Taylor's power law parameters were used to compute the minimum sample size needed to estimate populations at three fixed precision levels, 5, 10, and 25%, at 0.05 probability. Results based on these parameters suggest that the minimum sample sizes needed for a precision level of 0.25 were 74 and 20 rice stubbles for rice stem borer larvae when the average density is near 0.10 and 0.20 larvae per rice stubble, respectively.
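A sketch of the standard minimum-sample-size formula derived from Taylor's power law (s² = a·mᵇ). The back-transformation of the reported intercept (a = 10^−0.1118) and the t value are assumptions, since the abstract does not state them, so the numbers produced below are illustrative rather than a reproduction of the paper's 74 and 20:

```python
def min_sample_size(mean, a, b, D, t=1.96):
    """Minimum sample size under Taylor's power law s^2 = a * mean**b,
    for a fixed precision D (standard error divided by the mean):
    n = (t / D)**2 * a * mean**(b - 2).
    """
    return (t / D) ** 2 * a * mean ** (b - 2)

# Illustrative values: b taken from the abstract; a back-transformed
# from the reported log-scale intercept (10**-0.1118) -- an assumption,
# since the abstract does not state the scale of a.
a, b = 10 ** -0.1118, 0.9202
n_low = min_sample_size(0.10, a, b, D=0.25)   # mean density 0.10/stubble
n_high = min_sample_size(0.20, a, b, D=0.25)  # mean density 0.20/stubble
# Because b < 2, the required n falls as the mean density rises,
# matching the qualitative pattern reported in the abstract.
```

The key structural point is the exponent b − 2: whenever b < 2 (as here), sparser populations demand larger samples for the same relative precision.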
Hagell, Peter; Westergren, Albert
Sample size is a major factor in statistical null hypothesis testing, which is the basis for many approaches to testing Rasch model fit. Few sample size recommendations for testing fit to the Rasch model concern the Rasch Unidimensional Measurement Models (RUMM) software, which features chi-square and ANOVA/F-ratio based fit statistics, including Bonferroni and algebraic sample size adjustments. This paper explores the occurrence of Type I errors with RUMM fit statistics, and the effects of algebraic sample size adjustments. Data simulated to fit the Rasch model, for 25-item dichotomous scales and sample sizes ranging from N = 50 to N = 2500, were analysed with and without algebraically adjusted sample sizes. Results suggest the occurrence of Type I errors with N ≤ 500, and that Bonferroni correction as well as downward algebraic sample size adjustment are useful to avoid such errors, whereas upward adjustment of smaller samples falsely signals misfit. Our observations suggest that sample sizes around N = 250 to N = 500 may provide a good balance for the statistical interpretation of the RUMM fit statistics studied here with respect to Type I errors, under the assumption of Rasch model fit within the examined frame of reference (i.e., about 25 item parameters well targeted to the sample).
Dahlin, Jakob; Spanne, Mårten; Karlsson, Daniel; Dalene, Marianne; Skarping, Gunnar
2008-07-01
Isocyanates in the workplace atmosphere are typically present in both gas and particle phase. The health effects of exposure to isocyanates in the gas phase and in different particle size fractions are likely to be different due to their ability to reach different parts of the respiratory system. To reveal more details regarding exposure to isocyanate aerosols, a denuder-impactor (DI) sampler for airborne isocyanates was designed. The sampler consists of a channel-plate denuder for collection of gaseous isocyanates, in series with three cascade impactor stages with cut-off diameters (d50) of 2.5, 1.0 and 0.5 μm. An end filter was connected in series after the impactor for collection of particles smaller than 0.5 μm. The denuder, impactor plates and end filter were impregnated with a mixture of di-n-butylamine (DBA) and acetic acid for derivatization of the isocyanates. During sampling, the reagent on the impactor plates and the end filter is continuously refreshed, due to the DBA release from the impregnated denuder plates. This secures efficient derivatization of all isocyanate particles. The airflow through the sampler was 5 l min−1. After sampling, the samples containing the different size fractions were analyzed using liquid chromatography-tandem mass spectrometry (LC-MS/MS). The DBA impregnation was stable in the sampler for at least 1 week. After sampling, the DBA derivatives were stable for at least 3 weeks. Air sampling was performed in a test chamber (300 l). The isocyanate aerosols studied were thermal degradation products of different polyurethane polymers, sprayed isocyanate coating compounds and pure gas-phase isocyanates. Sampling with impinger flasks, containing DBA in toluene, with a glass fiber filter in series was used as a reference method. The DI sampler showed good compliance with the reference method regarding total air levels. For the different aerosols studied, vast differences were revealed in the distribution of isocyanate in gas and…
Vasiliu, Daniel; Clamons, Samuel; McDonough, Molly; Rabe, Brian; Saha, Margaret
2015-01-01
Global gene expression analysis using microarrays and, more recently, RNA-seq, has allowed investigators to understand biological processes at a system level. However, the identification of differentially expressed genes in experiments with small sample size, high dimensionality, and high variance remains challenging, limiting the usability of these tens of thousands of publicly available, and possibly many more unpublished, gene expression datasets. We propose a novel variable selection algorithm for ultra-low-n microarray studies using generalized linear model-based variable selection with a penalized binomial regression algorithm called penalized Euclidean distance (PED). Our method uses PED to build a classifier on the experimental data to rank genes by importance. In place of cross-validation, which is required by most similar methods but not reliable for experiments with small sample size, we use a simulation-based approach to additively build a list of differentially expressed genes from the rank-ordered list. Our simulation-based approach maintains a low false discovery rate while maximizing the number of differentially expressed genes identified, a feature critical for downstream pathway analysis. We apply our method to microarray data from an experiment perturbing the Notch signaling pathway in Xenopus laevis embryos. This dataset was chosen because it showed very little differential expression according to limma, a powerful and widely-used method for microarray analysis. Our method was able to detect a significant number of differentially expressed genes in this dataset and suggest future directions for investigation. Our method is easily adaptable for analysis of data from RNA-seq and other global expression experiments with low sample size and high dimensionality.
Thompson, Steven K
2012-01-01
Praise for the Second Edition: "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Featuring new developments in the field combined with all aspects of obtaining, interpreting, and using sample data, Sampling provides an up-to-date treatment…
Directory of Open Access Journals (Sweden)
Glišić Selimir
2015-12-01
Cataract surgery and intraocular lens power calculation are challenging in patients with anterior megalophthalmos and cataract, with postoperative refractive surprise frequently reported. The deep anterior chamber in these patients substantially influences the effective lens position. To minimize the possibility of a refractive surprise, we used the Haigis formula, which takes anterior chamber depth into account in the lens power calculation, for our patient. The cataract was managed by phacoemulsification with a standard intraocular lens implanted in the capsular bag. Postoperatively, a satisfactory refractive result was achieved and a refractive surprise was avoided.
The inefficiency of re-weighted sampling and the curse of system size in high order path integration
Ceriotti, Michele; Riordan, Oliver; Manolopoulos, David E
2011-01-01
Computing averages over a target probability density by statistical re-weighting of a set of samples with a different distribution is a strategy which is commonly adopted in fields as diverse as atomistic simulation and finance. Here we present a very general analysis of the accuracy and efficiency of this approach, highlighting some of its weaknesses. We then give an example of how our results can be used, specifically to assess the feasibility of high-order path integral methods. We demonstrate that the most promising of these techniques -- which is based on re-weighted sampling -- is bound to fail as the size of the system is increased, because of the exponential growth of the statistical uncertainty in the re-weighted average.
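The statistical cost of re-weighting can be quantified with the Kish effective sample size, ESS = (Σᵢ wᵢ)² / Σᵢ wᵢ². The toy model below is an illustration of the scaling argument, not the paper's path-integral setting: each degree of freedom contributes an independent fluctuation to the log-weight, so the variance of log w grows linearly with system size and the ESS collapses exponentially:

```python
import numpy as np

rng = np.random.default_rng(2)

def ess(log_w):
    """Kish effective sample size from log-weights."""
    log_w = log_w - log_w.max()      # stabilize the exponentials
    w = np.exp(log_w)
    return w.sum() ** 2 / (w @ w)

n_samples = 5000
ess_by_size = {}
for size in (1, 10, 50):
    # Each of `size` degrees of freedom adds an independent fluctuation
    # to the log of the re-weighting factor (illustrative toy model).
    log_w = rng.normal(0.0, 0.5, size=(n_samples, size)).sum(axis=1)
    ess_by_size[size] = ess(log_w)
# ESS falls from thousands toward one as the system grows: a handful of
# samples end up carrying essentially all of the weight.
```

This is the mechanism behind the abstract's conclusion: the statistical uncertainty of a re-weighted average grows exponentially with system size, regardless of how many samples are drawn.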
de Winter, Joost C F; Gosling, Samuel D; Potter, Jeff
2016-09-01
The Pearson product–moment correlation coefficient (rp) and the Spearman rank correlation coefficient (rs) are widely used in psychological research. We compare rp and rs on 3 criteria: variability, bias with respect to the population value, and robustness to an outlier. Using simulations across low (N = 5) to high (N = 1,000) sample sizes we show that, for normally distributed variables, rp and rs have similar expected values but rs is more variable, especially when the correlation is strong. However, when the variables have high kurtosis, rp is more variable than rs. Next, we conducted a sampling study of a psychometric dataset featuring symmetrically distributed data with light tails, and of 2 Likert-type survey datasets, 1 with light-tailed and the other with heavy-tailed distributions. Consistent with the simulations, rp had lower variability than rs in the psychometric dataset. In the survey datasets with heavy-tailed variables in particular, rs had lower variability than rp, and often corresponded more accurately to the population Pearson correlation coefficient (Rp) than rp did. The simulations and the sampling studies showed that variability in terms of standard deviations can be reduced by about 20% by choosing rs instead of rp. In comparison, increasing the sample size by a factor of 2 results in a 41% reduction of the standard deviations of rs and rp. In conclusion, rp is suitable for light-tailed distributions, whereas rs is preferable when variables feature heavy-tailed distributions or when outliers are present, as is often the case in psychological research.
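A minimal numpy sketch of the kind of variability comparison described above; the sample size, correlation strength, and replication count are illustrative choices, not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(3)

def pearson(x, y):
    """Pearson product-moment correlation coefficient r_p."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / np.sqrt((x @ x) * (y @ y))

def spearman(x, y):
    """Spearman rank correlation r_s: Pearson on the ranks. Ranks via
    double argsort; ties are almost surely absent for continuous draws,
    so no tie correction is needed here."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    return pearson(rx, ry)

rho, n, reps = 0.8, 20, 2000     # strong correlation, small samples
cov = np.array([[1.0, rho], [rho, 1.0]])
rp, rs = [], []
for _ in range(reps):
    x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T
    rp.append(pearson(x, y))
    rs.append(spearman(x, y))
sd_rp, sd_rs = np.std(rp), np.std(rs)
# Under normality with a strong correlation, r_s varies more than r_p,
# consistent with the simulation findings summarized above.
```

Swapping the normal draws for a heavy-tailed distribution (or injecting an outlier) reverses the comparison, which is the abstract's case for preferring r_s in those settings.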
Institute of Scientific and Technical Information of China (English)
LI Xiao-ling; LU Yong-gen; LI Jin-quan; Xu Hai-ming; Muhammad Qasim SHAHID
2011-01-01
The development of a core collection could enhance the utilization of germplasm collections in crop improvement programs and simplify their management. Selection of an appropriate sampling strategy is an important prerequisite to construct a core collection of appropriate size that adequately represents the genetic spectrum and maximally captures the genetic diversity of available crop collections. The present study was initiated to construct nested core collections to determine the appropriate sample size to represent the genetic diversity of a rice landrace collection, based on 15 quantitative traits and 34 qualitative traits of 2,262 rice accessions. The results showed that nested core collections of 50-225 accessions, with sampling rates of 2.2%-9.9%, were sufficient to maintain the maximum genetic diversity of the initial collections. Of these, 150 accessions (6.6%) could capture the maximal genetic diversity of the initial collection. Three data types, i.e. qualitative traits (QT1), quantitative traits (QT2) and integrated qualitative and quantitative traits (QTT), were compared for their efficiency in constructing core collections based on the weighted pair-group average method combined with stepwise clustering and preferred sampling on adjusted Euclidean distances. Every combining scheme constructed eight rice core collections (225, 200, 175, 150, 125, 100, 75 and 50 accessions). The results showed that the QTT data were the best for constructing a core collection, as indicated by the genetic diversity of the core collections. A core collection constructed only on the information of QT1 could not represent the initial collection effectively; qualitative and quantitative traits should be used together to construct a productive core collection.
François, Filip; Maenhaut, Willy; Colin, Jean-Louis; Losno, Remi; Schulz, Michael; Stahlschmidt, Thomas; Spokes, Lucinda; Jickells, Timothy
During an intercomparison field experiment, organized at the Atlantic coast station of Mace Head, Ireland, in April 1991, aerosol samples were collected by four research groups. A variety of samplers was used, combining both high- and low-volume devices, with different types of collection substrates: Hi-Vol Whatman 41 filter holders, single Nuclepore filters and stacked filter units, as well as PIXE cascade impactors. The samples were analyzed by each participating group, using in-house analytical techniques and procedures. The intercomparison of the daily concentrations for 15 elements, measured by two or more participants, revealed a good agreement for the low-volume samplers for the majority of the elements, but also indicated some specific analytical problems, owing to the very low concentrations of the non-sea-salt elements at the sampling site. With the Hi-Vol Whatman 41 filter sampler, on the other hand, much higher results were obtained in particular for the sea-salt and crustal elements. The discrepancy was dependent upon the wind speed and was attributed to a higher collection efficiency of the Hi-Vol sampler for the very coarse particles, as compared to the low-volume devices under high wind speed conditions. The elemental mass size distribution, as derived from parallel cascade impactor samplings by two groups, showed discrepancies in the submicrometer aerosol fraction, which were tentatively attributed to differences in stage cut-off diameters and/or to bounce-off or splintering effects on the quartz impactor slides used by one of the groups. However, the atmospheric concentrations (sums over all stages) were rather similar in the parallel impactor samples and were only slightly lower than those derived from stacked filter unit samples taken in parallel.
Directory of Open Access Journals (Sweden)
Schwermer Heinzpeter
2007-05-01
Full Text Available Abstract Background International trade regulations require that countries document their livestock's sanitary status in general, and freedom from specific infective agents in detail, if import restrictions are to be applied. The latter is generally achieved through large national serological surveys and risk assessments. This paper describes the basic structure and application of a generic stochastic model for risk-based sample size calculation of consecutive national surveys to document freedom from contagious disease agents in livestock. Methods In the model, disease spread during the period between two consecutive surveys was considered, arising either from undetected infections within the domestic population or from imported infected animals. The @Risk model comprises domestic spread between two national surveys; infection of domestic herds by animals imported from countries with a sanitary status comparable to Switzerland or with a lower sanitary status; and a summary sheet that sums the numbers of infected herds resulting from all infection pathways to derive the pre-survey prevalence in the domestic population. From this, the pre-survey probability of freedom from infection and the required survey sample sizes were calculated. A scenario for detection of infected herds by general surveillance was included optionally. Results The model highlights the importance of residual domestic infection spread and of the characteristics of the different import pathways. The sensitivity analysis revealed that the number of infected but undetected domestic herds and the multiplicative between-survey spread factor were most strongly correlated with the pre-survey probability of freedom from infection and the resulting sample size, respectively. Compared to its deterministic precursor model, the stochastic model was therefore more sensitive to the previous survey's results. Undetected spread of infection in the domestic population between two surveys gained more
Effects of sample size on the second magnetization peak in Bi2Sr2CaCu2O8+δ at low temperatures
Indian Academy of Sciences (India)
B Kalisky; A Shaulov; Y Yeshurun
2006-01-01
Effects of sample size on the second magnetization peak (SMP) in Bi2Sr2CaCu2O8+δ crystals are observed at low temperatures, above the temperature where the SMP totally disappears. In particular, the onset of the SMP shifts to lower fields as the sample size decreases - a result that could be interpreted as a size effect in the order-disorder vortex matter phase transition. However, local magnetic measurements trace this effect to metastable disordered vortex states, revealing the same order-disorder transition induction in samples of different size.
Standard Deviation for Small Samples
Joarder, Anwar H.; Latif, Raja M.
2006-01-01
Neater representations for variance are given for small sample sizes, especially for 3 and 4. With these representations, variance can be calculated without a calculator if sample sizes are small and observations are integers, and an upper bound for the standard deviation is immediate. Accessible proofs of lower and upper bounds are presented for…
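One such compact representation (a standard identity, though not necessarily the authors' exact form) writes the sample variance in terms of squared pairwise differences, s² = Σ_{i<j}(x_i − x_j)² / (n(n − 1)), which is easy to evaluate by hand for n = 3 or 4 with integer observations:

```python
# Verify the pairwise-difference identity for the sample variance
# against the textbook definition from the statistics module.
from itertools import combinations
from statistics import variance

def pairwise_variance(xs):
    n = len(xs)
    return sum((a - b) ** 2 for a, b in combinations(xs, 2)) / (n * (n - 1))

data = [3, 7, 8]                      # small integer sample, hand-checkable
s2 = pairwise_variance(data)          # (16 + 25 + 1) / 6 = 7
```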
Orth, Patrick; Zurakowski, David; Alini, Mauro; Cucchiarini, Magali; Madry, Henning
2013-11-01
Advanced tissue engineering approaches for articular cartilage repair in the knee joint rely on translational animal models. In these investigations, cartilage defects may be established either in one joint (unilateral design) or in both joints of the same animal (bilateral design). We hypothesized that a lower intraindividual variability following the bilateral strategy would reduce the number of required joints. Standardized osteochondral defects were created in the trochlear groove of 18 rabbits. In 12 animals, defects were produced unilaterally (unilateral design; n=12 defects), while defects were created bilaterally in 6 animals (bilateral design; n=12 defects). After 3 weeks, osteochondral repair was evaluated histologically applying an established grading system. Based on intra- and interindividual variabilities, required sample sizes for the detection of discrete differences in the histological score were determined for both study designs (α=0.05, β=0.20). Coefficients of variation (%CV) of the total histological score values were 1.9-fold increased following the unilateral design when compared with the bilateral approach (26 versus 14%CV). The resulting numbers of joints needed to treat were always higher for the unilateral design, resulting in an up to 3.9-fold increase in the required number of experimental animals. This effect was most pronounced for the detection of small-effect sizes and estimating large standard deviations. The data underline the possible benefit of bilateral study designs for the decrease of sample size requirements for certain investigations in articular cartilage research. These findings might also be transferred to other scoring systems, defect types, or translational animal models in the field of cartilage tissue engineering.
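The sample-size arithmetic behind such comparisons can be sketched with the usual two-group normal-approximation formula (the CVs below are the reported 26% and 14%; the detectable difference of 10% of the mean is an invented illustration):

```python
# n per group = 2 * (z_{1-a/2} + z_{1-b})^2 * (CV / delta)^2, where delta
# is the detectable difference expressed as a fraction of the mean.
from statistics import NormalDist
from math import ceil

def n_per_group(cv, delta, alpha=0.05, beta=0.20):
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(1 - beta)) ** 2 * (cv / delta) ** 2)

n_uni = n_per_group(cv=0.26, delta=0.10)   # unilateral design (CV = 26%)
n_bi  = n_per_group(cv=0.14, delta=0.10)   # bilateral design (CV = 14%)
ratio = (0.26 / 0.14) ** 2                 # n scales with CV^2
```

Since n scales with CV², halving the coefficient of variation roughly quarters the required number of joints, which is the mechanism behind the reported savings.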
Directory of Open Access Journals (Sweden)
Govardhani.Immadi
2014-05-01
Full Text Available With the increasing demand for long-distance telecommunication, satellite communication systems were developed. Satellite communications utilize the L, C, Ku and Ka frequency bands to fulfil these requirements. Utilization of higher frequencies causes severe attenuation due to rain. Rain attenuation is noticeable for frequencies above 10 GHz. The amount of attenuation depends on whether the operating wavelength is comparable with the rain drop diameter. This paper focuses on drop size distribution using empirical methods, especially the Marshall and Palmer distribution. Empirical methods deal with a power-law relation between the rain rate (mm/h) and radar reflectivity (dBZ). Finally, the rain rate variation, radar reflectivity and drop size distribution are discussed for two rain events at K L University, Vijayawada, on 4 September 2013 and 18 August 2013.
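The quantities mentioned above can be sketched from their textbook forms (classic Marshall-Palmer constants, not the paper's fitted values): the exponential drop-size distribution N(D) = N₀ e^(−ΛD) with N₀ = 8000 m⁻³ mm⁻¹ and Λ = 4.1 R^(−0.21) mm⁻¹, and the classic Z-R power law Z = 200 R^1.6:

```python
# Marshall-Palmer drop-size distribution and Z-R reflectivity.
from math import exp, log10

N0 = 8000.0                              # m^-3 mm^-1

def mp_dsd(D_mm, R_mm_per_h):
    lam = 4.1 * R_mm_per_h ** -0.21      # mm^-1
    return N0 * exp(-lam * D_mm)         # drop concentration, m^-3 mm^-1

def reflectivity_dbz(R_mm_per_h):
    Z = 200.0 * R_mm_per_h ** 1.6        # mm^6 m^-3
    return 10.0 * log10(Z)

dbz_10 = reflectivity_dbz(10.0)          # ≈ 39 dBZ for 10 mm/h rain
```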
Directory of Open Access Journals (Sweden)
S. Otto
2010-11-01
Full Text Available Realistic size equivalence and shape of Saharan mineral dust particles are derived from in-situ particle, lidar and sun photometer measurements during SAMUM-1 in Morocco (19 May 2006, dealing with measured size- and altitude-resolved axis ratio distributions of assumed spheroidal model particles. The data were applied in optical property, radiative effect, forcing and heating effect simulations to quantify the realistic impact of particle non-sphericity. It turned out that volume-to-surface equivalent spheroids with prolate shape are most realistic: particle non-sphericity only slightly affects single scattering albedo and asymmetry parameter but may enhance extinction coefficient by up to 10%. At the bottom of the atmosphere (BOA the Saharan mineral dust always leads to a loss of solar radiation, while the sign of the forcing at the top of the atmosphere (TOA depends on surface albedo: solar cooling/warming over a mean ocean/land surface. In the thermal spectral range the dust inhibits the emission of radiation to space and warms the BOA. The most realistic case of particle non-sphericity causes changes of total (solar plus thermal forcing by 55/5% at the TOA over ocean/land and 15% at the BOA over both land and ocean and enhances total radiative heating within the dust plume by up to 20%. Large dust particles significantly contribute to all the radiative effects reported.
Sevelius, Jae M.
2017-01-01
Background. Transgender individuals have a gender identity that differs from the sex they were assigned at birth. The population size of transgender individuals in the United States is not well-known, in part because official records, including the US Census, do not include data on gender identity. Population surveys today more often collect transgender-inclusive gender-identity data, and secular trends in culture and the media have created a somewhat more favorable environment for transgender people. Objectives. To estimate the current population size of transgender individuals in the United States and evaluate any trend over time. Search methods. In June and July 2016, we searched PubMed, Cumulative Index to Nursing and Allied Health Literature, and Web of Science for national surveys, as well as “gray” literature, through an Internet search. We limited the search to 2006 through 2016. Selection criteria. We selected population-based surveys that used probability sampling and included self-reported transgender-identity data. Data collection and analysis. We used random-effects meta-analysis to pool eligible surveys and used meta-regression to address our hypothesis that the transgender population size estimate would increase over time. We used subsample and leave-one-out analysis to assess for bias. Main results. Our meta-regression model, based on 12 surveys covering 2007 to 2015, explained 62.5% of model heterogeneity, with a significant effect for each unit increase in survey year (F = 17.122; df = 1,10; b = 0.026%; P = .002). Extrapolating these results to 2016 suggested a current US population size of 390 adults per 100 000, or almost 1 million adults nationally. This estimate may be more indicative for younger adults, who represented more than 50% of the respondents in our analysis. Authors’ conclusions. Future national surveys are likely to observe higher numbers of transgender people. The large variety in questions used to ask
Durney, Brandon C; Bachert, Beth A; Sloane, Hillary S; Lukomski, Slawomir; Landers, James P; Holland, Lisa A
2015-06-23
Phospholipid additives are a cost-effective medium to separate deoxyribonucleic acid (DNA) fragments and possess a thermally-responsive viscosity. This provides a mechanism to easily create and replace a highly viscous nanogel in a narrow bore capillary with only a 10°C change in temperature. Preparations composed of dimyristoyl-sn-glycero-3-phosphocholine (DMPC) and 1,2-dihexanoyl-sn-glycero-3-phosphocholine (DHPC) self-assemble, forming structures such as nanodisks and wormlike micelles. Factors that influence the morphology of a particular DMPC-DHPC preparation include the concentration of lipid in solution, the temperature, and the ratio of DMPC and DHPC. It has previously been established that an aqueous solution containing 10% phospholipid with a ratio of [DMPC]/[DHPC]=2.5 separates DNA fragments with nearly single base resolution for DNA fragments up to 500 base pairs in length, but beyond this size the resolution decreases dramatically. A new DMPC-DHPC medium is developed to effectively separate and size DNA fragments up to 1500 base pairs by decreasing the total lipid concentration to 2.5%. A 2.5% phospholipid nanogel generates a resolution of 1% of the DNA fragment size up to 1500 base pairs. This increase in the upper size limit is accomplished using commercially available phospholipids at an even lower material cost than is achieved with the 10% preparation. The separation additive is used to evaluate size markers ranging between 200 and 1500 base pairs in order to distinguish invasive strains of Streptococcus pyogenes and Aspergillus species by harnessing differences in gene sequences of collagen-like proteins in these organisms. For the first time, a reversible stacking gel is integrated in a capillary sieving separation by utilizing the thermally-responsive viscosity of these self-assembled phospholipid preparations. A discontinuous matrix is created that is composed of a cartridge of highly viscous phospholipid assimilated into a separation matrix
Gündüç, Semra; Dilaver, Mehmet; Aydın, Meral; Gündüç, Yiğit
2005-02-01
In this work we have studied the dynamic scaling behavior of two scaling functions, and we have shown that the scaling functions obey the dynamic finite size scaling rules. Dynamic finite size scaling of scaling functions opens possibilities for a wide range of applications. As an application we have calculated the dynamic critical exponent (z) of Wolff's cluster algorithm for the 2-, 3- and 4-dimensional Ising models. Configurations with vanishing initial magnetization are chosen in order to avoid complications due to initial magnetization. The dynamic finite size scaling behavior observed during the early stages of the Monte Carlo simulation yields vanishing values of z for Wolff's cluster algorithm for the 2-, 3- and 4-dimensional Ising models, consistent with the values obtained from autocorrelations. In particular, the vanishing dynamic critical exponent obtained for d=3 implies that the Wolff algorithm is more efficient in eliminating critical slowing down in Monte Carlo simulations than previously reported.
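A minimal single-cluster Wolff update for the 2-D Ising model (a textbook sketch, not the authors' code) grows a cluster from a random seed spin with bond probability p = 1 − e^(−2β) (J = 1) and flips the whole cluster:

```python
# Wolff single-cluster Monte Carlo update on an L x L periodic lattice.
import math, random

def wolff_step(spins, L, beta, rng):
    p_add = 1.0 - math.exp(-2.0 * beta)
    seed = rng.randrange(L * L)
    s0 = spins[seed]
    cluster, stack = {seed}, [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        for j in ((x + 1) % L + y * L, (x - 1) % L + y * L,
                  x + ((y + 1) % L) * L, x + ((y - 1) % L) * L):
            if j not in cluster and spins[j] == s0 and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                 # flip the whole cluster at once
        spins[i] = -spins[i]

rng = random.Random(7)
L, beta = 16, 1.0                     # deep in the ordered phase (beta_c ≈ 0.4407)
spins = [rng.choice((-1, 1)) for _ in range(L * L)]
for _ in range(300):
    wolff_step(spins, L, beta, rng)
m = abs(sum(spins)) / (L * L)         # |magnetization| per spin
```

Because whole clusters flip at once, the chain decorrelates in a few updates even near criticality, which is what a vanishing z expresses.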
Terzyk, Artur P; Furmaniak, Sylwester; Harris, Peter J F; Gauden, Piotr A; Włoch, Jerzy; Kowalczyk, Piotr; Rychlicki, Gerhard
2007-11-28
A plausible model for the structure of non-graphitizing carbon is one which consists of curved, fullerene-like fragments grouped together in a random arrangement. Although this model was proposed several years ago, there have been no attempts to calculate the properties of such a structure. Here, we determine the density, pore size distribution and adsorption properties of a model porous carbon constructed from fullerene-like elements. Using the method proposed recently by Bhattacharya and Gubbins (BG), which was tested in this study for ideal and defective carbon slits, the pore size distributions (PSDs) of the initial model and two related carbon models are calculated. The obtained PSD curves show that two structures are micro-mesoporous (with different ratio of micro/mesopores) and the third is strictly microporous. Using the grand canonical Monte Carlo (GCMC) method, adsorption isotherms of Ar (87 K) are simulated for all the structures. Finally PSD curves are calculated using the Horvath-Kawazoe, non-local density functional theory (NLDFT), Nguyen and Do, and Barrett-Joyner-Halenda (BJH) approaches, and compared with those predicted by the BG method. This is the first study in which different methods of calculation of PSDs for carbons from adsorption data can be really verified, since absolute (i.e. true) PSDs are obtained using the BG method. This is also the first study reporting the results of computer simulations of adsorption on fullerene-like carbon models.
Hughes, William O.; McNelis, Anne M.
2010-01-01
The Earth Observing System (EOS) Terra spacecraft was launched on an Atlas IIAS launch vehicle on its mission to observe planet Earth in late 1999. Prior to launch, the new design of the spacecraft's pyroshock separation system was characterized by a series of 13 separation ground tests. The analysis methods used to evaluate this unusually large amount of shock data will be discussed in this paper, with particular emphasis on population distributions and finding statistically significant families of data, leading to an overall shock separation interface level. The wealth of ground test data also allowed a derivation of a Mission Assurance level for the flight. All of the flight shock measurements were below the EOS Terra Mission Assurance level thus contributing to the overall success of the EOS Terra mission. The effectiveness of the statistical methodology for characterizing the shock interface level and for developing a flight Mission Assurance level from a large sample size of shock data is demonstrated in this paper.
Mélachio, Tanekou Tito Trésor; Njiokou, Flobert; Ravel, Sophie; Simo, Gustave; Solano, Philippe; De Meeûs, Thierry
2015-07-01
Human and animal trypanosomiases are two major constraints to development in Africa. These diseases are mainly transmitted by tsetse flies in particular by Glossina palpalis palpalis in Western and Central Africa. To set up an effective vector control campaign, prior population genetics studies have proved useful. Previous studies on population genetics of G. p. palpalis using microsatellite loci showed high heterozygote deficits, as compared to Hardy-Weinberg expectations, mainly explained by the presence of null alleles and/or the mixing of individuals belonging to several reproductive units (Wahlund effect). In this study we implemented a system of trapping, consisting of a central trap and two to four satellite traps around the central one to evaluate a possible role of the Wahlund effect in tsetse flies from three Cameroon human and animal African trypanosomiases foci (Campo, Bipindi and Fontem). We also estimated effective population sizes and dispersal. No difference was observed between the values of allelic richness, genetic diversity and Wright's FIS, in the samples from central and from satellite traps, suggesting an absence of Wahlund effect. Partitioning of the samples with Bayesian methods showed numerous clusters of 2-3 individuals as expected from a population at demographic equilibrium with two expected offspring per reproducing female. As previously shown, null alleles appeared as the most probable factor inducing these heterozygote deficits in these populations. Effective population sizes varied from 80 to 450 individuals while immigration rates were between 0.05 and 0.43, showing substantial genetic exchanges between different villages within a focus. These results suggest that the "suppression" with establishment of physical barriers may be the best strategy for a vector control campaign in this forest context.
Directory of Open Access Journals (Sweden)
Valéria Schimitz Marodim
2000-10-01
Full Text Available This study was carried out to establish the experimental design and sample size for hydroponic lettuce (Lactuca sativa) grown under the nutrient film technique (NFT). The experiment was conducted in the Laboratory of Soilless/Hydroponic Crops of the Plant Science Department of the Federal University of Santa Maria, and was based on plant weight data. The results showed that, for lettuce grown hydroponically on fibre-cement benches with six channels, the appropriate experimental design is randomized blocks if the experimental unit consists of strips transverse to the bench channels, and completely randomized if the bench is the experimental unit; for plant weight, the sample size is 40 plants for a confidence-interval half-width, as a percentage of the mean (d), equal to 5%, and 7 plants for d equal to 20%.
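The sample-size rule used in such studies can be sketched generically (the CV below is invented; the paper's data differ): find the smallest n for which the t-based confidence-interval half-width, expressed as a fraction of the mean, falls below d:

```python
# Smallest n with t_{1-a/2, n-1} * CV / sqrt(n) <= d.
from math import sqrt
from scipy.stats import t

def sample_size(cv, d, alpha=0.05, n_max=10000):
    for n in range(2, n_max):
        if t.ppf(1 - alpha / 2, n - 1) * cv / sqrt(n) <= d:
            return n
    raise ValueError("no n found up to n_max")

n5  = sample_size(cv=0.15, d=0.05)   # half-width 5% of the mean
n20 = sample_size(cv=0.15, d=0.20)   # half-width 20% of the mean
```

Because n enters both sqrt(n) and the t quantile, the search is iterative rather than a closed formula, and a fourfold widening of d cuts n by much more than a factor of four at small n.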
Li, Aifeng; Ma, Feifei; Song, Xiuli; Yu, Rencheng
2011-03-18
Solid-phase adsorption toxin tracking (SPATT) technology was developed as an effective passive sampling method for dissolved diarrhetic shellfish poisoning (DSP) toxins in seawater. HP20 and SP700 resins have been reported as preferred adsorption substrates for lipophilic algal toxins and are recommended for use in SPATT testing. However, information on the mechanism of passive adsorption by these polymeric resins is still limited. Described herein is a study on the adsorption of OA and DTX1 toxins extracted from Prorocentrum lima algae by HP20 and SP700 resins. The pore size distribution of the adsorbents was characterized by a nitrogen adsorption method to determine the relationship between adsorption and resin porosity. The Freundlich equation constant showed that the difference in adsorption capacity for OA and DTX1 toxins was not determined by specific surface area, but by the pore size distribution in particular, with micropores playing an especially important role. Additionally, it was found that differences in affinity between OA and DTX1 for aromatic resins were as a result of polarity discrepancies due to DTX1 having an additional methyl moiety.
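The Freundlich fit referred to above can be sketched via its log-linear form, log q = log K_F + (1/n) log C (synthetic, noiseless data; the toxin/resin constants of the study are not reproduced):

```python
# Fit Freundlich isotherm q = KF * C**(1/n) by linear regression in log space.
import numpy as np

KF_true, n_true = 2.5, 1.8
C = np.array([0.1, 0.3, 1.0, 3.0, 10.0])   # dissolved concentration (assumed units)
q = KF_true * C ** (1 / n_true)            # equilibrium loading on the resin

slope, intercept = np.polyfit(np.log10(C), np.log10(q), 1)
KF_fit, n_fit = 10 ** intercept, 1 / slope
```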
HyPEP FY-07 Report: Initial Calculations of Component Sizes, Quasi-Static, and Dynamics Analyses
Energy Technology Data Exchange (ETDEWEB)
Chang Oh
2007-07-01
The Very High Temperature Gas-Cooled Reactor (VHTR) coupled to the High Temperature Steam Electrolysis (HTSE) process is one of two reference integrated systems being investigated by the U.S. Department of Energy and Idaho National Laboratory for the production of hydrogen. In this concept a VHTR outlet temperature of 900 °C provides thermal energy and high efficiency electricity for the electrolysis of steam in the HTSE process. In the second reference system the Sulfur Iodine (SI) process is coupled to the VHTR to produce hydrogen thermochemically. This report describes component sizing studies and control system strategies for achieving plant production and operability goals for these two reference systems. The optimal size and design condition for the intermediate heat exchanger, one of the most important components for integration of the VHTR and HTSE plants, was estimated using an analytic model. A partial load schedule and control system was designed for the integrated plant using a quasi-static simulation. Reactor stability for temperature perturbations in the hydrogen plant was investigated using both a simple analytic method and a dynamic simulation. Potential efficiency improvements over the VHTR/HTSE plant were investigated for an alternative design that directly couples a High Temperature Steam Rankine Cycle (HTRC) to the HTSE process. This work was done using the HYSYS code, and results for the HTRC/HTSE system were compared to the VHTR/HTSE system. Integration of the VHTR with SI process plants was begun; using the ASPEN Plus code, the efficiency was estimated. Finally, this report describes planning for the validation and verification of the HyPEP code.
DEFF Research Database (Denmark)
Haugbøl, Steven; Pinborg, Lars H; Arfan, Haroon M
2006-01-01
PURPOSE: To determine the reproducibility of measurements of brain 5-HT2A receptors with an [18F]altanserin PET bolus/infusion approach; further, to estimate the sample size needed to detect regional differences between two groups; and, finally, to evaluate how partial volume correction affects reproducibility and the required sample size. METHODS: For assessment of the variability, six subjects were investigated with [18F]altanserin PET twice, at an interval of less than 2 weeks. The sample size required to detect a 20% difference was estimated from [18F]altanserin PET studies in 84 healthy subjects. … % (range 5-12%), whereas in regions with a low receptor density, BP1 reproducibility was lower, with a median difference of 17% (range 11-39%). Partial volume correction reduced the variability in the sample considerably. The sample size required to detect a 20% difference in brain regions with high
Energy Technology Data Exchange (ETDEWEB)
Lobach, V.A.; Sobolev, A.B.; Shul' gin, B.V.
1987-05-01
Calculations have been performed on the clusters (AxBy) (x = 1, 13; y = 6, 14), corresponding to perfect crystals of the alkaline-earth oxides (AEO) MgO, CaO, and SrO, by means of the molecular-cluster (MC) and crystalline-cluster (CC) approaches within the SCF-Xα-SW method. It is found that the MC approach is unsuitable for describing perfect AEO, because they have a long-range Coulomb interaction and a potential cluster effect. Even in the CC method, the nonstoichiometric composition of (AxBy) for x < 13 and y < 14 does not allow one to obtain satisfactory agreement with the observed optical and x-ray spectra. The (A13B14) and (B13A14) clusters reproduce satisfactorily the partial composition of the valence band (VB) and the conduction band (CB), as well as the widths of those bands, the fine structure of the K emission spectrum for oxygen in MgO, and the observed electron-density distribution. A study is made of the effects of varying the radii of the spheres on the error from the region between the spheres with muffin-tin averaging.
Size of the fragment for crystal cluster SCF-Xα-SW calculations of alkaline earth metal oxides
Energy Technology Data Exchange (ETDEWEB)
Lobach, V.A.; Sobolev, A.B.; Shul' gin, B.V.
Calculation of (AxBy) (x = 1, 13; y = 6, 14) clusters, corresponding to ideal crystals of the alkaline earth metal oxides (AEMO) MgO, CaO and SrO, by the molecular cluster (MC) and crystal cluster (CC) SCF-Xα-SW method is carried out. The MC method is not suitable for describing the electron structure of ideal AEMO due to the long-range Coulomb interaction and a potential cluster effect. Even in the CC method, at x < 13 and y < 14 the (AxBy) cluster nonstoichiometry prevents obtaining satisfactory agreement with the experimental optical and X-ray spectra. The (A13B14) and (B13A14) clusters satisfactorily reproduce the partial composition of the valence band (VB) and conduction band (CB), the VB and CB widths, the fine structure of the oxygen K-emission spectrum in MgO, and the experimental distribution of electron density. The effect of sphere radii variation on the intersphere-region error with muffin-tin averaging is considered.
The Calculation of the Size of the Wind Turbine Tower Steel Cutting
Institute of Scientific and Technical Information of China (English)
马志文; 梁慧丽
2012-01-01
A method for calculating the blanking (cutting) dimensions of wind turbine tower steel plates is proposed in this paper. A wind turbine tower is a tapered cylindrical structure made up of several conical shell sections joined by flanges, so the dimensions of each plate must be calculated to high accuracy and their errors strictly controlled. A fixed blanking template is designed in this article: once a few known data are entered, the template automatically calculates all the required dimensions, minimizing the influence of human error.
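The flat-pattern geometry such a template automates can be sketched for a single conical course (dimensions invented, a generic development calculation rather than the paper's template): a frustum with end diameters D1 < D2 and height h unrolls to an annular sector whose radii and angle follow from the slant height:

```python
# Development (flat pattern) of a conical shell course.
from math import sqrt, pi

def develop_frustum(D1, D2, h):
    slant = sqrt(h ** 2 + ((D2 - D1) / 2) ** 2)   # slant height of the course
    R_out = slant * D2 / (D2 - D1)                # outer radius of the sector
    R_in = slant * D1 / (D2 - D1)                 # inner radius of the sector
    theta = pi * (D2 - D1) / slant                # sector angle in radians
    return R_in, R_out, theta

# Hypothetical course: 4.2 m bottom diameter, 3.0 m top diameter, 2.4 m tall.
R_in, R_out, theta = develop_frustum(D1=3.0, D2=4.2, h=2.4)
```

The pattern is self-checking: the outer and inner arc lengths must equal the bottom and top circumferences, and the radial width must equal the slant height.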
2014-07-01
GUIDANCE DOCUMENT: Passive PE Sampling in Support of In Situ Remediation of Contaminated Sediments – Passive Sampler PRC Calculation Software User's Guide.
Dendukuri, Nandini; Bélisle, Patrick; Joseph, Lawrence
2010-11-20
Diagnostic tests rarely provide perfect results. The misclassification induced by imperfect sensitivities and specificities of diagnostic tests must be accounted for when planning prevalence studies or investigations into properties of new tests. The previous work has shown that applying a single imperfect test to estimate prevalence can often result in very large sample size requirements, and that sometimes even an infinite sample size is insufficient for precise estimation because the problem is non-identifiable. Adding a second test can sometimes reduce the sample size substantially, but infinite sample sizes can still occur as the problem remains non-identifiable. We investigate the further improvement possible when three diagnostic tests are to be applied. We first develop methods required for studies when three conditionally independent tests are available, using different Bayesian criteria. We then apply these criteria to prototypic scenarios, showing that large sample size reductions can occur compared to when only one or two tests are used. As the problem is now identifiable, infinite sample sizes cannot occur except in pathological situations. Finally, we relax the conditional independence assumption, demonstrating in this once again non-identifiable situation that sample sizes may substantially grow and possibly be infinite. We apply our methods to the planning of two infectious disease studies, the first designed to estimate the prevalence of Strongyloides infection, and the second relating to estimating the sensitivity of a new test for tuberculosis transmission. The much smaller sample sizes that are typically required when three as compared to one or two tests are used should encourage researchers to plan their studies using more than two diagnostic tests whenever possible. User-friendly software is available for both design and analysis stages greatly facilitating the use of these methods.
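For contrast with the Bayesian treatment above, a hedged frequentist sketch for the single-test case (sensitivity and specificity assumed known, unlike in the paper) shows both the sample-size inflation and the non-identifiability as the Youden index se + sp − 1 approaches zero:

```python
# Rogan-Gladen-style sample size for estimating prevalence pi to within
# +/- d with a single imperfect test of known sensitivity/specificity.
from statistics import NormalDist
from math import ceil

def n_prevalence(pi, se, sp, d, alpha=0.05):
    z = NormalDist().inv_cdf(1 - alpha / 2)
    p = pi * se + (1 - pi) * (1 - sp)   # apparent (test-positive) prevalence
    J = se + sp - 1                     # Youden index; J -> 0 blows n up
    return ceil(z ** 2 * p * (1 - p) / (d ** 2 * J ** 2))

n_perfect   = n_prevalence(pi=0.20, se=1.00, sp=1.00, d=0.05)
n_imperfect = n_prevalence(pi=0.20, se=0.90, sp=0.95, d=0.05)
```

With a perfect test the formula collapses to the familiar z²π(1−π)/d², and the division by J² makes explicit why a nearly uninformative test demands an enormous, and in the limit infinite, sample.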
Smedslund Geir; Zangi Heidi Andersen; Mowinckel Petter; Hagen Kåre Birger
2013-01-01
Abstract Background Patient reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71)...
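The effect of averaging k repeated measurements can be sketched with the standard compound-symmetry result (the ICC and baseline n below are invented, not the trial's estimates): the outcome variance, and hence the required n, shrinks by the factor ρ + (1 − ρ)/k:

```python
# Required n per arm when the outcome is the mean of k measurements
# with intraclass correlation rho (compound symmetry).
from math import ceil

def n_with_k_measures(n_single, rho, k):
    return ceil(n_single * (rho + (1 - rho) / k))

n1 = 100   # hypothetical n per arm with a single measurement
sizes = [n_with_k_measures(n1, rho=0.6, k=k) for k in range(1, 6)]
```

The gain saturates: as k grows, n approaches n1·ρ (here 60), so with fluctuating symptoms the first few extra measurements buy most of the reduction.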
Smith, Philip L; Lilburn, Simon D; Corbett, Elaine A; Sewell, David K; Kyllingsbæk, Søren
2016-09-01
We investigated the capacity of visual short-term memory (VSTM) in a phase discrimination task that required judgments about the configural relations between pairs of black and white features. Sewell et al. (2014) previously showed that VSTM capacity in an orientation discrimination task was well described by a sample-size model, which views VSTM as a resource comprised of a finite number of noisy stimulus samples. The model predicts the invariance of [Formula: see text] , the sum of squared sensitivities across items, for displays of different sizes. For phase discrimination, the set-size effect significantly exceeded that predicted by the sample-size model for both simultaneously and sequentially presented stimuli. Instead, the set-size effect and the serial position curves with sequential presentation were predicted by an attention-weighted version of the sample-size model, which assumes that one of the items in the display captures attention and receives a disproportionate share of resources. The choice probabilities and response time distributions from the task were well described by a diffusion decision model in which the drift rates embodied the assumptions of the attention-weighted sample-size model.
Nasir, M.; Pratama, D.; Anam, C.; Haryanto, F.
2016-03-01
The aim of this research was to calculate Size-Specific Dose Estimates (SSDE) generated by the Varian OBI CBCT v1.4 X-ray tube working at 100 kV, using EGSnrc Monte Carlo simulations. The EGSnrc Monte Carlo code used in this simulation was divided into two parts: phase-space file data produced by the first part of the simulation became the input to the second part. This research was performed with phantom diameters varying from 5 to 35 cm and phantom lengths varying from 10 to 25 cm. Dose distribution data were used to calculate SSDE values using the trapezoidal rule (trapz function) in a Matlab program. The SSDE obtained from this calculation was compared to the values in the AAPM report and to experimental data. The normalized SSDE values ranged from 1.00 to 3.19 across phantom diameters and from 0.96 to 1.07 across phantom lengths. The statistical error in this simulation was 4.98% for varying phantom diameters and 5.20% for varying phantom lengths. This study demonstrated the accuracy of the Monte Carlo technique in simulating the dose calculation. The influence of the cylindrical phantom material on SSDE will be studied in future work.
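The trapezoidal post-processing step described above can be sketched directly; this Python analogue of Matlab's trapz uses a hypothetical dose profile, not the simulated data:

```python
def trapz(y, x):
    """Trapezoidal-rule integral of samples y over grid x (Matlab trapz analogue)."""
    return sum((x[i + 1] - x[i]) * (y[i + 1] + y[i]) / 2.0
               for i in range(len(x) - 1))

# hypothetical axial dose profile D(z), in mGy, along a 16 cm phantom length
z = [0.0, 4.0, 8.0, 12.0, 16.0]
dose = [1.8, 2.1, 2.3, 2.1, 1.8]
mean_dose = trapz(dose, z) / (z[-1] - z[0])   # length-averaged dose, mGy
```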
Soo, Jhy-Charm; Lee, Eun Gyung; Lee, Larry A; Kashon, Michael L; Harper, Martin
2014-10-01
Lee et al. (Evaluation of pump pulsation in respirable size-selective sampling: part I. Pulsation measurements. Ann Occup Hyg 2014a;58:60-73) introduced an approach to measure pump pulsation (PP) using a real-world sampling train, while the European Standards (EN) (EN 1232-1997 and EN 12919-1999) suggest measuring PP using a resistor in place of the sampler. The goal of this study is to characterize PP according to both EN methods and to determine the relationship of PP between the published method (Lee et al., 2014a) and the EN methods. Additional test parameters were investigated to determine whether the test conditions suggested by the EN methods were appropriate for measuring pulsations. Experiments were conducted using a factorial combination of personal sampling pumps (six medium- and two high-volumetric flow rate pumps), back pressures (six medium- and seven high-flow rate pumps), resistors (two types), tubing lengths between a pump and resistor (60 and 90 cm), and different flow rates (2 and 2.5 l min(-1) for the medium- and 4.4, 10, and 11.2 l min(-1) for the high-flow rate pumps). The selection of sampling pumps and the ranges of back pressure were based on measurements obtained in the previous study (Lee et al., 2014a). Among six medium-flow rate pumps, only the Gilian5000 and the Apex IS conformed to the 10% criterion specified in EN 1232-1997. Although the AirChek XR5000 exceeded the 10% limit, the average PP (10.9%) was close to the criterion. One high-flow rate pump, the Legacy (PP=8.1%), conformed to the 10% criterion in EN 12919-1999, while the Elite12 did not (PP=18.3%). Conducting supplemental tests with additional test parameters beyond those used in the two subject EN standards did not strengthen the characterization of PPs. For the selected test conditions, a linear regression model [PPEN=0.014+0.375×PPNIOSH (adjusted R2=0.871)] was developed to determine the PP relationship between the published method (Lee et al., 2014a) and the EN methods
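The fitted line can be used as a direct conversion between the two measurement protocols. A sketch, assuming pulsations are expressed as fractions (the abstract does not state the units used in the fit):

```python
def pp_en_from_niosh(pp_niosh):
    """EN-scale pump pulsation predicted from the NIOSH real-world-train
    measurement, via the paper's fitted line PP_EN = 0.014 + 0.375 * PP_NIOSH
    (both pulsations assumed expressed as fractions, not percent)."""
    return 0.014 + 0.375 * pp_niosh

pp_en = pp_en_from_niosh(0.20)   # a 20% NIOSH-method pulsation maps to ~8.9% on the EN scale
```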
Franke, Karl-Josef; Szyrach, Mara; Nilius, Georg; Hetzel, Jürgen; Hetzel, Martin; Ruehle, Karl-Heinz; Enderle, Markus D
2009-08-01
Cryoextraction is a procedure for recanalization of obstructed airways caused by exophytic growing tumors. Biopsy samples obtained with this method can be used for histological diagnosis. The objective of this study was to evaluate the parameters influencing the size of cryobiopsies in an in vitro animal model. New flexible cryoprobes with different diameters were used to extract biopsies from lung tissue. These biopsies were compared with forceps biopsy (gold standard) in terms of biopsy size. Tissue dependency of the biopsy size was analyzed by comparing biopsies taken from the lung, the liver, and gastric mucosa. The effect of contact pressure exerted by the tip of the cryoprobe on the tissue was analyzed on liver tissue separately. Biopsy size was estimated by measuring the weight and the diameter. Weight and diameter of cryobiopsies correlated positively with longer activation times and larger diameters of the cryoprobe. The weight of the biopsies was tissue dependent. The biopsy size increased when the probe was pressed on the tissue during cooling. Cryobiopsies can be taken from different tissue types with flexible cryoprobes. The size of the samples depends on tissue type, probe diameter, application time, and pressure exerted by the probe on the tissue. Even the cryoprobe with the smallest diameter can provide larger biopsies than a forceps biopsy in lung. It can be expected that the same parameters influence the sample size of biopsies in vivo.
Ciarleglio, Maria M; Arendt, Christopher D; Makuch, Robert W; Peduzzi, Peter N
2015-03-01
Specification of the treatment effect that a clinical trial is designed to detect (θA) plays a critical role in sample size and power calculations. However, no formal method exists for using prior information to guide the choice of θA. This paper presents a hybrid classical and Bayesian procedure for choosing an estimate of the treatment effect to be detected in a clinical trial that formally integrates prior information into this aspect of trial design. The value of θA is found that equates the pre-specified frequentist power and the conditional expected power of the trial. The conditional expected power averages the traditional frequentist power curve using the conditional prior distribution of the true unknown treatment effect θ as the averaging weight. The Bayesian prior distribution summarizes current knowledge of both the magnitude of the treatment effect and the strength of the prior information through the assumed spread of the distribution. By using a hybrid classical and Bayesian approach, we are able to formally integrate prior information on the uncertainty and variability of the treatment effect into the design of the study, mitigating the risk that the power calculation will be overly optimistic while maintaining a frequentist framework for the final analysis. The value of θA found using this method may be written as a function of the prior mean μ0 and standard deviation τ0, with a unique relationship for a given ratio of μ0/τ0. Results are presented for Normal, Uniform, and Gamma priors for θ.
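The procedure can be sketched under simplifying assumptions the abstract does not spell out: a two-arm trial with a normal outcome of known unit variance, and a N(μ0, τ0²) prior truncated to positive effects. Conditional expected power is obtained by numerical integration, and θA by bisection, since frequentist power is increasing in θ:

```python
from math import erf, exp, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power(theta, n, sigma=1.0, z_alpha=1.96):
    """Frequentist power to detect effect theta with n subjects per arm
    (normal outcome, known sigma, two-sided alpha = 0.05)."""
    return phi(theta * sqrt(n / 2.0) / sigma - z_alpha)

def conditional_expected_power(n, mu0, tau0, sigma=1.0, steps=4000):
    """Average the power curve over a N(mu0, tau0^2) prior truncated to
    theta > 0, by the trapezoidal rule."""
    lo, hi = 1e-9, mu0 + 8.0 * tau0
    h = (hi - lo) / steps
    num = den = 0.0
    for i in range(steps + 1):
        t = lo + i * h
        w = exp(-0.5 * ((t - mu0) / tau0) ** 2)      # unnormalised prior density
        wt = h * (0.5 if i in (0, steps) else 1.0)   # trapezoid weights
        num += wt * w * power(t, n, sigma)
        den += wt * w
    return num / den

def theta_a(n, mu0, tau0, sigma=1.0):
    """Effect size whose frequentist power equals the conditional expected
    power, found by bisection (power is increasing in theta)."""
    target = conditional_expected_power(n, mu0, tau0, sigma)
    lo, hi = 0.0, mu0 + 8.0 * tau0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if power(mid, n, sigma) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# hypothetical design: n = 50 per arm, prior mean 0.5, prior sd 0.2
ta = theta_a(50, mu0=0.5, tau0=0.2)
```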
George, Goldy C.; Hoelscher, Deanna M.; Nicklas, Theresa A.; Kelder, Steven H.
2009-01-01
Objective: To examine diet- and body size-related attitudes and behaviors associated with supplement use in a representative sample of fourth-grade students in Texas. Design: Cross-sectional data from the School Physical Activity and Nutrition study, a probability-based sample of schoolchildren. Children completed a questionnaire that assessed…
Willruth, A M; Steinhard, J; Enzensberger, C; Axt-Fliedner, R; Gembruch, U; Doelle, A; Dimitriou, I; Fimmers, R; Bahlmann, F
2016-02-04
Purpose: To assess the time intervals of the cardiac cycle in healthy fetuses in the second and third trimester using color tissue Doppler imaging (cTDI) and to evaluate the influence of different sizes of sample gates on time interval values. Materials and Methods: Time intervals were measured from the cTDI-derived Doppler waveform using a small and large region of interest (ROI) in healthy fetuses. Results: 40 fetuses were included. The median gestational age at examination was 26 + 1 (range: 20 + 5 - 34 + 5) weeks. The median frame rate was 116/s (100 - 161/s) and the median heart rate 143 (range: 125 - 158) beats per minute (bpm). Using small and large ROIs, the second trimester right ventricular (RV) mean isovolumetric contraction times (ICTs) were 39.8 and 41.4 ms (p = 0.17), the mean ejection times (ETs) were 170.2 and 164.6 ms (p < 0.001), the mean isovolumetric relaxation times (IRTs) were 52.8 and 55.3 ms (p = 0.08), respectively. The left ventricular (LV) mean ICTs were 36.2 and 39.4 ms (p = 0.05), the mean ETs were 167.4 and 164.5 ms (p = 0.013), the mean IRTs were 53.9 and 57.1 ms (p = 0.05), respectively. The third trimester RV mean ICTs were 50.7 and 50.4 ms (p = 0.75), the mean ETs were 172.3 and 181.4 ms (p = 0.49), the mean IRTs were 50.2 and 54.6 ms (p = 0.03); the LV mean ICTs were 45.1 and 46.2 ms (p = 0.35), the mean ETs were 175.2 vs. 172.9 ms (p = 0.29), the mean IRTs were 47.1 and 50.0 ms (p = 0.01), respectively. Conclusion: Isovolumetric time intervals can be analyzed precisely and relatively independent of ROI size. In the near future, automatic time interval measurement using ultrasound systems will be feasible and the analysis of fetal myocardial function can become part of the clinical routine.
Johnson, Kenneth L.; White, K. Preston, Jr.
2012-01-01
The NASA Engineering and Safety Center was requested to improve on the Best Practices document produced for the NESC assessment, Verification of Probabilistic Requirements for the Constellation Program, by giving a recommended procedure for using acceptance sampling by variables techniques. This recommended procedure would be used as an alternative to the potentially resource-intensive acceptance sampling by attributes method given in the document. This document contains the outcome of the assessment.
Trattner, Sigal; Cheng, Bin; Pieniazek, Radoslaw L.; Hoffmann, Udo; Douglas, Pamela S.; Einstein, Andrew J.
2014-01-01
Purpose: Effective dose (ED) is a widely used metric for comparing ionizing radiation burden between different imaging modalities, scanners, and scan protocols. In computed tomography (CT), ED can be estimated by performing scans on an anthropomorphic phantom in which metal-oxide-semiconductor field-effect transistor (MOSFET) solid-state dosimeters have been placed to enable organ dose measurements. Here a statistical framework is established to determine the sample size (number of scans) needed for estimating ED to a desired precision and confidence, for a particular scanner and scan protocol, subject to practical limitations. Methods: The statistical scheme involves solving equations which minimize the sample size required for estimating ED to desired precision and confidence. It is subject to a constrained variation of the estimated ED and solved using the Lagrange multiplier method. The scheme incorporates measurement variation introduced both by MOSFET calibration and by variation in MOSFET readings between repeated CT scans. Sample size requirements are illustrated on cardiac, chest, and abdomen–pelvis CT scans performed on a 320-row scanner and chest CT performed on a 16-row scanner. Results: Sample sizes for estimating ED vary considerably between scanners and protocols. Sample size increases as the required precision or confidence is higher and also as the anticipated ED is lower. For example, for a helical chest protocol, for 95% confidence and 5% precision for the ED, 30 measurements are required on the 320-row scanner and 11 on the 16-row scanner when the anticipated ED is 4 mSv; these sample sizes are 5 and 2, respectively, when the anticipated ED is 10 mSv. Conclusions: Applying the suggested scheme, it was found that even at modest sample sizes, it is feasible to estimate ED with high precision and a high degree of confidence. As CT technology develops enabling ED to be lowered, more MOSFET measurements are needed to estimate ED with the same precision and confidence.
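The abstract does not reproduce the Lagrange-multiplier equations, but the qualitative behavior it reports — more scans needed for tighter precision — already appears in a textbook normal-approximation count of repeated measurements (the 12% per-scan coefficient of variation below is hypothetical):

```python
from math import ceil

def n_for_relative_precision(cv, rel_precision=0.05, z=1.96):
    """Measurements needed so the estimated mean lies within +/- rel_precision
    (as a fraction of the mean) with ~95% confidence, for a per-measurement
    coefficient of variation cv.  Normal approximation only, not the paper's
    constrained-optimization scheme."""
    return ceil((z * cv / rel_precision) ** 2)

n = n_for_relative_precision(0.12)   # hypothetical 12% per-scan CV, 5% precision target
```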
Accurate and timely spatial predictions of vegetation cover from remote imagery are an important data source for natural resource management. High-quality in situ data are needed to develop and validate these products. Point-intercept sampling techniques are a common method for obtaining quantitativ...
Energy Technology Data Exchange (ETDEWEB)
Nasrabadi, M.N. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)], E-mail: mnnasri@kashanu.ac.ir; Jalali, M. [Isfahan Nuclear Science and Technology Research Institute, Atomic Energy organization of Iran (Iran, Islamic Republic of); Mohammadi, A. [Department of Physics, Faculty of Science, University of Kashan, Km. 6, Ravand Road, Kashan (Iran, Islamic Republic of)
2007-10-15
In this work thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing materials is studied using bulk sample prompt gamma neutron activation analysis (BSPGNAA) with the MCNP code. The code was used to perform three dimensional simulations of a neutron source, neutron detector and sample of various material compositions. The MCNP model was validated against experimental measurements of the neutron flux performed using a BF{sub 3} detector. Simulations were performed to predict thermal neutron self-shielding in aqueous bulk samples containing neutron absorbing solutes. In practice, the MCNP calculations are combined with experimental measurements of the relative thermal neutron flux over the sample's surface, with respect to a reference water sample, to derive the thermal neutron self-shielding within the sample. The proposed methodology can be used for the determination of the elemental concentration of unknown aqueous samples by BSPGNAA where knowledge of the average thermal neutron flux within the sample volume is required.
Directory of Open Access Journals (Sweden)
Gerrit eVoordouw
2016-03-01
Microbially-influenced corrosion (MIC) contributes to the general corrosion rate (CR), which is typically measured with carbon steel coupons. Here we explore the use of carbon steel ball bearings, referred to as beads (55.0 ± 0.3 mg; Ø = 0.238 cm), for determining CRs. CRs for samples from an oil field in Oceania incubated with beads were determined by the weight loss method, using acid treatment to remove corrosion products. The release of ferrous and ferric iron was also measured, and CRs based on weight loss and iron determination were in good agreement. Average CRs were 0.022 mm/yr for 8 produced waters with high numbers (10(5)/ml) of acid-producing bacteria (APB), but no sulfate-reducing bacteria (SRB). Average CRs were 0.009 mm/yr for 5 central processing facility (CPF) waters, which had no APB or SRB due to weekly biocide treatment, and 0.036 mm/yr for 2 CPF tank bottom sludges, which had high numbers of APB (10(6)/ml) and SRB (10(8)/ml). Hence, corrosion monitoring with carbon steel beads indicated that biocide treatment of CPF waters decreased the CR, except where biocide did not penetrate. The CR for incubations with 20 ml of a produced water decreased from 0.061 to 0.007 mm/yr when increasing the number of beads from 1 to 40. CRs determined with beads were higher than those with coupons, possibly also due to a higher weight of iron per unit volume used in incubations with coupons. Use of 1 ml syringe columns, containing carbon steel beads and injected with 10 ml/day of SRB-containing medium for 256 days, gave a CR of 0.11 mm/yr under flow conditions. The standard deviation of the distribution of residual bead weights, a measure for the unevenness of the corrosion, increased with increasing CR. The most heavily corroded beads showed significant pitting. Hence the use of uniformly sized carbon steel beads offers new opportunities for screening and monitoring of corrosion including determination of the distribution of corrosion rates, which allows
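A sketch of the weight-loss arithmetic for a single bead (ASTM G1-style constant; the 0.9 mg loss and 60-day incubation are hypothetical, not the paper's data):

```python
from math import pi

def corrosion_rate_mm_per_yr(weight_loss_mg, area_cm2, hours, density=7.85):
    """ASTM G1-style weight-loss corrosion rate in mm/yr; weight loss in mg,
    area in cm^2, exposure in hours, density in g/cm^3 (carbon steel ~7.85)."""
    return 87.6 * weight_loss_mg / (area_cm2 * hours * density)

bead_area = pi * 0.238 ** 2        # sphere surface area pi*d^2 for a 0.238 cm bead
# hypothetical: a bead loses 0.9 mg of steel over a 60-day incubation
cr = corrosion_rate_mm_per_yr(0.9, bead_area, 60 * 24)
```

With these assumed inputs the rate lands near the 0.02-0.04 mm/yr range the abstract reports for produced waters and sludges.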
Barghouty, A. F.
2013-01-01
Accurate estimates of electron-capture cross sections at energies relevant to ENA modeling (approx. few MeV per nucleon) and for multi-electron ions must rely on detailed, but computationally expensive, quantummechanical description of the collision process. Kuang's semi-classical approach is an elegant and efficient way to arrive at these estimates. Motivated by ENA modeling efforts, we shall briefly present this approach along with sample applications and report on current progress.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Investigations into forest soils face the problem of the high level of spatial variability that is an inherent property of all forest soils. In order to investigate the effect of changes in residue management practices on soil properties in hoop pine (Araucaria cunninghamii Aiton ex A. Cunn.) plantations of subtropical Australia, it was important to understand the intensity of sampling effort required to overcome the spatial variability induced by those changes. Harvest residues were formed into windrows to prevent nitrogen (N) losses through volatilisation and erosion that had previously occurred as a result of pile and burn operations. We selected second rotation (2R) hoop pine sites where the windrows (10-15 m apart) had been formed 1, 2 and 3 years prior to sampling in order to examine the spatial variability in soil carbon (C) and N and in potential mineralisable N (PMN) in the areas beneath and between (inter-) the windrows. We examined the implications of soil variability on the number of samples required to detect differences in means for specific soil properties, at different ages and at specified levels of accuracy. Sample size needed to accurately reflect differences between means was not affected by the position where the samples were taken relative to the windrows but differed according to the parameter to be sampled. The relative soil sampling size required for detecting differences between means of a soil property in the inter-windrow and beneath-windrow positions was highly dependent on the soil property assessed and the acceptable relative sampling error. An alternative strategy for soil sampling should be considered if the estimated sample size exceeds 50 replications. A possible solution to this problem is collection of composite soil samples, allowing a substantial reduction in the number of samples required for chemical analysis without loss in the precision of the mean estimates for a particular soil property.
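The replicate counts discussed above come from the standard two-sample formula; a hedged sketch (the 80% power/5% alpha constants and the illustrative coefficient of variation are assumptions, not values from the abstract):

```python
from math import ceil

def n_per_group(sd, delta, z_alpha=1.96, z_beta=0.84):
    """Replicates per position (e.g. beneath- vs inter-windrow) needed to
    detect a mean difference delta with ~80% power at two-sided alpha = 0.05,
    for a common standard deviation sd (normal approximation)."""
    return ceil(2.0 * ((z_alpha + z_beta) * sd / delta) ** 2)

# hypothetical: spatial sd of 40% of the mean, difference worth detecting 10% of the mean
n = n_per_group(sd=0.4, delta=0.1)   # far beyond the 50-replicate practicality limit
```

With such variability the estimate blows well past 50 replications, which is exactly the situation where the authors suggest switching to composite samples.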
Parshintsev, Jevgeni; Ruiz-Jimenez, Jose; Petäjä, Tuukka; Hartonen, Kari; Kulmala, Markku; Riekkola, Marja-Liisa
2011-07-01
In this research, the two most common filter media, quartz and Teflon, were tested to obtain information about the possible adsorption of gas-phase compounds onto filters during long sample collection of atmospheric aerosols. Particles of nanometer-size for off-line chemical characterization were collected using a recently introduced differential mobility analyzer for size separation. Samples were collected at an urban site (Helsinki, SMEARIII station) during spring 2010. Sampling time was 4 to 10 days for particles 50, 40, or 30 nm in diameter. Sample air flow was 4 L/min. The sampling setup was arranged so that two samples were obtained for each sampling period almost simultaneously: one containing particles and adsorbed gas-phase compounds and one containing adsorbed gas-phase compounds only. Filters were extracted and analyzed for the presence of selected carboxylic acids, polyols, nitrogen-containing compounds, and aldehydes. The results showed that, in quartz filter samples, gas-phase adsorption may be responsible for as much as 100% of some compound masses. Whether quartz or Teflon, simultaneous collection of gas-phase zero samples is essential during the whole sampling period. The dependence of the adsorption of gas-phase compounds on vapor pressure and the effect of adsorption on the deposited aerosol layer are discussed.
Energy Technology Data Exchange (ETDEWEB)
Rodríguez-Kessler, P. L., E-mail: peter.rodriguez@ipicyt.edu.mx [Instituto Potosino de Investigación Científica y Tecnológica, San Luis Potosí 78216 (Mexico); Rodríguez-Domínguez, A. R. [Instituto de Física, Universidad Autónoma de San Luis Potosí, San Luis Potosí 78000 (Mexico)
2015-11-14
Size and structure effects on the oxygen reduction reaction on Pt{sub N} clusters with N = 12–13 atoms have been investigated using periodic density functional theory calculations with the generalized gradient approximation. To describe the catalytic activity, we calculated the O and OH adsorption energies on the cluster surface. Oxygen binding at the 3-fold hollow sites of stable Pt{sub 12−13} cluster models proved more favorable for the reaction with O than on the Pt{sub 13}(I{sub h}) and Pt{sub 55}(I{sub h}) icosahedral particles, on which O binds strongly. However, the rate-limiting step was the removal of the OH species, which adsorbs strongly on the vertex sites and thereby reduces the usable catalyst surface. The active sites of the Pt{sub 12−13} clusters, on the other hand, were localized on the edge sites. In particular, OH adsorption on a bilayer Pt{sub 12} cluster is closest to the optimal target, binding 0.0-0.2 eV more weakly than on the Pt(111) surface. However, more progress is necessary to activate the vertex sites of the clusters. The d-band center of the Pt{sub N} clusters shows that structure plays a decisive role in cluster reactivity.
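The d-band center invoked in the last sentence is the first moment of the d-projected density of states; a minimal sketch with a hypothetical density of states, not the paper's DFT output:

```python
def d_band_center(energies, dos):
    """First moment of the d-projected density of states,
    eps_d = integral(E * rho(E) dE) / integral(rho(E) dE),
    evaluated by the trapezoidal rule."""
    num = den = 0.0
    for i in range(len(energies) - 1):
        h = energies[i + 1] - energies[i]
        num += h * (energies[i] * dos[i] + energies[i + 1] * dos[i + 1]) / 2.0
        den += h * (dos[i] + dos[i + 1]) / 2.0
    return num / den

# hypothetical symmetric d-band between -4 eV and the Fermi level (0 eV)
E = [-4.0, -3.0, -2.0, -1.0, 0.0]
rho = [0.0, 1.0, 2.0, 1.0, 0.0]
eps_d = d_band_center(E, rho)   # by symmetry the center sits at -2 eV
```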
Energy Technology Data Exchange (ETDEWEB)
Romero, L.; Travesi, A.
1983-07-01
A code, BETAL, written in FORTRAN IV, was developed to automate the calculation and presentation of the results of total alpha-beta activity measurements in environmental samples. The code performs the calculations needed to convert activities measured as total counts into pCi/l, taking into account the efficiency of the detector used and the other necessary parameters. It also estimates the standard deviation of each result and calculates the lower limit of detection for each measurement. The code operates interactively through a screen-operator dialogue, prompting on screen for the data needed to calculate the activity in each case. It can be executed from any screen-and-keyboard terminal whose computer accepts FORTRAN IV and has a printer connected. (Author) 5 refs.
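The conversions BETAL automates can be sketched as follows; the counting model (equal-time gross and background counts, Currie's LLD constants) is an assumption, since the abstract does not give BETAL's exact formulas:

```python
from math import sqrt

PCI_TO_DPM = 2.22   # 1 pCi = 2.22 disintegrations per minute

def activity_pci_per_l(gross_counts, bkg_counts, t_min, eff, volume_l):
    """Net count rate converted to pCi/l.  Assumes gross and background were
    counted for the same time t_min; eff is the detector counting efficiency."""
    net_cpm = (gross_counts - bkg_counts) / t_min
    return net_cpm / (eff * PCI_TO_DPM * volume_l)

def lld_pci_per_l(bkg_counts, t_min, eff, volume_l):
    """Currie-style lower limit of detection for paired, equal-time
    gross/background counts."""
    lld_counts = 2.71 + 4.65 * sqrt(bkg_counts)
    return (lld_counts / t_min) / (eff * PCI_TO_DPM * volume_l)
```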
Caldararu, Octav; Olsson, Martin A.; Riplinger, Christoph; Neese, Frank; Ryde, Ulf
2017-01-01
We have tried to calculate the free energy for the binding of six small ligands to two variants of the octa-acid deep cavitand host in the SAMPL5 blind challenge. We employed structures minimised with dispersion-corrected density-functional theory with small basis sets and energies were calculated using large basis sets. Solvation energies were calculated with continuum methods and thermostatistical corrections were obtained from frequencies calculated at the HF-3c level. Care was taken to minimise the effects of the flexibility of the host by keeping the complexes as symmetric and similar as possible. In some calculations, the large net charge of the host was reduced by removing the propionate and benzoate groups. In addition, the effect of a restricted molecular dynamics sampling of structures was tested. Finally, we tried to improve the energies by using the DLPNO-CCSD(T) approach. Unfortunately, results of quite poor quality were obtained, with no correlation to the experimental data, systematically too positive affinities (by 50 kJ/mol) and a mean absolute error (after removal of the systematic error) of 11-16 kJ/mol. DLPNO-CCSD(T) did not improve the results, so the accuracy is not limited by the energy function. Instead, four likely sources of errors were identified: first, the minimised structures were often incorrect, owing to the omission of explicit solvent. They could be partly improved by performing the minimisations in a continuum solvent with four water molecules around the charged groups of the ligands. Second, some ligands could bind in several different conformations, requiring sampling of reasonable structures. Third, there is an indication the continuum-solvation model has problems to accurately describe the binding of both the negatively and positively charged guest molecules. Fourth, different methods to calculate the thermostatistical corrections gave results that differed by up to 30 kJ/mol and there is an indication that HF-3c overestimates
DEFF Research Database (Denmark)
Thorlund, Kristian; Anema, Aranka; Mills, Edward
2010-01-01
To illustrate the utility of statistical monitoring boundaries in meta-analysis, and provide a framework in which meta-analysis can be interpreted according to the adequacy of sample size. To propose a simple method for determining how many patients need to be randomized in a future trial before ...
Treen, Emily; Atanasova, Christina; Pitt, Leyland; Johnson, Michael
2016-01-01
Marketing instructors using simulation games as a way of inducing some realism into a marketing course are faced with many dilemmas. Two important quandaries are the optimal size of groups and how much of the students' time should ideally be devoted to the game. Using evidence from a very large sample of teams playing a simulation game, the study…
Park, Seung Shik; Kim, Young J; Kang, Chang Hee
2007-05-01
To analyze polycyclic aromatic hydrocarbons (PAHs) at an urban site in Seoul, South Korea, 24-hr ambient air PM2.5 samples were collected during five intensive sampling periods between November 1998 and December 1999. To determine the PAH size distribution, 3-day size-segregated aerosol samples were also collected in December 1999. Concentrations of the 16 PAHs in the PM2.5 particles ranged from 3.9 to 119.9 ng m(-3) with a mean of 24.3 ng m(-3). An exceptionally high concentration of PAHs (approximately 120 ng m(-3)) observed during a haze event in December 1999 was likely influenced more by diesel vehicle exhaust than by gasoline exhaust, as well as air stagnation, as evidenced by the low carbon monoxide/elemental carbon (CO/EC) ratio of 205 found in this study and results reported by previous studies. The total PAHs associated with the size-segregated particles showed unimodal distributions. Compared to the unimodal size distributions of PAHs with modal peaks at … particles during transport to the sampling site. Further, the fraction of PAHs associated with coarse particles (> 1.8 microm) increased as the molecular weight of the PAHs decreased due to volatilization of fine particles followed by condensation onto coarse particles.
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Hancock, Gregory R.; Freeman, Mara J.
2001-01-01
Provides select power and sample size tables and interpolation strategies associated with the root mean square error of approximation test of not close fit under standard assumed conditions. The goal is to inform researchers conducting structural equation modeling about power limitations when testing a model. (SLD)
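Power for RMSEA-based tests is conventionally computed from noncentral chi-square distributions, in the spirit of MacCallum, Browne, and Sugawara's framework. A pure-Python Monte Carlo sketch of the test of not close fit; the N and df values are illustrative, not taken from the tables:

```python
import random
from math import sqrt

random.seed(20)

def ncx2_sample(df, lam):
    """One draw from a noncentral chi-square with df degrees of freedom
    and noncentrality lam."""
    s = (random.gauss(0.0, 1.0) + sqrt(lam)) ** 2
    for _ in range(df - 1):
        s += random.gauss(0.0, 1.0) ** 2
    return s

def power_not_close_fit(N, df, eps0=0.05, eps_a=0.01, alpha=0.05, reps=5000):
    """Monte Carlo power of the RMSEA test of not close fit:
    H0: epsilon = eps0 is rejected (close fit concluded) when the fit
    statistic falls below the alpha-quantile of its null distribution;
    power is evaluated at the alternative eps_a."""
    lam0 = (N - 1) * df * eps0 ** 2
    lam_a = (N - 1) * df * eps_a ** 2
    null = sorted(ncx2_sample(df, lam0) for _ in range(reps))
    crit = null[int(alpha * reps)]          # empirical alpha-quantile
    return sum(ncx2_sample(df, lam_a) < crit for _ in range(reps)) / reps

p200 = power_not_close_fit(N=200, df=50)
p500 = power_not_close_fit(N=500, df=50)   # power grows with sample size
```

This mirrors the tables' central message: at modest N the test of not close fit can be badly underpowered, and power climbs steeply with N.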
Directory of Open Access Journals (Sweden)
Robert D. Otto
2003-04-01
Wildlife radio-telemetry and tracking projects often determine a priori required sample sizes by statistical means or default to the maximum number that can be maintained within a limited budget. After initiation of such projects, little attention is focussed on effective sample size requirements, resulting in a lack of statistical power. The Department of National Defence operates a base in Labrador, Canada for low-level jet fighter training activities, and maintains a sample of satellite collars on the George River caribou (Rangifer tarandus caribou) herd of the region for spatial avoidance mitigation purposes. We analysed existing location data, in conjunction with knowledge of life history, to develop estimates of satellite collar sample sizes required to ensure adequate mitigation of GRCH. We chose three levels of probability in each of six annual caribou seasons. Estimated numbers of collars required ranged from 15 to 52, 23 to 68, and 36 to 184 for the 50%, 75%, and 90% probability levels, respectively, depending on season. The estimates can be used to make more informed decisions about mitigation of GRCH, and, generally, our approach provides a means to adaptively assess radio collar sample sizes for ongoing studies.
Rogan, Joanne C.; Keselman, H. J.
1977-01-01
The effects of variance heterogeneity on the empirical probability of a Type I error for the analysis of variance (ANOVA) F-test are examined. The rate of Type I error varies as a function of the degree of variance heterogeneity, and the ANOVA F-test is not always robust to variance heterogeneity when sample sizes are equal. (Author/JAC)
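The kind of robustness check reported above is easy to reproduce by simulation; a sketch for three groups of ten (the critical value and the standard-deviation patterns are illustrative assumptions, not the paper's conditions):

```python
import random

random.seed(7)

def f_stat(groups):
    """One-way ANOVA F statistic."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    ssb = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ssw = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (n - k))

def type1_rate(sds, n_per_group=10, crit=3.354, reps=4000):
    """Empirical Type I error: all population means are 0 (H0 true) but the
    per-group standard deviations `sds` may differ.  crit is the upper 5%
    point of F(2, 27) for three groups of ten."""
    rejections = 0
    for _ in range(reps):
        groups = [[random.gauss(0.0, sd) for _ in range(n_per_group)]
                  for sd in sds]
        if f_stat(groups) > crit:
            rejections += 1
    return rejections / reps

alpha_equal = type1_rate((1.0, 1.0, 1.0))   # close to the nominal 0.05
alpha_hetero = type1_rate((1.0, 1.0, 5.0))  # inflated despite equal group sizes
```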
Evidence for a Global Sampling Process in Extraction of Summary Statistics of Item Sizes in a Set.
Tokita, Midori; Ueda, Sachiyo; Ishiguchi, Akira
2016-01-01
Several studies have shown that our visual system may construct a "summary statistical representation" over groups of visual objects. Although there is a general understanding that human observers can accurately represent sets of a variety of features, many questions on how summary statistics, such as an average, are computed remain unanswered. This study investigated sampling properties of visual information used by human observers to extract two types of summary statistics of item sets, average and variance. We presented three models of ideal observers to extract the summary statistics: a global sampling model without sampling noise, global sampling model with sampling noise, and limited sampling model. We compared the performance of an ideal observer of each model with that of human observers using statistical efficiency analysis. Results suggest that summary statistics of items in a set may be computed without representing individual items, which makes it possible to discard the limited sampling account. Moreover, the extraction of summary statistics may not necessarily require the representation of individual objects with focused attention when the sets of items are larger than 4.
DEFF Research Database (Denmark)
Aukland, S M; Westerhausen, R; Plessen, K J
2011-01-01
BACKGROUND AND PURPOSE: Several studies suggest that VLBW is associated with a reduced CC size later in life. We aimed to clarify this in a prospective, controlled study of 19-year-olds, hypothesizing that those with LBWs had smaller subregions of CC than the age-matched controls, even after...
Institute of Scientific and Technical Information of China (English)
秦艳杰; 金迪; 初冠囡; 李霞; 李永仁
2012-01-01
This study examines the effects of sample size and number of AFLP primer pairs on genetic structure in cultured sea urchin (Strongylocentrotus intermedius). Eight different sample sizes (10, 20, 30, 40, 50, 60, 70, and 80 individuals) were used to calculate genetic parameters including Nei's gene diversity (H), Shannon's information index (I), and the percentage of polymorphic loci (PP). These indicators increased sharply with increasing sample size and then leveled off. When the sample size was 50 or more, PP showed no further significant change as sample size rose. H and I showed no significant differences once there were more than 30 and 20 individuals, respectively. The genetic parameters were also calculated from band information detected by each of one, two, three, four, and five AFLP primer pairs, with those calculated from six AFLP primer pairs taken as the control. H, I, and PP all showed significant differences between results from one or two primer pairs and the control. When the number of AFLP primer pairs was equal to or greater than three, all genetic parameters showed no significant differences. Accordingly, we suggest that when AFLP markers are used to estimate the genetic diversity of a sea urchin population, the minimum sample size should not be less than 50, and at least 3 AFLP primer pairs (or more than 100 loci) are required when the sample size is large enough (80 individuals).
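The three indices can be computed per locus once allele frequencies are in hand; a simplified sketch (real AFLP data are dominant, so frequencies must first be inferred from band presence — that step is skipped here, and the frequencies are hypothetical):

```python
from math import log

def diversity_indices(allele_freqs):
    """Mean Nei gene diversity (H), Shannon information index (I) and
    proportion of polymorphic loci (PP) for biallelic loci, given one
    allele frequency p per locus."""
    H = [2.0 * p * (1.0 - p) for p in allele_freqs]
    I = [-(p * log(p) + (1.0 - p) * log(1.0 - p)) if 0.0 < p < 1.0 else 0.0
         for p in allele_freqs]
    PP = sum(1 for p in allele_freqs if 0.01 < p < 0.99) / len(allele_freqs)
    return sum(H) / len(H), sum(I) / len(I), PP

# hypothetical frequencies at four loci; the third locus is monomorphic
H, I, PP = diversity_indices([0.5, 0.8, 1.0, 0.3])
```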
Directory of Open Access Journals (Sweden)
Dow Geoffrey S
2003-02-01
Full Text Available Abstract Background There is no known biochemical basis for the adverse neurological events attributed to mefloquine. Identification of genes modulated by toxic agents using microarrays may provide sufficient information to generate hypotheses regarding their mode of action. However, this utility may be compromised if sample sizes are too low or the filtering methods used to identify differentially expressed genes are inappropriate. Methods The transcriptional changes induced in rat neuroblastoma cells by a physiological dose of mefloquine (10 micromolar) were investigated using Affymetrix arrays. A large sample size was used (a total of 16 arrays). Genes were ranked by P-value (t-test). RT-PCR was used to confirm (or reject) the expression changes of several of the genes with the lowest P-values. Different P-value filtering methods were compared in terms of their ability to detect these differentially expressed genes. A retrospective power analysis was then performed to determine whether the use of lower sample sizes might also have detected those genes with altered transcription. Results Based on RT-PCR, mefloquine upregulated cJun, IkappaB and GADD153. Reverse Holm-Bonferroni P-value filtering was superior to other methods in terms of maximizing detection of differentially expressed genes but not those with unaltered expression. Reduction of total microarray sample size (…) Conclusions Adequate sample sizes and appropriate selection of P-value filtering methods are essential for the reliable detection of differentially expressed genes. The changes in gene expression induced by mefloquine suggest that the ER might be a neuronal target of the drug.
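Rank-based P-value filtering of this kind can be sketched with the classical Holm step-down procedure. The function below is a hedged illustration only; the paper's "reverse" Holm-Bonferroni variant is not specified here:

```python
import numpy as np

def holm_reject(pvals, alpha=0.05):
    """Holm step-down multiple-testing filter.

    Returns a boolean mask of rejected hypotheses (genes declared
    differentially expressed). This is the standard Holm procedure,
    not the paper's 'reverse' variant.
    """
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)                # test p-values from smallest up
    reject = np.zeros(m, dtype=bool)
    for rank, idx in enumerate(order):
        # compare the (rank+1)-th smallest p-value against alpha / (m - rank)
        if p[idx] <= alpha / (m - rank):
            reject[idx] = True
        else:
            break                        # step-down: stop at the first failure
    return reject
```

With `alpha = 0.05` and p-values `[0.001, 0.01, 0.04, 0.5]`, the first two hypotheses are rejected (thresholds 0.0125 and 0.0167) and the procedure stops at the third (0.04 > 0.025).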
The Harmonic Calculation of Novel Linear Combination Sampling SPWM
Institute of Scientific and Technical Information of China (English)
严海龙; 王榕生
2012-01-01
This article first illustrates a shortcoming of asymmetric regular-sampled SPWM: it requires a large number of sampling operations and therefore occupies substantial processor resources. A detailed analysis of the linear combination sampling method, which solves this problem, is then given. Through a comparison of the symmetric regular sampling method and the two-vertex sampling method, the article introduces the novel linear combination sampling method, uses Matlab's numerical-calculation functions to perform harmonic analysis of the SPWM waves produced by the several sampling methods, and finally gives a few key routines from the analysis process. The analysis results validate the effectiveness of the linear combination sampling method and also show that the two-vertex sampling method is not practical.
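The harmonic content of a sampled SPWM wave can be checked numerically by generating the switching waveform and taking its FFT, in the spirit of the Matlab analysis described above. A Python/NumPy sketch of asymmetric regular sampling with illustrative parameters (the linear combination method itself is not reproduced):

```python
import numpy as np

# Asymmetric regular sampling: the sine reference is sampled and held once per
# carrier half-cycle, then compared with a triangular carrier. Parameters below
# (50 Hz reference, 2.5 kHz carrier, modulation index 0.8) are assumptions.
f_ref, f_car, m = 50.0, 2500.0, 0.8
fs = 1_000_000                         # simulation sample rate, Hz
N = 20000                              # exactly one 50 Hz fundamental period
t = np.arange(N) / fs

# triangular carrier in [-1, 1]
carrier = 2.0 * np.abs(2.0 * ((t * f_car) % 1.0) - 1.0) - 1.0
# sample-and-hold of the reference at twice the carrier frequency
t_held = np.floor(t * 2 * f_car) / (2 * f_car)
hold = m * np.sin(2 * np.pi * f_ref * t_held)
pwm = np.where(hold >= carrier, 1.0, -1.0)   # two-level SPWM output

# harmonic amplitudes from the FFT; bin k corresponds to k * 50 Hz here
spec = np.abs(np.fft.rfft(pwm)) * 2.0 / N
fundamental = spec[1]                  # should be close to m
```

The fundamental amplitude recovered from the spectrum is close to the modulation index, and the carrier-band harmonics appear around bin 50 (2.5 kHz).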
Holdeman, James D.; Clisset, James R.; Moder, Jeffrey P.
2010-01-01
The primary purpose of this jet-in-crossflow study was to calculate expected results for two configurations for which limited or no experimental results have been published: (1) cases of opposed rows of closely-spaced jets from inline and staggered round holes and (2) rows of jets from alternating large and small round holes. Simulations of these configurations were performed using an Excel (Microsoft Corporation) spreadsheet implementation of a NASA-developed empirical model which had been shown in previous publications to give excellent representations of mean experimental scalar results, suggesting that the NASA empirical model for the scalar field could confidently be used to investigate these configurations. The supplemental Excel spreadsheet is posted with the current report on the NASA Glenn Technical Reports Server (http://gltrs.grc.nasa.gov) and can be accessed from the Supplementary Notes section as TM-2010-216100-SUPPL1.xls. Calculations for cases of opposed rows of jets with the orifices on one side shifted show that staggering can improve the mixing, particularly for cases where jets would overpenetrate slightly if the orifices were in an aligned configuration. The jets from the larger holes dominate the mixture fraction for configurations with a row of large holes opposite a row of smaller ones, although the jet penetration was about the same. For single and opposed rows with mixed hole sizes, jets from the larger holes penetrated farther. For all cases investigated, the dimensionless variance of the mixture fraction decreased significantly with increasing downstream distance. However, at a given downstream distance, the variation between cases was small.
Wang, Lei; Li, Zhenyu; Jiang, Jia; An, Taiyu; Qin, Hongwei; Hu, Jifan
2017-01-01
In the present work, we demonstrate that ferromagnetic resonance and magneto-permittivity resonance can be observed at appropriate microwave frequencies at room temperature for a multiferroic nano-BiFeO3/paraffin composite sample of appropriate thickness (such as 2 mm). Ferromagnetic resonance originates from the room-temperature weak ferromagnetism of nano-BiFeO3. The observed magneto-permittivity resonance in multiferroic nano-BiFeO3 is connected with dynamic magnetoelectric coupling through the Dzyaloshinskii-Moriya (DM) magnetoelectric interaction or the combination of magnetostriction and piezoelectric effects. In addition, we experimentally observed a resonance of negative imaginary permeability for nano-BiFeO3/paraffin toroidal samples with larger thicknesses, D = 3.7 and 4.9 mm. Such resonance of negative imaginary permeability belongs to sample-size resonance.
Steven, E.; Jobiliong, E.; Eugenio, P. M.; Brooks, J. S.
2012-04-01
A procedure for fabricating adhesive stamp electrodes based on gold coated adhesive tape used to measure electronic transport properties of supra-micron samples in the lateral range 10-100 μm and thickness >1 μm is described. The electrodes can be patterned with a ˜4 μm separation by metal deposition through a mask using Nephila clavipes spider dragline silk fibers. Ohmic contact is made by adhesive lamination of a sample onto the patterned electrodes. The performance of the electrodes with temperature and magnetic field is demonstrated for the quasi-one-dimensional organic conductor (TMTSF)2PF6 and single crystal graphite, respectively.
All-reflective UV-VIS-NIR transmission and fluorescence spectrometer for μm-sized samples
Directory of Open Access Journals (Sweden)
Friedrich O. Kirchner
2014-07-01
We report on an optical transmission spectrometer optimized for tiny samples. The setup is based on all-reflective parabolic optics and delivers broadband operation from 215 to 1030 nm. A fiber-coupled light source is used for illumination and a fiber-coupled miniature spectrometer for detection. The diameter of the probed area is less than 200 μm for all wavelengths. We demonstrate the capability to record transmission, absorption, reflection, fluorescence and refractive indices of tiny and ultrathin sample flakes with this versatile device. The performance is validated with a solid state wavelength standard and with dye solutions.
Boessen, Ruud; van der Baan, Frederieke; Groenwold, Rolf; Egberts, Antoine; Klungel, Olaf; Grobbee, Diederick; Knol, Mirjam; Roes, Kit
2013-01-01
Two-stage clinical trial designs may be efficient in pharmacogenetics research when there is some but inconclusive evidence of effect modification by a genomic marker. Two-stage designs allow early stopping for efficacy or futility and can offer the additional opportunity to enrich the study population to a specific patient subgroup after an interim analysis. This study compared sample size requirements for fixed parallel group, group sequential, and adaptive selection designs with equal overall power and control of the family-wise type I error rate. The designs were evaluated across scenarios that defined the effect sizes in the marker positive and marker negative subgroups and the prevalence of marker positive patients in the overall study population. Effect sizes were chosen to reflect realistic planning scenarios, where at least some effect is present in the marker negative subgroup. In addition, scenarios were considered in which the assumed 'true' subgroup effects (i.e., the postulated effects) differed from those hypothesized at the planning stage. As expected, both two-stage designs generally required fewer patients than a fixed parallel group design, and the advantage increased as the difference between subgroups increased. The adaptive selection design added little further reduction in sample size, as compared with the group sequential design, when the postulated effect sizes were equal to those hypothesized at the planning stage. However, when the postulated effects deviated strongly in favor of enrichment, the comparative advantage of the adaptive selection design increased, which precisely reflects the adaptive nature of the design.
Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I
1998-05-01
A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date from one single site, and with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. Anatomical regions investigated are the scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shafts; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample, only the humeral midshaft perimeter shows unusually high variation (and only when it is expressed by the maximum ratio, not by the coefficient of variation). In spite of that, the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range, and its maximum ratio is in the central part of the distribution of modern human samples. Thus, the hypothesis of a greater sexual dimorphism in Middle Pleistocene populations than in modern populations is not supported by either the cranial or the postcranial evidence from Sima de los Huesos.
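The bootstrap comparison of fossil-sample variation against modern intrapopulational variation can be sketched as follows; the data, subsample size, and the max/min "maximum ratio" statistic are illustrative assumptions:

```python
import numpy as np

def bootstrap_max_ratio(modern, n_fossil, n_boot=10_000, seed=0):
    """Bootstrap distribution of the maximum ratio (max/min) for
    subsamples of size n_fossil drawn, with replacement, from a
    modern reference sample of the same measurement."""
    rng = np.random.default_rng(seed)
    modern = np.asarray(modern, dtype=float)
    idx = rng.integers(0, len(modern), size=(n_boot, n_fossil))
    sub = modern[idx]
    return sub.max(axis=1) / sub.min(axis=1)

def bootstrap_position(modern, n_fossil, fossil_ratio, **kw):
    """One-sided position of an observed fossil max ratio within the
    bootstrap distribution (fraction of resamples at least as extreme)."""
    ratios = bootstrap_max_ratio(modern, n_fossil, **kw)
    return float(np.mean(ratios >= fossil_ratio))
```

A fossil max ratio sitting in the central part of the bootstrap distribution (position far from 0) is consistent with modern levels of variation, as reported above.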
Analysis of Nb3Sn Strand Microstructure After Full-size SULTAN Test of ITER TF Conductor Sample
Kaverin, D.; Potanina, L.; Shutov, K.; Vysotsky, V.; Tronza, V.; Mitin, A.; Abdyukhanov, I.; Alekseev, M.
The study of defects generated in the superconducting filaments of Nb3Sn strands under electromagnetic and thermal cycling was carried out for the TFRF3 cable-in-conduit-conductor (CICC) sample that passed final testing in the SULTAN test facility. The TFRF3 sample was manufactured for the qualification of the RF Toroidal Field (TF) CICC. The strand samples were taken from different locations in the cross-section of TFRF3 and from different positions along its axis relative to the background magnetic field. Qualitative and quantitative analyses of defects were carried out using metallographic analysis of images obtained with a laser scanning microscope. We analyzed the number, type, and distribution of defects in the filaments of the Nb3Sn strand samples extracted from different petals of TFRF3, depending on the strand location in the cross-section (the center of a petal, near the spiral, near the outer jacket) in the high field zone (HFZ). Results on the number of defects and their distribution are presented and discussed.
Directory of Open Access Journals (Sweden)
Morecroft Michael D
2001-07-01
Abstract Background The Resource Dispersion Hypothesis (RDH) proposes a mechanism for the passive formation of social groups where resources are dispersed, even in the absence of any benefits of group living per se. Despite supportive modelling, it lacks empirical testing. The RDH predicts that, rather than Territory Size (TS) increasing monotonically with Group Size (GS) to account for increasing metabolic needs, TS is constrained by the dispersion of resource patches, whereas GS is independently limited by their richness. We conducted multiple-year tests of these predictions using data from the long-term study of badgers Meles meles in Wytham Woods, England. The study has long failed to identify direct benefits from group living and, consequently, alternative explanations for their large group sizes have been sought. Results TS was not consistently related to resource dispersion, nor was GS consistently related to resource richness. Results differed according to data groupings and whether territories were mapped using minimum convex polygons or traditional methods. Habitats differed significantly in resource availability, but there was also evidence that food resources may be spatially aggregated within habitat types as well as between them. Conclusions This is, we believe, the largest ever test of the RDH and builds on the long-term project that initiated part of the thinking behind the hypothesis. Support for the predictions was mixed and depended on the year and the method used to map territory borders. We suggest that within-habitat patchiness, as well as model assumptions, should be further investigated for improved tests of the RDH in the future.
Energy Technology Data Exchange (ETDEWEB)
Damiani, Rick [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2016-02-08
This manual summarizes the theory and preliminary verifications of the JacketSE module, which is an offshore jacket sizing tool that is part of the Wind-Plant Integrated System Design & Engineering Model toolbox. JacketSE is based on a finite-element formulation and on user-prescribed inputs and design standards' criteria (constraints). The physics are highly simplified, with a primary focus on satisfying ultimate limit states and modal performance requirements. Preliminary validation work included comparing industry data and verification against ANSYS, a commercial finite-element analysis package. The results are encouraging, and future improvements to the code are recommended in this manual.
Chang, G. S.; Lillo, M. A.
2009-08-01
The National Nuclear Security Administration's (NNSA) Reduced Enrichment for Research and Test Reactors (RERTR) program assigned to the Idaho National Laboratory (INL) the responsibility of developing and demonstrating high-uranium-density research reactor fuel forms to enable the use of low enriched uranium (LEU) in research and test reactors around the world. A series of full-size fuel plate experiments has been proposed for irradiation testing in the center flux trap (CFT) position of the Advanced Test Reactor (ATR). These full-size fuel plate tests are designated as the AFIP tests. The AFIP nominal fuel zone is rectangular in shape, with a design length of 21.5 in (54.61 cm), width of 1.6 in (4.064 cm), and uniform thickness of 0.014 in (0.03556 cm). This gives a nominal fuel zone volume of 0.482 in3 (7.89 cm3) per fuel plate. The AFIP test assembly has two test positions, each designed to hold 2 full-size plates, for a total of 4 full-size plates per test assembly. The AFIP test plates will be irradiated at a peak surface heat flux of about 350 W/cm2 and discharged at a peak U-235 burn-up of about 70 at.%. Based on limited irradiation testing of the monolithic (U-10Mo) fuel form, it is desirable to keep the peak fuel temperature below 250°C; to achieve this, it will be necessary to keep plate heat fluxes below 500 W/cm2. Due to the heavy U-235 loading and a plate width of 1.6 in (4.064 cm), neutron self-shielding will increase the local-to-average-ratio (L2AR) fission power near the sides of the fuel plates. To demonstrate that the AFIP experiment will meet the ATR safety requirements, a very detailed 2-dimensional (2D) Y-Z fission power profile was evaluated in order to best predict the fuel plate temperature distribution. The ability to accurately predict fuel plate power and burnup is essential to both the design of the AFIP tests and the evaluation of the irradiated fuel performance. To support this need, a detailed MCNP Y
Jakopic, Rozle; Richter, Stephan; Kühn, Heinz; Benedik, Ljudmila; Pihlar, Boris; Aregbe, Yetunde
2009-01-01
A sample preparation procedure for isotopic measurements using thermal ionization mass spectrometry (TIMS) was developed which employs the technique of carburization of rhenium filaments. Carburized filaments were prepared in a special vacuum chamber in which the filaments were exposed to benzene vapour as a carbon supply and carburized electrothermally. To find the optimal conditions for the carburization and isotopic measurements using TIMS, the influence of various parameters such as benzene pressure, carburization current and the exposure time were tested. As a result, carburization of the filaments improved the overall efficiency by one order of magnitude. Additionally, a new "multi-dynamic" measurement technique was developed for Pu isotope ratio measurements using a "multiple ion counting" (MIC) system. This technique was combined with filament carburization and applied to the NBL-137 isotopic standard and samples of the NUSIMEP 5 inter-laboratory comparison campaign, which included certified plutonium materials at the ppt-level. The multi-dynamic measurement technique for plutonium, in combination with filament carburization, has been shown to significantly improve the precision and accuracy for isotopic analysis of environmental samples with low-levels of plutonium.
Liu, Tian; Spincemaille, Pascal; de Rochefort, Ludovic; Kressler, Bryan; Wang, Yi
2009-01-01
Magnetic susceptibility differs among tissues based on their contents of iron, calcium, contrast agent, and other molecular compositions. Susceptibility modifies the magnetic field detected in the MR signal phase. The determination of an arbitrary susceptibility distribution from the induced field shifts is a challenging, ill-posed inverse problem. A method called "calculation of susceptibility through multiple orientation sampling" (COSMOS) is proposed to stabilize this inverse problem. The field created by the susceptibility distribution is sampled at multiple orientations with respect to the polarization field, B(0), and the susceptibility map is reconstructed by weighted linear least squares to account for field noise and the signal void region. Numerical simulations and phantom and in vitro imaging validations demonstrated that COSMOS is a stable and precise approach to quantify a susceptibility distribution using MRI.
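The key idea of COSMOS — acquiring at multiple orientations moves the zero cone of the dipole kernel so the joint inversion becomes well posed — can be illustrated on a toy grid. A hedged NumPy sketch with uniform (rather than noise-based) weights and arbitrary rotation angles:

```python
import numpy as np

def dipole_kernel(shape, b0_dir):
    """k-space dipole kernel D = 1/3 - (k.b0)^2 / |k|^2 for unit vector b0_dir."""
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    kb = kx * b0_dir[0] + ky * b0_dir[1] + kz * b0_dir[2]
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kb**2 / k2
    D[0, 0, 0] = 0.0                      # undefined DC term set to zero
    return D

shape = (16, 16, 16)
chi = np.zeros(shape)
chi[6:10, 6:10, 6:10] = 1.0               # toy susceptibility distribution

# three object orientations relative to B0 (angles are arbitrary choices)
dirs = [(0.0, 0.0, 1.0),
        (0.0, np.sin(0.4), np.cos(0.4)),
        (np.sin(0.4), 0.0, np.cos(0.4))]
Ds = [dipole_kernel(shape, d) for d in dirs]
fields = [np.fft.ifftn(D * np.fft.fftn(chi)).real for D in Ds]  # forward model

# least-squares combination over orientations, independently per k-space point
num = sum(D * np.fft.fftn(f) for D, f in zip(Ds, fields))
den = sum(D**2 for D in Ds)
chi_k = np.where(den > 1e-6, num / np.where(den == 0, 1.0, den), 0.0)
chi_hat = np.fft.ifftn(chi_k).real
```

With a single orientation the denominator vanishes on the whole magic-angle cone; with three orientations it vanishes only at isolated points, so the reconstruction closely matches the true distribution (up to the discarded DC term).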
Volkova, V. (Valeriya); Tronina, A.; Pogorelova, T.
2010-01-01
The distribution of the Hsu test statistic has been investigated, by methods of statistical simulation, in the case when the distributions of the observed random variables differ from the normal law. The limiting distributions of the statistic have been approximated for a number of observation distribution laws. The distributions of the Bartels and Wald-Wolfowitz test statistics have also been investigated in the case of limited sample sizes.
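Null distributions of this kind are straightforward to approximate by Monte Carlo simulation. A sketch for a Wald-Wolfowitz-type runs statistic under two observation laws (illustrative only; the Hsu and Bartels statistics studied in the paper are not reproduced):

```python
import numpy as np

def runs_statistic(x):
    """Standardized number of runs above/below the sample median
    (a Wald-Wolfowitz-type statistic)."""
    s = x > np.median(x)
    runs = 1 + np.count_nonzero(s[1:] != s[:-1])
    n1, n2 = s.sum(), (~s).sum()
    n = n1 + n2
    mu = 1 + 2 * n1 * n2 / n
    var = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n**2 * (n - 1))
    return (runs - mu) / np.sqrt(var)

def simulated_quantile(sampler, n, q=0.95, n_sim=5000, seed=0):
    """Monte Carlo quantile of the statistic for i.i.d. samples of size n."""
    rng = np.random.default_rng(seed)
    stats = [runs_statistic(sampler(rng, n)) for _ in range(n_sim)]
    return float(np.quantile(stats, q))

# compare empirical 95% critical values under normal and uniform observations
q_norm = simulated_quantile(lambda r, n: r.normal(size=n), n=30)
q_unif = simulated_quantile(lambda r, n: r.uniform(size=n), n=30)
```

Comparing such simulated quantiles across observation laws and sample sizes is exactly the kind of robustness check the abstract describes.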
Shirazi, Mohammadali; Reddy Geedipally, Srinivas; Lord, Dominique
2017-01-01
Severity distribution functions (SDFs) are used in highway safety to estimate the severity of crashes and to conduct different types of safety evaluations and analyses. Developing a new SDF is a difficult task and demands significant time and resources. To simplify the process, the Highway Safety Manual (HSM) has started to document SDF models for different types of facilities; SDF models have recently been introduced for freeways and ramps in the HSM addendum. However, since these models are fitted and validated using data from a small number of selected states, they need to be calibrated to local conditions when applied in a new jurisdiction. The HSM provides a methodology to calibrate the models through a scalar calibration factor; however, this methodology was never validated through research, and there are no concrete guidelines for selecting a reliable sample size. Using extensive simulation, this paper documents an analysis that examined the bias between the 'true' and 'estimated' calibration factors. The analysis indicated that as the true calibration factor deviates further from 1, more bias is observed between the 'true' and 'estimated' calibration factors. In addition, simulation studies were performed to determine the calibration sample size under various conditions. It was found that, as the average coefficient of variation (CV) of the 'KAB' and 'C' crashes increases, the analyst needs to collect a larger sample size to calibrate SDF models. Taking this observation into account, sample-size guidelines are proposed based on the average CV of the crash severities used in the calibration process.
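The dependence of calibration accuracy on sample size and on the severity counts' coefficient of variation can be explored with a small Monte Carlo sketch. All distributions and parameter values below are illustrative assumptions, not the HSM procedure:

```python
import numpy as np

def simulate_calibration(c_true, mean_pred, cv, n_sites, n_sim=2000, seed=0):
    """Monte Carlo behaviour of a scalar calibration factor estimate
    C_hat = sum(observed) / sum(predicted).

    Observed site counts are negative binomial with mean c_true * predicted
    and a dispersion chosen so the count CV is roughly `cv` (an illustrative
    parameterization). Returns the mean and std of C_hat over simulations.
    """
    rng = np.random.default_rng(seed)
    c_hats = np.empty(n_sim)
    for i in range(n_sim):
        # site-level model predictions (assumed gamma-distributed across sites)
        pred = rng.gamma(shape=4.0, scale=mean_pred / 4.0, size=n_sites)
        mu = c_true * pred
        # NB size parameter r so that var = mu + mu^2 / r matches the target CV
        r = 1.0 / max(cv**2 - 1.0 / mu.mean(), 1e-6)
        p = r / (r + mu)
        obs = rng.negative_binomial(r, p)
        c_hats[i] = obs.sum() / pred.sum()
    return c_hats.mean(), c_hats.std()
```

Re-running with larger `cv` shows the spread of `C_hat` growing, which is the mechanism behind the CV-based sample-size guidelines described above.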
Rakow, Tobias; El Deeb, Sami; Hahne, Thomas; El-Hady, Deia Abd; AlBishri, Hassan M; Wätzig, Hermann
2014-09-01
In this study, size-exclusion chromatography and high-resolution atomic absorption spectrometry methods have been developed and evaluated to test the stability of proteins during sample pretreatment. This especially includes different storage conditions but also adsorption before or even during the chromatographic process. For the development of the size exclusion method, a Biosep S3000 5 μm column was used for investigating a series of representative model proteins, namely bovine serum albumin, ovalbumin, monoclonal immunoglobulin G antibody, and myoglobin. Ambient temperature storage was found to be harmful to all model proteins, whereas short-term storage up to 14 days could be done in an ordinary refrigerator. Freezing the protein solutions was always complicated and had to be evaluated for each protein in the corresponding solvent. To keep the proteins in their native state a gentle freezing temperature should be chosen, hence liquid nitrogen should be avoided. Furthermore, a high-resolution continuum source atomic absorption spectrometry method was developed to observe the adsorption of proteins on container material and chromatographic columns. Adsorption to any container led to a sample loss and lowered the recovery rates. During the pretreatment and high-performance size-exclusion chromatography, adsorption caused sample losses of up to 33%.
Kolak, Jon; Hackley, Paul C.; Ruppert, Leslie F.; Warwick, Peter D.; Burruss, Robert
2015-01-01
To investigate the potential for mobilizing organic compounds from coal beds during geologic carbon dioxide (CO2) storage (sequestration), a series of solvent extractions using dichloromethane (DCM) and supercritical CO2 (40 °C and 10 MPa) were conducted on a set of coal samples collected from Louisiana and Ohio. The coal samples studied range in rank from lignite A to high volatile A bituminous, and were characterized using proximate, ultimate, organic petrography, and sorption isotherm analyses. Sorption isotherm analyses of gaseous CO2 and methane show a general increase in gas storage capacity with coal rank, consistent with findings from previous studies. In the solvent extractions, both dry, ground coal samples and moist, intact core plug samples were used to evaluate the effects of variations in particle size and moisture content. Samples were spiked with perdeuterated surrogate compounds prior to extraction, and extracts were analyzed via gas chromatography–mass spectrometry. The DCM extracts generally contained the highest concentrations of organic compounds, indicating the existence of additional hydrocarbons within the coal matrix that were not mobilized during supercritical CO2 extractions. Concentrations of aliphatic and aromatic compounds measured in supercritical CO2 extracts of core plug samples generally are lower than concentrations in corresponding extracts of dry, ground coal samples, due to differences in particle size and moisture content. Changes in the amount of extracted compounds and in surrogate recovery measured during consecutive supercritical CO2 extractions of core plug samples appear to reflect the transition from a water-wet to a CO2-wet system. Changes in coal core plug mass during supercritical CO2 extraction range from 3.4% to 14%, indicating that a substantial portion of coal moisture is retained in the low-rank coal samples. Moisture retention within core plug samples, especially in low-rank coals, appears to inhibit
Directory of Open Access Journals (Sweden)
M.I. Baranov
2016-06-01
Purpose. Calculation and experimental research of the electrothermal resistance of steel sheet samples to the action of standard pulsed current components of artificial lightning, with amplitude-time parameters (ATP) corresponding to the requirements of the US normative documents SAE ARP 5412 & SAE ARP 5416. Methodology. The electrophysical foundations of high-voltage and large pulsed current (LIC) engineering, and the scientific and technical foundations of the design of high-voltage pulse devices and of LIC measurement within them. Current amplitude ImA = ±200 kA (with a tolerance of ±10 %); current action integral JA = 2·10^6 A^2·s (with a tolerance of ±20 %); time corresponding to the current amplitude ImA, tmA ≤ 50 microseconds; duration of current flow τpA ≤ 500 microseconds. Results. The results of calculated and experimental studies of the electrothermal resistance of 0.5 m × 0.5 m stainless-steel plate samples, 1 mm thick, to the action of artificial lightning impulse currents with ATP rated per the requirements of SAE ARP 5412 & SAE ARP 5416. The pulsed A-component had a first amplitude of 192 kA, with a corresponding time of 34 microseconds; the aperiodic component had an amplitude of 804 A, with a corresponding time of 9 ms. It has been shown that the long C-component current of artificial lightning can burn through these samples; the diameter of the holes formed in the thin steel sheet by the flow of the C-component current can reach 15 mm. The results of calculation and experiment agree within 28 %. Originality. For the first time in world practice, experimental studies of the resistance of sheet-steel samples to the action of artificial lightning currents with critical parameters were carried out on a large pulsed current generator. Practical value. Using the results obtained in the practice of lightning protection will significantly improve the
Energy Technology Data Exchange (ETDEWEB)
Plionis, Alexander A [Los Alamos National Laboratory; Peterson, Dominic S [Los Alamos National Laboratory; Tandon, Lav [Los Alamos National Laboratory; Lamont, Stephen P [Los Alamos National Laboratory
2009-01-01
Uranium particles within the respirable size range pose a significant hazard to the health and safety of workers. Significant differences in the deposition and incorporation patterns of aerosols within the respirable range can be identified and integrated into sophisticated health physics models. Data characterizing the uranium particle size distribution resulting from specific foundry-related processes are needed. Using personal air sampling cascade impactors, particles collected from several foundry processes were sorted by activity median aerodynamic diameter onto various Marple substrates. After an initial gravimetric assessment of each impactor stage, the substrates were analyzed by alpha spectrometry to determine the uranium content of each stage. Alpha spectrometry provides rapid nondestructive isotopic data that can distinguish process uranium from natural sources and the degree of uranium contribution to the total accumulated particle load. In addition, the particle size bins utilized by the impactors provide adequate resolution to determine whether a process particle size distribution is lognormal, bimodal, or trimodal. Data on process uranium particle size values and distributions facilitate the development of more sophisticated and accurate models for internal dosimetry, resulting in an improved understanding of foundry worker health and safety.
Gaonkar, Bilwaj; Hovda, David; Martin, Neil; Macyszyn, Luke
2016-03-01
Deep learning refers to a large set of neural-network-based algorithms that have emerged as promising machine-learning tools in the general imaging and computer vision domains. Convolutional neural networks (CNNs), a specific class of deep learning algorithms, have been extremely effective in object recognition and localization in natural images. A characteristic feature of CNNs is the use of a locally connected multilayer topology inspired by the animal visual cortex (the most powerful vision system in existence). While CNNs perform admirably in object identification and localization tasks, they typically require training on extremely large datasets. Unfortunately, in medical image analysis, large datasets are either unavailable or extremely expensive to obtain. Further, the primary tasks in medical imaging are organ identification and segmentation from 3D scans, which differ from the standard computer vision tasks of object recognition. Thus, to translate the advantages of deep learning to medical image analysis, there is a need to develop deep network topologies and training methodologies that are geared toward medical imaging tasks and can work in settings where dataset sizes are relatively small. In this paper, we present a technique for stacked supervised training of deep feed-forward neural networks for segmenting organs from medical scans. Each 'neural network layer' in the stack is trained to identify a subregion of the original image that contains the organ of interest. By layering several such stacks together, a very deep neural network is constructed. Such a network can be used to identify extremely small regions of interest in extremely large images, in spite of a lack of clear contrast in the signal or easily identifiable shape characteristics. What is even more intriguing is that the network stack achieves accurate segmentation even when it is trained on a single image with manually labelled ground truth. We validate
Directory of Open Access Journals (Sweden)
AR Silva
2011-03-01
The appropriate sample size for characterizing morphological fruit traits of pepper was determined by means of a subsample simulation technique, for eight accessions (varieties) of four pepper species (Capsicum spp.) cultivated in an experimental area of the Universidade Federal da Paraíba (UFPB). Reduced sample sizes, ranging from 3 to 29 fruits, were analyzed, with 100 simulated samples for each size in a resampling-with-replacement scheme. Analysis of variance was performed on the minimum number of fruits per sample that represented the reference sample (30 fruits) for each variable studied, in a completely randomized design with two replicates; each datum was the first number of fruits in the simulated sample that showed no value outside the confidence interval of the reference sample and remained so up to the last subsample of the simulation. The simulation technique allowed sample-size reductions of around 50%, depending on the morphological variable, with the same precision as the 30-fruit sample and with no differences among the accessions.
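The subsample-simulation criterion can be sketched as follows, under a simplified reading: resample with replacement at each candidate size and accept the first size whose simulated sample means all fall inside the reference sample's 95% confidence interval for the mean. The names and the exact acceptance rule are assumptions:

```python
import numpy as np

def min_stable_sample(values, sizes, n_rep=100, seed=0):
    """Smallest candidate subsample size whose n_rep resampled means all
    fall inside the 95% confidence interval of the reference sample's mean.

    values: the full reference sample (e.g., 30 fruit measurements).
    sizes:  candidate subsample sizes, in increasing order.
    Returns the accepted size, or None if no candidate qualifies.
    """
    rng = np.random.default_rng(seed)
    values = np.asarray(values, dtype=float)
    ref_mean = values.mean()
    half = 1.96 * values.std(ddof=1) / np.sqrt(len(values))
    lo, hi = ref_mean - half, ref_mean + half
    for n in sizes:
        # 100 resampled-with-replacement subsamples of size n
        means = values[rng.integers(0, len(values), (n_rep, n))].mean(axis=1)
        if np.all((means >= lo) & (means <= hi)):
            return n
    return None
```

Running this per morphological variable, as in the study, lets the accepted size differ from trait to trait.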
Mollet, Pierre; Kery, Marc; Gardner, Beth; Pasinelli, Gilberto; Royle, Andy
2015-01-01
We conducted a survey of an endangered and cryptic forest grouse, the capercaillie Tetrao urogallus, based on droppings collected on two sampling occasions in eight forest fragments in central Switzerland in early spring 2009. We used genetic analyses to sex and individually identify birds. We estimated sex-dependent detection probabilities and population size using a modern spatial capture-recapture (SCR) model for the data from pooled surveys. A total of 127 capercaillie genotypes were identified (77 males, 46 females, and 4 of unknown sex). The SCR model yielded a total population size estimate (posterior mean) of 137.3 capercaillies (posterior sd 4.2, 95% CRI 130–147). The observed sex ratio was skewed towards males (0.63). The posterior mean of the sex ratio under the SCR model was 0.58 (posterior sd 0.02, 95% CRI 0.54–0.61), suggesting a male-biased sex ratio in our study area. A subsampling simulation study indicated that a reduced sampling effort representing 75% of the actual detections would still yield practically acceptable estimates of total size and sex ratio in our population. Hence, field work and financial effort could be reduced without compromising accuracy when the SCR model is used to estimate key population parameters of cryptic species.
Junttila, Virpi; Kauranne, Tuomo; Finley, Andrew O.; Bradford, John B.
2015-01-01
Modern operational forest inventory often uses remotely sensed data that cover the whole inventory area to produce spatially explicit estimates of forest properties through statistical models. The data obtained by airborne light detection and ranging (LiDAR) correlate well with many forest inventory variables, such as the tree height, the timber volume, and the biomass. To construct an accurate model over thousands of hectares, LiDAR data must be supplemented with several hundred field sample measurements of forest inventory variables. This can be costly and time consuming. Different LiDAR-data-based and spatial-data-based sampling designs can reduce the number of field sample plots needed. However, problems arising from the features of the LiDAR data, such as a large number of predictors compared with the sample size (overfitting) or a strong correlation among predictors (multicollinearity), may decrease the accuracy and precision of the estimates and predictions. To overcome these problems, a Bayesian linear model with the singular value decomposition of predictors, combined with regularization, is proposed. The model performance in predicting different forest inventory variables is verified in ten inventory areas from two continents, where the number of field sample plots is reduced using different sampling designs. The results show that, with an appropriate field plot selection strategy and the proposed linear model, the total relative error of the predicted forest inventory variables is only 5%–15% larger using 50 field sample plots than the error of a linear model estimated with several hundred field sample plots when we sum up the error due to both the model noise variance and the model’s lack of fit.
Directory of Open Access Journals (Sweden)
ELISA M. BucruANON
1997-01-01
The influence of insect and seed sample size and of heat treatment on infestation of mungbean, Vigna radiata (L.) Wilczek, by the bean weevil Callosobruchus chinensis was studied. Insect and seed sample size, as well as variety/genotype, significantly influenced the number of eggs and progenies of the bean weevil. Using at least 10 adult weevils to infest test samples of at least 40 seeds for a 5-day oviposition period should produce reliable results when infesting mungbean seeds with unsexed weevils. Dry heat treatment was very effective in disinfesting mungbean seeds from the bean weevil at its different developmental stages, and it improved germination depending on the condition of the seed before treatment and within certain temperature limits. A suggested dry-heat treatment for mungbean disinfestation would be 60°C for two hours or 70°C for one hour, at 12% moisture content. For seeds in bulk, 60°C is much preferred.
DEFF Research Database (Denmark)
Burild, Anders; Frandsen, Henrik Lauritz; Poulsen, Morten;
2014-01-01
Most methods for the quantification of physiological levels of vitamin D3 and 25-hydroxyvitamin D3 are developed for food analysis, where the sample size is not usually a critical parameter. In contrast, in life science studies sample sizes are often limited. A very sensitive liquid chromatography with tandem mass spectrometry method was developed to quantify vitamin D3 and 25-hydroxyvitamin D3 simultaneously in porcine tissues. A sample of 0.2–1 g was saponified, followed by liquid–liquid extraction and normal-phase solid-phase extraction. The analytes were derivatized with 4-phenyl-1,2,4-triazoline-3,5-dione to improve the ionization efficiency by electrospray ionization. The method was validated in porcine liver and adipose tissue, and the accuracy was determined to be 72–97% for vitamin D3 and 91–124% for 25-hydroxyvitamin D3. The limit of quantification was
Influence of sample size on bryophyte ecological indices
Institute of Scientific and Technical Information of China (English)
沈蕾; 郭水良; 宋洪涛; 娄玉霞; 曹同
2011-01-01
In order to analyze the influence of sample size on bryophyte ecological indices, plots were located by systematic sampling under relatively uniform environmental conditions, and bryophyte coverage was investigated by nested sampling; the quadrat sizes were 20 cm×20 cm, 30 cm×30 cm, 40 cm×40 cm, 50 cm×50 cm and 60 cm×60 cm. A total of 73 plots comprising 365 quadrats were surveyed, with coverage recorded by visual estimation. As the sampling area increased, the visually estimated coverage of dominant species and of total bryophytes tended to decrease, whereas the coverage of non-dominant and occasional species tended to increase, and the larger the difference between quadrat sizes, the larger the difference among the resulting data. The diversity indices, niche width and overlap, and the mean number of bryophyte species per quadrat all increased with sampling area following a saturation curve. Sampling area also clearly affected the analysis of the relationship between environmental variables and bryophyte distribution. In relatively homogeneous terricolous habitats, a sampling area of 40 cm×40 cm to 50 cm×50 cm can be considered for bryophyte communities.
DEFF Research Database (Denmark)
Rousing, Tine; Møller, Steen Henrik; Hansen, Steffen W
2012-01-01
The European Fur Breeders' Association initiated the WelFur project in 2009, which aims at developing an applicable on-farm welfare assessment protocol for mink based on the Welfare Quality® principles. Such a welfare assessment system should be "high" in validity, reliability and feasibility, the latter with respect to both time and cost. Based on empirical data, this paper addresses the question of the sample size needed for a robust herd assessment of animal-based measures. The animal-based part of the full WelFur protocol, including 9 animal-based
David Normando; Marco Antonio de Oliveira Almeida; Cátia Cardoso Abdo Quintão
2011-01-01
INTRODUCTION: Adequate sizing of the study sample and appropriate analysis of the method error are important steps in validating the data obtained in a given scientific study, in addition to ethical and economic considerations. OBJECTIVE: This investigation aims to evaluate, quantitatively, how often researchers in orthodontic science have employed sample size calculation and method error analysis in research published in Brazil and in the United States. METHODS: two impo...
Institute of Scientific and Technical Information of China (English)
孙德忠; 何红蓼
2005-01-01
To improve the precision of analytical results from pressurized acid digestion ICP-MS, the first step is to improve sample homogeneity by reducing the particle size. Use of an ultra-fine sample (−500 mesh) allows the masses of both sample and acid to be reduced; satisfactory precision is achieved at a 2 mg sampling mass.
Energy Technology Data Exchange (ETDEWEB)
Rocklin, Gabriel J. [Department of Pharmaceutical Chemistry, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550, USA and Biophysics Graduate Program, University of California San Francisco, 1700 4th St., San Francisco, California 94143-2550 (United States); Mobley, David L. [Departments of Pharmaceutical Sciences and Chemistry, University of California Irvine, 147 Bison Modular, Building 515, Irvine, California 92697-0001, USA and Department of Chemistry, University of New Orleans, 2000 Lakeshore Drive, New Orleans, Louisiana 70148 (United States); Dill, Ken A. [Laufer Center for Physical and Quantitative Biology, 5252 Stony Brook University, Stony Brook, New York 11794-0001 (United States); Hünenberger, Philippe H., E-mail: phil@igc.phys.chem.ethz.ch [Laboratory of Physical Chemistry, Swiss Federal Institute of Technology, ETH, 8093 Zürich (Switzerland)
2013-11-14
The calculation of a protein-ligand binding free energy based on molecular dynamics (MD) simulations generally relies on a thermodynamic cycle in which the ligand is alchemically inserted into the system, both in the solvated protein and free in solution. The corresponding ligand-insertion free energies are typically calculated in nanoscale computational boxes simulated under periodic boundary conditions and considering electrostatic interactions defined by a periodic lattice-sum. This is distinct from the ideal bulk situation of a system of macroscopic size simulated under non-periodic boundary conditions with Coulombic electrostatic interactions. This discrepancy results in finite-size effects, which affect primarily the charging component of the insertion free energy, are dependent on the box size, and can be large when the ligand bears a net charge, especially if the protein is charged as well. This article investigates finite-size effects on calculated charging free energies using as a test case the binding of the ligand 2-amino-5-methylthiazole (net charge +1 e) to a mutant form of yeast cytochrome c peroxidase in water. Considering different charge isoforms of the protein (net charges −5, 0, +3, or +9 e), either in the absence or the presence of neutralizing counter-ions, and sizes of the cubic computational box (edges ranging from 7.42 to 11.02 nm), the potentially large magnitude of finite-size effects on the raw charging free energies (up to 17.1 kJ mol⁻¹) is demonstrated. Two correction schemes are then proposed to eliminate these effects, a numerical and an analytical one. Both schemes are based on a continuum-electrostatics analysis and require performing Poisson-Boltzmann (PB) calculations on the protein-ligand system. While the numerical scheme requires PB calculations under both non-periodic and periodic boundary conditions, the latter at the box size considered in the MD simulations, the analytical scheme only requires three non
Sample size in usability studies
Schmettow, Martin
2012-01-01
Usability studies are important for developing usable, enjoyable products and for identifying design flaws (usability problems) likely to compromise the user experience. Usability testing is recommended for improving interactive design, but discovery of usability problems depends on the number of users tested.
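The dependence of problem discovery on the number of users tested is often described by the classic binomial discovery model, in which a problem of per-user detection probability p is found at least once among n users with probability 1 − (1 − p)^n. The sketch below uses that model as an assumption; it is not necessarily the model analyzed in the cited study, and the function name is invented for illustration.

```python
import math

def users_needed(p_detect, target=0.85):
    """Smallest n with 1 - (1 - p_detect)**n >= target, under the
    classic binomial problem-discovery model (an assumption here)."""
    return math.ceil(math.log(1 - target) / math.log(1 - p_detect))

# with p = 0.31, an 80% discovery target needs 5 users
n80 = users_needed(0.31, target=0.80)
```

Solving 1 − (1 − p)^n ≥ target for n gives n ≥ log(1 − target)/log(1 − p), hence the ceiling.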
D'Huys, Elke; Seaton, Daniel B; Poedts, Stefaan
2016-01-01
Many natural processes exhibit power-law behavior. The power-law exponent is linked to the underlying physical process and therefore its precise value is of interest. With respect to the energy content of nanoflares, for example, a power-law exponent steeper than 2 is believed to be a necessary condition to solve the enigmatic coronal heating problem. Studying power-law distributions over several orders of magnitudes requires sufficient data and appropriate methodology. In this paper we demonstrate the shortcomings of some popular methods in solar physics that are applied to data of typical sample sizes. We use synthetic data to study the effect of the sample size on the performance of different estimation methods and show that vast amounts of data are needed to obtain a reliable result with graphical methods (where the power-law exponent is estimated by a linear fit on a log-transformed histogram of the data). We revisit published results on power laws for the angular width of solar coronal mass ejections an...
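The maximum-likelihood alternative to the graphical (log-histogram) fit mentioned above can be illustrated on synthetic data. The sketch below implements the standard continuous-case MLE for the exponent of p(x) ∝ x^(−α), x ≥ xmin, with power-law samples generated by inverse-transform sampling; the sample size, seed, and function names are arbitrary choices for illustration.

```python
import math
import random

def mle_exponent(data, xmin):
    """Continuous-case MLE: alpha = 1 + n / sum(ln(x / xmin)) over x >= xmin."""
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

def sample_power_law(n, alpha, xmin, rng):
    # Inverse-transform sampling: x = xmin * (1 - u) ** (-1 / (alpha - 1))
    return [xmin * (1 - rng.random()) ** (-1.0 / (alpha - 1)) for _ in range(n)]

rng = random.Random(42)
data = sample_power_law(50_000, alpha=2.5, xmin=1.0, rng=rng)
est = mle_exponent(data, xmin=1.0)  # close to the true exponent 2.5
```

With 50 000 samples the MLE recovers the exponent to within a few hundredths, whereas, as the abstract notes, linear fits on log-transformed histograms need far more data for comparable reliability.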
Institute of Scientific and Technical Information of China (English)
LIANG Yong-shu; GAO Zhi-qiang; SHEN Xi-hong; ZHAN Xiao-deng; ZHANG Ying-xin; WU Wei-ming; CAO Li-yong; CHENG Shi-hua
2011-01-01
To clarify the most appropriate sample size for obtaining phenotypic data for a single line, we investigated the main-effect QTL (M-QTL) of a quantitative trait, plant height (ph), in a recombinant inbred line (RIL) population of rice (derived from the cross between Xieqingzao B and Zhonghui 9308) using five individual plants in 2006 and 2009. Twenty-six ph phenotypic datasets from the completely random combinations of 2, 3, 4, and 5 plants in a single line, and five ph phenotypic datasets from five individual plants, were used to detect the QTLs. Fifteen M-QTLs were detected by 1 to 31 datasets. Of these, qph7a was detected repeatedly by all the 31 ph datasets in 2006 and explained 11.67% to 23.93% of phenotypic variation; qph3 was detected repeatedly by all the 31 datasets and explained 5.21% to 7.93% and 11.51% to 24.46% of phenotypic variance in 2006 and 2009, respectively. The results indicate that the M-QTL for a quantitative trait could be detected repeatedly using the phenotypic values from 5 individual plants and 26 sets of completely random combinations of phenotypic data within a single line in an RIL population under different environments. The sample size for a single line of the RIL population did not affect the efficiency of identification of stably expressed M-QTLs.
Sample Calculations and Analysis on Vulnerability Criteria of Dead Ship Stability
Institute of Scientific and Technical Information of China (English)
马坤; 刘飞; 李楷
2015-01-01
IMO is currently drafting the second-generation intact stability criteria, of which dead ship stability vulnerability assessment is an important part. Based on the IMO SDC 1/INF.6 draft vulnerability criteria for the dead ship stability failure mode, computation programs for level 1 and level 2 dead ship stability vulnerability assessment were developed, and calculations were performed for 11 sample ships, including fishery administration vessels, oil tankers, patrol ships and sea guard ships. For each ship, the dead ship stability vulnerability was assessed at three loading conditions: full load draft, design draft and ballast draft. The consistency between level 1 and level 2 was studied from the calculation results, providing a reference for the development of the level 2 vulnerability criterion for dead ship stability.
Fraczek, W; Bytnerowicz, A; Arbaugh, M J
2001-12-07
Models of O3 distribution in two mountain ranges, the Carpathians in Central Europe and the Sierra Nevada in California, were constructed with the ArcGIS Geostatistical Analyst extension (ESRI, Redlands, CA) using kriging and cokriging methods. The adequacy of the spatially interpolated ozone (O3) concentrations and the sample size requirements for ozone passive samplers were also examined. In the case of the Carpathian Mountains, only a general surface of O3 distribution could be obtained, partly due to a weak correlation between O3 concentration and elevation, and partly due to the small number of unevenly distributed sample sites. In the Sierra Nevada Mountains, the O3 monitoring network was much denser and more evenly distributed, and additional climatologic information was available. As a result, the estimated surfaces were more precise and reliable than those created for the Carpathians. The final maps of O3 concentrations for the Sierra Nevada were derived from a cokriging algorithm based on two secondary variables, elevation and maximum temperature, as well as the determined geographic trend. Evenly distributed and sufficient numbers of sample points are a key factor for model accuracy and reliability.
Calderon, Christopher P.
2013-07-01
Several single-molecule studies aim to reliably extract parameters characterizing molecular confinement or transient kinetic trapping from experimental observations. Pioneering works from single-particle tracking (SPT) in membrane diffusion studies [Kusumi et al., Biophys. J. 65, 2021 (1993)] appealed to mean square displacement (MSD) tools for extracting diffusivity and other parameters quantifying the degree of confinement. More recently, the practical utility of systematically treating multiple noise sources (including noise induced by random photon counts) through likelihood techniques has been more broadly realized in the SPT community. However, bias induced by finite-time-series sample sizes (unavoidable in practice) has not received great attention. Mitigating parameter bias induced by finite sampling is important to any scientific endeavor aiming for high accuracy, but correcting for bias is also often an important step in the construction of optimal parameter estimates. In this article, it is demonstrated how a popular model of confinement can be corrected for finite-sample bias in situations where the underlying data exhibit Brownian diffusion and observations are measured with non-negligible experimental noise (e.g., noise induced by finite photon counts). The work of Tang and Chen [J. Econometrics 149, 65 (2009)] is extended to correct for bias in the estimated "corral radius" (a parameter commonly used to quantify confinement in SPT studies) in the presence of measurement noise. It is shown that the approach presented is capable of reliably extracting the corral radius using only hundreds of discretely sampled observations in situations where other methods (including MSD and Bayesian techniques) would encounter serious difficulties. The ability to accurately statistically characterize transient confinement suggests additional techniques for quantifying confined and/or hop
Hittner, James B.; N. Clayton Silver
2016-01-01
In linear multiple regression it is common practice to test whether the squared multiple correlation coefficient, R2, differs significantly from zero. Although frequently used, this test is misleading because the expected value of R2 is not zero under the null hypothesis that ρ, the population value of the multiple correlation coefficient, equals zero. The non-zero expected value of R2 has implications both for significance testing and effect size estimation involving the squared multipl...
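The non-zero null expectation of R2 is easy to check by simulation: with k independent normal predictors and ρ = 0, E[R2] = k/(n − 1). A minimal sketch for the simple-regression case (k = 1, where R2 is the squared Pearson correlation; all names here are arbitrary):

```python
import random

def pearson(x, y):
    """Sample Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def r2_null(n, reps, seed=0):
    """Average R^2 when regressing pure noise on a noise predictor.
    Under H0 (rho = 0) the expectation is k/(n - 1), not 0 (here k = 1)."""
    rng = random.Random(seed)
    vals = []
    for _ in range(reps):
        x = [rng.gauss(0, 1) for _ in range(n)]
        y = [rng.gauss(0, 1) for _ in range(n)]
        vals.append(pearson(x, y) ** 2)
    return sum(vals) / reps

# with n = 11 the null expectation is 1/(n - 1) = 0.10, clearly non-zero
m = r2_null(n=11, reps=2000)
```

The simulated mean R2 sits near 0.10 rather than 0, illustrating the bias the abstract warns about for small samples.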
della Corte, A.; Corato, V.; Di Zenobio, A.; Fiamozzi Zignani, C.; Muzzi, L.; Polli, G. M.; Reccia, L.; Turtù, S.; Bruzzone, P.; Salpietro, E.; Vostner, A.
2010-04-01
One of the design features that still offers interesting margins for performance optimization of cable-in-conduit conductors (CICCs) is their geometry. For relatively small size Nb3Sn CICCs, operating at high electromagnetic pressure, such as those for the EDIPO project, it has been experimentally shown that a design based on a rectangular layout with higher aspect ratio leads to the best performance, especially in terms of degradation with electromagnetic loads. To extend this analysis to larger size Nb3Sn CICCs, we manufactured and tested, in the SULTAN facility, an ITER toroidal field (TF) cable, inserted into a thick stainless steel tube and then compacted to a high aspect ratio rectangular shape. Besides establishing a new record in Nb3Sn CICC performances for ITER TF type cables, the very good test results confirmed that the conductor properties improve not only by lowering the void fraction and raising the cable twist pitch, as already shown during the ITER TFPRO and the EDIPO test campaigns, but also by the proper optimization of the conductor shape with respect to the electromagnetic force distribution. The sample manufacturing steps, along with the main test results, are presented here.
Belli, Sirio; Ellis, Richard S
2014-01-01
We analyze the stellar populations of a sample of 62 massive (log Mstar/Msun > 10.7) galaxies in the redshift range 1 < z < 1.6, with the main goal of investigating the role of recent quenching in the size growth of quiescent galaxies. We demonstrate that our sample is not biased toward bright, compact, or young galaxies, and thus is representative of the overall quiescent population. Our high signal-to-noise ratio Keck LRIS spectra probe the rest-frame Balmer break region which contains important absorption line diagnostics of recent star formation activity. We show that improved measures of the stellar population parameters, including the star-formation timescale tau, age and dust extinction, can be determined by fitting templates jointly to our spectroscopic and broad-band photometric data. These parameter fits allow us to backtrack the evolving trajectory of individual galaxies on the UVJ color-color plane. In addition to identifying which quiescent galaxies were recently quenched, we discover impor...
Alizadeh, Taher; Shamkhali, Amir Naser
2016-01-15
A new chromatographic procedure, based on the chiral ligand-exchange principle, was developed for the resolution of salbutamol enantiomers. The separation was carried out on a C18 column. (l)-Alanine and Cu(2+) were applied as chiral resolving agent and complexing ion, respectively. The kind of copper salt had a decisive effect on the enantioseparation. Density functional theory (DFT) was used to substantiate the effect of the various anions accompanying Cu(2+) on the formation of the ternary complexes assumed to be created during the separation process. The DFT results showed that the kind of anion had a huge effect on the difference in stability between the two corresponding diastereomeric complexes and on their chemical structures. It was shown that the extent of participation of the chiral selector in the formation of the ternary diastereomeric complexes was governed by the kind of anion, thus affecting the enantioseparation efficiency of the developed method. A water/methanol (70:30) mixture containing (l)-alanine-Cu(2+) (4:1) was found to be the best mobile phase for salbutamol enantioseparation. In order to analyze salbutamol enantiomers in plasma samples, racemic salbutamol was first extracted from the samples via a nano-sized salbutamol-imprinted polymer and then enantioseparated by the developed method.
Energy Technology Data Exchange (ETDEWEB)
Andre, F.; Cariou, R.; Antignac, J.P.; Le Bizec, B. [Ecole Nationale Veterinaire de Nantes (FR). Laboratoire d' Etudes des Residus et Contaminants dans les Aliments (LABERCA); Debrauwer, L.; Zalko, D. [Institut National de Recherches Agronomiques (INRA), 31-Toulouse (France). UMR 1089 Xenobiotiques
2004-09-15
The impact of brominated flame retardants on the environment and their potential risk for animal and human health are a current concern for the scientific community. Numerous studies related to the detection of tetrabromobisphenol A (TBBP-A) and polybrominated diphenylethers (PBDEs) have been developed over the last few years; they were mainly based on GC-ECD, GC-NCI-MS or GC-EI-HRMS, and recently GC-EI-MS/MS. The sample treatment is usually derived from the analytical methods used for dioxins, but recently some authors have proposed the use of solid-phase extraction (SPE) cartridges. In this study, a new analytical strategy is presented for the multi-residue analysis of TBBP-A and PBDEs from a single reduced-size sample. The main objective of this analytical development is its application to the assessment of background exposure of French population groups to brominated flame retardants, for which, to our knowledge, no data exist. A second objective is to provide an efficient analytical tool to study the transfer of these contaminants through the environment to living organisms, including degradation reactions and metabolic biotransformations.
Zanino, R.; Bruzzone, P.; Ciazynski, D.; Ciotti, M.; Gislon, P.; Nicollet, S.; Savoldi Richard, L.
2004-06-01
The PF-FSJS is a full-size joint sample, based on the NbTi dual-channel cable-in-conduit conductor (CICC) design currently foreseen for the International Thermonuclear Experimental Reactor (ITER) Poloidal Field coil system. It was tested during the summer of 2002 in the Sultan facility of CRPP at a background peak magnetic field of typically 6 T. It includes about 3 m of two jointed conductor sections, using different strands but with identical layout. The sample was cooled by supercritical helium at nominal 4.5-5.0 K and 0.9-1.0 MPa, in forced convection from the top to the bottom of the vertical configuration. A pulsed coil was used to test AC losses in the two legs resulting, above a certain input power threshold, in bundle helium backflow from the heated region. Here we study the thermal-hydraulics of the phenomenon with the M&M code, with particular emphasis on the effects of buoyancy on the helium dynamics, as well as on the thermal-hydraulic coupling between the wrapped bundles of strands in the annular cable region and the central cooling channel. Both issues are ITER relevant, as they affect the more general question of the heat removal capability of the helium in this type of conductors.
Directory of Open Access Journals (Sweden)
Marcel Holyoak
In metapopulations in which habitat patches vary in quality and occupancy, it can be complicated to calculate the net time-averaged contribution to reproduction of particular populations. Surprisingly, few indices have been proposed for this purpose. We combined occupancy, abundance, frequency of occurrence, and reproductive success to determine the net value of different sites through time and applied this method to a bird of conservation concern. The Tricolored Blackbird (Agelaius tricolor) has experienced large population declines, is the most colonial songbird in North America, is largely confined to California, and breeds itinerantly in multiple habitat types. It has had chronically low reproductive success in recent years. Although young produced per nest have previously been compared across habitats, no study has simultaneously considered site occupancy and reproductive success. Combining occupancy, abundance, frequency of occurrence, reproductive success and nest failure rate, we found that large colonies in grain fields fail frequently because of nest destruction due to harvest prior to fledging. Consequently, net time-averaged reproductive output is low compared to colonies in non-native Himalayan blackberry or thistles, and native stinging nettles. Cattail marshes have intermediate reproductive output, but their reproductive output might be improved by active management. Harvest of grain-field colonies necessitates either promoting delay of harvest or creating alternative, more secure nesting habitats. Stinging nettle and marsh colonies offer the main potential sources for restoration or native habitat creation. From 2005-2011 breeding site occupancy declined 3x faster than new breeding colonies were formed, indicating a rapid decline in occupancy. Total abundance showed a similar decline. Causes of variation in the value for reproduction of nesting substrates and factors behind continuing population declines merit urgent
Institute of Scientific and Technical Information of China (English)
LIANG Bingbing; YUE Xin; WANG Hongxia; LIU Baozhong
2016-01-01
Precise and accurate knowledge of genetic parameters is a prerequisite for efficient selection strategies in breeding programs. A number of heritability estimates for important economic traits in marine mollusks are available in the literature, but very few studies have evaluated the accuracy of genetic parameters estimated with different family structures. In the present study, the effect of parent sample size on the precision of genetic-parameter estimates for four growth traits in the clam M. meretrix was analyzed for factorial designs using restricted maximum likelihood (REML) and Bayesian inference. The average estimated heritabilities of growth traits obtained from REML were 0.23–0.32 for 9 and 16 full-sib families and 0.19–0.22 for 25 full-sib families. With Bayesian inference, the average estimated heritabilities were 0.11–0.12 for 9 and 16 full-sib families and 0.13–0.16 for 25 full-sib families. Bayesian inference thus gave lower heritabilities than REML, though still at a medium level. When the number of parents increased from 6 to 10, the estimated heritabilities converged toward 0.20 under REML and 0.12 under Bayesian inference. Genetic correlations among traits were positive and high, with no significant difference between designs of different sizes. Estimated breeding values from the 9 and 16 families were less precise than those from 25 families. Our results provide a basic genetic evaluation of growth traits and should be useful for the design and operation of a practical selective breeding program in the clam M. meretrix.
Impact of Sample Size on Rare-Class Classification
Institute of Scientific and Technical Information of China (English)
职为梅; 范明
2011-01-01
The classification of rarely occurring cases arises in many real-life applications, but most classifiers, which assume a relatively balanced class distribution, lose efficacy on such data. This paper discusses the factors that influence building a classifier capable of identifying rare events, focusing on the factor of sample size. Experiments with Rotation Forest on three datasets from the UCI machine learning repository, run on the Weka platform, show that at a fixed imbalance ratio, enlarging the training set by unsupervised resampling reduces the large error rate caused by the imbalanced class distribution, so that common classification algorithms can still achieve good results.
Ross, Kenneth N.
1987-01-01
This article considers various kinds of probability and non-probability samples in both experimental and survey studies. Throughout, how a sample is chosen is stressed. Size alone is not the determining consideration in sample selection. Good samples do not occur by accident; they are the result of a careful design. (Author/JAZ)
Cooling rate calculations for silicate glasses.
Birnie, D. P., III; Dyar, M. D.
1986-03-01
Series solution calculations of cooling rates are applied to a variety of samples with different thermal properties, including an analog of an Apollo 15 green glass and a hypothetical silicate melt. Cooling rates for the well-studied green glass and a generalized silicate melt are tabulated for different sample sizes, equilibration temperatures and quench media. Results suggest that cooling rates are heavily dependent on sample size and quench medium and are less dependent on values of physical properties. Thus cooling histories for glasses from planetary surfaces can be estimated on the basis of size distributions alone. In addition, the variation of cooling rate with sample size and quench medium can be used to control quench rate.
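The size dependence described above can be illustrated with a much simpler model than the series solutions used in the paper. The following sketch assumes a Newtonian (lumped-capacitance) sphere; the heat-transfer coefficient `h`, the temperature difference, and the silicate-melt property defaults are illustrative assumptions, not values from the study.

```python
def initial_cooling_rate(radius_m, h, delta_t, rho=2800.0, cp=1200.0):
    """Initial cooling rate (K/s) of a molten spherical bead, lumped model.

    A far cruder stand-in for the paper's series solutions, assuming a
    Newtonian (lumped-capacitance) sphere: dT/dt = 3*h*delta_t / (rho*cp*r).
    The density `rho` and heat capacity `cp` defaults are rough
    silicate-melt values; `h` encodes the quench medium.
    """
    return 3.0 * h * delta_t / (rho * cp * radius_m)

# Same quench medium (same h), two bead sizes: the 10x smaller bead
# cools 10x faster, illustrating the dominant size dependence.
fast = initial_cooling_rate(0.5e-3, h=500.0, delta_t=1000.0)  # 1 mm bead
slow = initial_cooling_rate(5.0e-3, h=500.0, delta_t=1000.0)  # 10 mm bead
```

Because the lumped rate scales as 1/r, halving the bead radius doubles the initial cooling rate regardless of the exact material properties, which is the qualitative point the abstract makes.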
Directory of Open Access Journals (Sweden)
Alice Guilleux
Patient-reported outcomes (PRO) have gained importance in clinical and epidemiological research and aim at assessing, for instance, quality of life, anxiety or fatigue. Item Response Theory (IRT) models are increasingly used to validate and analyse PRO. Such models relate observed variables to a latent variable (an unobservable variable), which is commonly assumed to be normally distributed. A priori sample size determination is important to obtain adequately powered studies for detecting clinically important changes in PRO. In previous developments, the Raschpower method has been proposed for determining the power of the test of group effect for the comparison of PRO in cross-sectional studies with an IRT model, the Rasch model. The objective of this work was to evaluate the robustness of this method (which assumes a normal distribution for the latent variable) to violations of the distributional assumption. The statistical power of the test of group effect was estimated by the empirical rejection rate in data sets simulated using a non-normally distributed latent variable, and compared to the power obtained with the Raschpower method. In both cases, the data were analyzed using a latent regression Rasch model including a binary covariate for the group effect. In all situations, both methods gave comparable results whatever the deviations from the model assumptions. Given these results, the Raschpower method appears robust to non-normality of the latent trait when determining the power of the test of group effect.
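The empirical-rejection-rate idea can be sketched in a few lines. This is an illustrative stand-in only: a two-sample z-test on an exponentially distributed (hence non-normal) score replaces the latent-regression Rasch model actually used in the study, and all names and parameter values are assumptions.

```python
import math
import random
import statistics

def empirical_power(n_per_group, effect, n_sim=2000, z_crit=1.96, seed=1):
    """Estimate power as the empirical rejection rate over simulated trials.

    Illustrative sketch: each simulated trial draws a skewed (exponential)
    score for two groups, shifts one group by `effect`, and applies a
    two-sample z-test; power is the fraction of trials that reject.
    """
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        g0 = [rng.expovariate(1.0) for _ in range(n_per_group)]
        g1 = [rng.expovariate(1.0) + effect for _ in range(n_per_group)]
        se = math.sqrt(statistics.variance(g0) / n_per_group
                       + statistics.variance(g1) / n_per_group)
        z = (statistics.mean(g1) - statistics.mean(g0)) / se
        rejections += abs(z) > z_crit
    return rejections / n_sim
```

Running the simulation under the null (`effect=0`) should return a rejection rate near the nominal 5% level even with the skewed score, which is the kind of robustness check the abstract describes.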
Gritti, Fabrice; Guiochon, Georges
2009-06-01
A general reduced HETP (height equivalent to a theoretical plate) equation is proposed that accounts for the mass transfer of a wide range of molecular weight compounds in monolithic columns. The detailed derivatization of each one of the individual and independent mass transfer contributions (longitudinal diffusion, eddy dispersion, film mass transfer resistance, and trans-skeleton mass transfer resistance) is discussed. The reduced HETPs of a series of small molecules (phenol, toluene, acenaphthene, and amylbenzene) and of a larger molecule, insulin, were measured on three research grade monolithic columns (M150, M225, M350) having different average pore size (approximately 150, 225, and 350 A, respectively) but the same dimension (100 mm x 4.6 mm). The first and second central moments of 2 muL samples were measured and corrected for the extra-column contributions. The h data were fitted to the new HETP equation in order to identify which contribution controls the band broadening in monolithic columns. The contribution of the B-term was found to be negligible compared to that of the A-term, even at very low reduced velocities (numass transfer across the column. Experimental chromatograms exhibited variable degrees of systematic peak fronting, depending on the column studied. The heterogeneity of the distribution of eluent velocities from the column center to its wall (average 5%) is the source of this peak fronting. At high reduced velocities (nu>5), the C-term of the monolithic columns is controlled by film mass transfer resistance between the eluent circulating in the large throughpores and the eluent stagnant inside the thin porous skeleton. The experimental Sherwood number measured on the monolith columns increases from 0.05 to 0.22 while the adsorption energy increases by nearly 6 kJ/mol. Stronger adsorption leads to an increase in the value of the estimated film mass transfer coefficient when a first order film mass transfer rate is assumed (j proportional
Morton, S E; Chiew, Y S; Pretty, C; Moltchanova, E; Scarrott, C; Redmond, D; Shaw, G M; Chase, J G
2017-02-01
Randomised controlled trials have sought to improve mechanical ventilation treatment. However, few trials to date have shown clinical significance. It is hypothesised that, aside from the effectiveness of the treatment itself, the outcome metrics and sample sizes of a trial also affect significance, and thus impact trial design. In this study, a Monte-Carlo simulation method was developed and used to investigate several outcome metrics of ventilation treatment, including 1) length of mechanical ventilation (LoMV); 2) ventilator-free days (VFD); and 3) LoMV-28, a combination of the other metrics. As these metrics have highly skewed distributions, the study also investigated the impact of imposing clinically relevant exclusion criteria on study power, to enable better design for significance. Data from invasively ventilated patients in a single intensive care unit were used to demonstrate the method. Use of LoMV as an outcome metric required 160 patients/arm to reach 80% power with a clinically expected intervention difference of 25% LoMV if clinically relevant exclusion criteria were applied to the cohort, but 400 patients/arm if they were not. However, only 130 patients/arm would be required for the same statistical significance at the same intervention difference if VFD were used. A Monte-Carlo simulation approach using local cohort data combined with objective patient-selection criteria can yield ventilation studies designed to the desired power and significance with fewer patients per arm than traditional trial-design methods, which in turn reduces patient risk. Outcome metrics such as VFD should be used when a difference in mortality is also expected between the two cohorts. Finally, the non-parametric approach taken is readily generalisable to a range of trial types where outcome data are similarly skewed.
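Why exclusion criteria shrink the required sample size can be sketched with a closed-form normal approximation applied to pilot data. This is a hedged stand-in, not the paper's Monte-Carlo method: the log-normal pilot cohort, the 30-day exclusion cutoff and all names are hypothetical.

```python
import math
import random
import statistics

def patients_per_arm(cohort, rel_reduction=0.25, z_alpha=1.96, z_power=0.84):
    """Normal-approximation patients-per-arm for detecting a mean reduction.

    Standard two-sample formula n = 2*sigma^2*(z_alpha + z_power)^2 / delta^2,
    where delta is the expected intervention effect: here a 25% reduction
    in the mean of the pilot cohort (as in the paper's LoMV scenario).
    """
    mu = statistics.mean(cohort)
    sd = statistics.stdev(cohort)
    delta = rel_reduction * mu
    return math.ceil(2.0 * ((z_alpha + z_power) * sd / delta) ** 2)

# A skewed (log-normal) hypothetical pilot cohort of ventilation lengths
# in days; excluding extreme long-stay patients (> 30 days) shrinks the
# variance relative to the mean, and with it the required sample size --
# the effect the paper quantifies by simulation.
rng = random.Random(0)
cohort = [rng.lognormvariate(1.5, 1.0) for _ in range(500)]
n_full = patients_per_arm(cohort)
n_excl = patients_per_arm([x for x in cohort if x <= 30.0])
```

Since the required n scales with the squared coefficient of variation, trimming a heavy upper tail reduces it disproportionately, which mirrors the 400 → 160 patients/arm drop reported above.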
Institute of Scientific and Technical Information of China (English)
刘循
2013-01-01
Existing methods for train operation calculation in rail transit suffer from large calculation errors, low calculation efficiency and application risks. A method based on variable-step-size iterative approximation is therefore presented. According to the train operation path, the guaranteed emergency-braking limit points and the stopping service-braking points are determined in order to calculate the train traction operation curve. Variable-step-size iterative approximation is then used to locate the guaranteed emergency-braking trigger point and the stopping service-braking trigger point, and the former is taken as the non-stopping service-braking trigger point. From these positions, the uniform-speed operation curve, the non-stopping service-braking curve and the stopping service-braking curve are calculated, yielding the most efficient train operation curve. Example calculations with this method show that both the efficiency and the accuracy of train operation calculation are high, and that the results accord with the safety-control principle for actual train operation. The accuracy and efficiency of the calculation can be controlled effectively by adjusting the threshold value of the permissible position error, and the calculated train operation curve essentially coincides with the actual train operation curve.
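The trigger-point search can be illustrated with a plain bisection on a constant-deceleration braking model. This is a minimal sketch, not the paper's algorithm: real train dynamics include gradients, running resistance and speed-dependent braking, and all names and values here are assumptions.

```python
def braking_trigger(v0, decel, stop_pos, tol=0.01):
    """Locate the braking trigger position by bisection.

    Minimal sketch assuming a constant cruising speed `v0` (m/s), a
    constant service-braking deceleration `decel` (m/s^2) and a stop
    point at `stop_pos` (m); `tol` is the permitted position error (m),
    mirroring the adjustable error threshold described in the abstract.
    """
    def overshoot(s):
        # Distance by which the train would overrun the stop point
        # if braking were triggered at position s.
        return s + v0 ** 2 / (2.0 * decel) - stop_pos

    lo, hi = 0.0, stop_pos
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if overshoot(mid) > 0.0:   # triggered too late -> move earlier
            hi = mid
        else:                      # triggered too early -> move later
            lo = mid
    return 0.5 * (lo + hi)
```

Tightening `tol` trades more iterations for a more precise trigger position, the same accuracy/efficiency trade-off the abstract attributes to the permissible-position-error threshold.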
Energy Technology Data Exchange (ETDEWEB)
Helton, Jon Craig (Arizona State University, Tempe, AZ); Sallaberry, Cedric J. PhD. (.; .)
2007-04-01
A deep geologic repository for high level radioactive waste is under development by the U.S. Department of Energy at Yucca Mountain (YM), Nevada. As mandated in the Energy Policy Act of 1992, the U.S. Environmental Protection Agency (EPA) has promulgated public health and safety standards (i.e., 40 CFR Part 197) for the YM repository, and the U.S. Nuclear Regulatory Commission has promulgated licensing standards (i.e., 10 CFR Parts 2, 19, 20, etc.) consistent with 40 CFR Part 197 that the DOE must establish are met in order for the YM repository to be licensed for operation. Important requirements in 40 CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. relate to the determination of expected (i.e., mean) dose to a reasonably maximally exposed individual (RMEI) and the incorporation of uncertainty into this determination. This presentation describes and illustrates how general and typically nonquantitative statements in 40 CFR Part 197 and 10 CFR Parts 2, 19, 20, etc. can be given a formal mathematical structure that facilitates both the calculation of expected dose to the RMEI and the appropriate separation in this calculation of aleatory uncertainty (i.e., randomness in the properties of future occurrences such as igneous and seismic events) and epistemic uncertainty (i.e., lack of knowledge about quantities that are poorly known but assumed to have constant values in the calculation of expected dose to the RMEI).
Directory of Open Access Journals (Sweden)
Thomas Newton Martin
2008-06-01
The purpose of this study was to delineate homogeneous regions and to estimate the number of years of evaluation needed for the variables sunshine, global solar radiation and photosynthetically active radiation in São Paulo State. Monthly means of sunshine, solar radiation and photosynthetically active radiation from 18 places in São Paulo State were used in the analysis. The homogeneity of the variances among the months of the year for the 18 places (temporal variability) and among the places in each month (spatial variability) was tested with Bartlett's test of homogeneity. In addition, the sample size for each place was estimated throughout the year. The results show temporal and spatial variability in the estimates of sunshine, solar radiation and photosynthetically active radiation for the 18 municipalities evaluated. Moreover, the variability of the sample size for sunshine
US Fish and Wildlife Service, Department of the Interior — Provides guidelines concerning sampling effort to achieve appropriate level of precision regarding avian point count sampling in the MAV. To compare efficacy of...
Schmitz, Tobias; Blaickner, Matthias; Schütz, Christian; Wiehl, Norbert; Kratz, Jens V; Bassler, Niels; Holzscheiter, Michael H; Palmans, Hugo; Sharpe, Peter; Otto, Gerd; Hampel, Gabriele
2010-10-01
To establish boron neutron capture therapy (BNCT) for non-resectable liver metastases and for in vitro experiments at the TRIGA Mark II reactor at the University of Mainz, Germany, a reliable dose monitoring system is necessary. The in vitro experiments are used to determine the relative biological effectiveness (RBE) of liver and cancer cells in our mixed neutron and gamma field. We work with alanine detectors in combination with Monte Carlo simulations, with which we can measure and characterize the dose. To verify our calculations we perform neutron flux measurements using gold-foil activation and pin-diodes. Material and methods. When L-α-alanine is irradiated with ionizing radiation, it forms a stable radical which can be detected by electron spin resonance (ESR) spectroscopy. The ESR signal correlates with the absorbed dose. The dose for each pellet is calculated using FLUKA, a multipurpose Monte Carlo transport code. The pin-diode is augmented by a lithium fluoride foil, which converts neutrons into alpha and tritium particles via the (6)Li(n,α)(3)H reaction. These particles are detected by the diode, and their number correlates directly with the neutron fluence. Results and discussion. Gold-foil activation and the pin-diode are reliable fluence measurement systems for the TRIGA reactor, Mainz. Alanine dosimetry of the photon field and of the charged-particle field from secondary reactions can in principle be carried out in combination with MC calculations for mixed radiation fields and the Hansen & Olsen alanine detector response model. With the acquired data on the background dose and charged-particle spectrum, and with the acquired information on the neutron flux, we are able to calculate the dose to the tissue. Conclusion. Monte Carlo simulation of the mixed neutron and gamma field of the TRIGA Mainz is possible in order to characterize the neutron behavior in the thermal column. Currently we also
Energy Technology Data Exchange (ETDEWEB)
Garcez, R.W.D.; Lopes, J.M.; Silva, A.X., E-mail: marqueslopez@yahoo.com.br [Coordenacao dos Programas de Pos-Graduacao em Engenharia (COPPE/PEN/UFRJ), Rio de Janeiro, RJ (Brazil). Centro de Tecnologia; Domingues, A.M. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Instituto de Fisica; Lima, M.A.F. [Universidade Federal Fluminense (UFF), Niteroi, RJ (Brazil). Instituto de Biologia
2014-07-01
A method based on gamma spectroscopy and voxel phantoms for calculating the dose due to ingestion of (40)K contained in bean samples is presented in this work. To quantify the activity of the radionuclide, an HPGe detector was used and the data were entered in the input file of the MCNP code. The highest equivalent dose was 7.83 μSv.y(-1) in the stomach for white beans, whose activity of 452.4 Bq.kg(-1) was the highest of the five samples analyzed. The tool proved appropriate for calculating organ doses due to ingestion of food. (author)
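The screening arithmetic behind such an ingestion-dose estimate can be sketched as follows. This is not the paper's MCNP voxel-phantom calculation: it is the standard activity × intake × dose-coefficient product, with an assumed annual bean consumption; note also that body potassium is homeostatically controlled, so simple intake-based (40)K estimates are screening values rather than true incremental doses.

```python
def ingestion_dose_uSv(activity_bq_per_kg, intake_kg_per_year,
                       dose_coeff_sv_per_bq):
    """Committed dose (microSv/y) from ingesting a radionuclide with food.

    Screening formula: activity concentration x annual intake x ingestion
    dose coefficient, converted from Sv to microSv. The coefficient used
    below is the ICRP adult ingestion value for K-40, 6.2e-9 Sv/Bq; the
    10 kg/y bean intake is an illustrative assumption.
    """
    return activity_bq_per_kg * intake_kg_per_year * dose_coeff_sv_per_bq * 1e6

# White beans at 452.4 Bq/kg (the paper's highest measured activity):
dose = ingestion_dose_uSv(452.4, 10.0, 6.2e-9)  # ~28 microSv/y
```

The effective-dose result is of the same order as the paper's organ-level figure, which is the kind of consistency check such a screening product is useful for.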
Directory of Open Access Journals (Sweden)
Manel Puig-Vidal
2012-01-01
The time required to image large samples is an important limiting factor in SPM-based systems. In multiprobe setups, especially when working with biological samples, this drawback can make certain experiments impossible. In this work, we present a feedforward controller based on bang-bang and adaptive controls. The controls exploit the difference between the maximum speeds that can be used for imaging depending on the flatness of the sample zone. Topographic images of Escherichia coli bacteria samples were acquired using the implemented controllers. Results show that moving faster over the flat zones, rather than using a constant scanning speed for the whole image, speeds up the imaging of large samples by up to a factor of 4.
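The bang-bang idea (fast over flat zones, slow over rough ones) can be sketched as a simple speed selector along a scan line. The window size, the peak-to-peak roughness measure and all names are assumptions for illustration, not details from the paper.

```python
def scan_speed(height_profile, v_flat, v_rough, roughness_threshold):
    """Bang-bang selection of scan speed from local sample flatness.

    For each point of `height_profile` (topography heights along the
    fast-scan axis), the peak-to-peak height over a small neighbourhood
    is used as a roughness measure: flat zones get the high speed
    `v_flat`, rough zones the safe low speed `v_rough`.
    """
    speeds = []
    window = 5  # neighbourhood half-width used to judge local flatness
    for i in range(len(height_profile)):
        local = height_profile[max(0, i - window):i + window + 1]
        roughness = max(local) - min(local)  # peak-to-peak height
        speeds.append(v_flat if roughness < roughness_threshold else v_rough)
    return speeds
```

With most of a large sample flat, the average speed approaches `v_flat`, which is how a speed-up of the order of 4x over a constant-speed scan can arise.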
Energy Technology Data Exchange (ETDEWEB)
Mourant, J.R.; Hielscher, A.H.; Bigio, I.J.
1996-04-01
Details of the interaction of photons with tissue phantoms are elucidated using Monte Carlo simulations. In particular, photon sampling volumes and photon pathlengths are determined for a variety of scattering and absorption parameters. The Monte Carlo simulations are specifically designed to model light delivery and collection geometries relevant to clinical applications of optical biopsy techniques. The Monte Carlo simulations assume that light is delivered and collected by two, nearly-adjacent optical fibers and take into account the numerical aperture of the fibers as well as reflectance and refraction at interfaces between different media. To determine the validity of the Monte Carlo simulations for modeling the interactions between the photons and the tissue phantom in these geometries, the simulations were compared to measurements of aqueous suspensions of polystyrene microspheres in the wavelength range 450-750 nm.
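The core sampling step of such photon Monte Carlo codes is the free-path draw from the Beer-Lambert law. The sketch below shows only this standard step; scattering direction, fiber numerical aperture and boundary refraction, all handled by the full simulation, are omitted.

```python
import math
import random

def free_path(mu_t, rng):
    """Sample a photon free path from the Beer-Lambert law.

    Standard Monte Carlo step in tissue optics: path lengths are
    exponentially distributed with mean 1/mu_t, where mu_t = mu_a + mu_s
    is the total interaction coefficient (1/cm).
    """
    return -math.log(1.0 - rng.random()) / mu_t

# Sanity check of the sampler: mu_t = 10 /cm gives a mean step ~0.1 cm.
rng = random.Random(42)
steps = [free_path(10.0, rng) for _ in range(100_000)]
mean_step = sum(steps) / len(steps)
```

Summing such steps between scattering events, weighted by absorption, is what yields the photon pathlengths and sampling volumes the simulations above report.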
Hamaya, S.; Maeda, H.; Funaki, M.; Fukui, H.
2008-12-01
The relativistic calculation of nuclear magnetic shielding tensors in hydrogen halides is performed using the second-order regular approximation to the normalized elimination of the small component (SORA-NESC) method with the inclusion of the perturbation terms from the metric operator. This computational scheme is denoted SORA-Met. The SORA-Met calculation yields anisotropies, Δσ = σ∥ − σ⊥, for the halogen nuclei in hydrogen halides that are too small. In the NESC theory, the small component of the spinor is coupled to the large component via the operator σ·πU/2c, in which π = p + A, U is a nonunitary transformation operator, and c ≅ 137.036 a.u. is the velocity of light. The operator U depends on the vector potential A (i.e., the magnetic perturbations in the system) at leading order c^(-2), and the magnetic perturbation terms of U contribute to the Hamiltonian and metric operators of the system at leading order c^(-4). It is shown that the small Δσ for halogen nuclei found in our previous studies is related to the neglect of the U^(0,1) perturbation operator of U, which is independent of the external magnetic field and of first order with respect to the nuclear magnetic dipole moment. The introduction of gauge-including atomic orbitals and a finite-size nuclear model is also discussed.
Romagnan, Jean Baptiste; Aldamman, Lama; Gasparini, Stéphane; Nival, Paul; Aubert, Anaïs; Jamet, Jean Louis; Stemmann, Lars
2016-10-01
The present work aims to show that high-throughput imaging systems can be useful for estimating mesozooplankton community size and taxonomic descriptors that can form the basis of consistent large-scale monitoring of plankton communities. Such monitoring is required by the European Marine Strategy Framework Directive (MSFD) to ensure the Good Environmental Status (GES) of European coastal and offshore marine ecosystems, and time- and cost-effective automatic techniques are of high interest in this context. An imaging-based protocol was applied to a high-frequency time series (on average every second day between April 2003 and April 2004) of zooplankton obtained at a coastal site in the NW Mediterranean Sea, Villefranche Bay. One hundred and eighty-four mesozooplankton net samples were analysed with a ZooScan and an associated semi-automatic classification technique. A learning set designed to maximize copepod identification, with more than 10,000 objects, enabled automatic sorting of copepods with an accuracy of 91% (true positives) and a contamination of 14% (false positives). Twenty-seven samples were then chosen from the total copepod time series for detailed visual sorting of copepods after automatic identification. This method enabled the description of the dynamics of two well-known copepod species, Centropages typicus and Temora stylifera, and of seven other taxonomically broader copepod groups, in terms of size, biovolume and abundance-size distributions (size spectra). Total copepod size spectra also underwent significant changes during the sampling period; these changes could be partially related to changes in the taxonomic composition and size distributions of the copepod assemblage. This study shows that the use of high-throughput imaging systems is of great interest for extracting relevant coarse (i.e. total abundance, size structure) and detailed (i.e. selected species dynamics) descriptors of zooplankton dynamics. Innovative
Gupta, Sandeep; Kumar, Krishan; Srivastava, Arun; Srivastava, Alok; Jain, V K
2011-10-15
Ambient aerosol particles were collected using a five-stage impactor at six different sites in Delhi. The impactor segregates the TSPM into five size ranges (viz. >10.9, 10.9–5.4, 5.4–1.6, 1.6–0.7, and <0.7 μm), which were grouped into coarse (>10.9, 10.9–5.4 and 5.4–1.6 μm) and fine (1.6–0.7 and <0.7 μm) fractions. It was observed that the dominant PAHs were pyrene, benzo(a)pyrene, benzo(ghi)perylene and benzo(b)fluoranthene in both the coarse and fine fractions. Source apportionment of polycyclic aromatic hydrocarbons (PAHs) was carried out using principal component analysis (PCA) in both coarse and fine size modes. The major sources identified in this study as responsible for the elevated concentration of PAHs in Delhi are vehicular emission and coal combustion. Some contribution from biomass burning was also observed.
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
Institute of Scientific and Technical Information of China (English)
陶敏; 邓山; 王婷乐; 袁金钊; 肖尚斌
2011-01-01
A set of sediment samples dominated by coarse and very coarse sand was taken from Liantuo, a mountain stream in the Yichang area. After grain-size analysis by sieving, the grain-size parameters of the sediments were calculated with the moment method and the graphic method separately, and the results of the two methods were compared. The average grain diameters and sorting coefficients given by the two methods are very close and could almost replace each other. The kurtoses of the two sets of data are correlated but cannot replace each other, while the skewness values are dispersed and show no obvious correlation. Analysis shows that this difference is related to the computational principles of the two methods and to the grain composition of the coarse-grained sediments. The skewness derived from the graphic method reflects the tails of the grain-size distribution, whereas the skewness derived from the moment method reflects the overall characteristics of the grain size. For kurtosis, when the frequency-distribution curve of a sample is unimodal and concentrated, the difference between the two methods is insignificant; when the curve is bimodal with a comparatively high secondary peak, the results differ considerably. The graphic method does not take the grain-size characteristics of the two tails into account, which may introduce errors in the calculated grain-size parameters, whereas the moment method is more rigorous and its results more exact. So in the process of integrating and assimilating the information, the difference lies
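The moment method discussed above can be stated compactly in code. A minimal sketch assuming sieve classes given by their phi midpoints and weight fractions; it makes visible why every fraction, including the tails, influences the moment-based skewness and kurtosis, unlike the percentile-based graphic method.

```python
import math

def moment_statistics(phi_midpoints, weight_fracs):
    """Grain-size statistics by the method of moments.

    Mean, sorting (standard deviation), skewness and kurtosis are
    computed as weight-fraction moments over the phi midpoints of the
    sieve classes, so every class enters every statistic.
    """
    total = sum(weight_fracs)
    w = [f / total for f in weight_fracs]  # normalize to fractions
    mean = sum(wi * p for wi, p in zip(w, phi_midpoints))
    var = sum(wi * (p - mean) ** 2 for wi, p in zip(w, phi_midpoints))
    sd = math.sqrt(var)
    skew = sum(wi * (p - mean) ** 3 for wi, p in zip(w, phi_midpoints)) / sd ** 3
    kurt = sum(wi * (p - mean) ** 4 for wi, p in zip(w, phi_midpoints)) / sd ** 4
    return mean, sd, skew, kurt
```

For a perfectly symmetric distribution the moment skewness is exactly zero; adding even a small coarse or fine tail shifts it, which is the sensitivity the comparison above attributes to the moment method.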
DEFF Research Database (Denmark)
Qiao, Jixin; Hou, Xiaolin; Roos, Per
2013-01-01
A novel method for bioassay of large volumes of human urine samples using manganese dioxide coprecipitation for preconcentration was developed for rapid determination of 237Np. 242Pu was utilized as a nonisotopic tracer to monitor the chemical yield of 237Np. A sequential injection extraction chr...
Zwick, Rebecca
2012-01-01
Differential item functioning (DIF) analysis is a key component in the evaluation of the fairness and validity of educational tests. The goal of this project was to review the status of ETS DIF analysis procedures, focusing on three aspects: (a) the nature and stringency of the statistical rules used to flag items, (b) the minimum sample size…
Institute of Scientific and Technical Information of China (English)
易顺; 陶勇; 阮方; 吴润; 刘静
2016-01-01
To improve the detection efficiency of TiN inclusions in tire cord steel wire rod, the conditions for TiN inclusion precipitation during solidification and the calculation of inclusion size were analyzed thermodynamically. The analysis shows that, owing to segregation of the solutes Ti and N, TiN inclusions precipitate in the solid-liquid (mushy) zone, and that nitrogen is the main factor governing inclusion growth. Verification showed that inclusion sizes calculated from the thermodynamic growth formula agree well with measured values, with an average relative error of 8.62%. A mathematical model was then proposed to relate TiN inclusion size to the mass fractions of Ti and N, and its predictions were validated: the average relative error of the model was only 0.95%, so it can be applied in practical detection work to improve the detection efficiency of TiN inclusions.
Energy Technology Data Exchange (ETDEWEB)
Gasco, C.; Navarro, N.; Gonzalez, P.; Heras, M. C.; Gapan, M. P.; Alonso, C.; Calderon, A.; Sanchez, D.; Morante, R.; Fernandez, M.; Gajate, A.; Alvarez, A.
2008-08-06
The Department of Vigilancia Radiológica y Radiactividad Ambiental at CIEMAT has developed an analytical methodology for the sequential determination of Fe-55 and Ni-63 in environmental samples, based on the procedure used by the Risø laboratories. This report presents the experimental results on the behaviour of major and minor elements (soil and air constituents) in the different types of resins used to separate Fe-55 and Ni-63. The measurement of both isotopes by liquid scintillation counting has been optimized with Ultima Gold cocktail at different concentrations of the stable elements Fe and Ni. The decontamination factors for various gamma emitters in the presence of a soil matrix were determined experimentally. The Fe-55 and Ni-63 activity concentrations and their associated uncertainties were calculated from the counting data and the sample preparation. A computer application was implemented in Visual Basic within Excel sheets for: (I) obtaining the counting data from the spectrometer and the counts in each window, (II) graphically representing the background and sample spectra, (III) determining the activity concentration and its associated uncertainty, and (IV) calculating the characteristic limits according to ISO 11929 (2007) at various confidence levels. (Author) 30 refs.
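The counting-statistics core of step (IV) can be sketched as follows. This is a simplified illustration of an ISO 11929-style decision threshold for a net count rate, assuming Poisson counting only; the report's actual application also propagates efficiency, chemical-yield and sample-mass uncertainties into the activity concentration.

```python
import math

def net_rate_and_threshold(gross_counts, t_gross, bkg_counts, t_bkg, k=1.645):
    """Net count rate, its uncertainty and a decision threshold.

    Poisson-only sketch: the net rate is gross minus background rate,
    its standard uncertainty follows from counting statistics, and the
    decision threshold is k times the uncertainty evaluated under the
    assumption that the true net rate is zero (k = 1.645 for alpha = 5%).
    """
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    u = math.sqrt(gross_counts / t_gross ** 2 + bkg_counts / t_bkg ** 2)
    # Under the null, the expected gross rate equals the background rate:
    u0 = math.sqrt((bkg_counts / t_bkg) / t_gross + bkg_counts / t_bkg ** 2)
    return rate, u, k * u0
```

A measured net rate above the returned threshold is treated as a detection; the detection limit of ISO 11929 follows from the same variance model by a further (typically iterative) step not shown here.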
Directory of Open Access Journals (Sweden)
Peter B. Gray
2012-07-01
We investigated body image in St. Kitts, a Caribbean island where tourism, international media, and relatively high levels of body fat are common. Participants were men and women recruited from St. Kitts (n = 39) and, for comparison, U.S. samples from universities (n = 618) and the Internet (n = 438). Participants were shown computer-generated images varying in apparent body-fat level and muscularity or breast size, and they indicated their body-type preferences and attitudes. Overall, there were only modest differences in body-type preferences between St. Kitts and the Internet sample, with the St. Kitts participants being somewhat more likely to value heavier women. Notably, however, men and women fro