WorldWideScience

Sample records for random sampling variation

  1. The contribution of simple random sampling to observed variations in faecal egg counts.

    Science.gov (United States)

    Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I

    2012-09-10

    It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Parasite eggs in a faecal suspension likewise conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, which are widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and betray an ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
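
    The width of the intervals the abstract refers to follows directly from Poisson statistics. A minimal sketch (not from the paper), assuming a hypothetical McMaster multiplication factor of 50 eggs per gram per egg counted:

```python
# Exact (Garwood) 95% confidence interval for a Poisson count, illustrating
# how wide the interval on a McMaster egg count can be. The multiplication
# factor of 50 is a hypothetical detection-limit value, not from the paper.
from scipy.stats import chi2

def poisson_ci(k, alpha=0.05):
    """Exact two-sided CI for the mean of an observed Poisson count k."""
    lo = 0.0 if k == 0 else chi2.ppf(alpha / 2, 2 * k) / 2
    hi = chi2.ppf(1 - alpha / 2, 2 * (k + 1)) / 2
    return lo, hi

factor = 50          # hypothetical eggs-per-gram multiplier for the slide
eggs_counted = 4     # raw count observed on the McMaster slide
lo, hi = poisson_ci(eggs_counted)
print(f"point estimate: {eggs_counted * factor} epg, "
      f"95% CI: {lo * factor:.0f}-{hi * factor:.0f} epg")
```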

  2. Non-compact random generalized games and random quasi-variational inequalities

    OpenAIRE

    Yuan, Xian-Zhi

    1994-01-01

    In this paper, existence theorems of random maximal elements, random equilibria for the random one-person game and random generalized game with a countable number of players are given as applications of random fixed point theorems. By employing existence theorems of random generalized games, we deduce the existence of solutions for non-compact random quasi-variational inequalities. These in turn are used to establish several existence theorems of noncompact generalized random ...

  3. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated. Distribution types as required for performance assessment modeling of geologic nuclear waste disposal are provided. These are: - Uniform, - Log-uniform (base 10 or natural), - Normal, - Lognormal (base 10 or natural), - Exponential, - Bernoulli, - User defined continuous distribution. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions, whose usual definition has an infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
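
    A minimal sketch of the stated method of solution: a linear congruential generator for uniforms, followed by inverse-transform sampling to reach two of the listed distributions. The LCG constants below are the classic Numerical Recipes values, used here for illustration only; the program's actual constants are not given in the record.

```python
# Uniform random numbers from a linear congruential generator, then
# transforms of the uniform to other distributions (exponential via the
# inverse CDF, log-uniform base 10), mirroring the structure described.
import math

class LCG:
    def __init__(self, seed=12345):
        self.state = seed
    def uniform(self):                      # U(0, 1)
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state / 2**32

def exponential(rng, rate):                 # inverse-transform sampling
    return -math.log(1.0 - rng.uniform()) / rate

def log_uniform(rng, lo, hi, base=10.0):    # log-uniform on [lo, hi]
    a, b = math.log(lo, base), math.log(hi, base)
    return base ** (a + (b - a) * rng.uniform())

rng = LCG(seed=42)
print([round(exponential(rng, 2.0), 3) for _ in range(5)])
print([round(log_uniform(rng, 1e-3, 1e2), 5) for _ in range(3)])
```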

  4. A Bayesian Justification for Random Sampling in Sample Survey

    Directory of Open Access Journals (Sweden)

    Glen Meeden

    2012-07-01

    In the usual Bayesian approach to survey sampling, the sampling design plays a minimal role at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.

  5. An unbiased estimator of the variance of simple random sampling using mixed random-systematic sampling

    OpenAIRE

    Padilla, Alberto

    2009-01-01

    Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random-systematic sample, an unbiased estimator o...

  6. A practical guide and power analysis for GLMMs: detecting among treatment variation in random effects

    Directory of Open Access Journals (Sweden)

    Morgan P. Kain

    2015-09-01

    In ecology and evolution, generalized linear mixed models (GLMMs) are increasingly used to test for differences in variation by treatment at multiple hierarchical levels. Yet the specific sampling schemes that optimize the power of an experiment to detect differences in random effects by treatment/group remain unknown. In this paper we develop a blueprint for conducting power analyses for GLMMs, focusing on detecting differences in variance by treatment. We present parameterization and power analyses for random-intercepts and random-slopes GLMMs because of their generality as focal parameters for most applications and because of their immediate applicability to emerging questions in the field of behavioral ecology. We focus on the extreme case of hierarchically structured binomial data, though the framework presented here generalizes easily to any error distribution model. First, we determine the optimal ratio of individuals to repeated measures within individuals that maximizes power to detect differences by treatment in among-individual variation in intercept, among-individual variation in slope, and within-individual variation in intercept. Second, we explore how power to detect differences in target variance parameters is affected by total variation. Our results indicate heterogeneity in power across ratios of individuals to repeated measures, with an optimal ratio determined by both the target variance parameter and total sample size. Additionally, power to detect each variance parameter was low overall (in most cases >1,000 total observations per treatment were needed to achieve 80% power) and decreased with increasing variance in non-target random effects. With growing interest in variance as the parameter of inquiry, these power analyses provide a crucial component for designing experiments focused on detecting differences in variance. We hope to inspire novel experimental designs in ecology and evolution investigating the causes and...

  7. Systematic versus random sampling in stereological studies.

    Science.gov (United States)

    West, Mark J

    2012-12-01

    The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
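
    The card-deck analogy translates directly into code. A sketch (mine, not the author's), with 120 serial sections standing in for the deck:

```python
# Independent random sampling draws any "cards" (sections) without regard to
# position; systematic random sampling picks a random start and then every
# k-th section thereafter.
import random

sections = list(range(120))   # e.g. 120 serial sections through a structure
n = 10                        # number of sections to probe

independent = sorted(random.sample(sections, n))   # independent random sample

k = len(sections) // n                             # sampling interval
start = random.randrange(k)                        # random start in [0, k)
systematic = sections[start::k]                    # systematic random sample

print("independent:", independent)
print("systematic: ", systematic)
```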

  8. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. Approximation of the optimal 1-mean by a centroid (Inaba et al.): let S be a random sample of size O(1/ε); the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
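
    The guarantee is easy to check numerically. The sketch below is an illustration under my reading of the Inaba et al. result, not their code: the 1-mean cost of the centroid of a random sample of size m stays close to the optimal cost, consistent with an expected factor of roughly 1 + 1/m.

```python
# Numerical check: the centroid of a small random sample of points gives a
# 1-mean cost close to optimal. Pure illustration with synthetic data.
import random

random.seed(0)
P = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10000)]

def centroid(pts):
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def cost(pts, c):   # 1-mean objective: sum of squared distances to c
    return sum((x - c[0]) ** 2 + (y - c[1]) ** 2 for x, y in pts)

opt = cost(P, centroid(P))                 # optimal 1-mean cost
m = 20                                     # sample size, think O(1/eps)
ratios = []
for _ in range(200):
    S = random.sample(P, m)
    ratios.append(cost(P, centroid(S)) / opt)
print("mean cost ratio:", sum(ratios) / len(ratios))   # ~ 1 + 1/m
```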

  9. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose... The algorithm generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed...

  10. Systematic random sampling of the comet assay.

    Science.gov (United States)

    McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan

    2009-07-01

    The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current methods deployed in such an acquisition are expected to be both objective and random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. In order to achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used both manually and in automated systems. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled-variation experiment showed that the SRS technique attained a lower variability than the traditional approach. The analysis of a single user with repetition showed greater individual variances while not being detrimental to overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.

  11. Sampling problems for randomly broken sticks

    Energy Technology Data Exchange (ETDEWEB)

    Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)

    2003-04-11

    Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending sizes, let S_{m:n} be the size of the mth smallest fragment. Assume that some observer is sampling such populations as follows: drop at random k points (the sample size) onto this stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) what is the sample size if the sampling is carried out until the first visit of the smallest fragment (size S_{1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments being discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights (the sequence of their weights in order of appearance) is needed and studied.
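
    Problem (1) is easy to explore by simulation. A sketch under simple assumptions (iid exponential weights, n = 10 fragments), not taken from the paper:

```python
# Drop points one at a time onto a stick broken into n fragments with iid
# weights, and record how many drops it takes to first hit the smallest
# fragment.
import bisect
import random
from itertools import accumulate

def broken_stick(n):
    w = [random.expovariate(1.0) for _ in range(n)]    # iid weights
    total = sum(w)
    return [x / total for x in w]                      # normalized sizes

def drops_until_smallest(sizes):
    smallest = min(range(len(sizes)), key=sizes.__getitem__)
    cuts = list(accumulate(sizes))                     # cumulative fragment ends
    k = 0
    while True:
        k += 1
        frag = bisect.bisect_left(cuts, random.random())
        frag = min(frag, len(sizes) - 1)               # guard against rounding
        if frag == smallest:
            return k

trials = [drops_until_smallest(broken_stick(10)) for _ in range(1000)]
print("mean sample size to first visit smallest fragment:",
      sum(trials) / len(trials))
```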

  12. Independent random sampling methods

    CERN Document Server

    Martino, Luca; Míguez, Joaquín

    2018-01-01

    This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...

  13. LOD score exclusion analyses for candidate QTLs using random population samples.

    Science.gov (United States)

    Deng, Hong-Wen

    2003-11-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach for exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed, and if an LOD score is ≤ -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of the Vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.

  14. Sampling Polya-Gamma random variates: alternate and approximate techniques

    OpenAIRE

    Windle, Jesse; Polson, Nicholas G.; Scott, James G.

    2014-01-01

    Efficiently sampling from the Pólya-Gamma distribution, PG(b, z), is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the PG(1, z) distribution. We build two new samplers that offer improved performance when sampling from the PG(b, z) distribution when b is not unity.
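
    For orientation, the PG(b, z) distribution admits the infinite-convolution representation PG(b, z) = (1/(2π²)) Σ_k g_k / ((k − 1/2)² + z²/(4π²)) with g_k ~ Gamma(b, 1), so truncating the sum gives a crude approximate sampler. The sketch below only illustrates the target distribution; it is not one of the efficient samplers the authors propose.

```python
# Crude truncated-sum sampler for PG(b, z) from the infinite-convolution
# representation. Truncation introduces a small downward bias.
import numpy as np

def pg_approx(b, z, trunc=200, size=1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    k = np.arange(1, trunc + 1)
    denom = (k - 0.5) ** 2 + z ** 2 / (4 * np.pi ** 2)
    g = rng.gamma(shape=b, scale=1.0, size=(size, trunc))   # g_k ~ Ga(b, 1)
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

draws = pg_approx(b=1.0, z=0.0, size=100000)
print("sample mean:", draws.mean())   # E[PG(1, 0)] = 1/4
```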

  15. Randomization of grab-sampling strategies for estimating the annual exposure of U miners to Rn daughters.

    Science.gov (United States)

    Borak, T B

    1986-04-01

    Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground U miners to Rn daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sample frequency of 50 measurements per year is sufficient to satisfy the criteria that the annual exposure be determined in working level months to within +/- 50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.

  16. Sampling large random knots in a confined space

    International Nuclear Information System (INIS)

    Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M

    2007-01-01

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is on the order of O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  19. Detection of somaclonal variation by random amplified polymorphic ...

    African Journals Online (AJOL)

    Detection of somaclonal variation by random amplified polymorphic DNA analysis during micropropagation of Phalaenopsis bellina (Rchb.f.) Christenson. ... Among the primers used, P 16 produced the highest number of bands (29), while primer OPU 10 produced the lowest number (15). The range of similarity coefficient ...

  20. Variational random phase approximation for the anharmonic oscillator

    International Nuclear Information System (INIS)

    Dukelsky, J.; Schuck, P.

    1990-04-01

    The recently derived Variational Random Phase Approximation is examined using the anharmonic oscillator model. Special attention is paid to the ground state RPA wave function and the convergence of the proposed truncation scheme to obtain the diagonal density matrix. Comparison with the standard Coupled Cluster method is made

  1. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies: choosing-seed-node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, salient characteristics such as the larger clustering coefficient and the fluctuation of the degree distribution are reproduced well by these random walk strategies.
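
    Under my reading of the abstract, the NR strategy forbids immediately retracing the edge just traversed. A sketch assuming the networkx library for graph generation, not the authors' code:

```python
# No-retracing (NR) random walk: never step straight back along the edge
# used on the previous step, unless it is the only option.
import random
import networkx as nx

def nr_random_walk(G, seed_node, steps):
    walk = [seed_node]
    prev = None
    for _ in range(steps):
        nbrs = list(G.neighbors(walk[-1]))
        if prev in nbrs and len(nbrs) > 1:
            nbrs.remove(prev)          # forbid immediate backtracking
        prev = walk[-1]
        walk.append(random.choice(nbrs))
    return walk

G = nx.erdos_renyi_graph(1000, 0.01, seed=1)
seed = next(n for n in G if G.degree(n) > 0)   # seed with at least one edge
walk = nr_random_walk(G, seed_node=seed, steps=500)
subnet = G.subgraph(set(walk))
print("sampled nodes:", subnet.number_of_nodes())
print("average degree in subnet:",
      2 * subnet.number_of_edges() / subnet.number_of_nodes())
```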

  2. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
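
    The core of the sampling scheme, as described, is an equidistant grid with a single random offset, which gives every location in the region of interest the same inclusion probability. A minimal sketch, not RandomSpot's actual code:

```python
# Systematic random sampling points over a rectangular region of interest:
# an equidistant grid shifted by one uniform random offset per axis.
import random

def srs_points(x0, y0, width, height, spacing):
    """Equidistant grid with a uniform random offset inside one grid cell."""
    ox = random.uniform(0, spacing)
    oy = random.uniform(0, spacing)
    pts = []
    y = y0 + oy
    while y < y0 + height:
        x = x0 + ox
        while x < x0 + width:
            pts.append((x, y))
            x += spacing
        y += spacing
    return pts

points = srs_points(0, 0, 2000, 1500, spacing=250)   # ROI in slide microns
print(len(points), "sample points; first few:", points[:3])
```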

  3. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
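
    A simplified one-group version of the idea (not the paper's two-group model): with a Beta prior on per-item acceptability and n sampled items all found acceptable, the posterior probability that at least a fraction p* of the unsampled items are acceptable can be computed by Monte Carlo over the posterior predictive distribution.

```python
# Beta-binomial sketch: P(at least p* of the N - n unsampled items are
# acceptable | all n sampled items acceptable), under a Beta(a, b) prior.
import numpy as np

def prob_clean(N, n, p_star, a=1.0, b=1.0, draws=100000, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    theta = rng.beta(a + n, b, size=draws)         # posterior after n successes
    unsampled = N - n
    acceptable = rng.binomial(unsampled, theta)    # predictive for the rest
    return np.mean(acceptable >= p_star * unsampled)

print(prob_clean(N=1000, n=59, p_star=0.95))
```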

  4. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  5. Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Valsson, Omar; Parrinello, Michele

    2014-08-01

    The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented, which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.

  6. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts, due to Gautschi (1957), to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic samplin...

  7. Fixed-location hydroacoustic monitoring designs for estimating fish passage using stratified random and systematic sampling

    International Nuclear Information System (INIS)

    Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.

    1993-01-01

    Five alternative sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither the magnitude nor the direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimation are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs

  8. False Operation of Static Random Access Memory Cells under Alternating Current Power Supply Voltage Variation

    Science.gov (United States)

    Sawada, Takuya; Takata, Hidehiro; Nii, Koji; Nagata, Makoto

    2013-04-01

    Static random access memory (SRAM) cores are susceptible to power supply voltage variation. False operation is investigated among SRAM cells under sinusoidal voltage variation on power lines introduced by direct RF power injection. A standard 16 kbyte SRAM core in a 90 nm, 1.5 V technology is diagnosed with built-in self-test and on-die noise-monitor techniques. The bit error rate is shown to be highly sensitive to the frequency of the injected voltage variation, while it is not greatly influenced by differences in frequency and phase relative to the SRAM clock. It is also observed that the distribution of false bits is substantially random in a cell array.

  9. Health plan auditing: 100-percent-of-claims vs. random-sample audits.

    Science.gov (United States)

    Sillup, George P; Klimberg, Ronald K

    2011-01-01

    The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
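
    The gap between the two methodologies can be illustrated by simulation. All population figures below are invented for illustration; the point is only that a fixed-size random sample directly observes a small fraction of the total error dollars:

```python
# Monte Carlo sketch: a claim population with a small fraction of erroneous
# claims, audited by 100 simulated 300-claim random samples versus a
# 100%-of-claims audit.
import random

random.seed(1)
N = 100000                                   # claims in the plan year
errors = {i: random.uniform(50, 5000)        # claim id -> error amount
          for i in random.sample(range(N), 400)}   # 0.4% error rate
total_error = sum(errors.values())           # what a 100% audit finds

found = []
for _ in range(100):                         # 100 simulated audits
    audit = random.sample(range(N), 300)     # 300-claim random sample
    found.append(sum(errors.get(i, 0.0) for i in audit))

print(f"total error (100% audit): ${total_error:,.0f}")
print(f"random-sample audit finds on average: ${sum(found) / 100:,.0f}")
```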

  10. Variational data assimilation using targetted random walks

    KAUST Repository

    Cotter, S. L.

    2011-02-15

    The variational approach to data assimilation is a widely used methodology for both online prediction and reanalysis. In either of these scenarios, it can be important to assess uncertainties in the assimilated state. Ideally, it is desirable to have complete information concerning the Bayesian posterior distribution for the unknown state given data. We show that complete computational probing of this posterior distribution is now within reach in the offline situation. We introduce a Markov chain Monte Carlo (MCMC) method which enables us to directly sample from the Bayesian posterior distribution on the unknown functions of interest given observations. Since we are aware that these methods are currently too computationally expensive to consider using in an online filtering scenario, we frame this in the context of offline reanalysis. Using a simple random walk-type MCMC method, we are able to characterize the posterior distribution using only evaluations of the forward model of the problem, and of the model and data mismatch. No adjoint model is required for the method we use; however, more sophisticated MCMC methods are available which exploit derivative information. For simplicity of exposition, we consider the problem of assimilating data, either Eulerian or Lagrangian, into a low Reynolds number flow in a two-dimensional periodic geometry. We show that in many cases it is possible to recover the initial condition and model error (which we describe as unknown forcing to the model) from data, and that with increasing amounts of informative data, the uncertainty in our estimates reduces. © 2011 John Wiley & Sons, Ltd.
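
    A toy version of the random walk-type MCMC method described, needing only forward evaluations of a misfit (negative log-posterior) and no adjoint. The one-dimensional stand-in problem and all parameters are mine, for illustration:

```python
# Random walk Metropolis: propose a Gaussian step, accept or reject based
# on the change in the negative log-posterior (the "misfit").
import math
import random

def neg_log_post(u):                 # toy misfit: posterior ~ N(2, 0.5^2)
    return 0.5 * ((u - 2.0) / 0.5) ** 2

def rw_metropolis(n_steps, step=0.3, u0=0.0):
    u, phi = u0, neg_log_post(u0)
    chain = []
    for _ in range(n_steps):
        v = u + step * random.gauss(0, 1)             # random walk proposal
        phi_v = neg_log_post(v)
        if math.log(random.random()) < phi - phi_v:   # accept/reject
            u, phi = v, phi_v
        chain.append(u)
    return chain

chain = rw_metropolis(50000)
print("posterior mean ~", sum(chain) / len(chain))
```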

  11. Self-reference and random sampling approach for label-free identification of DNA composition using plasmonic nanomaterials.

    Science.gov (United States)

    Freeman, Lindsay M; Pang, Lin; Fainman, Yeshaiahu

    2018-05-09

    The analysis of DNA has led to revolutionary advancements in the fields of medical diagnostics, genomics, prenatal screening, and forensic science, with the global DNA testing market expected to reach revenues of USD 10.04 billion per year by 2020. However, current methods for DNA analysis remain dependent on fluorophores or conjugated proteins, leading to high costs associated with consumable materials and manual labor. Here, we demonstrate a potential label-free DNA composition detection method using surface-enhanced Raman spectroscopy (SERS), in which we identify the composition of cytosine and adenine within single strands of DNA. This approach exploits the fact that there is one phosphate backbone group per nucleotide, which we use as a reference to compensate for systematic measurement variations. We utilize plasmonic nanomaterials with random Raman sampling to perform label-free detection of the nucleotide composition within DNA strands, generating a calibration curve from standard samples of DNA and demonstrating the capability of resolving the nucleotide composition. The work represents an innovative way to detect the nucleotide composition of DNA strands without attached labels, offering a highly sensitive and reproducible method that factors in random sampling to minimize error.

  12. Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling

    OpenAIRE

    Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.

    2013-01-01

    Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via simple random sampling procedure though makes calculations much simpler but this assumption is often violated in practice as the data does not always yield simple random ...

  13. An alternative procedure for estimating the population mean in simple random sampling

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2012-03-01

    This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. Firstly, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than various known estimators, including that of Gupta and Shabbir (2008).

  14. Correlated random sampling for multivariate normal and log-normal distributions

    International Nuclear Information System (INIS)

    Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.

    2012-01-01

    A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
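
    The standard construction consistent with this description draws correlated normals through a Cholesky factor of the covariance matrix and exponentiates the coordinates that are to be log-normal. A sketch with made-up parameters, not the paper's code:

```python
# Correlated multivariate normal samples via a Cholesky factor; the second
# coordinate is then exponentiated to obtain a correlated log-normal.
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([1.0, 0.0])
cov = np.array([[1.0, 0.8],
                [0.8, 2.0]])            # target covariance (must be PSD)

L = np.linalg.cholesky(cov)             # cov = L @ L.T
z = rng.standard_normal((100000, 2))    # iid N(0, 1)
x = mean + z @ L.T                      # correlated normals

x[:, 1] = np.exp(x[:, 1])               # make the 2nd variable log-normal
# underlying correlation should be ~ 0.8 / sqrt(1 * 2) ~ 0.57
print("empirical corr of underlying normals:",
      np.corrcoef(np.log(x[:, 1]), x[:, 0])[0, 1])
```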

  15. Random sampling of evolution time space and Fourier transform processing

    International Nuclear Information System (INIS)

    Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor

    2006-01-01

    Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time.

  16. Random phenotypic variation of yeast (Saccharomyces cerevisiae) single-gene knockouts fits a double pareto-lognormal distribution.

    Science.gov (United States)

    Graham, John H; Robb, Daniel T; Poe, Amy R

    2012-01-01

    Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of

  17. A random sampling procedure for anisotropic distributions

    International Nuclear Information System (INIS)

    Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.

    1975-01-01

    A procedure is described for sampling the scattering angle of neutrons as per specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb and these elements are of interest in dosimetry and shielding. (author)

  18. Intrapopulational body size variation and cranial capacity variation in Middle Pleistocene humans: the Sima de los Huesos sample (Sierra de Atapuerca, Spain).

    Science.gov (United States)

    Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I

    1998-05-01

    A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date at one single site, and with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared with modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. The anatomical regions investigated are the scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shafts; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample, only the humeral midshaft perimeter shows unusually high variation (and only when expressed by the maximum ratio, not by the coefficient of variation). In spite of that, the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range. The maximum ratio is in the central part of the distribution of modern human samples. Thus, the hypothesis of greater sexual dimorphism in Middle Pleistocene populations than in modern populations is supported by neither the cranial nor the postcranial evidence from Sima de los Huesos.
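
    The bootstrap comparison described can be sketched as follows: resample fossil-sized subsets from a modern reference sample and ask how often their max/min ratio reaches the fossil value. All numbers below are synthetic placeholders, not data from the paper:

```python
# Bootstrap the max/min ratio of a metric trait from a modern reference
# sample to see how extreme the fossil sample's ratio is.
import random

random.seed(0)
modern = [random.gauss(35.0, 3.0) for _ in range(200)]  # e.g. humeral measure
fossil_ratio = 1.35                                     # hypothetical max/min
n_fossil = 12                                           # fossil sample size

def max_ratio(xs):
    return max(xs) / min(xs)

boot = [max_ratio(random.choices(modern, k=n_fossil)) for _ in range(10000)]
p = sum(r >= fossil_ratio for r in boot) / len(boot)
print(f"P(modern resample ratio >= fossil ratio) = {p:.3f}")
```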

  19. Some regional variations in dietary patterns in a random sample of British adults.

    Science.gov (United States)

    Whichelow, M J; Erzinclioglu, S W; Cox, B D

    1991-05-01

    Comparison was made of the reported frequency of consumption or choice of 30 food items by 8860 adults in the 11 standard regions of Great Britain, with the use of log-linear analysis to allow for the age, sex, social class and smoking habit variations between the regions. The South-East was taken as the base region against which the others were compared. The number of food items for which there were significant differences from the South-East were Scotland 23, North 25, North-West and Yorkshire/Humberside 20, Wales 19, West Midlands 15, East Midlands 10, East Anglia 8, South-West 7 and Greater London 9. Overall the findings confirm a North/South trend in relation to eating habits, even when demographic and smoking-habit variations are taken into account, with the frequent consumption of many fruit and vegetable products being much less common and of several high-fat foods (chips, processed meats and fried food) more common in Scotland, Wales and the northern part of England. In most regions there was a significantly lower frequency of consumption of fresh fruit, fruit juice, 'brown' bread, pasta/rice, poultry, skimmed/semi-skimmed milk, light desserts and nuts, and a higher consumption of red meat, fish and fried food than in the South-East.

  20. Variation in the human lymphocyte sister chromatid exchange frequency as a function of time: results of daily and twice-weekly sampling

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, J.D.; Christensen, M.L.; Strout, C.L.; McGee, K.A.; Carrano, A.V.

    1987-01-01

    The variation in lymphocyte sister chromatid exchange (SCE) frequency was investigated in healthy nonsmokers who were not taking any medication. Two separate studies were undertaken. In the first, blood was drawn from four women twice a week for 8 weeks. These donors recorded the onset and termination of menstruation and times of illness. In the second study, blood was obtained from two women and two men for 5 consecutive days on two separate occasions initiated 14 days apart. Analysis of the mean SCE frequencies in each study indicated that significant temporal variation occurred in each donor, and that more variation occurred in the longer study. Some of the variation was found to be associated with the menstrual cycle. In the daily study, most of the variation appeared to be random, but occasional day-to-day changes occurred that were greater than those expected by chance. To determine how well a single SCE sample estimated the pooled mean for each donor in each study, the authors calculated the number of samples that encompassed that donor's pooled mean within 1 or more standard errors. For both studies, about 75% of the samples encompassed the pooled mean within 2 standard errors. An analysis of high-frequency cells (HFCs) was also undertaken. The results for each study indicate that the proportion of HFCs (compared using Fisher's exact test) is significantly more constant than the means (compared using the t-test). These results, coupled with our previous work, suggest that HFC analysis may be the method of choice when analyzing data from human population studies.

  1. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well-distributed sample that is representative of the complete set of EMs should be suitable for most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.

  2. Cost-effective sampling of 137Cs-derived net soil redistribution: part 1 – estimating the spatial mean across scales of variation

    International Nuclear Information System (INIS)

    Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.

    2015-01-01

    The 137Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many 137Cs measurements are required and hence that estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of 137Cs-derived net soil redistribution in applying their often limited resources across scales of variation (field, catchment, region etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance, and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954-2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches to cut across scales of variation, we extrapolate our estimate of net soil redistribution across the region, and show that, for the same resources, estimates from many fields could have been provided which would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate 137Cs-derived net soil redistribution across scales of variation. - Highlights: • The 137Cs technique estimates net time-integrated soil redistribution by all processes. • It is time-consuming and dominated by studies of individual fields. • We use limited resources to estimate soil...
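
    The design-based core of such a hybrid scheme is the usual stratified estimator of the spatial mean and its sampling variance; composites and the cost model are omitted here. A sketch with invented strata and measurements:

```python
# Stratified random sampling estimate of a spatial mean and its standard
# error: mean = sum_h W_h * ybar_h, var = sum_h W_h^2 * s_h^2 / n_h.
import numpy as np

# per-stratum data: (stratum weight W_h, measurements y_hi)
strata = {
    "cropland":  (0.5, np.array([2.1, 1.8, 2.6, 2.2])),
    "grassland": (0.3, np.array([0.9, 1.2, 0.7])),
    "gully":     (0.2, np.array([4.0, 5.1, 4.4])),
}

mean = sum(W * y.mean() for W, y in strata.values())
var = sum(W**2 * y.var(ddof=1) / len(y) for W, y in strata.values())
print(f"stratified mean: {mean:.2f}, standard error: {var**0.5:.2f}")
```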

  3. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster

    DEFF Research Database (Denmark)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented which randomizes the eggs in a water column and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness... and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila...

  4. Weekday variation in triglyceride concentrations in 1.8 million blood samples

    DEFF Research Database (Denmark)

    Jaskolowski, Jörn; Ritz, Christian; Sjödin, Anders Mikael

    2017-01-01

    BACKGROUND: Triglyceride (TG) concentration is used as a marker of cardio-metabolic risk. However, diurnal and possibly weekday variation exists in TG concentrations. OBJECTIVE: To investigate weekday variation in TG concentrations among 1.8 million blood samples drawn between 2008 and 2015 from... variations in TG concentrations were recorded for out-patients between the ages of 9 and 26 years, with up to 20% higher values on Mondays compared to Fridays (all P...). Triglyceride concentrations were highest after the weekend and gradually declined during the week. We suggest that unhealthy...

  5. High Levels of Sample-to-Sample Variation Confound Data Analysis for Non-Invasive Prenatal Screening of Fetal Microdeletions.

    Directory of Open Access Journals (Sweden)

    Tianjiao Chu

    Our goal was to test the hypothesis that inter-individual genomic copy number variation in control samples is a confounding factor in the non-invasive prenatal detection of fetal microdeletions via sequence-based analysis of maternal plasma DNA. The Database of Genomic Variants (DGV) was used to determine the "Genomic Variants Frequency" (GVF) for each 50kb region in the human genome. Whole genome sequencing of fifteen karyotypically normal maternal plasma samples and six CVS DNA control samples was performed. The coefficient of variation of relative read counts (cv.RTC) for these samples was determined for each 50kb region. Maternal plasma from two pregnancies affected with a chromosome 5p microdeletion was also sequenced, and analyzed using the GCREM algorithm. We found a strong correlation between high variance in read counts and GVF amongst controls. Consequently, we were unable to confirm the presence of the microdeletion via sequencing of maternal plasma samples obtained from two sequential affected pregnancies. Caution should be exercised when performing NIPT for microdeletions. It is vital to develop our understanding of the factors that impact the sensitivity and specificity of these approaches. In particular, benign copy number variation amongst controls is a major confounder, and its effects should be corrected bioinformatically.

  6. STATISTICAL LANDMARKS AND PRACTICAL ISSUES REGARDING THE USE OF SIMPLE RANDOM SAMPLING IN MARKET RESEARCHES

    Directory of Open Access Journals (Sweden)

    CODRUŢA DURA

    2010-01-01

    The sample represents a particular segment of the statistical population chosen to represent it as a whole. The representativeness of the sample determines the accuracy of estimations made on the basis of calculating the research indicators and the inferential statistics. The method of random sampling is part of the probabilistic methods which can be used within marketing research, and it is characterized by the fact that it imposes the requirement that each unit belonging to the statistical population should have an equal chance of being selected for the sampling process. When simple random sampling is meant to be rigorously put into practice, it is recommended to use the technique of random number tables in order to configure the sample which will provide the information that the marketer needs. The paper also details the practical procedure implemented in order to create a sample for a marketing research by generating random numbers using the facilities offered by Microsoft Excel.

  7. CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.

    Science.gov (United States)

    Shalizi, Cosma Rohilla; Rinaldo, Alessandro

    2013-04-01

    The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.

  8. Bespoke Bias for Obtaining Free Energy Differences within Variationally Enhanced Sampling.

    Science.gov (United States)

    McCarty, James; Valsson, Omar; Parrinello, Michele

    2016-05-10

    Obtaining efficient sampling of multiple metastable states through molecular dynamics, and hence determining free energy differences, is central for understanding many important phenomena. Here we present a new biasing strategy, which employs the recent variationally enhanced sampling approach (Valsson and Parrinello, Phys. Rev. Lett. 2014, 113, 090601). The bias is constructed from an intuitive model of the local free energy surface describing fluctuations around metastable minima, and depends on only a few parameters which are determined variationally such that efficient sampling between states is obtained. The bias constructed in this manner largely reduces the need for finding a set of collective variables that completely spans the conformational space of interest, as they need only be locally valid descriptors of the system around its local minima. We introduce the method and demonstrate its power on two representative examples.

  9. Variate generation for probabilistic fracture mechanics and fitness-for-service studies

    International Nuclear Information System (INIS)

    Walker, J.R.

    1987-01-01

    Atomic Energy of Canada Limited is conducting studies in Probabilistic Fracture Mechanics. These studies are being conducted as part of a fitness-for-service programme in support of CANDU reactors. The Monte Carlo analyses, which form part of the Probabilistic Fracture Mechanics studies, require that variates can be sampled from probability density functions. Accurate pseudo-random numbers are necessary for accurate variate generation. This report details the principles of variate generation, and describes the production and testing of pseudo-random numbers. A new algorithm has been produced for the correct performance of the lattice test for the independence of pseudo-random numbers. Two new pseudo-random number generators have been produced. These generators have excellent randomness properties and can be made fully machine-independent. Versions, in FORTRAN, for VAX and CDC computers are given. Accurate and efficient algorithms for the generation of variates from the specialized probability density functions of Probabilistic Fracture Mechanics are given. 38 refs
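    The report's core principle of variate generation — mapping uniform pseudo-random numbers through the inverse cumulative distribution function — can be sketched as follows; the exponential density is chosen purely for illustration and is not taken from the report.

    ```python
    import math
    import random

    def exponential_variate(rate):
        """Inverse-transform sampling: if U ~ Uniform(0, 1), then
        -ln(1 - U) / rate is exponentially distributed with the given rate."""
        u = random.random()
        return -math.log(1.0 - u) / rate

    random.seed(1)
    draws = [exponential_variate(rate=2.0) for _ in range(100_000)]
    print(sum(draws) / len(draws))  # close to the theoretical mean 1/rate = 0.5
    ```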

  10. Variations among animals when estimating the undegradable fraction of fiber in forage samples

    Directory of Open Access Journals (Sweden)

    Cláudia Batista Sampaio

    2014-10-01

    Full Text Available The objective of this study was to assess the variability among animals regarding the critical time to estimate the undegradable fraction of fiber (ct) using an in situ incubation procedure. Five rumen-fistulated Nellore steers were used to estimate the degradation profile of fiber. Animals were fed a standard diet with an 80:20 forage:concentrate ratio. Sugarcane, signal grass hay, corn silage and fresh elephant grass samples were assessed. Samples were put in F57 Ankom® bags and were incubated in the rumens of the animals for 0, 6, 12, 18, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours. The degradation profiles were interpreted using a mixed non-linear model in which a random effect was associated with the degradation rate. For sugarcane, signal grass hay and corn silage, there were no significant variations among animals regarding the fractional degradation rate of neutral and acid detergent fiber; consequently, the ct required to estimate the undegradable fiber fraction did not vary among animals for those forages. However, significant variability among animals was found for the fresh elephant grass. The results suggest that the variability among animals regarding the degradation rate of fibrous components can be significant.

  11. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    Science.gov (United States)

    Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.

    2011-09-01

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
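    The record does not give the recipe for generating the pseudo-random layer sequence, but binary pseudo-random sequences of this kind are conventionally produced by a maximal-length linear-feedback shift register (LFSR); the sketch below, with an assumed 7-bit register and taps (7, 6) for the primitive polynomial x^7 + x^6 + 1, is illustrative only.

    ```python
    def lfsr_sequence(taps=(7, 6), nbits=7, seed=0b1111111):
        """Fibonacci LFSR producing a maximal-length (2**nbits - 1) binary
        pseudo-random sequence; taps (7, 6) correspond to the primitive
        polynomial x^7 + x^6 + 1."""
        state = seed
        out = []
        for _ in range(2**nbits - 1):
            out.append(state & 1)          # output the least significant bit
            feedback = 0
            for t in taps:                 # XOR the tapped bits
                feedback ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (feedback << (nbits - 1))
        return out

    seq = lfsr_sequence()
    print(len(seq), sum(seq))  # 127 bits, of which 64 are ones for an m-sequence
    ```

    In a BPRML sample, each bit of such a sequence would then set the material (or thickness) of one layer in the multilayer stack.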

  12. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    International Nuclear Information System (INIS)

    Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.

    2011-01-01

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  13. LOD score exclusion analyses for candidate genes using random population samples.

    Science.gov (United States)

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes with random population samples. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed and, if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of vitamin D receptor and estrogen receptor genes underlying the differential risk to osteoporotic fractures.

  14. Generating Random Samples of a Given Size Using Social Security Numbers.

    Science.gov (United States)

    Erickson, Richard C.; Brauchle, Paul E.

    1984-01-01

    The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)

  15. Random On-Board Pixel Sampling (ROPS) X-Ray Camera

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Zhehui [Los Alamos; Iaroshenko, O. [Los Alamos; Li, S. [Los Alamos; Liu, T. [Fermilab; Parab, N. [Argonne (main); Chen, W. W. [Purdue U.; Chu, P. [Los Alamos; Kenyon, G. [Los Alamos; Lipton, R. [Fermilab; Sun, K.-X. [Nevada U., Las Vegas

    2017-09-25

    Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of the signal-to-noise ratio as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques is expected to facilitate the development and applications of high-speed X-ray camera technology.
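    A minimal sketch of the random sampling step is given below (reconstruction by compressed sensing is a separate problem); the sensor geometry and the 10% sampling fraction are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    n_rows, n_cols = 256, 256   # hypothetical sensor geometry
    fraction = 0.10             # read out only 10% of the pixels per frame

    # Choose a random subset of pixel addresses; only these pixels would be
    # digitized and held in in-pixel storage for this frame.
    n_sample = int(fraction * n_rows * n_cols)
    flat = rng.choice(n_rows * n_cols, size=n_sample, replace=False)
    mask = np.zeros((n_rows, n_cols), dtype=bool)
    mask[np.unravel_index(flat, mask.shape)] = True

    frame = rng.poisson(0.05, size=mask.shape)  # sparse X-ray hits (toy data)
    readout = np.where(mask, frame, 0)          # unsampled pixels are never read
    print(mask.sum(), readout.sum())
    ```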

  16. Microenvironmental variation in preassay rearing conditions can ...

    Indian Academy of Sciences (India)

    alternatively in the presence of some random environmental noise affecting the ... variation leading to a systematic increase or decrease in the fecundity of all pairs of flies that ... can potentially arise due to nonrandom sampling across the.

  17. Computing variational bounds for flow through random aggregates of Spheres

    International Nuclear Information System (INIS)

    Berryman, J.G.

    1983-01-01

    Known formulas for variational bounds on Darcy's constant for slow flow through porous media depend on two-point and three-point spatial correlation functions. Certain bounds due to Prager and Doi depending only on two-point correlation functions have been calculated for the first time for random aggregates of spheres with packing fractions η up to η = 0.64. Three radial distribution functions for hard spheres were tested for η up to 0.49: (1) the uniform distribution or ''well-stirred approximation,'' (2) the Percus-Yevick approximation, and (3) the semi-empirical distribution of Verlet and Weis. The empirical radial distribution functions of Bennett and Finney were used for packing fractions near the random-close-packing limit (η_RCP ≈ 0.64). An accurate multidimensional Monte Carlo integration method (VEGAS) developed by Lepage was used to compute the required two-point correlation functions. The results show that Doi's bounds are preferred for η ≤ 0.10 while Prager's bounds are preferred for η > 0.10. The ''upper bounds'' computed using the well-stirred approximation actually become negative (which is physically impossible) as η increases, indicating the very limited value of this approximation. The other two choices of radial distribution function give reasonable results for η up to 0.49. However, these bounds do not decrease with η as fast as expected for large η. It is concluded that variational bounds dependent on three-point correlation functions are required to obtain more accurate bounds on Darcy's constant for large η

  18. Detection of the Thickness Variation of a Stainless Steel sample using Pulsed Eddy Current

    International Nuclear Information System (INIS)

    Cheong, Y. M.; Angani, C. S.; Park, D. G.; Jhong, H. K.; Kim, G. D.; Kim, C. G.

    2008-01-01

    The Pulsed Eddy Current (PEC) system has been developed for the detection of thickness variation in stainless steel. The sample was machined into a step configuration, with the thickness varying from 1 mm to 5 mm in steps. A LabView program was developed to display the variation in the amplitude of the detected pulse while scanning the PEC probe along the flat side of the sample. The pickup sensor measures the effective magnetic field at the sample, which is the sum of the incident field and the field reflected by the specimen due to the eddy currents induced in the sample. A Hall sensor is used for detection; using a Hall sensor instead of a coil as the field detector improves detectability and spatial resolution. This technology can be used to detect local wall thinning in the pipelines of nuclear power plants

  19. Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing.

    Science.gov (United States)

    Williamson, Graham R

    2003-11-01

    This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.

  20. Smoothing the redshift distributions of random samples for the baryon acoustic oscillations: applications to the SDSS-III BOSS DR12 and QPM mock samples

    Science.gov (United States)

    Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen

    2017-12-01

    We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z) r_d^fid/r_d from the two-point correlation functions of galaxies in the Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys, such as BOSS, usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial fluctuation signals into the random samples and weakens the BAO signals if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, such improvements would be valuable for future measurements of galaxy clustering.
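    The general idea — fit a smooth model to the binned n(z) and draw random redshifts from its inverse CDF rather than resampling the data directly — can be sketched as below; the Gaussian toy catalogue, the binning, and the polynomial model are illustrative stand-ins for the paper's actual fitting function.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Toy "measured" redshifts standing in for the galaxy catalogue.
    data_z = rng.normal(0.5, 0.1, size=20_000).clip(0.2, 0.8)

    # Fit a smooth curve to the binned n(z) so that shot noise in the data
    # is not imprinted on the random catalogue.
    counts, edges = np.histogram(data_z, bins=40)
    centers = 0.5 * (edges[:-1] + edges[1:])
    coeffs = np.polyfit(centers, counts, deg=6)          # smooth model of n(z)
    smooth = np.clip(np.polyval(coeffs, centers), 1e-12, None)

    # Draw random redshifts by inverting the CDF of the smooth model.
    cdf = np.cumsum(smooth) / smooth.sum()
    random_z = np.interp(rng.random(100_000), cdf, centers)
    print(random_z.mean(), data_z.mean())
    ```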

  1. Random matrix approach to the dynamics of stock inventory variations

    International Nuclear Information System (INIS)

    Zhou Weixing; Mu Guohua; Kertész, János

    2012-01-01

    It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors that contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficient C_ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ_1 and λ_2) of the correlation matrix cannot be explained by random matrix theory and the projections of investors' inventory variations on the first eigenvector u(λ_1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients C_VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets. (paper)

  2. Random matrix approach to the dynamics of stock inventory variations

    Science.gov (United States)

    Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János

    2012-09-01

    It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors that contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficient C_ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ_1 and λ_2) of the correlation matrix cannot be explained by random matrix theory and the projections of investors' inventory variations on the first eigenvector u(λ_1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients C_VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets.
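    The random-matrix baseline used in such comparisons can be sketched as follows: for i.i.d. data, the eigenvalues of the sample correlation matrix should fall within the Marchenko-Pastur bounds, so eigenvalues above the upper bound (like λ_1 and λ_2 here) indicate genuine structure. The dimensions below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    N, T = 100, 500  # number of investors, number of trading days
    data = rng.standard_normal((T, N))  # i.i.d. surrogate for inventory variations

    C = np.corrcoef(data, rowvar=False)
    eigvals = np.linalg.eigvalsh(C)

    # Marchenko-Pastur upper bound for a purely random correlation matrix:
    q = N / T
    lam_max = (1 + np.sqrt(q)) ** 2
    print(eigvals.max(), lam_max)  # eigenvalues above lam_max signal real structure
    ```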

  3. Control charts for location based on different sampling schemes

    NARCIS (Netherlands)

    Mehmood, R.; Riaz, M.; Does, R.J.M.M.

    2013-01-01

    Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set sampling ...

  4. A novel quantitative approach for eliminating sample-to-sample variation using a hue saturation value analysis program.

    Science.gov (United States)

    Yabusaki, Katsumi; Faits, Tyler; McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena

    2014-01-01

    As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Here we present novel image analysis software, based on the hue saturation value color space, to be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both the red green blue color space and the hue saturation value color space. We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation.
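    A minimal sketch of hue-saturation-value thresholding of this kind is given below; the random toy image, the hue band, and the saturation cut-off are assumptions for illustration, not the paper's calibrated values.

    ```python
    import numpy as np
    from matplotlib.colors import rgb_to_hsv

    rng = np.random.default_rng(5)

    # Toy stand-in for a scanned stained section: an RGB image in [0, 1].
    image = rng.random((512, 512, 3))

    hsv = rgb_to_hsv(image)
    hue, sat = hsv[..., 0], hsv[..., 1]

    # Count pixels whose hue falls in a stain-specific band; reddish hues
    # wrap around 0, and a saturation floor excludes near-grey pixels.
    stain_mask = ((hue > 0.9) | (hue < 0.05)) & (sat > 0.2)
    print(f"stained area fraction: {stain_mask.mean():.3f}")
    ```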

  5. Standardised Resting Time Prior to Blood Sampling and Diurnal Variation Associated with Risk of Patient Misclassification

    DEFF Research Database (Denmark)

    Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry

    2015-01-01

    BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation and, in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening… RESULTS: …9×10^-7) and sodium (p = 8.7×10^-16). Only TSH and albumin were clinically significantly influenced by diurnal variation. Resting time had no clinically significant effect. CONCLUSIONS: We found no need for resting 15 minutes prior to blood sampling. However, diurnal variation was found to have a significant…

  6. Magnitude of 14C/12C variations based on archaeological samples

    International Nuclear Information System (INIS)

    Kusumgar, S.; Agrawal, D.P.

    1977-01-01

    The magnitude of 14C/12C variations in the periods A.D. 500 to 200 B.C. and 370 B.C. to 2900 B.C. is discussed. The 14C dates of well-dated archaeological samples from India and Egypt do not show any significant divergence from the historical ages. On the other hand, the corrections based on dendrochronological samples show marked deviations for the same time period. A plea is, therefore, made to study old tree samples from Anatolia and Irish bogs and archaeological samples from west Asia to arrive at a more realistic calibration curve. (author)

  7. Variation in the diversity and richness of parasitoid wasps based on sampling effort.

    Science.gov (United States)

    Saunders, Thomas E; Ward, Darren F

    2018-01-01

    Parasitoid wasps are a mega-diverse, ecologically dominant, but poorly studied component of global biodiversity. In order to maximise the efficiency and reduce the cost of their collection, the application of optimal sampling techniques is necessary. Two sites in Auckland, New Zealand were sampled intensively to determine the relationship between sampling effort and observed species richness of parasitoid wasps from the family Ichneumonidae. Twenty traps were deployed at each site at three different times over the austral summer period, resulting in a total sampling effort of 840 Malaise-trap-days. Rarefaction techniques and non-parametric estimators were used to predict species richness and to evaluate the variation and completeness of sampling. Despite an intensive Malaise-trapping regime over the summer period, no asymptote of species richness was reached. At best, sampling captured two-thirds of parasitoid wasp species present. The estimated total number of species present depended on the month of sampling and the statistical estimator used. Consequently, the use of fewer traps would have caught only a small proportion of all species (one trap 7-21%; two traps 13-32%), and many traps contributed little to the overall number of individuals caught. However, variation in the catch of individual Malaise traps was not explained by seasonal turnover of species, vegetation or environmental conditions surrounding the trap, or distance of traps to one another. Overall the results demonstrate that even with an intense sampling effort the community is incompletely sampled. The use of only a few traps and/or for very short periods severely limits the estimates of richness because (i) fewer individuals are caught leading to a greater number of singletons; and (ii) the considerable variation of individual traps means some traps will contribute few or no individuals. Understanding how sampling effort affects the richness and diversity of parasitoid wasps is a useful

  8. Randomized branch sampling to estimate fruit production in Pecan trees cv. ‘Barton’

    Directory of Open Access Journals (Sweden)

    Filemom Manoel Mokochinski

    Full Text Available ABSTRACT: Sampling techniques to quantify the production of fruits are still very scarce and create a gap in crop development research. This study was conducted on a rural property in the county of Cachoeira do Sul - RS to estimate the efficiency of randomized branch sampling (RBS in quantifying the production of pecan fruit at three different ages (5, 7 and 10 years. Two selection techniques were tested: the probability proportional to the diameter (PPD and the uniform probability (UP techniques, which were performed on nine trees, three from each age and randomly chosen. The RBS underestimated fruit production for all ages, and its main drawback was the high sampling error (125.17% for PPD and 111.04% for UP. The UP was regarded as more efficient than the PPD, though both techniques estimated similar production and similar experimental errors. In conclusion, we report that branch sampling was inaccurate for this case study, requiring new studies to produce estimates with smaller sampling errors.

  9. Variational Infinite Hidden Conditional Random Fields

    NARCIS (Netherlands)

    Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja; Ghahramani, Zoubin

    2015-01-01

    Hidden conditional random fields (HCRFs) are discriminative latent variable models which have been shown to successfully learn the hidden structure of a given classification problem. An Infinite hidden conditional random field is a hidden conditional random field with a countably infinite number of

  10. Random Variation in Student Performance by Class Size: Implications of NCLB in Rural Pennsylvania

    Science.gov (United States)

    Goetz, Stephan J.

    2005-01-01

    Schools that fail to make "adequate yearly progress" under NCLB face sanctions and may lose students to other schools. In smaller schools, random yearly variation in innate student ability and behavior can cause changes in scores that are beyond the influence of teachers. This study examines changes in reading and math scores across…

  11. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment (2010-07-01): § 761.308 Sample selection by random number generation on any two-dimensional square grid (cross-referenced from § 761.79(b)(3)). … For the area created in accordance with paragraph (a) of this section, select two random numbers: one each for …
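    The sampling step the regulation describes — drawing one random number per axis to locate a point on a square area — reduces to a few lines; the side length below is a placeholder, and the regulation's own grid-construction and rounding rules are not reproduced.

    ```python
    import random

    def select_point(side_cm=100.0, seed=None):
        """Locate a sampling point on a square area by drawing two random
        numbers, one per axis, as the regulation describes."""
        rng = random.Random(seed)
        return rng.uniform(0.0, side_cm), rng.uniform(0.0, side_cm)

    print(select_point(seed=761))
    ```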

  12. Application of random effects to the study of resource selection by animals.

    Science.gov (United States)

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions
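    A toy simulation (not from the paper) of situation (a) above — discrepancies in sample sizes among individuals — shows why pooling observations can mislead when animals differ in selection; the population mean and between-animal spread below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # 20 animals, each with its own selection coefficient drawn around a
    # population mean of 1.0 -- the "random coefficient" discussed above.
    n_animals, true_mean, sd_between = 20, 1.0, 0.5
    beta = rng.normal(true_mean, sd_between, size=n_animals)

    # Unbalanced telemetry: some animals contribute far more locations.
    n_obs = rng.integers(10, 500, size=n_animals)

    # Pooling weights each *observation* equally, so heavily monitored animals
    # dominate; an animal-level (random-effects-style) mean weights animals.
    pooled = np.repeat(beta, n_obs).mean()
    print(f"pooled: {pooled:.3f}  animal-level mean: {beta.mean():.3f}")
    ```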

  13. Monitoring oil persistence on beaches : SCAT versus stratified random sampling designs

    International Nuclear Information System (INIS)

    Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.

    2003-01-01

    In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not offer estimates of contaminated beach area or stranded oil volumes. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually-evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters impact statistical outcomes. Power analysis based on the study results indicates that optimum power is derived when unnecessary stratification is avoided. It is emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling

  14. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), which may affect our inferences about population structure and abundance. I conclude with a discussion of ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  15. Variation: Use It or Misuse It--Replication and Its Variants

    Science.gov (United States)

    Drummond, Gordon B.; Vowler, Sarah L.

    2012-01-01

    In this article, the authors talk about variation and how variation between measurements may be reduced if sampling is not random. They also talk about replication and its variants. A replicate is a repeated measurement from the same experimental unit. An experimental unit is the smallest part of an experiment or a study that can be subject to a…

  16. A simple and efficient alternative to implementing systematic random sampling in stereological designs without a motorized microscope stage.

    Science.gov (United States)

    Melvin, Neal R; Poda, Daniel; Sutherland, Robert J

    2007-10-01

    When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest, and sampling relevant objects at sites that are placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy, and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative to accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
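    The underlying sampling rule — a random start within one interval, then equidistant steps — is easy to state in code; the extent and interval below are placeholders, and this sketch replaces only the stepping logic of the motorized stage, not the stereological counting itself.

    ```python
    import random

    def systematic_sites(extent_um, interval_um, seed=None):
        """Systematic random sampling: a random start within one interval,
        then sites at pre-determined, equidistant steps."""
        rng = random.Random(seed)
        position = rng.uniform(0.0, interval_um)  # random start
        sites = []
        while position < extent_um:
            sites.append(round(position, 1))
            position += interval_um
        return sites

    print(systematic_sites(extent_um=1000.0, interval_um=150.0, seed=3))
    ```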

  17. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.

  18. A random sampling approach for robust estimation of tissue-to-plasma ratio from extremely sparse data.

    Science.gov (United States)

    Chu, Hui-May; Ette, Ene I

    2005-09-02

    This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as was the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
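    The paper's 2-phase algorithm is not reproduced here, but the general flavor of sampling-based AUC-ratio estimation from one paired (plasma, tissue) observation per subject can be sketched as follows; the time grid, subject counts, and synthetic concentrations are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    times = np.array([1.0, 2.0, 4.0, 8.0])
    n_subj = 6  # subjects sampled at each time point (one pair each)

    # Synthetic paired (plasma, tissue) concentrations, one pair per subject.
    plasma = {t: rng.lognormal(np.log(10.0 / t), 0.3, size=n_subj) for t in times}
    tissue = {t: rng.lognormal(np.log(25.0 / t), 0.3, size=n_subj) for t in times}

    def auc(profile):
        return np.trapz(profile, times)  # trapezoidal area under the curve

    ratios = []
    for _ in range(2000):
        # Draw one subject per time point, keeping the plasma/tissue pairing.
        idx = rng.integers(0, n_subj, size=times.size)
        p = np.array([plasma[t][i] for t, i in zip(times, idx)])
        q = np.array([tissue[t][i] for t, i in zip(times, idx)])
        ratios.append(auc(q) / auc(p))

    ratios = np.array(ratios)
    print(np.median(ratios), np.percentile(ratios, [2.5, 97.5]))
    ```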

  19. Adaptive filtration of speech signals in the presence of correlated noise with random variation of probabilistic characteristics

    OpenAIRE

    M. O. Partala; S. Ya. Zhuk

    2007-01-01

    On the basis of a mixed Markov process in discrete time, optimal and quasi-optimal algorithms are designed for the adaptive filtering of speech signals in the presence of correlated noise with random variation of its probabilistic characteristics.

  20. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    Science.gov (United States)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
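    As a one-dimensional toy version (the paper's 3-D formulation and randomized generalized singular value decomposition are beyond a sketch), the iteratively reweighted least-squares idea for total variation can be illustrated as follows: the TV term is reweighted by 1/(|Dx| + ε) so that each quadratic subproblem approximates the L1 penalty and sharp discontinuities are preserved.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Piecewise-constant model with a sharp discontinuity, plus noise.
    x_true = np.concatenate([np.zeros(50), np.ones(50)])
    b = x_true + 0.1 * rng.standard_normal(x_true.size)

    n = b.size
    D = np.diff(np.eye(n), axis=0)   # (n-1) x n finite-difference operator
    lam, eps = 1.0, 1e-6

    x = b.copy()
    for _ in range(30):
        # Reweight the TV term by 1/(|Dx| + eps); each pass is a linear solve.
        w = 1.0 / (np.abs(D @ x) + eps)
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), b)

    print(np.abs(x - x_true).mean())  # small residual; the jump stays sharp
    ```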

  1. A modern theory of random variation with applications in stochastic calculus, financial mathematics, and Feynman integration

    CERN Document Server

    Muldowney, Patrick

    2012-01-01

    A Modern Theory of Random Variation is a new and radical re-formulation of the mathematical underpinnings of subjects as diverse as investment, communication engineering, and quantum mechanics. Setting aside the classical theory of probability measure spaces, the book utilizes a mathematically rigorous version of the theory of random variation that bases itself exclusively on finitely additive probability distribution functions. In place of twentieth century Lebesgue integration and measure theory, the author uses the simpler concept of Riemann sums, and the non-absolute Riemann-type integration of Henstock. Readers are supplied with an accessible approach to standard elements of probability theory such as the central limmit theorem and Brownian motion as well as remarkable, new results on Feynman diagrams and stochastic integrals. Throughout the book, detailed numerical demonstrations accompany the discussions of abstract mathematical theory, from the simplest elements of the subject to the most complex. I...

  2. Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)

    2015-01-15

    Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
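    The method of transformation of correlation coefficients mentioned above can be sketched for the bivariate case: the target lognormal means, relative uncertainties, and correlation are converted exactly to the parameters of the underlying normal distribution before sampling. The numerical values below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    # Target lognormal parameters: means, relative uncertainties, correlation.
    mean = np.array([1.0, 2.0])
    rel_unc = np.array([0.3, 0.5])   # coefficients of variation
    corr_ln = 0.6                    # desired correlation of the lognormals

    # Parameters of the underlying normal distribution.
    sigma2 = np.log(1.0 + rel_unc**2)
    mu = np.log(mean) - 0.5 * sigma2
    # Exact correlation-coefficient transformation for lognormals:
    s = np.sqrt(sigma2[0] * sigma2[1])
    corr_n = np.log(1.0 + corr_ln * rel_unc[0] * rel_unc[1]) / s

    cov_n = np.array([[sigma2[0], corr_n * s],
                      [corr_n * s, sigma2[1]]])
    samples = np.exp(rng.multivariate_normal(mu, cov_n, size=100_000))
    print(samples.mean(axis=0), np.corrcoef(samples.T)[0, 1])
    ```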

  3. Seismic random noise attenuation using shearlet and total generalized variation

    International Nuclear Information System (INIS)

    Kong, Dehui; Peng, Zhenming

    2015-01-01

    Seismic denoising from a corrupted observation is an important part of seismic data processing which improves the signal-to-noise ratio (SNR) and resolution. In this paper, we present an effective denoising method to attenuate seismic random noise. The method takes advantage of shearlet and total generalized variation (TGV) regularization. Different regularity levels of TGV improve the quality of the final result by suppressing Gibbs artifacts caused by the shearlet. The problem is formulated as mixed constraints in a convex optimization. A Bregman algorithm is proposed to solve the proposed model. Extensive experiments based on one synthetic datum and two post-stack field data are done to compare performance. The results demonstrate that the proposed method provides superior effectiveness and preserves the structure better. (paper)

  4. Seismic random noise attenuation using shearlet and total generalized variation

    Science.gov (United States)

    Kong, Dehui; Peng, Zhenming

    2015-12-01

    Seismic denoising from a corrupted observation is an important part of seismic data processing which improves the signal-to-noise ratio (SNR) and resolution. In this paper, we present an effective denoising method to attenuate seismic random noise. The method takes advantage of shearlet and total generalized variation (TGV) regularization. Different regularity levels of TGV improve the quality of the final result by suppressing Gibbs artifacts caused by the shearlet. The problem is formulated as mixed constraints in a convex optimization. A Bregman algorithm is proposed to solve the proposed model. Extensive experiments based on one synthetic datum and two post-stack field data are done to compare performance. The results demonstrate that the proposed method provides superior effectiveness and preserves the structure better.

  5. Random Positional Variation Among the Skull, Mandible, and Cervical Spine With Treatment Progression During Head-and-Neck Radiotherapy

    International Nuclear Information System (INIS)

    Ahn, Peter H.; Ahn, Andrew I.; Lee, C. Joe; Shen Jin; Miller, Ekeni; Lukaj, Alex; Milan, Elissa; Yaparpalvi, Ravindra; Kalnicki, Shalom; Garg, Madhur K.

    2009-01-01

    Purpose: With 54 degrees of freedom from the skull to mandible to C7, ensuring adequate immobilization for head-and-neck radiotherapy (RT) is complex. We quantify variations in skull, mandible, and cervical spine movement between RT sessions. Methods and Materials: Twenty-three sequential head-and-neck RT patients underwent serial computed tomography. Patients underwent planned rescanning at 11, 22, and 33 fractions for a total of 93 scans. Coordinates of multiple bony elements of the skull, mandible, and cervical spine were used to calculate rotational and translational changes of bony anatomy compared with the original planning scan. Results: Mean translational and rotational variations on rescanning were negligible, but showed a wide range. Changes in scoliosis and lordosis of the cervical spine between fractions showed similar variability. There was no correlation between positional variation and fraction number and no strong correlation with weight loss or skin separation. Semi-independent rotational and translational movement of the skull in relation to the lower cervical spine was shown. Positioning variability measured by means of vector displacement was largest in the mandible and lower cervical spine. Conclusions: Although only small overall variations in position between head-and-neck RT sessions exist on average, there is significant random variation in patient positioning of the skull, mandible, and cervical spine elements. Such variation is accentuated in the mandible and lower cervical spine. These random semirigid variations in positioning of the skull and spine point to a need for improved immobilization and/or confirmation of patient positioning in RT of the head and neck.

  6. Random Positional Variation Among the Skull, Mandible, and Cervical Spine With Treatment Progression During Head-and-Neck Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Ahn, Peter H. [Department of Radiation Oncology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, NY (United States)], E-mail: phahn@mdanderson.org; Ahn, Andrew I [Albert Einstein College of Medicine of Yeshiva University, Bronx, NY (United States); Lee, C Joe; Jin, Shen; Miller, Ekeni; Lukaj, Alex; Milan, Elissa; Yaparpalvi, Ravindra; Kalnicki, Shalom; Garg, Madhur K [Department of Radiation Oncology, Montefiore Medical Center and Albert Einstein College of Medicine, Bronx, NY (United States)

    2009-02-01

    Purpose: With 54 degrees of freedom from the skull to mandible to C7, ensuring adequate immobilization for head-and-neck radiotherapy (RT) is complex. We quantify variations in skull, mandible, and cervical spine movement between RT sessions. Methods and Materials: Twenty-three sequential head-and-neck RT patients underwent serial computed tomography. Patients underwent planned rescanning at 11, 22, and 33 fractions for a total of 93 scans. Coordinates of multiple bony elements of the skull, mandible, and cervical spine were used to calculate rotational and translational changes of bony anatomy compared with the original planning scan. Results: Mean translational and rotational variations on rescanning were negligible, but showed a wide range. Changes in scoliosis and lordosis of the cervical spine between fractions showed similar variability. There was no correlation between positional variation and fraction number and no strong correlation with weight loss or skin separation. Semi-independent rotational and translational movement of the skull in relation to the lower cervical spine was shown. Positioning variability measured by means of vector displacement was largest in the mandible and lower cervical spine. Conclusions: Although only small overall variations in position between head-and-neck RT sessions exist on average, there is significant random variation in patient positioning of the skull, mandible, and cervical spine elements. Such variation is accentuated in the mandible and lower cervical spine. These random semirigid variations in positioning of the skull and spine point to a need for improved immobilization and/or confirmation of patient positioning in RT of the head and neck.

  7. The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.

    Science.gov (United States)

    Rodgers, J L

    1999-10-01

    A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
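    The taxonomy's distinctions — with versus without replacement, whole sample versus subset — can be made concrete in a few lines; the normal toy data and group sizes are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    x = rng.normal(0.0, 1.0, size=30)   # one observed sample

    # Bootstrap: resample the WHOLE sample WITH replacement.
    boot = [rng.choice(x, size=x.size, replace=True).mean() for _ in range(2000)]

    # Jackknife: a SUBSET (leave-one-out) WITHOUT replacement.
    jack = [np.delete(x, i).mean() for i in range(x.size)]

    # Randomization test: permute group labels to build a null distribution
    # for a two-group mean difference.
    y = rng.normal(0.5, 1.0, size=30)
    pooled = np.concatenate([x, y])
    observed = y.mean() - x.mean()
    null = []
    for _ in range(2000):
        perm = rng.permutation(pooled)
        null.append(perm[30:].mean() - perm[:30].mean())
    p_value = np.mean(np.abs(null) >= abs(observed))
    print(np.std(boot), np.std(jack), p_value)
    ```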

  8. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality

    Science.gov (United States)

    Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in

  9. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality.

    Science.gov (United States)

    L'Engle, Kelly; Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association for Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys. Random digit dialing of mobile

  10. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality.

    Directory of Open Access Journals (Sweden)

    Kelly L'Engle

    Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association for Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in comparison to household surveys. Random digit
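
    For readers unfamiliar with how such outcome rates are computed, here is a simplified sketch of AAPOR-style rate arithmetic. The disposition counts other than completes and partials are invented, and the real AAPOR formulas include an eligibility adjustment for unknown cases that is omitted here:

        # Simplified AAPOR-style outcome rates (illustrative counts only).
        complete, partial = 9469, 3547
        refusal, non_contact, other = 1100, 15000, 2000  # hypothetical eligible cases

        eligible = complete + partial + refusal + non_contact + other

        response_rate = complete / eligible                           # ~AAPOR RR1
        cooperation_rate = complete / (complete + partial + refusal)  # ~AAPOR COOP1
        refusal_rate = refusal / eligible
        contact_rate = (complete + partial + refusal) / eligible

        print(f"response {response_rate:.0%}, cooperation {cooperation_rate:.0%}, "
              f"refusal {refusal_rate:.0%}, contact {contact_rate:.0%}")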

  11. Neutron monitor generated data distributions in quantum variational Monte Carlo

    Science.gov (United States)

    Kussainov, A. S.; Pya, N.

    2016-08-01

    We have assessed the potential applications of the neutron monitor hardware as a random number generator for normal and uniform distributions. The data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of the uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is of importance and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator, or create a buffer.
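
    A minimal sketch of the described pipeline, spline detrending, standardization, and a probability-integral-transform route to uniforms, using synthetic counts in place of real monitor data:

        import numpy as np
        from scipy.interpolate import UnivariateSpline
        from scipy.stats import norm, kstest

        rng = np.random.default_rng(0)
        t = np.arange(1440.0)                      # one day of 1-minute counts
        counts = 6000 + 50 * np.sin(2 * np.pi * t / 1440) + rng.normal(0, 25, t.size)

        trend = UnivariateSpline(t, counts, s=t.size * 25**2)(t)  # smooth spline fit
        resid = counts - trend                     # extracted stochastic component
        z = (resid - resid.mean()) / resid.std(ddof=1)  # standard normal variate

        print(kstest(z, "norm"))   # normality check
        u = norm.cdf(z)            # map to (0, 1): a source of uniform numbers
        print(kstest(u, "uniform"))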

  12. Modeling stimulus variation in three common implicit attitude tasks.

    Science.gov (United States)

    Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M

    2017-08-01

    We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures (the Implicit Association Test, the affect misattribution procedure, and the evaluative priming task) and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60%) inflated relative to the more appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.
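
    Crossed random effects for participants and stimuli can also be fit in Python; a minimal sketch with statsmodels, treating both factors as variance components within a single constant group (synthetic reaction-time data; the paper's appendix covers R, SAS, and SPSS):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_subj, n_stim = 40, 20
        df = pd.DataFrame([(s, w, c) for s in range(n_subj)
                           for w in range(n_stim) for c in (0, 1)],
                          columns=["subject", "stimulus", "condition"])
        subj_re = rng.normal(0, 30, n_subj)
        stim_re = rng.normal(0, 20, n_stim)
        df["rt"] = (600 + 25 * df.condition + subj_re[df.subject]
                    + stim_re[df.stimulus] + rng.normal(0, 50, len(df)))

        df["all"] = 1  # one group, so both factors enter as crossed components
        vc = {"subject": "0 + C(subject)", "stimulus": "0 + C(stimulus)"}
        fit = smf.mixedlm("rt ~ condition", df, groups="all",
                          vc_formula=vc, re_formula="0").fit()
        print(fit.summary())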

  13. Phenotypic variation in California populations of valley oak (Quercus lobata Née) sampled along elevational gradients

    Science.gov (United States)

    Ana L. Albarrán-Lara; Jessica W. Wright; Paul F. Gugger; Annette Delfino-Mix; Juan Manuel Peñaloza-Ramírez; Victoria L. Sork

    2015-01-01

    California oaks exhibit tremendous phenotypic variation throughout their range. This variation reflects phenotypic plasticity in tree response to local environmental conditions as well as genetic differences underlying those phenotypes. In this study, we analyze phenotypic variation in leaf traits for valley oak adults sampled along three elevational transects and in...

  14. Occupational position and its relation to mental distress in a random sample of Danish residents

    DEFF Research Database (Denmark)

    Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D

    2010-01-01

    PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self...

  15. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of the information essential for replicating sample size calculations, as well as on the accuracy of those calculations. We examined the quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with higher impact factors. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
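
    The three reported ingredients (significance level, power, and minimum clinically important effect) determine the sample size through the standard two-group formula; a minimal sketch with illustrative inputs:

        from scipy.stats import norm

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            """Approximate n per arm to detect a mean difference delta."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 2 * (z * sd / delta) ** 2

        # e.g. detect a 5-point difference, SD 12, 5% two-sided alpha, 80% power
        print(round(n_per_group(delta=5, sd=12)))   # about 90 per group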

  16. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2015-01-01

    This paper considers the problem of estimating binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing response bias while eliciting information on sensitive attributes. In many sensitive-question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
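
    As an illustration of the randomized response idea itself (Warner's classic single-occasion design, not the successive-sampling estimator developed in the paper), a minimal sketch:

        import numpy as np

        rng = np.random.default_rng(7)
        n, true_pi, p = 2000, 0.15, 0.7   # sample size, true proportion, design prob.

        sensitive = rng.random(n) < true_pi       # latent truth, never observed directly
        direct = rng.random(n) < p                # with prob p ask the sensitive question
        answers = np.where(direct, sensitive, ~sensitive)  # else ask its complement

        lam = answers.mean()                      # observed proportion of "yes"
        pi_hat = (lam - (1 - p)) / (2 * p - 1)    # Warner (1965) estimator
        se = np.sqrt(lam * (1 - lam) / n) / abs(2 * p - 1)
        print(f"estimate {pi_hat:.3f} (true {true_pi}), SE {se:.3f}")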

  17. Novel Complete Probabilistic Models of Random Variation in High Frequency Performance of Nanoscale MOSFET

    Directory of Open Access Journals (Sweden)

    Rawid Banchuin

    2013-01-01

    Novel probabilistic models of the random variations in nanoscale MOSFETs' high-frequency performance, defined in terms of gate capacitance and transition frequency, are proposed. As the transition frequency variation has also been considered, the proposed models are complete, unlike their predecessors, which take only the gate capacitance variation into account. The proposed models are both analytic and physical-level oriented, being precise mathematical expressions in terms of physical parameters. Since an up-to-date model of the variation in MOSFET characteristics induced by physical-level fluctuation has been used, the part of the proposed models covering gate capacitance is more accurate and physical-level oriented than its predecessor. The proposed models have been verified on the 65 nm CMOS technology by using Monte Carlo SPICE simulations of benchmark circuits and Kolmogorov-Smirnov tests, and shown to be highly accurate, fitting the Monte-Carlo-based analysis results with 99% confidence. Hence, these novel models are versatile for the statistical/variability-aware analysis and design of nanoscale MOSFET-based analog/mixed-signal circuits and systems.

  18. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  19. Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.

    Science.gov (United States)

    You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary

    2011-02-01

    The statistical power of cluster randomized trials depends on two sample size components: the number of clusters per group and the number of individuals within clusters (cluster size). Variable cluster sizes are common, and this variation alone may have a significant impact on study power. Previous approaches have taken this into account either by adjusting the total sample size using a designated design effect or by adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for a trial with unequal cluster sizes to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
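
    The two-step adjustment can be sketched with a standard coefficient-of-variation approximation to the relative efficiency (the paper's noncentrality-based measure differs from this under some conditions; all numbers illustrative):

        from scipy.stats import norm

        def n_per_arm_equal(delta, sd, m, icc, alpha=0.05, power=0.80):
            """n per arm with equal cluster size m, via the usual design effect."""
            z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
            return 2 * (z * sd / delta) ** 2 * (1 + (m - 1) * icc)

        def mean_size_unequal(m, icc, cv):
            """Mean cluster size giving similar power when sizes vary (CV = cv)."""
            lam = m * icc / (1 + (m - 1) * icc)
            re = 1 - cv**2 * lam * (1 - lam)   # CV-based relative efficiency
            return m / re                      # clusters fixed, mean size enlarged

        print("per-arm n, equal clusters of 20:",
              round(n_per_arm_equal(delta=0.3, sd=1.0, m=20, icc=0.05)))
        print("required mean size at CV = 0.6:",
              round(mean_size_unequal(20, 0.05, 0.6), 1))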

  20. Adaptive importance sampling of random walks on continuous state spaces

    International Nuclear Information System (INIS)

    Baggerly, K.; Cox, D.; Picard, R.

    1998-01-01

    The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material
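
    For orientation, a minimal sketch of plain (non-adaptive) importance sampling, estimating a small tail probability by sampling from a shifted proposal and reweighting; the adaptive schemes studied in the paper iterate toward the zero-variance proposal:

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(3)
        n, a = 100_000, 4.0                  # estimate P(Z > 4), Z ~ N(0, 1)

        naive = (rng.standard_normal(n) > a).mean()   # almost never hits the tail

        x = rng.normal(a, 1.0, n)                     # proposal centered in the tail
        w = norm.pdf(x) / norm.pdf(x, loc=a)          # density-ratio weights
        is_est = np.mean((x > a) * w)

        print(f"naive {naive:.2e}, importance {is_est:.2e}, exact {norm.sf(a):.2e}")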

  1. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10,000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. The true variance for the survey sample mean was computed from the variance of 10,000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two estimators that corrected for inter-transect correlation (v₈ and v_W) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (v₂ and v₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
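
    The headline comparison is easy to reproduce in miniature: simulate a clustered point population, then survey it with a one-start aligned grid versus the same number of randomly placed transects (a rough sketch with invented parameters; transects approximated by grid cells):

        import numpy as np

        rng = np.random.default_rng(5)

        def clustered_counts(grid=40, parents=12, kids=60, spread=1.2):
            """Counts per unit cell for a Matern-like clustered population."""
            px = rng.uniform(0, grid, parents)
            py = rng.uniform(0, grid, parents)
            x = np.concatenate([np.clip(rng.normal(c, spread, kids), 0, grid - 1e-9)
                                for c in px])
            y = np.concatenate([np.clip(rng.normal(c, spread, kids), 0, grid - 1e-9)
                                for c in py])
            h, _, _ = np.histogram2d(x, y, bins=grid, range=[[0, grid], [0, grid]])
            return h

        sys_means, srs_means = [], []
        for _ in range(2000):
            c = clustered_counts()
            sys_means.append(c[2::4, 2::4].mean())        # aligned 10 x 10 grid
            idx = rng.choice(c.size, 100, replace=False)  # 100 random cells
            srs_means.append(c.ravel()[idx].mean())

        print("var(systematic) / var(random):",
              np.var(sys_means) / np.var(srs_means))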

  2. Brief communication: Is variation in the cranial capacity of the Dmanisi sample too high to be from a single species?

    Science.gov (United States)

    Lee, Sang-Hee

    2005-07-01

    This study uses data resampling to test the null hypothesis that the degree of variation in the cranial capacity of the Dmanisi hominid sample is within the range variation of a single species. The statistical significance of the variation in the Dmanisi sample is examined using simulated distributions based on comparative samples of modern humans, chimpanzees, and gorillas. Results show that it is unlikely to find the maximum difference observed in the Dmanisi sample in distributions of female-female pairs from comparative single-species samples. Given that two sexes are represented, the difference in the Dmanisi sample is not enough to reject the null hypothesis of a single species. Results of this study suggest no compelling reason to invoke multiple taxa to explain variation in the cranial capacity of the Dmanisi hominids. (c) 2004 Wiley-Liss, Inc

  3. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin.

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-02-02

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.

  4. Enhanced, targeted sampling of high-dimensional free-energy landscapes using variationally enhanced sampling, with an application to chignolin

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-01-01

    The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments there are still many problems which remain out of reach for these methods which has led to a vigorous effort in this area. One of the most important problems that remains unsolved is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin. PMID:26787868

  5. A Variational Approach to Enhanced Sampling and Free Energy Calculations

    Science.gov (United States)

    Parrinello, Michele

    2015-03-01

    The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free-energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples, which include the determination of a six-dimensional free-energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
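
    The functional in question (Valsson and Parrinello, 2014) can be written, for a bias V(s) over collective variables s, inverse temperature \beta, free energy F(s), and a chosen target distribution p(s), as

        \Omega[V] = \frac{1}{\beta}
          \ln \frac{\int ds\, e^{-\beta [F(s) + V(s)]}}{\int ds\, e^{-\beta F(s)}}
          + \int ds\, p(s)\, V(s),

    which is minimized at V(s) = -F(s) - \frac{1}{\beta} \ln p(s), so the free energy is recovered, up to a constant, from the minimizing bias.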

  6. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast version of the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of a fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
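
    The core trick, replacing repeated analytic sampling with queries into a precomputed table, can be sketched generically (an inverse-CDF lookup table for photon free-path lengths; illustrative parameters, not the TBRS tables themselves):

        import numpy as np

        rng = np.random.default_rng(11)
        mu_t = 10.0                                # total interaction coeff. (1/cm)

        # Precompute once: tabulate the inverse CDF of the free-path distribution.
        u_grid = np.linspace(0.0, 0.999, 4096)
        path_table = -np.log(1.0 - u_grid) / mu_t

        def sample_paths(n):
            """Random table query instead of evaluating log() per photon."""
            return path_table[rng.integers(0, u_grid.size, n)]

        s = sample_paths(1_000_000)
        print("mean free path:", s.mean(), "expected ~", 1 / mu_t)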

  7. CNV-RF Is a Random Forest-Based Copy Number Variation Detection Method Using Next-Generation Sequencing.

    Science.gov (United States)

    Onsongo, Getiria; Baughn, Linda B; Bower, Matthew; Henzler, Christine; Schomaker, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat

    2016-11-01

    Simultaneous detection of small copy number variations (CNVs) (<0.5 kb) and single-nucleotide variants in clinically significant genes is of great interest for clinical laboratories. The analytical variability in next-generation sequencing (NGS) and artifacts in coverage data because of issues with mappability along with lack of robust bioinformatics tools for CNV detection have limited the utility of targeted NGS data to identify CNVs. We describe the development and implementation of a bioinformatics algorithm, copy number variation-random forest (CNV-RF), that incorporates a machine learning component to identify CNVs from targeted NGS data. Using CNV-RF, we identified 12 of 13 deletions in samples with known CNVs, two cases with duplications, and identified novel deletions in 22 additional cases. Furthermore, no CNVs were identified among 60 genes in 14 cases with normal copy number and no CNVs were identified in another 104 patients with clinical suspicion of CNVs. All positive deletions and duplications were confirmed using a quantitative PCR method. CNV-RF also detected heterozygous deletions and duplications with a specificity of 50% across 4813 genes. The ability of CNV-RF to detect clinically relevant CNVs with a high degree of sensitivity along with confirmation using a low-cost quantitative PCR method provides a framework for providing comprehensive NGS-based CNV/single-nucleotide variant detection in a clinical molecular diagnostics laboratory. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
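
    As a generic illustration of the machine-learning component only (not the authors' validated pipeline), a random forest can be trained on per-exon coverage features to call copy-number state; the feature names and distributions below are invented:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(13)
        n = 3000
        state = rng.choice([0, 1, 2], n, p=[0.05, 0.05, 0.9])  # del, dup, normal

        # Hypothetical features: log2 coverage ratio vs. reference, GC, mappability.
        log2_ratio = rng.normal(0.0, 0.3, n)
        log2_ratio[state == 0] = rng.normal(-1.0, 0.3, (state == 0).sum())
        log2_ratio[state == 1] = rng.normal(0.58, 0.3, (state == 1).sum())
        gc = rng.uniform(0.3, 0.7, n)
        mappability = rng.beta(8, 2, n)
        X = np.column_stack([log2_ratio, gc, mappability])

        clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                     random_state=0)
        print(cross_val_score(clf, X, state, cv=5, scoring="f1_macro"))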

  8. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Science.gov (United States)

    2010-07-01

    Title 40, Protection of Environment. § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  9. Variation in Incentive Effects across Neighbourhoods

    Directory of Open Access Journals (Sweden)

    Mark J Hanly

    2014-03-01

    Small monetary incentives increase survey cooperation rates; however, evidence suggests that the appeal of incentives may vary across sample subgroups. Fieldwork budgets can be most effectively distributed by targeting those subgroups where incentives will have the strongest appeal. We examine data from a randomised experiment implemented in the pilot phase of the Irish Longitudinal Study of Ageing, which randomly assigned households to receive a higher (€25) or lower (€10) incentive amount. Using a random effects logistic regression model, we observe a variable effect of the higher incentive across geographic neighbourhoods. The higher incentive has the largest impact in neighbourhoods where baseline cooperation is low, as predicted by Leverage-Saliency theory. Auxiliary neighbourhood-level variables are linked to the sample frame to explore this variation further; however, none of these moderate the incentive effect, suggesting that richer information is needed to identify sample subgroups where incentive budgets should be directed.

  10. A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan

    Science.gov (United States)

    Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu

    2012-01-01

    To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…

  11. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
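
    The activity's core loop is short enough to show in full: permute the group labels, recompute F, and read the p-value off the resulting empirical sampling distribution (synthetic three-group data, not from the course in the paper):

        import numpy as np
        from scipy.stats import f_oneway

        rng = np.random.default_rng(2)
        groups = [rng.normal(10, 3, 15), rng.normal(12, 3, 15), rng.normal(11, 3, 15)]

        f_obs = f_oneway(*groups).statistic
        pooled = np.concatenate(groups)
        cuts = np.cumsum([len(g) for g in groups])[:-1]

        f_null = []
        for _ in range(5000):
            rng.shuffle(pooled)
            f_null.append(f_oneway(*np.split(pooled, cuts)).statistic)

        print("randomization p-value:", np.mean(np.array(f_null) >= f_obs))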

  12. Randomness at the root of things 1: Random walks

    Science.gov (United States)

    Ogborn, Jon; Collins, Simon; Brown, Mick

    2003-09-01

    This is the first of a pair of articles about randomness in physics. In this article, we use some variations on the idea of a 'random walk' to consider first the path of a particle in Brownian motion, and then the random variation to be expected in radioactive decay. The arguments are set in the context of the general importance of randomness both in physics and in everyday life. We think that the ideas could usefully form part of students' A-level work on random decay and quantum phenomena, as well as being good for their general education. In the second article we offer a novel and simple approach to Poisson sequences.
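
    Both demonstrations reduce to a few lines of simulation: a symmetric random walk whose typical displacement grows as the square root of the number of steps, and Poisson-scattered decay counts (a classroom-style sketch, not from the article):

        import numpy as np

        rng = np.random.default_rng(4)

        # Random walk: N steps of +/-1 for many walkers.
        N, walkers = 1000, 5000
        final = rng.choice([-1, 1], size=(walkers, N)).sum(axis=1)
        print("RMS displacement:", np.sqrt((final**2).mean()), "vs sqrt(N):", np.sqrt(N))

        # Radioactive decay: counts in a fixed interval scatter as sqrt(mean).
        counts = rng.poisson(100, 5000)
        print("count SD:", counts.std(), "vs sqrt(mean):", np.sqrt(100))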

  13. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani

  14. Random Intercept and Random Slope 2-Level Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2012-11-01

    A random intercept model and a random intercept & random slope model, with two levels of hierarchy in the population, are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teachers' influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy into the model. The fanning movement of the fitted lines reflects the variation of student grades across teachers.
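
    A minimal sketch of both models with statsmodels on synthetic student-within-teacher data ('satisfaction' predicting GPA; all names and effect sizes hypothetical):

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(8)
        teachers, per = 30, 25
        df = pd.DataFrame({"teacher": np.repeat(np.arange(teachers), per),
                           "satisfaction": rng.uniform(1, 5, teachers * per)})
        a = rng.normal(0, 0.4, teachers)    # teacher intercept deviations
        b = rng.normal(0, 0.15, teachers)   # teacher slope deviations
        df["gpa"] = (2.0 + 0.3 * df.satisfaction + a[df.teacher]
                     + b[df.teacher] * df.satisfaction + rng.normal(0, 0.3, len(df)))

        ri = smf.mixedlm("gpa ~ satisfaction", df, groups="teacher").fit()
        rs = smf.mixedlm("gpa ~ satisfaction", df, groups="teacher",
                         re_formula="~satisfaction").fit()   # adds the random slope
        print(ri.summary())
        print(rs.summary())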

  15. Variation-aware advanced CMOS devices and SRAM

    CERN Document Server

    Shin, Changhwan

    2016-01-01

    This book provides a comprehensive overview of contemporary issues in complementary metal-oxide semiconductor (CMOS) device design, describing how to overcome process-induced random variations such as line-edge-roughness, random-dopant-fluctuation, and work-function variation, and the applications of novel CMOS devices to cache memory (or Static Random Access Memory, SRAM). The author places emphasis on the physical understanding of process-induced random variation as well as the introduction of novel CMOS device structures and their application to SRAM. The book outlines the technical predicament facing state-of-the-art CMOS technology development, due to the effect of ever-increasing process-induced random/intrinsic variation in transistor performance at the sub-30-nm technology nodes. Therefore, the physical understanding of process-induced random/intrinsic variations and the technical solutions to address these issues plays a key role in new CMOS technology development. This book aims to provide the reade...

  16. The study of combining Latin Hypercube Sampling method and LU decomposition method (LULHS method) for constructing spatial random field

    Science.gov (United States)

    WANG, P. T.

    2015-12-01

    Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation using LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
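
    A minimal sketch of the LULHS idea: stratified uniforms from a Latin hypercube are mapped to normal scores and then correlated through a triangular factor of the covariance (Cholesky is used below as the triangular decomposition; the 1-D grid and exponential covariance are illustrative):

        import numpy as np
        from scipy.stats import qmc, norm

        n_grid, n_real = 100, 50
        xs = np.arange(n_grid, dtype=float)
        cov = np.exp(-np.abs(xs[:, None] - xs[None, :]) / 15.0)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(n_grid))

        u = qmc.LatinHypercube(d=n_grid, seed=9).random(n_real)  # stratified uniforms
        z = norm.ppf(u)                  # independent standard normal scores
        fields = z @ L.T                 # impose the spatial correlation

        print(fields.shape, "corr(0,1) ~", np.corrcoef(fields[:, 0], fields[:, 1])[0, 1])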

  17. The Dirichlet-Multinomial model for multivariate randomized response data and small samples

    NARCIS (Netherlands)

    Avetisyan, Marianna; Fox, Gerardus J.A.

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The

  18. Accounting for between-study variation in incremental net benefit in value of information methodology.

    Science.gov (United States)

    Willan, Andrew R; Eckermann, Simon

    2012-10-01

    Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and the reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal in situations where current evidence would be considered sufficient under the assumption of no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.

  19. X-ray speckle contrast variation at sample-specific absorption edges

    International Nuclear Information System (INIS)

    Retsch, C. C.; Wang, Y.; Frigo, S. P.; Stephenson, G. B.; McNulty, I.

    2000-01-01

    The authors measured static x-ray speckle contrast variation with the incident photon energy across sample-specific absorption edges. They propose that the variation depends strongly on the spectral response function of the monochromator. Speckle techniques have been introduced to the x-ray regime during recent years. Most of these experiments, however, were done at photon energies above 5 keV. They are working on this technique in the 1 to 4 keV range, an energy range that includes many important x-ray absorption edges, e.g., in Al, Si, P, S, the rare-earths, and others. To their knowledge, the effect of absorption edges on speckle contrast has not yet been studied. In this paper, they present their initial measurements and understanding of the observed phenomena

  20. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    Science.gov (United States)

    Ma, Y. C.; Liu, H. Y.; Yan, S. B.; Yang, Y. H.; Yang, M. W.; Li, J. M.; Tang, J.

    2013-05-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency.

  1. Path integral methods for primordial density perturbations - sampling of constrained Gaussian random fields

    International Nuclear Information System (INIS)

    Bertschinger, E.

    1987-01-01

    Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
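
    The conditioning step at the heart of such algorithms can be sketched with the standard kriging-style correction (often attributed to Hoffman and Ribak): draw an unconstrained realization, then shift it to satisfy the linear constraint exactly (a 1-D field with a single constrained "peak" value; parameters illustrative):

        import numpy as np

        rng = np.random.default_rng(10)
        n = 128
        x = np.arange(n, dtype=float)
        C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 8.0) ** 2)  # covariance
        L = np.linalg.cholesky(C + 1e-8 * np.eye(n))

        A = np.zeros((1, n)); A[0, 64] = 1.0   # constrain the field value at site 64
        d = np.array([3.0])                    # ...to equal 3 (a "peak")

        f = L @ rng.standard_normal(n)               # unconstrained sample
        K = C @ A.T @ np.linalg.inv(A @ C @ A.T)     # kriging weights
        f_c = f + K @ (d - A @ f)                    # constrained sample

        print("value at constrained site:", f_c[64])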

  2. An efficient method of randomly sampling the coherent angular scatter distribution

    International Nuclear Information System (INIS)

    Williamson, J.F.; Morin, R.L.

    1983-01-01

    Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)
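
    The paper's scheme is specific to the coherent form-factor distribution, but the general pattern, drawing a scattering angle from a non-trivial angular density, can be illustrated with simple rejection sampling of the Thomson angular shape (no form factor included; purely illustrative):

        import numpy as np

        rng = np.random.default_rng(6)

        def sample_mu(n):
            """Rejection-sample mu = cos(theta) from p(mu) proportional to 1 + mu^2."""
            out = np.empty(0)
            while out.size < n:
                mu = rng.uniform(-1, 1, 2 * n)
                keep = rng.uniform(0, 2, 2 * n) < (1 + mu**2)  # flat envelope at 2
                out = np.concatenate([out, mu[keep]])
            return out[:n]

        mu = sample_mu(100_000)
        print("mean ~0:", mu.mean(), " E[mu^2] ~0.4:", (mu**2).mean())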

  3. Quantum chemistry by random walk: Higher accuracy

    International Nuclear Information System (INIS)

    Anderson, J.B.

    1980-01-01

    The random walk method of solving the Schroedinger equation is extended to allow the calculation of eigenvalues of atomic and molecular systems with higher accuracy. The combination of direct calculation of the difference δ between a true wave function ψ and a trial wave function ψ₀ with importance sampling greatly reduces systematic and statistical error. The method is illustrated with calculations for ground-state hydrogen and helium atoms using trial wave functions from variational calculations. The energies obtained are 20 to 100 times more accurate than those of the corresponding variational calculations.

  4. Random vs. systematic sampling from administrative databases involving human subjects.

    Science.gov (United States)

    Hagino, C; Lo, R J

    1998-09-01

    Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes n (50, 100, 150, 200, 250, 300, 500, 800). For the four known factors [gender, average age, number (%) of chiropractors in each province, and years in practice], between- and within-method χ² tests and unpaired t-tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yield acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.

  5. Genetic variation of seedling traits in a random mating population of sunflower

    International Nuclear Information System (INIS)

    Habib, S.

    2004-01-01

    Forty S₁ families obtained from a random mating population of sunflower were evaluated in the laboratory for various seedling traits. The objectives of this study were to investigate the extent and nature of genetic variability and to determine the estimates of genotypic and phenotypic correlations among ten seedling traits prevailing in a random mating population of sunflower. The results indicated that significant differences existed among the 40 S₁ families for all the traits evaluated. Genotypic and phenotypic coefficients of variation were comparatively high for emergence rate index, root/shoot ratio, dry root weight, fresh root weight and fresh shoot weight. The estimates of broad-sense heritability were high and significant for all the traits. The study of genotypic and phenotypic correlations among these traits revealed that generally, the seedlings which took more time to emerge were vigorous for most of the traits except fresh shoot length. However, rapidly emerging seedlings had higher emergence percentage. The root traits appeared to be better indicators of seedling vigour compared to other traits as these traits exhibited strong and positive genotypic and phenotypic correlations among them. (author)

  6. Variations in reporting of outcomes in randomized trials on diet and physical activity in pregnancy

    DEFF Research Database (Denmark)

    Rogozińska, Ewelina; Marlin, Nadine; Yang, Fen

    2017-01-01

    AIM: Trials on diet and physical activity in pregnancy report on various outcomes. We aimed to assess the variations in outcomes reported, and their quality, in trials on lifestyle interventions in pregnancy. METHODS: We searched major databases without language restrictions for randomized controlled trials on diet and physical activity-based interventions in pregnancy up to March 2015. Two independent reviewers undertook study selection and data extraction. We estimated the percentage of papers reporting 'critically important' and 'important' outcomes. We defined the quality of reporting as a proportion using a six-item questionnaire. Regression analysis was used to identify factors affecting this quality. RESULTS: Sixty-six randomized controlled trials were published in 78 papers (66 main, 12 secondary). Gestational diabetes (57.6%, 38/66), preterm birth (48.5%, 32/66) and cesarean section (60

  7. Modeling response variation for radiometric calorimeters

    International Nuclear Information System (INIS)

    Mayer, R.L. II.

    1986-01-01

    Radiometric calorimeters are widely used in the DOE complex for accountability measurements of plutonium and tritium. Proper characterization of response variation for these instruments is, therefore, vital for accurate assessment of measurement control as well as for propagation of error calculations. This is not difficult for instruments used to measure items within a narrow range of power values; however, when a single instrument is used to measure items over a wide range of power values, improper estimates of uncertainty can result since traditional error models for radiometric calorimeters assume that uncertainty is not a function of sample power. This paper describes methods which can be used to accurately estimate random response variation for calorimeters used to measure items over a wide range of sample powers. The model is applicable to the two most common modes of calorimeter operation: heater replacement and servo control. 5 refs., 4 figs., 1 tab

  8. Statistical issues in reporting quality data: small samples and casemix variation.

    Science.gov (United States)

    Zaslavsky, A M

    2001-12-01

    To present two key statistical issues that arise in analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (inter-unit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.

  9. Effects of a random spatial variation of the plasma density on the mode conversion in cold, unmagnetized, and stratified plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Jung Yu, Dae [School of Space Research, Kyung Hee University, Yongin 446-701 (Korea, Republic of); Kim, Kihong [Department of Energy Systems Research, Ajou University, Suwon 443-749 (Korea, Republic of)

    2013-12-15

    We study the effects of a random spatial variation of the plasma density on the mode conversion of electromagnetic waves into electrostatic oscillations in cold, unmagnetized, and stratified plasmas. Using the invariant imbedding method, we calculate precisely the electromagnetic field distribution and the mode conversion coefficient, which is defined to be the fraction of the incident wave power converted into electrostatic oscillations, for the configuration where a numerically generated random density variation is added to the background linear density profile. We repeat similar calculations for a large number of random configurations and take an average of the results. We obtain a peculiar nonmonotonic dependence of the mode conversion coefficient on the strength of randomness. As the disorder increases from zero, the maximum value of the mode conversion coefficient decreases initially, then increases to a maximum, and finally decreases towards zero. The range of the incident angle in which mode conversion occurs increases monotonically as the disorder increases. We present numerical results suggesting that the decrease of mode conversion mainly results from the increased reflection due to the Anderson localization effect originating from disorder, whereas the increase of mode conversion of the intermediate disorder regime comes from the appearance of many resonance points and the enhanced tunneling between the resonance points and the cutoff point. We also find a very large local enhancement of the magnetic field intensity for particular random configurations. In order to obtain high mode conversion efficiency, it is desirable to restrict the randomness close to the resonance region.

  10. Effects of a random spatial variation of the plasma density on the mode conversion in cold, unmagnetized, and stratified plasmas

    International Nuclear Information System (INIS)

    Jung Yu, Dae; Kim, Kihong

    2013-01-01

    We study the effects of a random spatial variation of the plasma density on the mode conversion of electromagnetic waves into electrostatic oscillations in cold, unmagnetized, and stratified plasmas. Using the invariant imbedding method, we calculate precisely the electromagnetic field distribution and the mode conversion coefficient, which is defined to be the fraction of the incident wave power converted into electrostatic oscillations, for the configuration where a numerically generated random density variation is added to the background linear density profile. We repeat similar calculations for a large number of random configurations and take an average of the results. We obtain a peculiar nonmonotonic dependence of the mode conversion coefficient on the strength of randomness. As the disorder increases from zero, the maximum value of the mode conversion coefficient decreases initially, then increases to a maximum, and finally decreases towards zero. The range of the incident angle in which mode conversion occurs increases monotonically as the disorder increases. We present numerical results suggesting that the decrease of mode conversion mainly results from the increased reflection due to the Anderson localization effect originating from disorder, whereas the increase of mode conversion of the intermediate disorder regime comes from the appearance of many resonance points and the enhanced tunneling between the resonance points and the cutoff point. We also find a very large local enhancement of the magnetic field intensity for particular random configurations. In order to obtain high mode conversion efficiency, it is desirable to restrict the randomness close to the resonance region

  11. Effect of electrical stimulation and cooking temperature on the within-sample variation of cooking loss and shear force of lamb.

    Science.gov (United States)

    Lewis, P K; Babiker, S A

    1983-01-01

    Electrical stimulation decreased the shear force and increased the cooking loss in seven paired lamb Longissimus dorsi (LD) muscles. This treatment did not have any effect on the within-sample variation. Cooking in 55°, 65° and 75°C water baths for 90 min caused a linear increase in the cooking loss and shear force. There was no stimulation-cooking temperature interaction observed. Cooking temperature also had no effect on the within-sample variation. A possible explanation as to why electrical stimulation did not affect the within-sample variation is given. Copyright © 1983. Published by Elsevier Ltd.

  12. Sources of pre-analytical variations in yield of DNA extracted from blood samples: analysis of 50,000 DNA samples in EPIC.

    Directory of Open Access Journals (Sweden)

    Elodie Caboux

    The European Prospective Investigation into Cancer and Nutrition (EPIC) is a long-term, multi-centric prospective study in Europe investigating the relationships between cancer and nutrition. This study has served as a basis for a number of Genome-Wide Association Studies (GWAS) and other types of genetic analyses. Over a period of 5 years, 52,256 EPIC DNA samples have been extracted using an automated DNA extraction platform. Here we have evaluated the pre-analytical factors affecting DNA yield, including anthropometric, epidemiological and technical factors such as center of subject recruitment, age, gender, body-mass index, disease case or control status, tobacco consumption, number of aliquots of buffy coat used for DNA extraction, extraction machine or procedure, DNA quantification method, degree of haemolysis and variations in the timing of sample processing. We show that the largest significant variations in DNA yield were observed with degree of haemolysis and with center of subject recruitment. Age, gender, body-mass index, cancer case or control status and tobacco consumption also significantly impacted DNA yield. Feedback from laboratories which have analyzed DNA with different SNP genotyping technologies demonstrates that the vast majority of samples (approximately 88%) performed adequately in different types of assays. To our knowledge this study is the largest to date to evaluate the sources of pre-analytical variations in DNA extracted from peripheral leucocytes. The results provide a strong evidence-based rationale for standardized recommendations on blood collection and processing protocols for large-scale genetic studies.

  13. Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge

    International Nuclear Information System (INIS)

    Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W

    2013-01-01

    This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)

  14. The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples

    Science.gov (United States)

    Avetisyan, Marianna; Fox, Jean-Paul

    2012-01-01

    In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…

  15. Measuring larval nematode contamination on cattle pastures: Comparing two herbage sampling methods.

    Science.gov (United States)

    Verschave, S H; Levecke, B; Duchateau, L; Vercruysse, J; Charlier, J

    2015-06-15

    Assessing levels of pasture larval contamination is frequently used to study the population dynamics of the free-living stages of parasitic nematodes of livestock. Direct quantification of infective larvae (L3) on herbage is the most applied method to measure pasture larval contamination. However, herbage collection remains labour intensive and there is a lack of studies addressing the variation induced by the sampling method and the required sample size. The aim of this study was (1) to compare two different sampling methods in terms of pasture larval count results and time required to sample, (2) to assess the amount of variation in larval counts at the level of sample plot, pasture and season, respectively and (3) to calculate the required sample size to assess pasture larval contamination with a predefined precision using random plots across pasture. Eight young stock pastures of different commercial dairy herds were sampled in three consecutive seasons during the grazing season (spring, summer and autumn). On each pasture, herbage samples were collected through both a double-crossed W-transect with samples taken every 10 steps (method 1) and four random located plots of 0.16 m(2) with collection of all herbage within the plot (method 2). The average (± standard deviation (SD)) pasture larval contamination using sampling methods 1 and 2 was 325 (± 479) and 305 (± 444)L3/kg dry herbage (DH), respectively. Large discrepancies in pasture larval counts of the same pasture and season were often seen between methods, but no significant difference (P = 0.38) in larval counts between methods was found. Less time was required to collect samples with method 2. This difference in collection time between methods was most pronounced for pastures with a surface area larger than 1 ha. The variation in pasture larval counts from samples generated by random plot sampling was mainly due to the repeated measurements on the same pasture in the same season (residual variance

  16. 222Rn in water: A comparison of two sample collection methods and two sample transport methods, and the determination of temporal variation in North Carolina ground water

    International Nuclear Information System (INIS)

    Hightower, J.H. III

    1994-01-01

    The objectives of this field experiment were: (1) to determine whether there was a statistically significant difference between the radon concentrations of samples collected by EPA's standard method, using a syringe, and an alternative, slow-flow method; (2) to determine whether there was a statistically significant difference between the measured radon concentrations of samples mailed vs samples not mailed; and (3) to determine whether there was a temporal variation of water radon concentration over a 7-month period. The field experiment was conducted at 9 sites (5 private wells and 4 public wells) at various locations in North Carolina. Results showed that a syringe is not necessary for sample collection, that there was generally no significant radon loss due to mailing samples, and that there was statistically significant evidence of temporal variation in water radon concentrations

  17. Seasonal Variation, Chemical Composition and Antioxidant Activity of Brazilian Propolis Samples

    Directory of Open Access Journals (Sweden)

    Érica Weinstein Teixeira

    2010-01-01

    Full Text Available Total phenolic contents, antioxidant activity and chemical composition of propolis samples from three localities of Minas Gerais state (southeast Brazil) were determined. Total phenolic contents were determined by the Folin–Ciocalteu method, antioxidant activity was evaluated by DPPH, using BHT as reference, and chemical composition was analyzed by GC/MS. Propolis from the Itapecerica and Paula Cândido municipalities was found to have high phenolic contents and pronounced antioxidant activity. From these extracts, 40 substances were identified, among them simple phenylpropanoids, prenylated phenylpropanoids, and sesqui- and diterpenoids. Quantitatively, the main constituent of both samples was allyl-3-prenylcinnamic acid. A sample from the Virginópolis municipality had no detectable phenolic substances and contained mainly triterpenoids, the main constituents being α- and β-amyrins. Methanolic extracts from Itapecerica and Paula Cândido exhibited pronounced scavenging activity towards DPPH, indistinguishable from BHT activity. However, the extract from the Virginópolis sample exhibited no antioxidant activity. Total phenolic substances, GC/MS analyses and antioxidant activity of samples from Itapecerica collected monthly over a period of one year revealed considerable variation. No correlation was observed between antioxidant activity and either total phenolic contents or contents of artepillin C and other phenolic substances, as assayed by GC/MS analysis.

  18. Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling

    Directory of Open Access Journals (Sweden)

    Hyun-Joo Oh

    2017-01-01

    Full Text Available This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) method with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total, and each time it was applied to the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences in each of the ten runs. Ten landslide susceptibility maps were obtained by integrating the FR-weighted causative factors, and each map was validated against the corresponding validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%, i.e., consistently above 89%. Moreover, the ten iterations of FR modeling may contribute to a better understanding of the regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis, and the approach can be extended to other areas.
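
    As a rough illustration of the FR computation inside such an iterative random-sampling loop, the sketch below uses synthetic arrays (the factor raster, landslide mask, and class labels are all invented; real inputs would be rasterized causative-factor maps and a landslide inventory):

        import numpy as np

        rng = np.random.default_rng(0)

        def frequency_ratio(factor_classes, landslide_mask):
            # FR per class: share of landslide cells in the class divided by
            # the share of all cells in the class (FR > 1 means susceptible).
            scores = {}
            for c in np.unique(factor_classes):
                in_class = factor_classes == c
                pct_slides = landslide_mask[in_class].sum() / landslide_mask.sum()
                pct_area = in_class.sum() / factor_classes.size
                scores[c] = pct_slides / pct_area
            return scores

        factor = rng.integers(1, 6, size=10_000)   # synthetic 5-class causative factor
        slides = rng.random(10_000) < 0.02         # synthetic landslide occurrences
        idx = np.flatnonzero(slides)
        for run in range(10):                      # ten random train/validation splits
            train = rng.choice(idx, size=idx.size // 2, replace=False)
            mask = np.zeros(slides.size, dtype=bool)
            mask[train] = True
            print(run, frequency_ratio(factor, mask))

    With purely random synthetic data the FR scores hover around 1; on real data, the spread of scores across the ten runs indicates how stable each factor's contribution is.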

  19. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

    This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic ‘variance’ of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the jump size of the jumps in the price and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas…

  20. Pu239 Cross-Section Variations Based on Experimental Uncertainties and Covariances

    Energy Technology Data Exchange (ETDEWEB)

    Sigeti, David Edward [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Brian J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parsons, D. Kent [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-10-18

    Algorithms and software have been developed for producing variations in plutonium-239 neutron cross sections based on experimental uncertainties and covariances. The varied cross-section sets may be produced as random samples from the multivariate normal distribution defined by an experimental mean vector and covariance matrix, or they may be produced as Latin-Hypercube/Orthogonal-Array samples (based on the same means and covariances) for use in parametrized studies. The variations obey two classes of constraints that are obligatory for cross-section sets and which put related constraints on the mean vector and covariance matrix that determine the sampling. Because the experimental means and covariances do not obey some of these constraints to sufficient precision, imposing the constraints requires modifying the experimental mean vector and covariance matrix. Modification is done with an algorithm based on linear algebra that minimizes changes to the means and covariances while ensuring that the operations that impose the different constraints do not conflict with each other.
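
    A minimal sketch of the first sampling mode described here, i.e., drawing correlated variations from a multivariate normal via a Cholesky factor (the three-group means, covariance values, and the positivity constraint below are illustrative stand-ins, not the evaluated 239Pu data):

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical 3-group cross-section means (barns) and covariance matrix.
        mean = np.array([1.95, 1.80, 1.75])
        cov = np.array([[0.0004, 0.0002, 0.0001],
                        [0.0002, 0.0005, 0.0002],
                        [0.0001, 0.0002, 0.0006]])

        # The Cholesky factor maps iid standard normals to correlated samples:
        # x = mean + L z, with L L^T = cov.
        L = np.linalg.cholesky(cov)
        samples = mean + rng.standard_normal((1000, 3)) @ L.T

        # One obligatory physical constraint, as an example: cross sections > 0.
        samples = samples[(samples > 0).all(axis=1)]
        print(samples.mean(axis=0))
        print(np.cov(samples.T))

    The report's actual constraint handling modifies the mean vector and covariance matrix before sampling; simple rejection as above is only a stand-in.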

  1. Estimation of population mean under systematic sampling

    Science.gov (United States)

    Noor-ul-amin, Muhammad; Javaid, Amjad

    2017-11-01

    In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators as special cases of the generalized estimator, using different combinations of the coefficients of correlation, kurtosis and variation. The mean square errors and the mathematical conditions under which the proposed estimators are efficient are also derived. A numerical illustration using three populations supports the results.
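
    For orientation, the sketch below implements plain systematic random sampling with the classical ratio estimator (not the paper's generalized, non-response-adjusted estimator; the population and auxiliary variable are synthetic):

        import numpy as np

        rng = np.random.default_rng(1)

        def systematic_sample_indices(N, n):
            # 1-in-k systematic sample: random start, then every k-th unit.
            k = N // n
            start = rng.integers(k)
            return np.arange(start, N, k)[:n]

        y = rng.normal(50, 10, size=1200)        # study variable
        x = y + rng.normal(0, 5, size=1200)      # correlated auxiliary variable
        idx = systematic_sample_indices(1200, 60)

        # Classical ratio estimator of the mean of y, using the known mean of x.
        ratio_hat = y[idx].mean() / x[idx].mean() * x.mean()
        print(ratio_hat, y.mean())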

  2. Mendelian breeding units versus standard sampling strategies: mitochondrial DNA variation in southwest Sardinia

    Directory of Open Access Journals (Sweden)

    Daria Sanna

    2011-01-01

    Full Text Available We report a sampling strategy based on Mendelian Breeding Units (MBUs), each representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. To this end, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish an MBU does not alter the original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.

  3. Contributions from the data samples in NOC technique on the extracting of the Sq variation

    Science.gov (United States)

    Wu, Yingyan; Xu, Wenyao

    2015-04-01

    The solar quiet daily variation, Sq, a rather regular variation, is usually observed at mid-low latitudes on magnetically quiet or less-disturbed days. It results mainly from the dynamo currents in the ionospheric E region, which are driven by atmospheric tidal winds and related processes, and flow as two current whorls in each of the northern and southern hemispheres[1]. Sq exhibits a conspicuous day-to-day (DTD) variability in daily range (or strength), shape (or phase) and current focus. This variability is mainly attributed to changes in the ionospheric conductivity and tidal winds, which vary with solar radiation and ionospheric conditions; it also presents a seasonal variation and a solar cycle variation[2-4]. Generally, Sq is expressed as the average value over the five international magnetic quiet days. Using data from global magnetic stations, the equivalent current system of the daily variation can be constructed to reveal characteristics of the currents[5]. In addition, using the differences of the H component at two stations on the north and south sides of the Sq current focus, Sq can be extracted more cleanly[6]. Recently, the method of Natural Orthogonal Components (NOC) has been used to decompose the magnetic daily variation and express it as a summation of eigenmodes, with the first NOC eigenmode identified as the solar quiet daily variation and the second as the disturbance daily variation[7-9]. The NOC technique can help reveal simpler patterns within a complex set of variables, without prescribed basis functions such as those of the FFT technique. However, the physical interpretation of the NOC eigenmodes depends greatly on the number of data samples and on their regularity. Using the NOC method, we focus our present study on the analysis of the hourly means of the H component at the BMT observatory in China from 2001 to 2008. The contributions of the number and regularity of the data samples to which eigenmode corresponds to Sq are analyzed, by

  4. A simple sample size formula for analysis of covariance in cluster randomized trials.

    NARCIS (Netherlands)

    Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.

    2012-01-01

    For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An

  5. Climatologies from satellite measurements: the impact of orbital sampling on the standard error of the mean

    Directory of Open Access Journals (Sweden)

    M. Toohey

    2013-04-01

    Full Text Available Climatologies of atmospheric observations are often produced by binning measurements according to latitude and calculating zonal means. The uncertainty in these climatological means is characterised by the standard error of the mean (SEM. However, the usual estimator of the SEM, i.e., the sample standard deviation divided by the square root of the sample size, holds only for uncorrelated randomly sampled measurements. Measurements of the atmospheric state along a satellite orbit cannot always be considered as independent because (a the time-space interval between two nearest observations is often smaller than the typical scale of variations in the atmospheric state, and (b the regular time-space sampling pattern of a satellite instrument strongly deviates from random sampling. We have developed a numerical experiment where global chemical fields from a chemistry climate model are sampled according to real sampling patterns of satellite-borne instruments. As case studies, the model fields are sampled using sampling patterns of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS and Atmospheric Chemistry Experiment Fourier-Transform Spectrometer (ACE-FTS satellite instruments. Through an iterative subsampling technique, and by incorporating information on the random errors of the MIPAS and ACE-FTS measurements, we produce empirical estimates of the standard error of monthly mean zonal mean model O3 in 5° latitude bins. We find that generally the classic SEM estimator is a conservative estimate of the SEM, i.e., the empirical SEM is often less than or approximately equal to the classic estimate. Exceptions occur only when natural variability is larger than the random measurement error, and specifically in instances where the zonal sampling distribution shows non-uniformity with a similar zonal structure as variations in the sampled field, leading to maximum sensitivity to arbitrary phase shifts between the sample distribution and
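
    The contrast between the classic SEM estimator and an empirical SEM obtained by repeated structured subsampling can be reproduced in miniature (the smooth field, noise level, and sampling stride below are invented; the study itself uses model ozone fields and real instrument sampling patterns):

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic "zonal field": smooth structure plus random measurement error.
        field = np.sin(np.linspace(0, 4 * np.pi, 5000)) + 0.3 * rng.standard_normal(5000)

        def classic_sem(x):
            # sample standard deviation over sqrt(n); assumes independent samples
            return x.std(ddof=1) / np.sqrt(x.size)

        # Regular, satellite-like sampling: every k-th point, random phase shift.
        k = 100
        means = [field[rng.integers(k)::k].mean() for _ in range(2000)]

        print("classic SEM  :", classic_sem(field[rng.integers(k)::k]))
        print("empirical SEM:", np.std(means, ddof=1))

    Because the regular subsample averages over the smooth structure, the empirical SEM here comes out below the classic estimate, mirroring the paper's finding that the classic estimator is usually conservative.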

  6. Application of a stratified random sampling technique to the estimation and minimization of respirable quartz exposure to underground miners

    International Nuclear Information System (INIS)

    Makepeace, C.E.; Horvath, F.J.; Stocker, H.

    1981-11-01

    The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information on the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of the collected data, one can determine the individual exposures and their mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and the collective exposures
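
    A sketch of how such a three-stratum random selection might be drawn (the location, occupation, and shift names are hypothetical placeholders; a real plan would enumerate the mine's actual strata and sample within each):

        import itertools
        import random

        random.seed(1)

        locations   = ["stope A", "stope B", "haulage drift", "crusher station"]
        occupations = ["driller", "mucker", "timberman", "hoist operator"]
        times       = ["early shift", "mid shift", "late shift"]

        # Every (location, occupation, time) cell has an equal chance of
        # selection, which is what removes bias from the exposure estimate.
        cells = list(itertools.product(locations, occupations, times))
        for cell in random.sample(cells, k=10):   # 10 sampling assignments
            print(cell)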

  7. Probabilistic finite element stiffness of a laterally loaded monopile based on an improved asymptotic sampling method

    DEFF Research Database (Denmark)

    Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard

    2015-01-01

    The mechanical responses of an offshore monopile foundation mounted in over-consolidated clay are calculated by employing a stochastic approach where a nonlinear p–y curve is incorporated with a finite element scheme. The random field theory is applied to represent a spatial variation for undrained shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied with Sobol quasi random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.
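
    The Sobol part of the scheme can be sketched as follows: low-discrepancy points are mapped to standard normal variates through the inverse CDF before being fed to the mechanical model (the dimension and sample count here are arbitrary; the actual study couples such draws to a finite element model):

        import numpy as np
        from scipy.stats import norm, qmc

        # 2^7 scrambled Sobol points in [0,1)^2, mapped to standard normals.
        sampler = qmc.Sobol(d=2, scramble=True, seed=0)
        z = norm.ppf(sampler.random_base2(m=7))
        print(z.mean(axis=0), z.std(axis=0))   # close to (0, 1) by construction

    Quasi-random draws cover the input space more evenly than pseudo-random ones, which is why fewer model evaluations are needed than with plain Monte Carlo.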

  8. Event-triggered synchronization for reaction-diffusion complex networks via random sampling

    Science.gov (United States)

    Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng

    2018-04-01

    In this paper, the synchronization problem of reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data is sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of control updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability problem of a reaction-diffusion complex dynamical system with time delay. By using the matrix inequality technique and the Lyapunov method, synchronization conditions for the RDCNs are derived which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy naturally rules out Zeno behavior. Finally, a numerical example is given to verify the obtained results.

  9. RNA-seq: technical variability and sampling

    Science.gov (United States)

    2011-01-01

    Background RNA-seq is revolutionizing the way we study transcriptomes. mRNA can be surveyed without prior knowledge of gene transcripts. Alternative splicing of transcript isoforms and the identification of previously unknown exons are being reported. Initial reports of differences in exon usage and splicing between samples, as well as quantitative differences among samples, are beginning to surface. Biological variation has been reported to be larger than technical variation. In addition, technical variation has been reported to be in line with expectations due to random sampling. However, strategies for dealing with technical variation will differ depending on its magnitude. The size of the technical variance and the role of sampling are examined in this manuscript. Results In this study three independent Solexa/Illumina experiments containing technical replicates are analyzed. When coverage is low, large disagreements between technical replicates are apparent. Exon detection between technical replicates is highly variable when the coverage is less than 5 reads per nucleotide, and estimates of gene expression are more likely to disagree when coverage is low, although large disagreements in the estimates of expression are observed at all levels of coverage. Conclusions Technical variability is too high to ignore. Technical variability results in inconsistent detection of exons at low levels of coverage. Further, the estimates of the relative abundance of a transcript can substantially disagree, even when coverage levels are high. This may be due to the low sampling fraction and, if so, it will persist as an issue needing to be addressed in experimental design even as the next wave of technology produces larger numbers of reads. We provide practical recommendations for dealing with the technical variability, without dramatic cost increases. PMID:21645359
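
    A toy Poisson model (not the authors' pipeline) reproduces the qualitative point that detection calls between technical replicates disagree more as coverage drops:

        import numpy as np

        rng = np.random.default_rng(3)

        # Two technical replicates drawn from the same Poisson process.
        for mean_cov in (1, 5, 20):
            rep1 = rng.poisson(mean_cov, size=10_000)
            rep2 = rng.poisson(mean_cov, size=10_000)
            disagree = ((rep1 > 0) != (rep2 > 0)).mean()
            print(f"mean coverage {mean_cov:2d}: "
                  f"{disagree:.1%} detection disagreement")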

  10. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of the response, and a 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  11. Adaptive force produced by stress-induced regulation of random variation intensity.

    Science.gov (United States)

    Shimansky, Yury P

    2010-08-01

    The Darwinian theory of life evolution is capable of explaining the majority of related phenomena. At the same time, the mechanisms of optimizing traits beneficial to a population as a whole, but not directly to an individual, remain largely unclear. There are also significant problems with explaining the phenomenon of punctuated equilibrium. From another perspective, multiple mechanisms for the regulation of the rate of genetic mutations according to environmental stress have been discovered, but their precise functional role is not yet well understood. Here a novel mathematical paradigm called the Kinetic-Force Principle (KFP), which can serve as a general basis for biologically plausible optimization methods, is introduced and its rigorous derivation is provided. Based on this principle, it is shown that, if the rate of random changes in a biological system is proportional, even only roughly, to the amount of environmental stress, a virtual force is created, acting in the direction of stress relief. It is demonstrated that KFP can provide important insights into solving the above problems. Evidence is presented in support of the hypothesis that nature employs KFP for accelerating adaptation in biological systems. A detailed comparison between KFP and the principle of variation and natural selection is presented and their complementarity is revealed. It is concluded that KFP is not a competing alternative, but a powerful addition to the principle of variation and natural selection. It is also shown that KFP can be used in multiple ways for the adaptation of individual biological organisms.

  12. Analytical and between-subject variation of thrombin generation measured by calibrated automated thrombography on plasma samples.

    Science.gov (United States)

    Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula; Münster, Anna-Marie B; Pedersen, Shona

    2018-05-01

    The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore investigated the analytical and between-subject variation of CAT systematically. Moreover, we assessed the application of an internal standard for normalization to diminish variation. Twenty healthy volunteers each donated one blood sample, which was subsequently centrifuged, aliquoted and stored at -80 °C prior to analysis. The analytical variation was determined over eight runs, in which plasma from the same seven volunteers was processed in triplicate; for the between-subject variation, TG analysis was performed on plasma from all 20 volunteers. The trigger reagents used for the TG assays included both PPP reagent containing 5 pM tissue factor (TF) and PPPlow with 1 pM TF. Plasma drawn from a single donor was applied to all plates as an internal standard for each TG analysis, which was subsequently used for normalization. The total analytical variation for TG analysis performed with PPPlow reagent is 3-14% and 9-13% for PPP reagent. This variation can be reduced only slightly by using an internal standard, and mainly for the endogenous thrombin potential (ETP). The between-subject variation is higher when using PPPlow than PPP, and it is considerably higher than the analytical variation. TG thus has a rather high inherent analytical variation, but one that is considerably lower than the between-subject variation when PPPlow is used as reagent.

  13. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Full Text Available Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can then be generalized to the target population.
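
    The factors listed for a categorical outcome combine into the textbook formula n = z²p(1−p)/d², optionally shrunk by a finite population correction; a small sketch (the numbers in the example calls are arbitrary):

        from math import ceil

        def sample_size_proportion(p, margin, z=1.96, population=None):
            # n = z^2 * p * (1 - p) / d^2, for estimating a proportion p
            # with absolute precision d at the confidence level implied by z.
            n = z**2 * p * (1 - p) / margin**2
            if population is not None:
                n = n / (1 + (n - 1) / population)  # finite population correction
            return ceil(n)

        print(sample_size_proportion(0.30, 0.05))                  # 323
        print(sample_size_proportion(0.30, 0.05, population=2000)) # ~278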

  14. Variation in marital quality in a national sample of divorced women.

    Science.gov (United States)

    James, Spencer L

    2015-06-01

    Previous work has compared marital quality between stably married and divorced individuals. Less work has examined the possibility of variation among divorcés in trajectories of marital quality as divorce approaches. This study addressed that gap by examining, first, whether distinct trajectories of marital quality can be discerned among women whose marriages ended in divorce and, second, the profile of women who experienced each trajectory. Latent class growth analyses with longitudinal data from a nationally representative sample were used to "look backward" from the time of divorce. Although demographic and socioeconomic variables from this national sample did not predict the trajectories well, nearly 66% of divorced women reported relatively high levels of both happiness and communication and either low or moderate levels of conflict. Future research including personality or interactional patterns may lead to theoretical insights about patterns of marital quality in the years leading to divorce. (c) 2015 APA, all rights reserved.

  15. A systematic examination of a random sampling strategy for source apportionment calculations.

    Science.gov (United States)

    Andersson, August

    2011-12-15

    Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
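
    The essence of the strategy can be sketched for the simplest two-source, one-marker case: the mixing equation is solved repeatedly while the source end-members are resampled from their own distributions (all numbers below are synthetic; the paper evaluates rationally designed data sets in the same spirit):

        import numpy as np

        rng = np.random.default_rng(11)

        # obs = f * s1 + (1 - f) * s2, solved for the source fraction f while
        # the end-member values s1 and s2 vary according to their distributions.
        obs = -26.0
        s1 = rng.normal(-28.0, 1.0, size=100_000)   # source 1 (mean, sd)
        s2 = rng.normal(-22.0, 1.5, size=100_000)   # source 2 (mean, sd)
        f = (obs - s2) / (s1 - s2)
        f = f[(f >= 0) & (f <= 1)]                  # keep physical solutions

        print(f.mean(), np.percentile(f, [2.5, 97.5]))

    As the abstract notes, propagating source variability this way shifts not only the spread but also the mean/median of the estimated fractions relative to the plug-in algebraic solution.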

  16. Levels of dioxin (PCDD/F) and PCBs in a random sample of Australian aquaculture-produced Southern Bluefin Tuna (Thunnus maccoyii)

    Energy Technology Data Exchange (ETDEWEB)

    Padula, D.; Madigan, T.; Kiermeier, A.; Daughtry, B.; Pointon, A. [South Australian Research and Development Inst. (Australia)

    2004-09-15

    To date there has been no published information available on the levels of dioxin (PCDD/F) and PCBs in Australian aquaculture-produced Southern Bluefin Tuna (Thunnus maccoyii). Southern Bluefin Tuna are commercially farmed off the coast of Port Lincoln in the state of South Australia, Australia. This paper reports the levels of dioxin (PCDD/F) and PCBs in muscle tissue samples from 11 randomly sampled aquaculture-produced Southern Bluefin Tuna collected in 2003. Little published data exists on the levels of dioxin (PCDD/F) and PCBs in Australian aquaculture-produced seafood. Wild tuna are first caught in the Great Australian Bight in South Australian waters, and are then brought back to Port Lincoln, where they are ranched in sea-cages before being harvested and exported to Japan. The aim of the study was to identify pathways whereby contaminants such as dioxin (PCDD/F) and PCBs may enter the aquaculture production system. This involved undertaking a through-chain analysis of the levels of dioxin (PCDD/F) and PCBs in wild-caught tuna, in seafloor sediment samples from the marine environment, in feeds, and in the final harvested, exported product. Detailed study was also undertaken of the variation of dioxin (PCDD/F) and PCBs across individual tuna carcases. This paper addresses the levels found in the final harvested product; details on the levels found in the other studies will be published elsewhere shortly.

  17. A Systematic Review of Surgical Randomized Controlled Trials: Part 2. Funding Source, Conflict of Interest, and Sample Size in Plastic Surgery.

    Science.gov (United States)

    Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit

    2016-02-01

    The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.

  18. Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling

    Science.gov (United States)

    Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing

    2018-05-01

    The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
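
    The core idea, randomizing the pulse interval so a tone far above the mean-rate Nyquist limit still shows a single unambiguous peak, can be sketched with a non-uniform DFT (the rates, jitter range, and tone frequency are illustrative, not the paper's system parameters):

        import numpy as np

        rng = np.random.default_rng(5)

        # Additive random sampling: each instant is the previous one plus a
        # mean interval T and a random jitter, which smears aliases into noise.
        T = 1e-4                                  # 10 kHz mean pulse rate
        t = np.cumsum(T + rng.uniform(-0.4 * T, 0.4 * T, size=500))

        f_sig = 30e3                              # tone above the 5 kHz uniform limit
        x = np.sin(2 * np.pi * f_sig * t)

        # Non-uniform DFT over a trial grid: the true tone still dominates.
        freqs = np.arange(1e3, 50e3, 20.0)
        spectrum = np.abs(np.exp(-2j * np.pi * np.outer(freqs, t)) @ x)
        print(freqs[spectrum.argmax()])           # ~30000 Hz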

  19. A random effects meta-analysis model with Box-Cox transformation.

    Science.gov (United States)

    Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D

    2017-07-19

    In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and prediction intervals from the normal random effects model. The
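
    The transformation step itself is standard; a sketch with scipy (the skewed inputs are simulated, and the paper's full model is Bayesian rather than this maximum-likelihood one-liner):

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(8)

        # Skewed "treatment effect estimates"; Box-Cox requires positive values,
        # so in practice a shift constant may be added first.
        effects = rng.lognormal(mean=0.2, sigma=0.6, size=40)

        transformed, lam = stats.boxcox(effects)   # lambda chosen by max likelihood
        print("lambda:", lam)
        print("skewness before/after:", stats.skew(effects), stats.skew(transformed))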

  20. A random effects meta-analysis model with Box-Cox transformation

    Directory of Open Access Journals (Sweden)

    Yusuke Yamaguchi

    2017-07-01

    Full Text Available Abstract Background In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption and the misspecification of the random effects distribution may result in a misleading estimate of the overall mean for the treatment effect, an inappropriate quantification of heterogeneity across studies and a wrongly symmetric prediction interval. Methods We focus on problems caused by an inappropriate normality assumption of the random effects distribution, and propose a novel random effects meta-analysis model where a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When sampling distributions are approximately normal, non-normality in the overall distribution will be mainly due to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. Results A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and the conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures and

  1. Generalization of Random Intercept Multilevel Models

    Directory of Open Access Journals (Sweden)

    Rehan Ahmad Khan

    2013-10-01

    Full Text Available The concept of random intercept models in a multilevel model developed by Goldstein (1986) has been extended to k levels. The random variation in intercepts at the individual level is split into components by incorporating higher levels of the hierarchy into the single-level model. One can thus control the random variation in intercepts by incorporating the higher levels in the model.

  2. Geographic variation in expenditures for Workers' Compensation hospitalized claims.

    Science.gov (United States)

    Miller, T R; Levy, D T

    1999-02-01

    Past literature finds considerable variation in the cost of physician care and in the utilization of medical procedures. Variation in the cost of hospitalized care has received little attention. We examine injury costs of hospitalized claims across states. Multivariate regression analysis is used to isolate state variations while controlling for personal, injury, and state characteristics. Subjects: injuries to workers filing Workers' Compensation lost-workday claims; about 35,000 randomly sampled Workers' Compensation claims from 17 states, filed between 1979 and 1988. Outcome: medical payments per episode for three injury groups: upper and lower extremity fractures and dislocations, other upper extremity injuries, and back strains and sprains. Statistical analyses reveal considerable variation in expenditures for hospitalized injuries across states, even after controlling for case mix and state characteristics. A substantial portion of the variation is explained by state rate regulations; regulated states have lower costs. The large variation in costs suggests a potential to affect the costs of hospitalized care. Efforts should be directed at those areas that have higher costs without sufficient input price, quality, or case mix justification.

  3. Control Charts for Processes with an Inherent Between-Sample Variation

    Directory of Open Access Journals (Sweden)

    Eva Jarošová

    2018-06-01

    Full Text Available A number of processes to which statistical control is applied are subject to various effects that cause random changes in the mean value. The removal of these fluctuations is either technologically impossible or economically disadvantageous under current conditions. The frequent occurrence of signals in the Shewhart chart due to these fluctuations is then undesirable, and therefore the conventional control limits need to be extended. Several approaches to the design of control charts with extended limits are presented in the paper and applied to data from a real production process. The methods assume samples of size greater than 1. The performance of the charts is examined using the operating characteristic and the average run length. The study reveals that in many cases, reducing the risk of false alarms in this way is insufficient.
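
    One common way to extend the limits is to add the between-sample variance component to the usual within-sample term; a sketch on simulated subgroups (the variance components and subgroup sizes are invented, and the sigma estimates are rough, uncorrected versions):

        import numpy as np

        rng = np.random.default_rng(2)

        # m subgroups of size n whose means drift randomly between samples.
        n, m, target = 5, 50, 100.0
        data = target + rng.normal(0, 0.8, (m, 1)) + rng.normal(0, 1.0, (m, n))

        xbar = data.mean(axis=1)
        s_within = data.std(axis=1, ddof=1).mean()
        s_between = max(xbar.var(ddof=1) - s_within**2 / n, 0.0) ** 0.5

        # Conventional limits reflect only within-sample variation; extended
        # limits add the inherent between-sample component, so routine drift
        # of the mean no longer triggers false alarms.
        print(f"conventional: +/-{3 * s_within / np.sqrt(n):.2f}")
        print(f"extended:     +/-{3 * np.sqrt(s_within**2 / n + s_between**2):.2f}")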

  4. Tobacco smoking surveillance: is quota sampling an efficient tool for monitoring national trends? A comparison with a random cross-sectional survey.

    Directory of Open Access Journals (Sweden)

    Romain Guignard

    Full Text Available OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study was carried out to assess the reliability of a monitoring system for smoking prevalence based on regular quota-sampling surveys. DESIGN / OUTCOME MEASURES: In 2010, the current and daily tobacco smoking prevalences obtained in a quota survey of 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% among 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.

  5. An integrable low-cost hardware random number generator

    Science.gov (United States)

    Ranasinghe, Damith C.; Lim, Daihyun; Devadas, Srinivas; Jamali, Behnam; Zhu, Zheng; Cole, Peter H.

    2005-02-01

    A hardware random number generator is different from a pseudo-random number generator; a pseudo-random number generator merely approximates the assumed behavior of a real hardware random number generator. Simple pseudo-random number generators suffice for most applications; however, demanding situations such as the generation of cryptographic keys require an efficient and cost-effective source of random numbers. Arbiter-based Physical Unclonable Functions (PUFs), proposed for the physical authentication of ICs, exploit the statistical delay variation of wires and transistors across integrated circuits, a result of process variations, to build a secret key unique to each IC. Experimental results and theoretical studies show that a sufficient amount of variation exists across ICs; this variation enables each IC to be identified securely. It is possible to exploit the unreliability of these PUF responses to build a physical random number generator: measurement noise arises from the instability of an arbiter in a racing condition, so there exist challenges whose responses are unpredictable and, absent environmental variations, random in repeated measurements. Compared to other physical random number generators, PUF-based random number generators can be a compact and low-power solution, since the generator need only be turned on when required. A 64-stage PUF circuit costs less than 1000 gates, and the circuit can be implemented using standard IC manufacturing processes. In this paper we present a fast and efficient random number generator and analyse the quality of the random numbers produced using the array of tests employed by the National Institute of Standards and Technology to evaluate the randomness of generators designed for cryptographic applications.

  6. Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations

    Directory of Open Access Journals (Sweden)

    Marco Lanuzza

    2011-04-01

    Full Text Available Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed for the particular case of the design of energy-aware adder circuits. Five well-known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide semiconductor (CMOS) standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (conventionally identified as low-energy, slow architectures) operating at higher power supply voltages can achieve a timing yield significantly better than that of more complex, faster adders when used in low-power designs with supply voltages lower than nominal.

  7. Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets

    International Nuclear Information System (INIS)

    Stanek, Jan; Kozminski, Wiktor

    2010-01-01

    Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated in simulations and experiments. An effective iterative algorithm for artifact suppression in sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.
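
    A stripped-down version of such an iterative artifact-suppression loop (a CLEAN-style matching pursuit on a 1D toy signal, not the authors' algorithm or its statistical peak recognition):

        import numpy as np

        rng = np.random.default_rng(4)

        # Two on-grid complex exponentials, observed at a random 25% of points.
        n = 512
        t = np.arange(n)
        signal = np.exp(2j * np.pi * 56 * t / n) + 0.5 * np.exp(2j * np.pi * 176 * t / n)
        mask = np.zeros(n, dtype=bool)
        mask[rng.choice(n, n // 4, replace=False)] = True

        # Iterate: locate the strongest peak in the FT of the masked residual,
        # subtract that component, repeat until the artifacts wash out.
        model = np.zeros(n, dtype=complex)
        for _ in range(30):
            spec = np.fft.fft(np.where(mask, signal - model, 0))
            k = np.abs(spec).argmax()
            amp = spec[k] / mask.sum()          # peak height per measured point
            model += amp * np.exp(2j * np.pi * k * t / n)

        print(sorted(np.argsort(np.abs(np.fft.fft(model)))[-2:]))  # [56, 176]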

  8. Techniques to assess biological variation in destructive data

    NARCIS (Netherlands)

    Tijskens, L.M.M.; Schouten, R.E.; Jongbloed, G.; Konopacki, P.J.

    2018-01-01

    Variation is present in all measured data, due to variation between individuals (biological variation) and variation induced by the measuring system (technical variation). Biological variation present in experimental data is not the result of a random process but strictly subject to deterministic

  9. Measurement of the natural variation of 13C/12C isotope ratio in organic samples

    International Nuclear Information System (INIS)

    Ducatti, C.

    1977-01-01

    The isotopic ratio analysis of 13C/12C by mass spectrometry using a working standard allows the study of 13C natural variation in organic material, with a total analytical error of less than 0.2%. Equations were derived in order to determine 13C/12C and 18O/16O ratios relative to the working standard CENA-std and to the international standard PDB. Isotope ratio values obtained with samples prepared in two different combustion apparatus were compared; the values obtained by preparing samples through acid decomposition of carbonaceous materials were also compared with values obtained in different international laboratories. Using the proposed methodology, several leaves collected at different heights from different plant species, found inside and outside the Ducke Forest Reserve located in the Amazon region, are analysed. It is found that the 13C natural variation depends upon metabolic processes and environmental factors, both of which may be regarded as partial influences on the CO2 cycle in the forest. (author)

  10. The international Genome sample resource (IGSR): A worldwide collection of genome variation incorporating the 1000 Genomes Project data.

    Science.gov (United States)

    Clarke, Laura; Fairley, Susan; Zheng-Bradley, Xiangqun; Streeter, Ian; Perry, Emily; Lowy, Ernesto; Tassé, Anne-Marie; Flicek, Paul

    2017-01-04

    The International Genome Sample Resource (IGSR; http://www.internationalgenome.org) expands in data type and population diversity the resources from the 1000 Genomes Project. IGSR represents the largest open collection of human variation data and provides easy access to these resources. IGSR was established in 2015 to maintain and extend the 1000 Genomes Project data, which have been widely used as a reference set of human variation and by researchers developing analysis methods. IGSR has mapped all of the 1000 Genomes sequence to the newest human reference (GRCh38), and will release updated variant calls to ensure maximal usefulness of the existing data. IGSR is collecting new structural variation data on the 1000 Genomes samples from long-read sequencing and other technologies, and will collect relevant functional data into a single comprehensive resource. IGSR is extending coverage with new populations sequenced by collaborating groups. Here, we present the new data and analysis that IGSR has made available. We have also introduced a new data portal that increases the discoverability of our data (previously only browsable through our FTP site) by focusing on particular samples, populations or data sets of interest. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Computer code ENDSAM for random sampling and validation of the resonance parameters covariance matrices of some major nuclear data libraries

    International Nuclear Information System (INIS)

    Plevnik, Lucijan; Žerovnik, Gašper

    2016-01-01

    Highlights: • Methods for random sampling of correlated parameters. • Link to open-source code for sampling of resonance parameters in ENDF-6 format. • Validation of the code on realistic and artificial data. • Validation of covariances in three major contemporary nuclear data libraries. - Abstract: Methods for random sampling of correlated parameters are presented. The methods are implemented for sampling of resonance parameters in ENDF-6 format and a link to the open-source code ENDSAM is given. The code has been validated on realistic data. Additionally, consistency of covariances of resonance parameters of three major contemporary nuclear data libraries (JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0u2) has been checked.
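
    A validation step of the kind described has to confirm, at minimum, that each covariance matrix is symmetric and positive semidefinite before sampling; a generic sketch of such a check with an eigenvalue-clipping repair (this is a common recipe, not necessarily the one used in ENDSAM):

        import numpy as np

        def check_and_repair_covariance(cov, tol=1e-10):
            # Symmetrise, then clip negative eigenvalues to zero so the
            # matrix is a valid covariance for multivariate normal sampling.
            cov = 0.5 * (cov + cov.T)
            vals, vecs = np.linalg.eigh(cov)
            if vals.min() < -tol:
                print(f"smallest eigenvalue {vals.min():.3e}: repairing")
            vals = np.clip(vals, 0.0, None)
            return (vecs * vals) @ vecs.T

        # An inconsistent correlation-like matrix (0.9/0.9/0.1 is impossible).
        bad = np.array([[1.0, 0.9, 0.1],
                        [0.9, 1.0, 0.9],
                        [0.1, 0.9, 1.0]])
        good = check_and_repair_covariance(bad)
        print(np.linalg.eigvalsh(good).min() >= 0)   # True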

  12. How precise is the finite sample approximation of the asymptotic distribution of realised variation measures in the presence of jumps?

    DEFF Research Database (Denmark)

    Veraart, Almut

    2011-01-01

    This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic "variance" of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies. Here we study the impact of the jump activity, of the jump size of the jumps in the price and of the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance and, in particular, of log-transformed realised variance is generally good, whereas the jump-robust statistics tend to struggle in the presence…

  13. Random sampling of quantum states: a survey of methods and some issues regarding the Overparametrized Method

    International Nuclear Information System (INIS)

    Maziero, Jonas

    2015-01-01

    The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state to obtain RQS from random DPDs and random unitary matrices. Next, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices. Subsequently, an overly fast concentration of measure in the quantum state space that appears with this parametrization is noted. (author)
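
    Of the methods surveyed, the Ginibre construction is the most compact to sketch: a matrix G of iid complex normals yields a valid random density matrix as GG†/tr(GG†) (a minimal illustration, with the dimension chosen arbitrarily):

        import numpy as np

        rng = np.random.default_rng(9)

        def random_density_matrix(d):
            # Ginibre technique: rho = G G^dagger / tr(G G^dagger) is
            # Hermitian, positive semidefinite, and has unit trace.
            G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
            rho = G @ G.conj().T
            return rho / np.trace(rho)

        rho = random_density_matrix(4)
        print(np.allclose(rho, rho.conj().T))        # Hermitian
        print(np.trace(rho).real)                    # 1.0
        print(np.linalg.eigvalsh(rho).min() >= 0)    # positive semidefinite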

  14. Improving ambulatory saliva-sampling compliance in pregnant women: a randomized controlled study.

    Directory of Open Access Journals (Sweden)

    Julian Moeller

    Full Text Available OBJECTIVE: Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate, in pregnant women, strategies to improve ambulatory saliva-sampling compliance, and the association between sampling noncompliance and saliva cortisol estimates. METHODS: We instructed 64 pregnant women to collect eight scheduled saliva samples on each of two consecutive days. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System, and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. RESULTS: Self-reported compliance with the saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%; F(1,60) = 17.64, p<0.001), but the reminder intervention was not (reminders: 68%, without reminders: 72%; F(1,60) = 0.78, p = 0.379). Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p<0.001. Altered cortisol levels were observed in less objectively compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. CONCLUSIONS: The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest

  15. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    Science.gov (United States)

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  16. Towards an optimal sampling strategy for assessing genetic variation within and among white clover (Trifolium repens L.) cultivars using AFLP

    Directory of Open Access Journals (Sweden)

    Khosro Mehdi Khanlou

    2011-01-01

    Full Text Available Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants such as white clover, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess of replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 samples per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a sample of 20 plants per cultivar was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.

  17. Tree phyllosphere bacterial communities: exploring the magnitude of intra- and inter-individual variation among host species

    Directory of Open Access Journals (Sweden)

    Isabelle Laforest-Lapointe

    2016-08-01

    Full Text Available Background The diversity and composition of the microbial community of tree leaves (the phyllosphere) varies among trees and host species and along spatial, temporal, and environmental gradients. Phyllosphere community variation within the canopy of an individual tree exists, but the importance of this variation relative to among-tree and among-species variation is poorly understood. Sampling techniques employed in phyllosphere studies range from picking leaves at one canopy location to mixing randomly selected leaves from throughout the canopy. In this context, our goal was to characterize the relative importance of intra-individual variation in phyllosphere communities across multiple species, and to compare this variation to inter-individual and interspecific variation of phyllosphere epiphytic bacterial communities in a natural temperate forest in Quebec, Canada. Methods We targeted five dominant temperate forest tree species including angiosperms and gymnosperms: Acer saccharum, Acer rubrum, Betula papyrifera, Abies balsamea and Picea glauca. For one randomly selected tree of each species, we sampled microbial communities at six distinct canopy locations: bottom-canopy (1–2 m height), the four cardinal points of mid-canopy (2–4 m height), and the top-canopy (4–6 m height). We also collected bottom-canopy leaves from five additional trees from each species. Results Based on an analysis of bacterial community structure measured via Illumina sequencing of the bacterial 16S gene, we demonstrate that 65% of the intra-individual variation in leaf bacterial community structure could be attributed to the effect of inter-individual and inter-specific differences, while the effect of canopy location was not significant. In comparison, host species identity explains 47% of inter-individual and inter-specific variation in leaf bacterial community structure, followed by individual identity (32%) and canopy location (6%). Discussion Our results suggest that the canopy location from which leaves are sampled matters far less than which tree and which host species are sampled.

  18. Using Multisite Experiments to Study Cross-Site Variation in Treatment Effects: A Hybrid Approach with Fixed Intercepts and A Random Treatment Coefficient

    Science.gov (United States)

    Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin

    2017-01-01

    The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…

  19. Plasma creatinine in dogs: intra- and inter-laboratory variation in 10 European veterinary laboratories

    Directory of Open Access Journals (Sweden)

    Ulleberg Thomas

    2011-04-01

    Full Text Available Abstract Background There is substantial variation in reported reference intervals for canine plasma creatinine among veterinary laboratories, thereby influencing the clinical assessment of analytical results. The aims of the study were to determine the inter- and intra-laboratory variation in plasma creatinine among 10 veterinary laboratories, and to compare results from each laboratory with the upper limit of its reference interval. Methods Samples were collected from 10 healthy dogs, 10 dogs with expected intermediate plasma creatinine concentrations, and 10 dogs with azotemia. Overlap was observed for the first two groups. The 30 samples were divided into 3 batches and shipped in random order by postal delivery for plasma creatinine determination. Statistical testing was performed in accordance with ISO standard methodology. Results Inter- and intra-laboratory variation was clinically acceptable, as plasma creatinine values for most samples were usually of the same magnitude. A few extreme outliers caused three laboratories to fail statistical testing for consistency. Laboratory sample means above or below the overall sample mean did not unequivocally reflect high or low reference intervals in that laboratory. Conclusions In spite of close analytical results, further standardization among laboratories is warranted. The discrepant reference intervals seem to largely reflect different populations used in establishing the reference intervals, rather than analytical variation due to different laboratory methods.

  20. Meiotic sex ratio variation in natural populations of Ceratodon purpureus (Ditrichaceae).

    Science.gov (United States)

    Norrell, Tatum E; Jones, Kelly S; Payton, Adam C; McDaniel, Stuart F

    2014-09-01

    • Sex ratio variation is a common but often unexplained phenomenon in species across the tree of life. Here we evaluate the hypothesis that meiotic sex ratio variation can contribute to the biased sex ratios found in natural populations of the moss Ceratodon purpureus. • We obtained sporophytes from several populations of C. purpureus from eastern North America. From each sporophyte, we estimated the mean spore viability by germinating replicate samples on agar plates. We estimated the meiotic sex ratio of each sporophyte by inferring the sex of a random sample of germinated spores (mean = 77) using a PCR-RFLP test. We tested for among-sporophyte variation in viability using an ANOVA and for deviations from a 1:1 sex ratio using a χ²-test, and evaluated the relationship between these quantities using a linear regression. • We found among-sporophyte variation in spore viability and meiotic sex ratio, suggesting that genetic variants that contribute to variation in both of these traits segregate within populations of this species. However, we found no relationship between these quantities, suggesting that factors other than sex ratio distorters contribute to variation in spore viability within populations. • These results demonstrate that sex ratio distortion may partially explain the population sex ratio variation seen in C. purpureus, but more generally that genetic conflict over meiotic segregation may contribute to fitness variation in this species. Overall, this study lays the groundwork for future studies on the genetic basis of meiotic sex ratio variation. © 2014 Botanical Society of America, Inc.

  1. Variations in the response of AECL random coil seals as a function of the angular position of the probe

    International Nuclear Information System (INIS)

    Silk, M.G.

    1986-04-01

    The AECL random coil seal is to be used as a Nuclear Safeguards seal to deter and detect tampering with nuclear material in store. To be effective, the ultrasonic signature from the seal must remain constant and be different from that of other seals. Angular variations in the ultrasonic response from certain seals have, however, been observed, and the programme of study reported here has been carried out in order to clarify the source of this variation. It is shown that the variation observed may most probably be attributed to the ultrasonic probes used in the investigation and, in particular, to deviation of the probe beam from circularity. However, it is probable that the angle of the beam with respect to the probe case (squint) is also a contributory factor. In addition, to reduce the degree of angular variation it is important to exclude air bubbles and to ensure that the coil is placed as centrally in the beam as possible. It is anticipated that the exclusion of air bubbles will be easier in the field than it was in the laboratory studies. The need to place the seal reasonably centrally with respect to the beam places some minor limits on the coil design and also makes it essential that the probe fits closely into its holder in the seal, as any slackness may give rise to signature variations. (author)

  2. Stratified Sampling to Define Levels of Petrographic Variation in Coal Beds: Examples from Indonesia and New Zealand

    Directory of Open Access Journals (Sweden)

    Tim A. Moore

    2016-01-01

    Full Text Available DOI: 10.17014/ijog.3.1.29-51. Stratified sampling of coal seams for petrographic analysis using block samples is a viable alternative to standard methods of channel sampling and particulate pellet mounts. Although petrographic analysis of particulate pellets is employed widely, it is time consuming and does not allow variation within sampling units to be assessed, an important measure in any study, whether for paleoenvironmental reconstruction or for obtaining estimates of industrial attributes. Also, samples taken as intact blocks provide additional information, such as texture and botanical affinity, that cannot be gained using particulate pellets. Stratified sampling can be employed both on 'fine' and 'coarse' grained coal units. Fine-grained coals are defined as those coal intervals that do not contain vitrain bands greater than approximately 1 mm in thickness (as measured perpendicular to bedding). In fine-grained coal seams, a reasonably sized block sample (with a polished surface area of ~3 cm²) can be taken that encapsulates the macroscopic variability. However, for coarse-grained coals (vitrain bands >1 mm) a different system has to be employed in order to accurately account for the larger particles. Macroscopic point counting of vitrain bands can accurately account for those particles >1 mm within a coal interval. This point counting method can be conducted using something as simple as string on a coal face, with marked intervals greater than the largest particle expected to be encountered (although new technologies are being developed to capture this type of information digitally). Comparative analyses of particulate pellets and blocks on the same interval show less than 6% variation between the two sample types when blocks are recalculated to include macroscopic counts of vitrain. Therefore, even in coarse-grained coals, stratified sampling can be used effectively and representatively.

  3. Modern survey sampling

    CERN Document Server

    Chaudhuri, Arijit

    2014-01-01

    Exposure to Sampling: Abstract; Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Abstract; Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling, Horvitz-Thompson Estimator, Sufficiency, Likelihood, Non-Existence Theorem. More Intricacies: Abstract; Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Abstract; Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...

  4. Major Differences: Variations in Undergraduate and Graduate Student Mental Health and Treatment Utilization across Academic Disciplines

    Science.gov (United States)

    Lipson, Sarah Ketchen; Zhou, Sasha; Wagner, Blake, III; Beck, Katie; Eisenberg, Daniel

    2016-01-01

    This article explores variations in mental health and service utilization across academic disciplines using a random sample of undergraduate and graduate students (N = 64,519) at 81 colleges and universities. We report prevalence of depression, anxiety, suicidality, and self-injury, and rates of help-seeking across disciplines, including results…

  5. Revisiting random walk based sampling in networks: evasion of burn-in period and frequent regenerations.

    Science.gov (United States)

    Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K

    2018-01-01

    In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in the stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without a burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of the RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS), which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of the RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) allow one to avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time.
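
    The tour idea above can be illustrated with a stripped-down regenerative estimator (a sketch only; the paper's RL- and RT-estimators add reinforcement learning and RDS-style weighting on top of this). The 1/degree weights correct for the degree-biased stationary distribution of a simple random walk:

```python
import numpy as np

def tour_estimate(adj, f, start, n_tours=5000, rng=None):
    """Estimate the network average of f from random-walk tours that
    start and end at `start` (each return is a regeneration point).
    adj: dict node -> list of neighbours; f: dict node -> value."""
    rng = np.random.default_rng(rng)
    num = den = 0.0
    for _ in range(n_tours):
        v = start
        while True:
            num += f[v] / len(adj[v])   # 1/deg weights undo the walk's
            den += 1.0 / len(adj[v])    # preference for high-degree nodes
            v = adj[v][rng.integers(len(adj[v]))]
            if v == start:              # tour ends at a regeneration
                break
    return num / den

adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
f = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0}
print(tour_estimate(adj, f, start=0))   # converges to 25.0, the true average
```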

  6. Explaining health care expenditure variation: large-sample evidence using linked survey and health administrative data.

    Science.gov (United States)

    Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth

    2013-09-01

    Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual level detail in universal public health systems because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267 188 Australians age 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use this data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.

  7. Extreme eigenvalues of sample covariance and correlation matrices

    DEFF Research Database (Denmark)

    Heiny, Johannes

    This thesis is concerned with asymptotic properties of the eigenvalues of high-dimensional sample covariance and correlation matrices under an infinite fourth moment of the entries. In the first part, we study the joint distributional convergence of the largest eigenvalues of the sample covariance matrix of a p-dimensional heavy-tailed time series when p converges to infinity together with the sample size n. We generalize the growth rates of p existing in the literature. Assuming a regular variation condition with tail index ..., the eigenvalues are essentially determined by the extreme order statistics from an array of iid random variables. The asymptotic behavior of the extreme eigenvalues is then derived routinely from classical extreme value theory. The resulting approximations are strikingly simple considering the high dimension...

  8. Variação geográfica de caracteres quantitativos em Ogcocephalus vespertilio (Linnaeus (Teleostei, Lophiiformes, Ogcocephalidae Geographic variation of morphometric characters in Ogcocephalus vespertilio (Linnaeus (Teleostei, Lophiiformes, Ogcocephalidae

    Directory of Open Access Journals (Sweden)

    Mauro José Cavalcanti

    1998-01-01

    Full Text Available Patterns of geographic variation in 10 morphometric characters were analyzed in a sample of 91 specimens of the batfish, Ogcocephalus vespertilio (L.), from the NE and SE Brazilian coast, using multivariate statistics and randomization tests. The specimens were ordinated by principal components analysis into two groups corresponding to the regions north and south of the 23ºC isotherm, and size variation was found to account for 70.3%, whereas shape differences accounted for 23.6%, of the total variation in morphometric characters. The two groups were different at the 1% significance level by multivariate analysis of variance based on the Wilks' criterion, tested by a randomization procedure. Width of the illicial cavity and distance from anus to anal fin were the characters contributing most to the differentiation of the population samples.

  9. At convenience and systematic random sampling: effects on the prognostic value of nuclear area assessments in breast cancer patients.

    Science.gov (United States)

    Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P

    1995-01-01

    This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
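
    For contrast with at-convenience selection, systematic random sampling of a list of candidate nuclei can be sketched as follows (a generic implementation, not the authors' measurement software):

```python
import numpy as np

def systematic_random_sample(items, n, rng=None):
    """Select n items at a fixed interval from a random start,
    so small and large items are included in proportion."""
    rng = np.random.default_rng(rng)
    k = len(items) / n                          # sampling interval
    start = rng.uniform(0, k)                   # random offset in [0, k)
    idx = (start + k * np.arange(n)).astype(int)
    return [items[i] for i in idx]

nuclei_areas = list(range(1000))                # stand-in for measured nuclei
sample_50 = systematic_random_sample(nuclei_areas, 50, rng=1)
```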

  10. Random selection of items. Selection of n1 samples among N items composing a stratum

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1987-02-01

    STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
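
    A minimal sketch of the selection step described here (drawing n1 distinct items from a stratum of N, each subset equally likely) is a partial Fisher-Yates shuffle; the report's own random-number generation procedures are not reproduced:

```python
import random

def select_items(n1, N, seed=None):
    """Return n1 distinct item numbers drawn from 1..N."""
    rnd = random.Random(seed)
    pool = list(range(1, N + 1))
    for i in range(n1):                 # shuffle only the first n1 positions
        j = rnd.randrange(i, N)
        pool[i], pool[j] = pool[j], pool[i]
    return pool[:n1]

print(select_items(n1=5, N=120, seed=42))
```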

  11. Extreme values, regular variation and point processes

    CERN Document Server

    Resnick, Sidney I

    1987-01-01

    Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...

  12. Randomized comparison of vaginal self-sampling by standard vs. dry swabs for Human papillomavirus testing

    International Nuclear Information System (INIS)

    Eperon, Isabelle; Vassilakos, Pierre; Navarria, Isabelle; Menoud, Pierre-Alain; Gauthier, Aude; Pache, Jean-Claude; Boulvain, Michel; Untiet, Sarah; Petignat, Patrick

    2013-01-01

    To evaluate whether human papillomavirus (HPV) self-sampling (Self-HPV) using a dry vaginal swab is a valid alternative for HPV testing. Women attending a colposcopy clinic were recruited to collect two consecutive Self-HPV samples: a Self-HPV using a dry swab (S-DRY) and a Self-HPV using a standard wet transport medium (S-WET). These samples were analyzed for HPV using real time PCR (Roche Cobas). Participants were randomized to determine the order of the tests. Questionnaires assessing preferences and acceptability for both tests were conducted. Subsequently, women were invited for colposcopic examination; a physician collected a cervical sample (physician-sampling) with a broom-type device and placed it into a liquid-based cytology medium. Specimens were then processed for the production of cytology slides and a Hybrid Capture HPV DNA test (Qiagen) was performed from the residual liquid. Biopsies were performed if indicated. Unweighted kappa statistics (κ) and McNemar tests were used to measure the agreement among the sampling methods. A total of 120 women were randomized. Overall HPV prevalence was 68.7% (95% Confidence Interval (CI) 59.3–77.2) by S-WET, 54.4% (95% CI 44.8–63.9) by S-DRY and 53.8% (95% CI 43.8–63.7) by HC. Among paired samples (S-WET and S-DRY), the overall agreement was good (85.7%; 95% CI 77.8–91.6) and the κ was substantial (0.70; 95% CI 0.57-0.70). The proportion of positive type-specific HPV agreement was also good (77.3%; 95% CI 68.2-84.9). No differences in sensitivity for cervical intraepithelial neoplasia grade one (CIN1) or worse between the two Self-HPV tests were observed. Women reported the two Self-HPV tests as highly acceptable. Self-HPV using dry swab transfer does not appear to compromise specimen integrity. Further study in a large screening population is needed. ClinicalTrials.gov: http://clinicaltrials.gov/show/NCT01316120

  13. Determination of Initial Conditions for the Safety Analysis by Random Sampling of Operating Parameters

    International Nuclear Information System (INIS)

    Jeong, Hae-Yong; Park, Moon-Ghu

    2015-01-01

    In most existing evaluation methodologies, which follow a conservative approach, the most conservative initial conditions are searched for each transient scenario through extensive assessment over wide operating windows or the limiting conditions for operation (LCO) allowed by the operating guidelines. In this procedure, a user effect could be involved, and considerable time and human resources are consumed. In the present study, we investigated a more effective statistical method for the selection of the most conservative initial condition, based on random sampling of the operating parameters affecting the initial conditions. A method for the determination of initial conditions based on random sampling of plant design parameters is proposed. This method is expected to be applied to the selection of the most conservative initial plant conditions in safety analyses using a conservative evaluation methodology. In the method, it is suggested that the initial conditions of reactor coolant flow rate, pressurizer level, pressurizer pressure, and SG level be adjusted by controlling the pump rated flow and the setpoints of the PLCS, PPCS, and FWCS, respectively. The proposed technique is expected to help eliminate the human factors introduced in the conventional safety analysis procedure and to reduce the human resources invested in the safety evaluation of nuclear power plants.
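
    The sampling step can be sketched as below; the operating windows are hypothetical illustrations, not plant data, and in practice each sampled vector would be run through the safety analysis code to find the most conservative case:

```python
import numpy as np

# Hypothetical operating windows (illustrative values only).
windows = {
    "coolant_flow_frac": (0.96, 1.04),   # fraction of rated pump flow
    "pzr_level_pct":     (50.0, 60.0),   # pressurizer level setpoint
    "pzr_pressure_MPa":  (15.2, 15.8),   # pressurizer pressure setpoint
    "sg_level_pct":      (45.0, 55.0),   # steam generator level setpoint
}

def sample_initial_conditions(n, rng=None):
    """Draw n initial-condition vectors uniformly within the windows."""
    rng = np.random.default_rng(rng)
    lo = np.array([w[0] for w in windows.values()])
    hi = np.array([w[1] for w in windows.values()])
    return lo + (hi - lo) * rng.random((n, len(windows)))

candidates = sample_initial_conditions(200, rng=1)  # one row per candidate
```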

  14. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Full Text Available Due to the complexity of systems and lack of expertise, epistemic uncertainties may be present in the experts' judgment on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty, which gives it good compatibility, and it averts both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss that typically follows fusion. Original expert judgments are retained rather objectively throughout the processing procedure. The construction of the cumulative probability function and the random sampling process do not require any human intervention or judgment, and they can be implemented easily by computer programs, giving the method an apparent advantage in evaluation practice for fairly large index systems.
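
    The final Monte Carlo step can be sketched as follows, with made-up interval importance scores standing in for the fused expert evidence:

```python
import numpy as np

rng = np.random.default_rng(7)
# Interval-valued importance of three indices (illustrative bounds).
intervals = np.array([[0.6, 0.9],
                      [0.3, 0.7],
                      [0.5, 0.8]])

# Sample each index's importance from its interval, normalize every draw
# into a weight vector, and average over draws.
draws = rng.uniform(intervals[:, 0], intervals[:, 1], size=(10000, 3))
weights = (draws / draws.sum(axis=1, keepdims=True)).mean(axis=0)
print(weights.round(3))   # Monte Carlo estimate of the index weights
```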

  15. PCT Uncertainty Analysis Using Unscented Transform with Random Orthogonal Matrix

    Energy Technology Data Exchange (ETDEWEB)

    Fynana, Douglas A.; Ahn, Kwang-Il [KAERI, Daejeon (Korea, Republic of); Lee, John C. [Univ. of Michigan, Michigan (United States)

    2015-05-15

    Most Best Estimate Plus Uncertainty (BEPU) methods employ nonparametric order statistics through Wilks' formula to quantify uncertainties of best estimate simulations of nuclear power plant (NPP) transients. 95%/95% limits, the 95th percentile at a 95% confidence level, are obtained by randomly sampling all uncertainty contributors through conventional Monte Carlo (MC). Advantages are simple implementation of MC sampling of input probability density functions (pdfs) and limited computational expense of 1st, 2nd, and 3rd order Wilks' formula requiring only 59, 93, or 124 simulations, respectively. A disadvantage of small sample size is large sample-to-sample variation of statistical estimators. This paper presents a new efficient sampling based algorithm for accurate estimation of the mean and variance of the output parameter pdf. The algorithm combines a deterministic sampling method, the unscented transform (UT), with random sampling through the generation of a random orthogonal matrix (ROM). The UT guarantees the mean, covariance, and 3rd order moments of the multivariate input parameter distributions are exactly preserved by the sampled input points, and the orthogonal transformation of the points by a ROM guarantees the sample errors of all 4th order and higher moments are unbiased. The UT with ROM algorithm is applied to the uncertainty quantification of the peak clad temperature (PCT) during a large break loss-of-coolant accident (LBLOCA) in an OPR1000 NPP to demonstrate the applicability of the new algorithm to BEPU. This paper presented a new algorithm combining the UT with ROM for efficient multivariate parameter sampling that ensures sample input covariance and 3rd order moments are exactly preserved and 4th moment errors are small and unbiased. The advantageous sample properties guarantee higher order accuracy and
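
    One plausible reading of the UT-with-ROM construction is sketched below (a guess at the mechanics, not the paper's exact algorithm): the 2n symmetric sigma points of N(mean, cov) are rotated by a Haar-distributed orthogonal matrix, which leaves the sample mean and covariance exactly intact while randomizing higher moments:

```python
import numpy as np

def ut_rom_samples(mean, cov, rng=None):
    """2n points whose sample mean/covariance match (mean, cov) exactly."""
    rng = np.random.default_rng(rng)
    n = len(mean)
    L = np.linalg.cholesky(cov)                  # cov = L @ L.T
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    q *= np.sign(np.diag(r))                     # Haar-distributed orthogonal Q
    directions = np.sqrt(n) * (L @ q)            # rotated, scaled axes
    return np.vstack([mean + directions.T, mean - directions.T])

pts = ut_rom_samples(np.zeros(3), np.eye(3), rng=0)
# Mean is exactly 0; covariance (ddof=0) is exactly the identity.
print(pts.mean(axis=0), np.cov(pts.T, ddof=0))
```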

  16. The effect of random dopant fluctuation on threshold voltage and drain current variation in junctionless nanotransistors

    International Nuclear Information System (INIS)

    Rezapour, Arash; Rezapour, Pegah

    2015-01-01

    We investigate the effect of random dopant fluctuation on threshold voltage and drain current variation in a two-gate nanoscale transistor. We ran quantum-corrected technology computer-aided design simulations (10 000 randomizations). With these simulations, we could study the effects of varying the dimensions (length and width), oxide thickness, and dopant factor of a transistor on the threshold voltage and drain current in the subthreshold (off) and overthreshold (on) regions. It was found that in the subthreshold region the variability of the drain current and threshold voltage is relatively fixed, while in the overthreshold region the variability of the threshold voltage and drain current decreases remarkably, despite the slight reduction of gate voltage diffusion (compared with that of the subthreshold region). These results have been interpreted by using previously reported models for threshold current variability, load displacement, and simple analytical calculations. Scaling analysis shows that the variability of the characteristics of this semiconductor increases as the effects of the short channel increase. Therefore, with a slight increase of length and a reduction of width, oxide thickness, and dopant factor, we could correct the effect of the short channel. (paper)

  17. Radiation Transport in Random Media With Large Fluctuations

    Science.gov (United States)

    Olson, Aaron; Prinja, Anil; Franke, Brian

    2017-09-01

    Neutral particle transport in media exhibiting large and complex material property spatial variation is modeled by representing cross sections as lognormal random functions of space, generated through a nonlinear memory-less transformation of a Gaussian process with covariance uniquely determined by the covariance of the cross section. A Karhunen-Loève decomposition of the Gaussian process is implemented to efficiently generate realizations of the random cross sections, and Woodcock Monte Carlo is used to transport particles on each realization and generate benchmark solutions for the mean and variance of the particle flux as well as probability densities of the particle reflectance and transmittance. A computationally efficient stochastic collocation method is implemented to directly compute the statistical moments such as the mean and variance, while a polynomial chaos expansion in conjunction with stochastic collocation provides a convenient surrogate model that also produces probability densities of output quantities of interest. Extensive numerical testing demonstrates that use of stochastic reduced-order modeling provides an accurate and cost-effective alternative to random sampling for particle transport in random media.
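
    The Karhunen-Loève step can be sketched in a few lines (assumed exponential covariance on a 1-D grid; matching the lognormal field's covariance exactly requires the nonlinear transformation described above, which is omitted here):

```python
import numpy as np

x = np.linspace(0.0, 10.0, 200)                # 1-D spatial grid
corr_len, var = 1.0, 0.5                       # illustrative GP parameters
C = var * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

w, v = np.linalg.eigh(C)                       # KL modes of the covariance
order = np.argsort(w)[::-1][:20]               # keep 20 dominant modes
w, v = np.clip(w[order], 0.0, None), v[:, order]

rng = np.random.default_rng(0)
xi = rng.standard_normal(20)                   # iid N(0,1) KL coefficients
g = v @ (np.sqrt(w) * xi)                      # one Gaussian realization
sigma_t = np.exp(g)                            # lognormal cross-section field
```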

  18. Latent spatial models and sampling design for landscape genetics

    Science.gov (United States)

    Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.

    2016-01-01

    We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.

  19. Run charts revisited: a simulation study of run chart rules for detection of non-random variation in health care processes.

    Science.gov (United States)

    Anhøj, Jacob; Olesen, Anne Vingaard

    2014-01-01

    A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
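
    A compact implementation of the shift and crossings rules reads roughly as below (thresholds as commonly stated for these run chart rules; verify against the paper before use):

```python
import numpy as np
from scipy.stats import binom

def run_chart_signals(y):
    """Return (shift_signal, crossings_signal) for a run chart of y."""
    y = np.asarray(y, dtype=float)
    s = np.sign(y - np.median(y))
    s = s[s != 0]                                # drop points on the median
    n = len(s)
    edges = np.flatnonzero(np.r_[True, s[1:] != s[:-1], True])
    longest_run = np.diff(edges).max()
    crossings = int(np.sum(s[1:] != s[:-1]))
    shift = longest_run > round(np.log2(n)) + 3        # unusually long run
    cross = crossings < binom.ppf(0.05, n - 1, 0.5)    # unusually few crossings
    return shift, cross

rng = np.random.default_rng(3)
print(run_chart_signals(rng.normal(size=24)))                    # stable process
print(run_chart_signals(np.r_[rng.normal(size=12),
                              rng.normal(2.0, 1.0, size=12)]))   # shifted process
```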

  20. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

  1. Gray bootstrap method for estimating frequency-varying random vibration signals with small samples

    Directory of Open Access Journals (Sweden)

    Wang Yanqing

    2014-04-01

    Full Text Available During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. First, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. In addition, GBM is applied to estimating a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.
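
    The bootstrap ingredient of GBM can be sketched as a percentile interval for a small sample (generic bootstrap only; the gray GM(1,1) extrapolation step that handles the frequency-varying trend is not reproduced here):

```python
import numpy as np

def bootstrap_interval(x, stat=np.mean, n_boot=2000, conf=0.95, rng=None):
    """Percentile bootstrap interval for a statistic of a small sample."""
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    stats = np.sort([stat(rng.choice(x, size=x.size, replace=True))
                     for _ in range(n_boot)])
    lo = np.percentile(stats, (1 - conf) / 2 * 100)
    hi = np.percentile(stats, (1 + conf) / 2 * 100)
    return lo, hi

vibration_peaks = np.array([2.1, 2.4, 1.9, 2.8, 2.2, 2.5])  # small sample
print(bootstrap_interval(vibration_peaks, rng=0))
```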

  2. Random variation in voluntary dry matter intake and effect of day length on feed intake capacity in growing cattle

    DEFF Research Database (Denmark)

    Ingvartsen, Klaus Lønne; Andersen, Refsgaard; Foldager, John

    1992-01-01

    The objective of this paper is to describe the random variation in voluntary dry matter intake (VDMI) and to discuss the application of the results for monitoring purposes. Furthermore, the objective is to review and quantify the influence of day length, or photoperiod, on VDMI. VDMI was recorded ... VDMI increased by 0.32% per hour of increase in day length. This is in agreement with the increase found in the reviewed literature when photoperiod was manipulated artificially. Practical applications of the results for monitoring purposes are exemplified and discussed.

  3. Characterization of PDMS samples with variation of its synthesis parameters for tunable optics applications

    Science.gov (United States)

    Marquez-Garcia, Josimar; Cruz-Félix, Angel S.; Santiago-Alvarado, Agustin; González-García, Jorge

    2017-09-01

    Nowadays the elastomer known as polydimethylsiloxane (PDMS, Sylgard 184), due to its physical properties, low cost and easy handling, has become a frequently used material for the elaboration of optical components such as variable focal length liquid lenses, optical waveguides, and solid elastic lenses. In recent years, we have been working on the characterization of this material for applications in visual sciences. In this work, we describe the elaboration of PDMS-made samples, and we present physical and optical properties of the samples obtained by varying synthesis parameters such as the base:curing agent ratio and the curing time and temperature. In the case of mechanical properties, tensile and compression tests were carried out with a universal testing machine to obtain the respective stress-strain curves, and to obtain information regarding optical properties, UV-vis spectroscopy was applied to the samples to obtain transmittance and absorbance curves. The variation of the index of refraction was obtained with an Abbe refractometer. Results from the characterization will determine the proper synthesis parameters for the elaboration of tunable refractive surfaces for potential applications in robotics.

  4. Acute stress symptoms during the second Lebanon war in a random sample of Israeli citizens.

    Science.gov (United States)

    Cohen, Miri; Yahav, Rivka

    2008-02-01

    The aims of this study were to assess prevalence of acute stress disorder (ASD) and acute stress symptoms (ASS) in Israel during the second Lebanon war. A telephone survey was conducted in July 2006 of a random sample of 235 residents of northern Israel, who were subjected to missile attacks, and of central Israel, who were not subjected to missile attacks. Results indicate that ASS scores were higher in the northern respondents; 6.8% of the northern sample and 3.9% of the central sample met ASD criteria. Appearance of each symptom ranged from 15.4% for dissociative to 88.4% for reexperiencing, with significant differences between northern and central respondents only for reexperiencing and arousal. A low ASD rate and a moderate difference between areas subjected and not subjected to attack were found.

  5. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, opens up new avenues in terms of prevention and regulation policies.

  6. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge

    OpenAIRE

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

    Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed f...

  7. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Directory of Open Access Journals (Sweden)

    Andreas Steimer

    Full Text Available Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have been conducted that deal with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron that is missing such an additional current boost performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing

  8. Random Sampling with Interspike-Intervals of the Exponential Integrate and Fire Neuron: A Computational Interpretation of UP-States.

    Science.gov (United States)

    Steimer, Andreas; Schindler, Kaspar

    2015-01-01

    Oscillations between high and low values of the membrane potential (UP and DOWN states respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have been conducted that deal with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximative relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron that is missing such an additional current boost performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
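
    The sampling mechanism both versions of this record describe can be made concrete with a textbook EIF simulation that collects ISIs (generic parameter values for illustration, not the paper's):

```python
import numpy as np

def eif_isis(I, n_spikes=200, dt=1e-4, gL=10e-9, C=200e-12, EL=-65e-3,
             VT=-50e-3, DT=2e-3, Vcut=-30e-3, Vreset=-60e-3,
             noise=0.02, rng=None):
    """Interspike intervals of a noisy exponential integrate-and-fire neuron;
    each ISI is one 'sample' in the sense of the theory above."""
    rng = np.random.default_rng(rng)
    v, t, t_last, isis = EL, 0.0, 0.0, []
    while len(isis) < n_spikes:
        # Leak + exponential sodium current + input current.
        dv = (-gL * (v - EL) + gL * DT * np.exp((v - VT) / DT) + I) / C
        v += dv * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if v >= Vcut:                       # spike: record the ISI, reset
            isis.append(t - t_last)
            t_last, v = t, Vreset
    return np.array(isis)

samples = eif_isis(I=0.3e-9, rng=0)         # array of ISI "samples" in seconds
```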

  9. Understanding the cluster randomised crossover design: a graphical illustration of the components of variation and a sample size tutorial.

    Science.gov (United States)

    Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B

    2017-08-15

    In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length(s) of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC), and between individual responses in the same cluster, but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
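
    A sample size sketch for a continuous outcome, using the design-effect form implied by the relationships above (treat as illustrative, not a validated calculator):

```python
from math import ceil
from scipy.stats import norm

def crxo_clusters(delta, sd, m, wpc, bpc, alpha=0.05, power=0.9):
    """Total clusters for a two-period, two-intervention cross-sectional
    CRXO trial; m = individuals per cluster-period."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n_arm = 2 * (z * sd / delta) ** 2        # per-arm n, individually randomized
    de = 1 + (m - 1) * wpc - m * bpc         # assumed CRXO design effect
    # Each cluster contributes m individuals to each arm (one per period).
    return ceil(n_arm * de / m)

# Detect a 0.25 SD difference, 20 patients per cluster-period,
# WPC = 0.05, BPC = 0.02:
print(crxo_clusters(delta=0.25, sd=1.0, m=20, wpc=0.05, bpc=0.02))
```

    Note how the design effect reproduces the statements above: with bpc = 0 it reduces to the familiar parallel cluster-trial inflation 1 + (m - 1) * wpc, and it shrinks as the BPC approaches the WPC.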

  10. Discriminative motif discovery via simulated evolution and random under-sampling.

    Directory of Open Access Journals (Sweden)

    Tao Song

    Full Text Available Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.

  11. Discriminative motif discovery via simulated evolution and random under-sampling.

    Science.gov (United States)

    Song, Tao; Gu, Hong

    2014-01-01

    Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the stage of data preprocessing, and at the stage of Hidden Markov Models (HMMs) training, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
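
    Random under-sampling itself is simple to state in code (a generic sketch of the balancing step, not the authors' full HMM training pipeline):

```python
import numpy as np

def random_undersample(X, y, rng=None):
    """Balance classes by randomly keeping n_min rows from each class."""
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    keep = np.concatenate([
        rng.choice(np.flatnonzero(y == c), size=n_min, replace=False)
        for c in classes
    ])
    keep.sort()
    return X[keep], y[keep]

X = np.arange(20).reshape(10, 2)
y = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])   # 7 negatives, 3 positives
Xb, yb = random_undersample(X, y, rng=0)        # 3 of each class remain
```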

  12. Multidrug resistance among new tuberculosis cases: detecting local variation through lot quality-assurance sampling.

    Science.gov (United States)

    Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted

    2012-03-01

    Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. In the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%; and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both a high-resistance setting (Ukraine) and a low-resistance setting (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but the required sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
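
    A two-way static LQAS design can be found by a direct search over binomial error probabilities (an illustrative sketch; the survey-specific thresholds and error targets would be set by the program):

```python
from scipy.stats import binom

def lqas_design(p_low=0.02, p_high=0.10, alpha=0.05, beta=0.05, n_max=500):
    """Smallest (n, d): sample n cases, classify 'high MDR' if more than
    d are multidrug resistant; errors bounded by alpha and beta."""
    for n in range(1, n_max + 1):
        for d in range(n):
            err_low = 1 - binom.cdf(d, n, p_low)   # low area called high
            err_high = binom.cdf(d, n, p_high)     # high area called low
            if err_low <= alpha and err_high <= beta:
                return n, d
    return None

print(lqas_design())   # design for the 2% vs 10% threshold pair
```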

  13. Experimental investigation on variation of physical properties of coal samples subjected to microwave irradiation

    Science.gov (United States)

    Hu, Guozhong; Yang, Nan; Xu, Guang; Xu, Jialin

    2018-03-01

    The gas drainage rate of low-permeability coal seams is generally less than satisfactory. This leads to gas disasters in coal mines, largely restricts the extraction of coalbed methane (CBM), and increases the emission of greenhouse gases in the mining area. Consequently, enhancing the gas drainage rate is an urgent challenge. To solve this problem, a new approach using microwave irradiation (MWR) as a non-contact physical field excitation method to enhance gas drainage has been attempted. In order to evaluate the feasibility of this method, the methane adsorption, diffusion and penetrability of coal subjected to MWR were experimentally investigated. The variations of the adsorbed methane amount, methane diffusion speed and absorption loop for the coal samples before and after MWR were obtained. The findings show that MWR can change the adsorption properties and reduce the methane adsorption capacity of coal. Moreover, the methane diffusion characteristic curves for both the irradiated coal samples and the original coal samples present the same trend. The irradiated coal samples have better methane diffusion ability than the original ones. As the adsorbed methane decreases, the methane diffusion speed increases or remains the same for the samples subjected to MWR. Furthermore, compared to the original coal samples, the area of the absorption loop for irradiated samples increases, especially for the micro-pore and medium-pore stages. This leads to an increase of open pores in the coal, thus improving the gas penetrability of coal. This study provides support for positive MWR effects on changing the methane adsorption and improving the methane diffusion and gas penetrability properties of coal samples.

  14. Variation in extinction risk among birds: chance or evolutionary predisposition?

    Science.gov (United States)

    Bennett, P. M.; Owens, I. P. F.

    1997-01-01

    Collar et al. (1994) estimate that of the 9,672 extant species of bird, 1,111 are threatened by extinction. Here, we test whether these threatened species are simply a random sample of birds, or whether there is something about their biology that predisposes them to extinction. We ask three specific questions. First, is extinction risk randomly distributed among families? Second, which families, if any, contain more, or less, threatened species than would be expected by chance? Third, is variation between taxa in extinction risk associated with variation in either body size or fecundity? Extinction risk is not randomly distributed among families. The families which contain significantly more threatened species than expected are the parrots (Psittacidae), pheasants and allies (Phasianidae), albatrosses and allies (Procellariidae), rails (Rallidae), cranes (Gruidae), cracids (Cracidae), megapodes (Megapodidae) and pigeons (Columbidae). The only family which contains significantly fewer threatened species than expected is the woodpeckers (Picidae). Extinction risk is also not distributed randomly with respect to fecundity or body size. Once phylogeny has been controlled for, increases in extinction risk are independently associated with increases in body size and decreases in fecundity. We suggest that this is because low rates of fecundity, which evolved many tens of millions of years ago, predisposed certain lineages to extinction. Low-fecundity populations take longer to recover if they are reduced to small sizes and are, therefore, more likely to go extinct if an external force causes an increase in the rate of mortality, thereby perturbing the natural balance between fecundity and mortality.

  15. Variation of the 18O/16O ratio in water samples from branches

    International Nuclear Information System (INIS)

    Foerstel, H.; Huetzen, H.

    1979-06-01

    The studies of the water turnover of plants may use the labelling of water by its natural variation of the 18O/16O ratio. The basic value of such a study is the isotope ratio in soil water, which is represented by the 18O/16O ratio in water samples from stem and branches, too. During the water transport from the soil water reservoir to the leaves of trees, no fractionation of the oxygen isotopes occurs. The oxygen isotope ratio within a single twig varies about ±…‰ (variation given as standard deviation of the delta-values), within the stem of a large tree about ±2‰. The results of water from stems of different trees at the site of the Nuclear Research Center Juelich scatter about ±1‰. The delta-values from a larger area (Rur valley-Eifel hills-Mosel valley), which were collected in October 1978 during the end of the vegetation period, showed a standard deviation between ±2.2‰ (Rur valley) and ±3.6‰ (Eifel hills). The 18O/16O delta-values of a beech wood from the Juelich site are in the range of -7.3 to -10.1‰ (mean local precipitation 1974-1977: -7.4‰). At the hill site near Cologne (Bergisches Land, late September 1978) we observed an oxygen isotope ratio of -9.1‰ (groundwater in the neighbourhood between -7.6 and -8.7‰). In October 1978, in an area from the Netherlands to the Mosel valley, we found delta-values of branch water between -13.9‰ (lower Ruhr valley) and -13.1‰ (Eifel hills to Mosel valley), in comparison to groundwater samples from the same region: -7.55 and -8.39‰. There was no significant difference between delta-values from various species or locations within this area. Groundwater samples should normally represent the 18O/16O ratio of local precipitation. The low delta-values of branch water could be due to the rapid uptake of precipitation water of low 18O content in autumn into the water transport system of plants. (orig.) [de]

  16. Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.

    Science.gov (United States)

    Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael

    2014-10-01

    Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the estimated transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.

  17. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    Science.gov (United States)

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
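
    The consensus loop at the heart of RANSAC is compact enough to sketch for a simple linear model. This is a generic Python illustration; the paper's QSAR pipeline adds descriptor selection and applicability-domain handling on top of this. The idea: fit on minimal random subsets, count inliers within a tolerance, and keep the model with the largest consensus set.

        import numpy as np

        def ransac_line(x, y, n_iter=200, tol=1.0, seed=None):
            """Fit y = a*x + b robustly: sample minimal 2-point subsets,
            keep the model with the most inliers, then refit on them."""
            rng = np.random.default_rng(seed)
            best_inliers = np.zeros(len(x), dtype=bool)
            for _ in range(n_iter):
                i, j = rng.choice(len(x), size=2, replace=False)
                if x[i] == x[j]:
                    continue                      # degenerate subset
                a = (y[j] - y[i]) / (x[j] - x[i])
                b = y[i] - a * x[i]
                inliers = np.abs(y - (a * x + b)) < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers = inliers
            # final least-squares refit on the consensus set only
            a, b = np.polyfit(x[best_inliers], y[best_inliers], 1)
            return a, b, best_inliers

        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 100)
        y = 2.0 * x + 1.0 + rng.normal(0, 0.3, 100)
        y[::10] += 15.0                            # gross outliers
        print(ransac_line(x, y, seed=1)[:2])       # approx (2.0, 1.0)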

  18. Relationship between accuracy and number of samples on statistical quantity and contour map of environmental gamma-ray dose rate. Example of random sampling

    International Nuclear Information System (INIS)

    Matsuda, Hideharu; Minato, Susumu

    2002-01-01

    The accuracy of statistical quantities like the mean value and the contour map obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of 5 different model distribution maps made with the mean slope, -1.3, of the power spectra calculated from actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, with means ranging over 10-100 nGy/h and areas of 10^-3 to 10^7 km^2. The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent numbers), and the standard deviation had an accuracy of less than 1/4-1/3 of the mean. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent numbers), but for the contour map, 0.502-0.770. (K.H.)

  19. Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.

    Science.gov (United States)

    Shaffer, Patrick; Valsson, Omar; Parrinello, Michele

    2016-12-13

    In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are more coarse than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.

  20. Virtual sampling in variational processing of Monte Carlo simulation in a deep neutron penetration problem

    International Nuclear Information System (INIS)

    Allagi, Mabruk O.; Lewins, Jeffery D.

    1999-01-01

    In a further study of virtually processed Monte Carlo estimates in neutron transport, a shielding problem has been studied. The use of virtual sampling to estimate the importance function at a certain point in the phase space depends on the presence of neutrons from the real source at that point. But in deep penetration problems, not many neutrons will reach regions far away from the source. In order to overcome this problem, two suggestions are considered: (1) virtual sampling is used as far as the real neutrons can reach, and fictitious sampling is then introduced for the remaining regions, distributed over all of them; or (2) only one fictitious source is placed where the real neutrons almost terminate, and virtual sampling is then used in the same way as for the real source. Variational processing is again found to improve the Monte Carlo estimates, being best when using one fictitious source in the far regions with virtual sampling (option 2). When fictitious sources are used to estimate the importances in regions far away from the source, some optimization has to be performed for the proportion of fictitious to real sources, weighed against accuracy and computational costs. It has been found in this study that the optimum number of cells to be treated by fictitious sampling is problem dependent, but as a rule of thumb, fictitious sampling should be employed in regions where the number of neutrons from the real source falls below a specified limit for good statistics.

  1. Choosing a suitable sample size in descriptive sampling

    International Nuclear Information System (INIS)

    Lee, Yong Kyun; Choi, Dong Hoon; Cha, Kyung Joon

    2010-01-01

    Descriptive sampling (DS) is an alternative to crude Monte Carlo sampling (CMCS) in finding solutions to structural reliability problems. It is known to be an effective sampling method in approximating the distribution of a random variable because it uses the deterministic selection of sample values and their random permutation. However, because this method is difficult to apply to complex simulations, the sample size is occasionally determined without thorough consideration. Input sample variability may cause the sample size to change between runs, leading to poor simulation results. This paper proposes a numerical method for choosing a suitable sample size for use in DS. Using this method, one can estimate a more accurate probability of failure in a reliability problem while running a minimal number of simulations. The method is then applied to several examples and compared with CMCS and conventional DS to validate its usefulness and efficiency.
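
    The defining trick of descriptive sampling, as characterized above, is the deterministic selection of sample values combined with their random permutation. Below is a minimal sketch for a standard normal input; it illustrates the general idea, not the paper's procedure.

        import numpy as np
        from scipy.stats import norm

        def descriptive_sample(n, dist=norm, seed=None):
            """Deterministic midpoint quantiles of `dist`, randomly permuted.
            Matches the target distribution far more evenly than i.i.d. draws."""
            rng = np.random.default_rng(seed)
            u = (np.arange(n) + 0.5) / n      # stratified, deterministic u-values
            return rng.permutation(dist.ppf(u))

        n = 100
        ds = descriptive_sample(n, seed=0)
        mc = np.random.default_rng(0).normal(size=n)   # crude Monte Carlo
        print(f"DS  mean {ds.mean():+.4f}  std {ds.std():.4f}")
        print(f"CMC mean {mc.mean():+.4f}  std {mc.std():.4f}")

    The permutation step matters when the values drive a simulation over time: it preserves the marginal distribution fixed by the quantiles while restoring the randomness of the input ordering.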

  2. Coarse graining from variationally enhanced sampling applied to the Ginzburg-Landau model

    Science.gov (United States)

    Invernizzi, Michele; Valsson, Omar; Parrinello, Michele

    2017-03-01

    A powerful way to deal with a complex system is to build a coarse-grained model capable of catching its main physical features, while being computationally affordable. Inevitably, such coarse-grained models introduce a set of phenomenological parameters, which are often not easily deducible from the underlying atomistic system. We present a unique approach to the calculation of these parameters, based on the recently introduced variationally enhanced sampling method. It allows us to obtain the parameters from atomistic simulations, thus providing a direct connection between the microscopic and the mesoscopic scale. The coarse-grained model we consider is that of Ginzburg-Landau, valid around a second-order critical point. In particular, we use it to describe a Lennard-Jones fluid in the region close to the liquid-vapor critical point. The procedure is general and can be adapted to other coarse-grained models.

  3. Assessment of the effect of population and diary sampling methods on estimation of school-age children exposure to fine particles.

    Science.gov (United States)

    Che, W W; Frey, H Christopher; Lau, Alexis K H

    2014-12-01

    Population and diary sampling methods are employed in exposure models to sample simulated individuals and their daily activity on each simulation day. Different sampling methods may lead to variations in estimated human exposure. In this study, two population sampling methods (stratified-random and random-random) and three diary sampling methods (random resampling, diversity and autocorrelation, and Markov-chain cluster [MCC]) are evaluated. Their impacts on estimated children's exposure to ambient fine particulate matter (PM2.5) are quantified via case studies for children in Wake County, NC for July 2002. The estimated mean daily average exposure is 12.9 μg/m³ for simulated children using the stratified population sampling method, and 12.2 μg/m³ using the random sampling method. These minor differences are caused by the random sampling among ages within census tracts. Among the three diary sampling methods, there are differences in the estimated number of individuals with multiple days of exposures exceeding a benchmark of concern of 25 μg/m³ due to differences in how multiday longitudinal diaries are estimated. The MCC method is relatively more conservative. In the case studies evaluated here, the MCC method led to a 10% higher estimate of the number of individuals with repeated exposures exceeding the benchmark. The comparisons help to identify and contrast the capabilities of each method and to offer insight regarding the implications of method choice. Exposure simulation results are robust to the two population sampling methods evaluated, and are sensitive to the choice of method for simulating longitudinal diaries, particularly when analyzing results for specific microenvironments or for exposures exceeding a benchmark of concern. © 2014 Society for Risk Analysis.

  4. Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.

    Science.gov (United States)

    Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P

    2014-01-01

    Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. © 2013 Society for Risk Analysis.
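
    The Monte Carlo comparison of sampling strategies can be imitated on a synthetic field; all grid dimensions, concentrations and the 400 ppm action level below are illustrative, not the study's data. Averaging composite samples smooths away the hot spot, raising the type II error exactly as the abstract reports.

        import numpy as np

        rng = np.random.default_rng(0)
        field = rng.lognormal(mean=5.0, sigma=0.4, size=(30, 15))  # synthetic ppm map
        field[:6, :4] *= 4.0                                       # embedded hot spot
        THRESH = 400.0                                             # action level, ppm

        def p_miss_hotspot(n, composite, trials=5000):
            """Probability that a random design fails to flag the hot spot."""
            flat = field.ravel()
            miss = 0
            for _ in range(trials):
                vals = flat[rng.choice(flat.size, size=n, replace=False)]
                stat = vals.mean() if composite else vals.max()  # composite averages
                miss += stat < THRESH
            return miss / trials

        for n in (4, 9, 16):
            print(n, "discrete:", p_miss_hotspot(n, False),
                     "composite:", p_miss_hotspot(n, True))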

  5. Monte Carlo Finite Volume Element Methods for the Convection-Diffusion Equation with a Random Diffusion Coefficient

    Directory of Open Access Journals (Sweden)

    Qian Zhang

    2014-01-01

    Full Text Available The paper presents a framework for the construction of a Monte Carlo finite volume element method (MCFVEM) for the convection-diffusion equation with a random diffusion coefficient, which is described as a random field. We first approximate the continuous stochastic field by a finite number of random variables via the Karhunen-Loève expansion and transform the initial stochastic problem into a deterministic one with a parameter in high dimensions. Then we generate independent, identically distributed approximations of the solution by sampling the coefficient of the equation and employing the finite volume element variational formulation. Finally, the Monte Carlo (MC) method is used to compute the corresponding sample averages. Statistical error is estimated analytically and experimentally. A quasi-Monte Carlo (QMC) technique with Sobol sequences is also used to accelerate convergence, and experiments indicate that it can improve the efficiency of the Monte Carlo method.
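
    The final MC-versus-QMC step can be illustrated on a toy integrand standing in for a functional of the PDE solution; this is a sketch of the sampling comparison only, not the finite volume element solver. Scrambled Sobol points typically reduce the sampling error for smooth parameter dependence.

        import math
        import numpy as np
        from scipy.stats import qmc

        # Toy quantity of interest: a smooth function of d uniform parameters.
        d = 4
        f = lambda y: np.exp(-np.sum(y ** 2, axis=1))
        exact = (math.sqrt(math.pi) / 2 * math.erf(1.0)) ** d  # integral on [0,1]^d

        n = 2 ** 12  # Sobol sequences are balanced at powers of two
        mc_est = f(np.random.default_rng(0).random((n, d))).mean()
        qmc_est = f(qmc.Sobol(d, scramble=True, seed=0).random(n)).mean()

        print(f"exact {exact:.6f}")
        print(f"MC  error {abs(mc_est - exact):.2e}")
        print(f"QMC error {abs(qmc_est - exact):.2e}")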

  6. Quasi optimal and adaptive sparse grids with control variates for PDEs with random diffusion coefficient

    KAUST Repository

    Tamellini, Lorenzo

    2016-01-05

    In this talk we discuss possible strategies to minimize the impact of the curse of dimensionality when building sparse-grid approximations of a multivariate function u = u(y1, ..., yN). More precisely, we present a knapsack approach, in which we estimate the cost and the error-reduction contribution of each possible component of the sparse grid, and then choose the components with the highest error-reduction/cost ratio. The estimates of the error reduction are obtained either by a mixed a-priori/a-posteriori approach, in which we first derive a theoretical bound and then tune it with some inexpensive auxiliary computations (resulting in the so-called quasi-optimal sparse grids), or by a fully a-posteriori approach (obtaining the so-called adaptive sparse grids). This framework is very general and can be used to build quasi-optimal/adaptive sparse grids on bounded and unbounded domains (e.g. u depending on uniform and normal random distributions for yn), using both nested and non-nested families of univariate collocation points. We present some theoretical convergence results as well as numerical results showing the efficiency of the proposed approach for the approximation of the solution of elliptic PDEs with random diffusion coefficients. In this context, to treat the case of rough permeability fields, in which a sparse grid approach may not be suitable, we propose to use the sparse grids as a control variate in a Monte Carlo simulation.

  7. Methodology Series Module 5: Sampling Strategies.

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling - based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling - based on the researcher's choice, or on a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.
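
    The difference between a simple random sample and a stratified random sample, both mentioned above, is easy to make concrete. The sketch below uses made-up clinic strata; proportional allocation keeps the point estimate unbiased while sharply reducing its sampling variability when strata differ systematically.

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical population: the outcome differs systematically by clinic.
        strata = {"clinic_A": rng.normal(10, 2, 6000),
                  "clinic_B": rng.normal(20, 2, 3000),
                  "clinic_C": rng.normal(35, 2, 1000)}
        pop = np.concatenate(list(strata.values()))
        n = 100

        def srs_mean():
            # simple random sample: every unit has the same selection probability
            return rng.choice(pop, size=n, replace=False).mean()

        def stratified_mean():
            # proportional allocation: sample each clinic in proportion to its size
            parts = [rng.choice(v, size=round(n * len(v) / len(pop)), replace=False)
                     for v in strata.values()]
            return np.concatenate(parts).mean()

        reps = 2000
        srs = [srs_mean() for _ in range(reps)]
        strat = [stratified_mean() for _ in range(reps)]
        print(f"true mean {pop.mean():.2f}")
        print(f"SRS        mean {np.mean(srs):.2f}  SD {np.std(srs):.3f}")
        print(f"stratified mean {np.mean(strat):.2f}  SD {np.std(strat):.3f}")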

  8. Methodology series module 5: Sampling strategies

    Directory of Open Access Journals (Sweden)

    Maninder Singh Setia

    2016-01-01

    Full Text Available Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'Sampling Method'. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice, or on a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term 'random sample' when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the 'generalizability' of the results. In such a scenario, the researcher may want to use 'purposive sampling' for the study.

  9. Prediction of soil CO2 flux in sugarcane management systems using the Random Forest approach

    Directory of Open Access Journals (Sweden)

    Rose Luiza Moraes Tavares

    Full Text Available ABSTRACT: The Random Forest algorithm is a data mining technique used for classifying attributes in order of importance to explain the variation in an attribute-target, such as soil CO2 flux. This study aimed to identify predictors of soil CO2 flux in sugarcane management systems through the machine-learning algorithm called Random Forest. Two different sugarcane management areas in the state of São Paulo, Brazil, were selected: burned and green. In each area, we assembled a sampling grid with 81 georeferenced points to assess soil CO2 flux through an automated portable soil gas chamber with infrared measuring spectroscopy during the dry season of 2011 and the rainy season of 2012. In addition, we sampled the soil to evaluate physical, chemical, and microbiological attributes. For data interpretation, we used the Random Forest algorithm, based on a combination of decision trees (machine learning algorithms in which every tree depends on the values of a random vector sampled independently, with the same distribution, for all trees of the forest). The results indicated that clay content in the soil was the most important attribute for explaining the CO2 flux in the areas studied during the evaluated period. The use of the Random Forest algorithm produced a model with a good fit (R2 = 0.80) between predicted and observed values.
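
    The variable-ranking step described maps directly onto a standard Random Forest implementation. The sketch below uses scikit-learn with synthetic data and hypothetical attribute names, not the study's measurements; the printed R2 is in-sample, mirroring the goodness-of-fit reported above.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)
        n = 81  # one synthetic observation per grid point
        X = rng.normal(size=(n, 4))
        cols = ["clay", "soil_moisture", "microbial_C", "pH"]  # hypothetical names
        # Synthetic target: flux driven mostly by the first attribute
        co2_flux = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, n)

        rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, co2_flux)
        for name, imp in sorted(zip(cols, rf.feature_importances_),
                                key=lambda t: -t[1]):
            print(f"{name:14s} {imp:.3f}")
        print("R2 =", round(rf.score(X, co2_flux), 2))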

  10. Random within-herd variation in financial performance and time to financial steady-state following management changes in the dairy herd

    DEFF Research Database (Denmark)

    Kristensen, Erling Lundager; Østergaard, Søren; Krogh, Mogens Agerbo

    2008-01-01

    The manager of a dairy herd and the affiliated consultants constantly need to judge whether financial performance of the production system is satisfactory and whether financial performance relates to real (systematic) effects of changes in management. This is no easy task because the dairy herd is a very complex system. Thus, it is difficult to obtain empirical data that allows a valid estimation of the random (within-herd) variation in financial performance corrected for management changes. Thus, simulation seems to be the only option. This study suggests that much caution must be recommended...

  11. Methodology Series Module 5: Sampling Strategies

    Science.gov (United States)

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the ‘Sampling Method’. There are essentially two types of sampling methods: 1) probability sampling – based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling – based on the researcher's choice, or on a population that is accessible and available. Some of the non-probability sampling methods are: purposive sampling, convenience sampling, or quota sampling. A random sampling method (such as a simple random sample or a stratified random sample) is a form of probability sampling. It is important to understand the different sampling methods used in clinical studies and to mention the method clearly in the manuscript. The researcher should not misrepresent the sampling method in the manuscript (such as using the term ‘random sample’ when the researcher has used a convenience sample). The sampling method will depend on the research question. For instance, the researcher may want to understand an issue in greater detail for one particular population rather than worry about the ‘generalizability’ of the results. In such a scenario, the researcher may want to use ‘purposive sampling’ for the study. PMID:27688438

  12. Location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling

    Directory of Open Access Journals (Sweden)

    Alireza Goli

    2015-09-01

    Full Text Available Distribution and optimum allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or a flood takes place, it is necessary to deliver rescue efforts as quickly as possible. Therefore, it is important to find the optimum location and distribution of emergency relief resources. When a natural disaster occurs, it is not possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this study, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is implemented on randomly generated instances of different sizes. The preliminary results indicate that the proposed method is capable of reaching desirable solutions in a reasonable amount of time.

  13. Sampling Large Graphs for Anticipatory Analytics

    Science.gov (United States)

    2015-05-15

    Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices are selected and areas... ...systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.

  14. Nonlinearity and thresholds in dose-response relationships for carcinogenicity due to sampling variation, logarithmic dose scaling, or small differences in individual susceptibility

    International Nuclear Information System (INIS)

    Lutz, W.K.; Gaylor, D.W.; Conolly, R.B.; Lutz, R.W.

    2005-01-01

    Nonlinear and threshold-like shapes of dose-response curves are often observed in tests for carcinogenicity. Here, we present three examples where an apparent threshold is spurious and can be misleading for low-dose extrapolation and human cancer risk assessment. Case 1: For experiments that are not replicated, such as rodent bioassays for carcinogenicity, random variation can lead to misinterpretation of the result. This situation was simulated by 20 random binomial samplings of 50 animals per group, assuming a true linear dose response from 5% to 25% tumor incidence at arbitrary dose levels 0, 0.5, 1, 2, and 4. Linearity was suggested by only 8 of the 20 simulations. Four simulations did not reveal the carcinogenicity at all. Three exhibited thresholds, and two showed non-monotonic behavior, with a decrease at low dose followed by a significant increase at high dose ('hormesis'). Case 2: Logarithmic representation of the dose axis transforms a straight line into a sublinear (up-bent) curve, which can be misinterpreted to indicate a threshold. This is most pronounced if the dose scale includes a wide low-dose range. Linear regression of net tumor incidences and intersection with the dose axis results in an apparent threshold, even with an underlying true linear dose-incidence relationship. Case 3: Nonlinear shapes of dose-cancer incidence curves are rarely seen with epidemiological data in humans. The discrepancy with data in rodents may in part be explained by a wider span of individual susceptibilities for tumor induction in humans, due to a more diverse genetic background and modulation by co-carcinogenic lifestyle factors. Linear extrapolation of a human cancer risk could therefore be appropriate even if animal bioassays show nonlinearity.
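
    Case 1 is straightforward to reproduce in simulation; the sketch below mimics the setup described (it is not the authors' code). Binomial resampling of a truly linear 5-25% dose response frequently yields threshold-like or even non-monotonic patterns.

        import numpy as np

        rng = np.random.default_rng(42)
        doses = np.array([0, 0.5, 1, 2, 4])
        true_p = 0.05 + 0.05 * doses        # linear: 5% .. 25% tumor incidence
        n_animals = 50

        # Each row is one unreplicated "bioassay": 50 animals per dose group.
        for sim in range(5):
            tumors = rng.binomial(n_animals, true_p)
            print(f"sim {sim}: observed incidences {tumors / n_animals}")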

  15. Intraindividual variation in levels of serum testosterone and other reproductive and adrenal hormones in men.

    Science.gov (United States)

    Brambilla, Donald J; O'Donnell, Amy B; Matsumoto, Alvin M; McKinlay, John B

    2007-12-01

    Estimates of intraindividual variation in hormone levels provide the basis for interpreting hormone measurements clinically and for developing eligibility criteria for trials of hormone replacement therapy. However, reliable systematic estimates of such variation are lacking. To estimate intraindividual variation of serum total, free and bioavailable testosterone (T), dihydrotestosterone (DHT), SHBG, LH, dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulphate (DHEAS), oestrone, oestradiol and cortisol, and the contributions of biological and assay variation to the total. Paired blood samples were obtained 1-3 days apart at entry and again 3 months and 6 months later (maximum six samples per subject). Each sample consisted of a pool of equal aliquots of two blood draws 20 min apart. Men aged 30-79 years were randomly selected from the respondents to the Boston Area Community Health Survey, a study of the health of the general population of Boston, MA, USA. Analysis was based on 132 men, including 121 who completed all six visits, 8 who completed the first two visits and 3 who completed the first four visits. Day-to-day and 3-month (long-term) intraindividual standard deviations, after transforming measurements to logarithms to eliminate the contribution of hormone level to intraindividual variation. Biological variation generally accounted for more of total intraindividual variation than did assay variation. Day-to-day biological variation accounted for more of the total than did long-term biological variation. Short-term variability was greater in hormones with pulsatile secretion (e.g. LH) than those that exhibit less ultradian variation. Depending on the hormone, the intraindividual standard deviations imply that a clinician can expect to see a difference exceeding 18-28% about half the time when two measurements are made on a subject. The difference will exceed 27-54% about a quarter of the time. Given the level of intraindividual variability in hormone

  16. Coarse graining from variationally enhanced sampling applied to the Ginzburg–Landau model

    Science.gov (United States)

    Invernizzi, Michele; Valsson, Omar; Parrinello, Michele

    2017-01-01

    A powerful way to deal with a complex system is to build a coarse-grained model capable of catching its main physical features, while being computationally affordable. Inevitably, such coarse-grained models introduce a set of phenomenological parameters, which are often not easily deducible from the underlying atomistic system. We present a unique approach to the calculation of these parameters, based on the recently introduced variationally enhanced sampling method. It allows us to obtain the parameters from atomistic simulations, providing thus a direct connection between the microscopic and the mesoscopic scale. The coarse-grained model we consider is that of Ginzburg–Landau, valid around a second-order critical point. In particular, we use it to describe a Lennard–Jones fluid in the region close to the liquid–vapor critical point. The procedure is general and can be adapted to other coarse-grained models. PMID:28292890

  17. Assessment of fracture risk: value of random population-based samples--the Geelong Osteoporosis Study.

    Science.gov (United States)

    Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A

    2001-01-01

    Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
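
    The T-score underlying the analysis is simply the standardized position of a BMD measurement within a reference range. The sketch below uses purely illustrative reference means and SDs (not the Geelong data) to show how swapping reference samples can shift an individual's T-score by the better part of an SD.

        def t_score(bmd, ref_mean, ref_sd):
            """Position of an individual's BMD within a reference range, in SDs."""
            return (bmd - ref_mean) / ref_sd

        bmd = 0.85  # g/cm2, hypothetical lumbar spine measurement
        # Hypothetical young-adult reference ranges from different sampling schemes
        for label, mean, sd in [("population-based", 1.05, 0.10),
                                ("volunteers",       1.00, 0.12),
                                ("healthy subset",   1.03, 0.10)]:
            print(f"{label:17s} T = {t_score(bmd, mean, sd):+.2f}")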

  18. Variation detection and respondents’ feedback: the cause, effect, and solution of oil spills.

    Directory of Open Access Journals (Sweden)

    Ayodele Sunday Tologbonse

    2018-01-01

    Full Text Available Centred on occurrences of pipeline explosions and oil spills in a host community, a supervised classification technique of land use/land cover variation detection was carried out with Landsat imageries of three time intervals, to determine the percentage of variation between the time intervals. A random sampling of questionnaires was also carried out, dispatched to acquire respondents' feedback. It addressed the demographic and socio-economic composition of the sample population, the perception of the cause and the impact, and the effect of the oil spill, and finally considered possible solutions. The information was subjected to descriptive analysis and an F-test statistical analysis at a 95% confidence level. Reports showed that the land use/land cover classification had undergone a series of percentage variations within the time intervals considered, indicating 'remarks' of a rise or a decline. The measure of insecurity (of about 36.7%) is a prevailing element in the unceasing attacks on oil pipelines, and only a sustained security measure (of about 40.8%) will evidently pave a way out. The study therefore advocates community-based policing and a comprehensive technological sensor system for the monitoring of oil pipelines/facilities across the nation.

  19. Parameters, test criteria and fault assessment in random sampling of waste barrels from non-qualified processes

    International Nuclear Information System (INIS)

    Martens, B.R.

    1989-01-01

    In the context of random sampling tests, parameters are checked on the waste barrels, and the criteria on which these tests are based are given. Also, it is shown how faulty data on the properties of the waste, or faulty waste barrels, should be treated. To decide the extent of testing, the properties of the waste relevant to final storage are determined based on the conditioning process used. (DG) [de]

  20. SYSTEMATIC AND STOCHASTIC VARIATIONS IN PULSAR DISPERSION MEASURES

    International Nuclear Information System (INIS)

    Lam, M. T.; Cordes, J. M.; Chatterjee, S.; Jones, M. L.; McLaughlin, M. A.; Armstrong, J. W.

    2016-01-01

    We analyze deterministic and random temporal variations in the dispersion measure (DM) from the full three-dimensional velocities of pulsars with respect to the solar system, combined with electron-density variations over a wide range of length scales. Previous treatments have largely ignored pulsars’ changing distances while favoring interpretations involving changes in sky position from transverse motion. Linear trends in pulsar DMs observed over 5–10 year timescales may signify sizable DM gradients in the interstellar medium (ISM) sampled by the changing direction of the line of sight to the pulsar. We show that motions parallel to the line of sight can also account for linear trends, for the apparent excess of DM variance over that extrapolated from scintillation measurements, and for the apparent non-Kolmogorov scalings of DM structure functions inferred in some cases. Pulsar motions through atomic gas may produce bow-shock ionized gas that also contributes to DM variations. We discuss the possible causes of periodic or quasi-periodic changes in DM, including seasonal changes in the ionosphere, annual variations of the solar elongation angle, structure in the heliosphere and ISM boundary, and substructure in the ISM. We assess the solar cycle’s role on the amplitude of ionospheric and solar wind variations. Interstellar refraction can produce cyclic timing variations from the error in transforming arrival times to the solar system barycenter. We apply our methods to DM time series and DM gradient measurements in the literature and assess their consistency with a Kolmogorov medium. Finally, we discuss the implications of DM modeling in precision pulsar timing experiments.

  1. Lotka-Volterra system in a random environment

    Science.gov (United States)

    Dimentberg, Mikhail F.

    2002-03-01

    The classical Lotka-Volterra (LV) model for oscillatory behavior of population sizes of two interacting species (predator-prey or parasite-host pairs) is conservative. This may imply unrealistically high sensitivity of the system's behavior to environmental variations. Thus, a generalized LV model is considered, with the equation for the preys' reproduction containing the following additional terms: a quadratic "damping" term that accounts for interspecies competition, and a term with white-noise random variations of the preys' reproduction factor that simulates the environmental variations. An exact solution is obtained for the corresponding Fokker-Planck-Kolmogorov equation for the stationary probability densities (PDFs) of the population sizes. It shows that both population sizes are independent γ-distributed stationary random processes. An increasing level of environmental variations does not lead to extinction of the populations. However, it may lead to intermittent behavior, whereby one or both population sizes experience very rare and violent short pulses or outbreaks while remaining at a very low level most of the time. This intermittency is described analytically by direct use of the solutions for the PDFs, as well as by applying the theory of excursions of random functions and by predicting the PDF of peaks in the predators' population size.
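
    The generalized model described can be integrated numerically with an Euler-Maruyama scheme. The sketch below uses invented parameter values and is only meant to show the structure of the equations: a prey equation with quadratic damping and a white-noise reproduction factor, coupled to a standard predator equation.

        import numpy as np

        # Generalized LV model sketched from the abstract (parameters invented):
        # prey:     dx = x*(a - b*y - e*x)*dt + sigma*x*dW
        # predator: dy = y*(c*x - d)*dt
        a, b, c, d, e, sigma = 1.0, 0.5, 0.4, 0.6, 0.05, 0.4
        dt, n_steps = 1e-3, 200_000
        rng = np.random.default_rng(0)

        x, y = 1.0, 1.0
        xs = np.empty(n_steps)
        for t in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            x += x * (a - b * y - e * x) * dt + sigma * x * dW  # Euler-Maruyama step
            y += y * (c * x - d) * dt
            xs[t] = x

        tail = xs[n_steps // 2:]  # discard transient, keep (quasi-)stationary part
        print(f"prey mean {tail.mean():.3f}, CV {tail.std() / tail.mean():.2f}")

    Raising sigma in this sketch produces the intermittency the abstract describes: long stretches near zero punctuated by rare outbreaks, while the marginal statistics remain well defined.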

  2. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Full Text Available Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. A Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first one uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
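
    Coordination through permanent random numbers is simple to sketch for ordinary Poisson sampling, on top of which CP sampling adds the fixed-size condition. In the sketch below (illustrative inclusion probabilities), each unit keeps one uniform number across occasions, so the two samples overlap far more than independent draws would.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 1000
        prn = rng.random(N)                   # one permanent random number per unit

        # Illustrative inclusion probabilities on two sampling occasions
        pi1 = rng.uniform(0.05, 0.30, N)
        pi2 = np.clip(pi1 * rng.uniform(0.8, 1.2, N), 0.0, 1.0)

        s1 = prn < pi1                        # Poisson sample, occasion 1
        s2 = prn < pi2                        # same PRNs -> positive coordination
        s2_ind = rng.random(N) < pi2          # fresh randomness -> no coordination

        print(f"theoretical overlap, coordinated: {np.minimum(pi1, pi2).sum():.1f}")
        print(f"theoretical overlap, independent: {(pi1 * pi2).sum():.1f}")
        print(f"realized overlap, coordinated:    {int((s1 & s2).sum())}")
        print(f"realized overlap, independent:    {int((s1 & s2_ind).sum())}")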

  3. Variation in elemental quantification by X-ray fluorescence analysis in crystalline materials when applying pressure in sample preparation

    International Nuclear Information System (INIS)

    Macias B, L.R.; Garcia C, R.M.; De Ita de la Torre, A.; Chavez R, A.

    2000-01-01

    In this work, making use of X-ray diffraction and fluorescence techniques, the presence of elements in a known compound, ZrSiO4, was determined under different pressure conditions. In preparing the samples, different pressures from 1600 down to 350 kN/m2 were applied, and apparent variations in the concentration of the Zr and Si elements were detected. (Author)

  4. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Science.gov (United States)

    Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick

    2015-01-01

    Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and the swab. A total of 130 women collected two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
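
    The agreement measure used throughout is the kappa statistic; for paired binary test results it takes only a few lines. This is a generic sketch with toy data, not the study's records: observed agreement is corrected for the agreement expected by chance from the two tests' marginal positivity rates.

        def cohen_kappa(a, b):
            """Cohen's kappa for two paired binary test results (0/1 lists)."""
            n = len(a)
            po = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
            pa1, pb1 = sum(a) / n, sum(b) / n
            pe = pa1 * pb1 + (1 - pa1) * (1 - pb1)          # chance agreement
            return (po - pe) / (1 - pe)

        # Toy example: two HPV tests agreeing on 8 of 10 specimens
        a = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
        b = [1, 0, 1, 0, 0, 1, 1, 1, 1, 0]
        print(round(cohen_kappa(a, b), 2))  # -> 0.58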

  5. Investigating the Randomness of Numbers

    Science.gov (United States)

    Pendleton, Kenn L.

    2009-01-01

    The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…

  6. Coherence characteristics of random lasing in a dye doped hybrid powder

    Energy Technology Data Exchange (ETDEWEB)

    García-Revilla, S. [Departamento de Física Aplicada I, Escuela Superior de Ingeniería, Universidad del País Vasco UPV/EHU, Alda. Urquijo s/n, 48013, Bilbao (Spain); Material Physics Center CSIC-UPV/EHU and Donostia International Physics Center, 20018, San Sebastián (Spain); Fernández, J., E-mail: wupferoj@bi.ehu.es [Departamento de Física Aplicada I, Escuela Superior de Ingeniería, Universidad del País Vasco UPV/EHU, Alda. Urquijo s/n, 48013, Bilbao (Spain); Material Physics Center CSIC-UPV/EHU and Donostia International Physics Center, 20018, San Sebastián (Spain); Barredo-Zuriarrain, M. [Departamento de Física Aplicada I, Escuela Superior de Ingeniería, Universidad del País Vasco UPV/EHU, Alda. Urquijo s/n, 48013, Bilbao (Spain); Pecoraro, E. [Instituto de Telecomunicações, University of Aveiro, 3810-193, Aveiro (Portugal); Institute of Chemisty, São Paulo State University–UNESP, 14800-900, Araraquara (Brazil); Arriandiaga, M.A. [Departamento de Física Aplicada II, Facultad de Ciencia y Tecnología, Universidad del País Vasco UPV/EHU, Apartado 644, Bilbao (Spain); Iparraguirre, I.; Azkargorta, J. [Departamento de Física Aplicada I, Escuela Superior de Ingeniería, Universidad del País Vasco UPV/EHU, Alda. Urquijo s/n, 48013, Bilbao (Spain); and others

    2016-01-15

    The photon statistics of the random laser emission of a Rhodamine B doped di-ureasil hybrid powder are investigated to evaluate its degree of coherence above threshold. Although the random laser emission is a weighted average of spatially uncorrelated radiation emitted at different positions in the sample, spatial coherence control was achieved thanks to an improved detection configuration based on spatial filtering. By using this experimental approach, which also allows for fine mode discrimination and time-resolved analysis of modes uncoupled from mode competition, an area no larger than the expected coherence size of the random laser is probed. Once the spectral and temporal behavior of non-overlapping modes is characterized, an assessment of the photon-number probability distribution and the resulting second-order correlation coefficient as a function of time delay and wavelength was performed. The outcome of our single photon counting measurements revealed a high degree of temporal coherence at the time of maximum pump intensity and at wavelengths around the Rhodamine B gain maximum. - Highlights: • The photon statistics of a diffusive random laser is explored. • The laser sample is a RhB doped di-ureasil hybrid powder. • The detection configuration allows for mode discrimination and time-resolved analysis. • The time and wavelength variation of the temporal coherence is examined. • A high degree of temporal coherence is found.

  7. Sample collections from healthy volunteers for biological variation estimates' update: a new project undertaken by the Working Group on Biological Variation established by the European Federation of Clinical Chemistry and Laboratory Medicine.

    Science.gov (United States)

    Carobene, Anna; Strollo, Marta; Jonker, Niels; Barla, Gerhard; Bartlett, William A; Sandberg, Sverre; Sylte, Marit Sverresdotter; Røraas, Thomas; Sølvik, Una Ørvim; Fernandez-Calle, Pilar; Díaz-Garzón, Jorge; Tosato, Francesca; Plebani, Mario; Coşkun, Abdurrahman; Serteser, Mustafa; Unsal, Ibrahim; Ceriotti, Ferruccio

    2016-10-01

    Biological variation (BV) data have many fundamental applications in laboratory medicine. At the 1st Strategic Conference of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM), the reliability and limitations of current BV data were discussed. The EFLM Working Group on Biological Variation is working to increase the quality of BV data by developing a European project to establish a biobank of samples from healthy subjects to be used to produce high-quality BV data. The project involved six European laboratories (Milan, Italy; Bergen, Norway; Madrid, Spain; Padua, Italy; Istanbul, Turkey; Assen, The Netherlands). Blood samples were collected from 97 volunteers (44 men, aged 20-60 years; 43 women, aged 20-50 years; 10 women, aged 55-69 years). Initial subject inclusion required that participants completed an enrolment questionnaire to verify their health status. The volunteers provided blood specimens once per week for 10 weeks. At each sampling, a short questionnaire was completed and some laboratory tests were performed; each sample consisted of blood collected under controlled conditions to provide serum, K2EDTA-plasma and citrated-plasma samples. Samples from six of the 97 enrolled subjects were discarded as a consequence of abnormal laboratory measurements. A biobank of 18,000 aliquots was established, consisting of 120 aliquots of serum, 40 of EDTA-plasma, and 40 of citrated-plasma from each subject. The samples were stored at -80 °C. A biobank of well-characterised samples collected under controlled conditions has been established, delivering a European resource to enable production of contemporary BV data.

  8. ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES

    Energy Technology Data Exchange (ETDEWEB)

    Dolphin, Andrew E., E-mail: adolphin@raytheon.com [Raytheon Company, Tucson, AZ, 85734 (United States)

    2013-09-20

    The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail.

  9. ON THE ESTIMATION OF RANDOM UNCERTAINTIES OF STAR FORMATION HISTORIES

    International Nuclear Information System (INIS)

    Dolphin, Andrew E.

    2013-01-01

    The standard technique for measurement of random uncertainties of star formation histories (SFHs) is the bootstrap Monte Carlo, in which the color-magnitude diagram (CMD) is repeatedly resampled. The variation in SFHs measured from the resampled CMDs is assumed to represent the random uncertainty in the SFH measured from the original data. However, this technique systematically and significantly underestimates the uncertainties for times in which the measured star formation rate is low or zero, leading to overly (and incorrectly) high confidence in that measurement. This study proposes an alternative technique, the Markov Chain Monte Carlo (MCMC), which samples the probability distribution of the parameters used in the original solution to directly estimate confidence intervals. While the most commonly used MCMC algorithms are incapable of adequately sampling a probability distribution that can involve thousands of highly correlated dimensions, the Hybrid Monte Carlo algorithm is shown to be extremely effective and efficient for this particular task. Several implementation details, such as the handling of implicit priors created by parameterization of the SFH, are discussed in detail.

  10. Characteristics of men with substance use disorder consequent to illicit drug use: comparison of a random sample and volunteers.

    Science.gov (United States)

    Reynolds, Maureen D; Tarter, Ralph E; Kirisci, Levent

    2004-09-06

    Men qualifying for substance use disorder (SUD) consequent to consumption of an illicit drug were compared according to recruitment method. It was hypothesized that volunteers would be more self-disclosing and exhibit more severe disturbances compared to randomly recruited subjects. Personal, demographic, family, social, substance use, psychiatric, and SUD characteristics of volunteers (N = 146) were compared to randomly recruited (N = 102) subjects. Volunteers had lower socioeconomic status, were more likely to be African American, and had lower IQ than randomly recruited subjects. Volunteers also evidenced greater social and family maladjustment and more frequently had received treatment for substance abuse. In addition, lower social desirability response bias was observed in the volunteers. SUD was not more severe in the volunteers; however, they reported a higher lifetime rate of opiate, diet, depressant, and analgesic drug use. Volunteers and randomly recruited subjects qualifying for SUD consequent to illicit drug use are similar in SUD severity but differ in terms of severity of psychosocial disturbance and history of drug involvement. The factors discriminating volunteers and randomly recruited subjects are well known to impact on outcome; hence, they need to be considered in research design, especially when selecting a sampling strategy in treatment research.

  11. Space Use Variation in Co-Occurring Sister Species: Response to Environmental Variation or Competition?

    Science.gov (United States)

    Dufour, Claire M. S.; Meynard, Christine; Watson, Johan; Rioux, Camille; Benhamou, Simon; Perez, Julie; du Plessis, Jurie J.; Avenant, Nico; Pillay, Neville; Ganem, Guila

    2015-01-01

    Coexistence often involves niche differentiation either as the result of environmental divergence, or in response to competition. Disentangling the causes of such divergence requires that environmental variation across space is taken into account, which is rarely done in empirical studies. We address the role of environmental variation versus competition in coexistence between two rodent species: Rhabdomys bechuanae (bechuanae) and Rhabdomys dilectus dilectus (dilectus) comparing their habitat preference and home range (HR) size in areas with similar climates, where their distributions abut (allopatry) or overlap (sympatry). Using Outlying Mean Index analyses, we test whether habitat characteristics of the species deviate significantly from a random sample of available habitats. In allopatry, results suggest habitat selection: dilectus preferred grasslands with little bare soil, while bechuanae occurred in open shrublands. In sympatry, shrubland-type habitats dominate and differences are less marked, yet dilectus selects habitats with more cover than bechuanae. Interestingly, bechuanae shows larger HRs than dilectus, and both species display larger HRs in sympatry. Further, HR overlaps between species are lower than expected. We discuss our results in light of data on the phylogeography of the genus and propose that evolution in allopatry resulted in adaptation leading to different habitat preferences, even at their distribution margins, a divergence expected to facilitate coexistence. However, since sympatry occurs in sites where environmental characteristics do not allow complete species separation, competition may explain reduced inter-species overlap and character displacement in HR size. This study reveals that both environmental variation and competition may shape species coexistence. PMID:25693176

  12. Mental health in American colleges and universities: variation across student subgroups and across campuses.

    Science.gov (United States)

    Eisenberg, Daniel; Hunt, Justin; Speer, Nicole

    2013-01-01

    We estimated the prevalence and correlates of mental health problems among college students in the United States. In 2007 and 2009, we administered online surveys with brief mental health screens to random samples of students at 26 campuses nationwide. We used sample probability weights to adjust for survey nonresponse. A total of 14,175 students completed the survey, corresponding to a 44% participation rate. The prevalence of positive screens was 17.3% for depression, 4.1% for panic disorder, 7.0% for generalized anxiety, 6.3% for suicidal ideation, and 15.3% for nonsuicidal self-injury. Mental health problems were significantly associated with sex, race/ethnicity, religiosity, relationship status, living on campus, and financial situation. The prevalence of conditions varied substantially across the campuses, although campus-level variation was still a small proportion of overall variation in student mental health. The findings offer a starting point for identifying individual and contextual factors that may be useful to target in intervention strategies.

  13. Fimbrial phase variation

    DEFF Research Database (Denmark)

    Khandige, Surabhi; Møller-Jensen, Jakob

    2016-01-01

    Surface fimbriae of pathogenic Escherichia coli facilitate sensing, adhesion and even invasion of host epithelial cells. While it is known that the pathogen has the potential to express a plethora of fimbrial variants susceptible to rapid phase ON/OFF variation, it is an open question if the fimbrial diversity seen at the population level is the product of random stochasticity or a concerted effort based on active communication. Here we discuss the possibility of a mechanism alternative to a stochastic fimbrial phase variation model affecting the dynamics of a heterogeneous population.

  14. Comparison of Batch Assay and Random Assay Using Automatic Dispenser in Radioimmunoassay

    Energy Technology Data Exchange (ETDEWEB)

    Moon, Seung Hwan; Jang, Su Jin; Kang, Ji Yeon; Lee, Dong Soo; Chung, June Key; Lee, Myung Chul [Seoul Metropolitan Government Seoul National University Boramae Medical Center, Seoul (Korea, Republic of); Lee, Ho Young; Shin, Sun Young; Min, Gyeong Sun; Lee, Hyun Joo [Seoul National University college of Medicine, Seoul (Korea, Republic of)

    2009-08-15

    Radioimmunoassay (RIA) is usually performed by batch assay. To improve the efficiency of RIA without increasing cost and time, random assay could be an alternative. We investigated the possibility of random assay using an automatic dispenser by assessing the agreement between batch assay and random assay. The experiments were performed with four items: triiodothyronine (T3), free thyroxine (fT4), prostate-specific antigen (PSA), and carcinoembryonic antigen (CEA). For each item, the sera of twenty patients, the standard, and the control samples were used. The measurements were done 4 times at 3-hour intervals by random assay and batch assay. The coefficients of variation (CV) of the standard samples and patients' data for T3, fT4, PSA, and CEA were assessed. The ICC (intraclass correlation coefficient) and the coefficient of correlation were calculated to assess the agreement between the two methods. The CVs (%) of T3, fT4, PSA, and CEA measured by batch assay were 3.2±1.7%, 3.9±2.1%, 7.1±6.2%, and 11.2±7.2%; the CVs by random assay were 2.1±1.7%, 4.8±3.1%, 3.6±4.8%, and 7.4±6.2%. The ICCs between the batch assay and random assay were 0.9968 (T3), 0.9973 (fT4), 0.9996 (PSA), and 0.9901 (CEA). The coefficients of correlation between the batch assay and random assay were 0.9924 (T3), 0.9974 (fT4), 0.9994 (PSA), and 0.9989 (CEA) (p<0.05). The results of random assay showed strong agreement with the batch assay in a day. These results suggest that random assay using an automatic dispenser could be used in radioimmunoassay.

  15. Comparison of Batch Assay and Random Assay Using Automatic Dispenser in Radioimmunoassay

    International Nuclear Information System (INIS)

    Moon, Seung Hwan; Jang, Su Jin; Kang, Ji Yeon; Lee, Dong Soo; Chung, June Key; Lee, Myung Chul; Lee, Ho Young; Shin, Sun Young; Min, Gyeong Sun; Lee, Hyun Joo

    2009-01-01

    Radioimmunoassay (RIA) is usually performed by batch assay. To improve the efficiency of RIA without increasing cost and time, random assay could be an alternative. We investigated the possibility of random assay using an automatic dispenser by assessing the agreement between batch assay and random assay. The experiments were performed with four items: triiodothyronine (T3), free thyroxine (fT4), prostate-specific antigen (PSA), and carcinoembryonic antigen (CEA). For each item, the sera of twenty patients, the standard, and the control samples were used. The measurements were done 4 times at 3-hour intervals by random assay and batch assay. The coefficients of variation (CV) of the standard samples and patients' data for T3, fT4, PSA, and CEA were assessed. The ICC (intraclass correlation coefficient) and the coefficient of correlation were calculated to assess the agreement between the two methods. The CVs (%) of T3, fT4, PSA, and CEA measured by batch assay were 3.2±1.7%, 3.9±2.1%, 7.1±6.2%, and 11.2±7.2%; the CVs by random assay were 2.1±1.7%, 4.8±3.1%, 3.6±4.8%, and 7.4±6.2%. The ICCs between the batch assay and random assay were 0.9968 (T3), 0.9973 (fT4), 0.9996 (PSA), and 0.9901 (CEA). The coefficients of correlation between the batch assay and random assay were 0.9924 (T3), 0.9974 (fT4), 0.9994 (PSA), and 0.9989 (CEA) (p<0.05). The results of random assay showed strong agreement with the batch assay in a day. These results suggest that random assay using an automatic dispenser could be used in radioimmunoassay.
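
    The two agreement statistics used in this record are straightforward to compute directly. A minimal sketch follows; the function names and all numbers are invented for illustration, and a one-way ICC(1,1) is shown since the record does not state which ICC form was used.

        import numpy as np

        def cv_percent(x):
            """Coefficient of variation of repeated measurements, in percent."""
            x = np.asarray(x, dtype=float)
            return 100.0 * x.std(ddof=1) / x.mean()

        def icc_oneway(data):
            """One-way random-effects ICC(1,1) for an (n_subjects, k_methods) array."""
            data = np.asarray(data, dtype=float)
            n, k = data.shape
            msb = k * np.sum((data.mean(axis=1) - data.mean()) ** 2) / (n - 1)
            msw = np.sum((data - data.mean(axis=1, keepdims=True)) ** 2) / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        replicates = [2.05, 2.11, 2.00, 2.08]        # 4 repeated T3 results for one control
        print(f"CV = {cv_percent(replicates):.1f}%")

        rng = np.random.default_rng(1)
        truth = rng.uniform(0.8, 3.0, size=20)       # "true" T3 for 20 patients, ng/mL
        batch = truth + rng.normal(0, 0.05, size=20)
        random_assay = truth + rng.normal(0, 0.05, size=20)
        print(f"ICC = {icc_oneway(np.column_stack([batch, random_assay])):.4f}")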

  16. Random assay in radioimmunoassay: Feasibility and application compared with batch assay

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jung Min; Lee, Hwan Hee; Park, Sohyun; Kim, Tae Sung; Kim, Seok Ki [Dept. of Nuclear MedicineNational Cancer Center, Goyang (Korea, Republic of)

    2016-12-15

    The batch assay has been conventionally used for radioimmunoassay (RIA) because of its technical robustness and practical convenience. However, it has limitations in terms of the relative lag of report time due to the necessity of multiple assays in a small number of samples compared with the random assay technique. In this study, we aimed to verify whether the random assay technique can be applied in RIA and is feasible in daily practice. The coefficients of variation (CVs) of eight standard curves within a single kit were calculated in a CA-125 immunoradiometric assay (IRMA) for the reference of the practically ideal CV of the CA-125 kit. Ten standard curves of 10 kits from 2 prospectively collected lots (pLot) and 85 standard curves of 85 kits from 3 retrospectively collected lots (Lot) were obtained. Additionally, the raw measurement data of both 170 control references and 1123 patients' sera were collected retrospectively between December 2015 and January 2016. A standard curve of the first kit of each lot was used as a master standard curve for a random assay. The inter-kit CVs were analyzed for each lot. All raw measurements were normalized by decay and radioactivity. The CA-125 values from control samples and patients' sera were compared using the original batch assay and random assay. In standard curve analysis, the inter-kit CVs in pLots and Lots were comparable to those within a single kit. The CVs from the random assay with normalization were similar to those from the batch assay in the control samples (CVs % of low/high concentration: Lot1 2.71/1.91, Lot2 2.35/1.83, Lot3 2.83/2.08 vs. Lot1 2.05/1.21, Lot2 1.66/1.48, Lot3 2.41/2.14). The ICCs between the batch assay and random assay using patients' sera were satisfactory (Lot1 1.00, Lot2 0.999, Lot3 1.00). The random assay technique could be successfully applied to the conventional CA-125 IRMA kits. The random assay showed strong agreement with the batch assay.
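
    The normalization "by decay and radioactivity" mentioned above can be illustrated for an 125I-based IRMA kit. This is a sketch only: the half-life constant, function name, and counts are illustrative assumptions, not values from the study.

        import math

        I125_HALF_LIFE_DAYS = 59.4   # approximate half-life of 125I (assumed tracer)

        def decay_normalize(counts, days_since_reference):
            """Scale measured counts back to the reference date, assuming pure 125I decay."""
            return counts * 2.0 ** (days_since_reference / I125_HALF_LIFE_DAYS)

        # A kit standard counted 10 days after the master standard curve was acquired:
        print(decay_normalize(12000, 10))   # ~13,500 counts expressed at the reference date

    Normalizing all raw counts to a common reference date is what makes a standard curve measured on one day usable for samples assayed on another, which is the essence of the random assay scheme.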

  17. Sample Size Calculation: Inaccurate A Priori Assumptions for Nuisance Parameters Can Greatly Affect the Power of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Elsa Tavernier

    Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered, while others were overpowered (i.e., the real power exceeded 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
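
    The retro-fitting step can be made concrete for a continuous outcome using the usual normal-approximation formulas. The sketch below uses only the standard library; the effect size and standard deviations are made-up numbers.

        from math import ceil, sqrt
        from statistics import NormalDist

        Z = NormalDist().inv_cdf     # standard normal quantile
        PHI = NormalDist().cdf       # standard normal CDF

        def n_per_group(delta, sd, alpha=0.05, power=0.80):
            """Normal-approximation sample size per arm for a two-arm trial."""
            return ceil(2 * ((Z(1 - alpha / 2) + Z(power)) * sd / delta) ** 2)

        def real_power(n, delta, sd_true, alpha=0.05):
            """Power actually achieved when the true SD differs from the assumed one."""
            return PHI(delta / (sd_true * sqrt(2 / n)) - Z(1 - alpha / 2))

        n = n_per_group(delta=5.0, sd=10.0)          # planned with an assumed SD of 10
        print(n, real_power(n, 5.0, sd_true=12.0))   # a 20% SD underestimate erodes power to ~65%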

  18. MUP, CEC-DES, STRADE. Codes for uncertainty propagation, experimental design and stratified random sampling techniques

    International Nuclear Information System (INIS)

    Amendola, A.; Astolfi, M.; Lisanti, B.

    1983-01-01

    The report describes how to use the codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems.
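
    STRADE's underlying technique, Latin Hypercube Sampling, can be sketched in a few lines. This illustrates the method only, not the report's actual code; the function name and sizes are mine.

        import numpy as np

        def latin_hypercube(n_samples, n_dims, rng=None):
            """Latin Hypercube Sample on the unit hypercube: each of the n_samples
            equal-probability strata of every dimension is hit exactly once."""
            rng = np.random.default_rng(rng)
            # one random point inside each stratum, then shuffle strata per dimension
            u = (np.arange(n_samples)[:, None] + rng.random((n_samples, n_dims))) / n_samples
            for d in range(n_dims):
                rng.shuffle(u[:, d])
            return u

        design = latin_hypercube(10, 3, rng=42)   # 10 runs over 3 uncertain inputs

    Compared with simple random sampling, the stratification guarantees that every marginal distribution is covered evenly, which is why the technique is popular for uncertainty propagation with few model runs.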

  19. Learning-based stochastic object models for characterizing anatomical variations

    Science.gov (United States)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
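
    A sketch of the generation step, under the simplifying assumption that the learned geometric attribute distributions reduce to principal-component statistics; the training matrix, mode count, and the ±2 SD constraint are invented for illustration and are not the paper's actual model.

        import numpy as np

        rng = np.random.default_rng(0)
        # Hypothetical training set: each row stacks one patient's geometric
        # attributes (e.g., organ centroid offsets and shape coefficients).
        training = rng.normal(size=(40, 12))

        mean = training.mean(axis=0)
        u, s, vt = np.linalg.svd(training - mean, full_matrices=False)
        eigvals = s ** 2 / (len(training) - 1)       # principal attribute variations

        def sample_phantom(n_modes=5, clip=2.0):
            """Draw a new stochastic object: mean attributes plus constrained
            random weights on the leading principal modes of variation."""
            b = np.clip(rng.standard_normal(n_modes), -clip, clip)
            return mean + (b * np.sqrt(eigvals[:n_modes])) @ vt[:n_modes]

        phantom = sample_phantom()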

  20. Spatial variation and prediction of forest biomass in a heterogeneous landscape

    Institute of Scientific and Technical Information of China (English)

    S.Lamsal; D.M.Rizzo; R.K.Meentemeyer

    2012-01-01

    Large-area assessments of forest biomass distribution are a challenge in heterogeneous landscapes, where variations in tree growth and species composition occur over short distances. In this study, we use statistical and geospatial modeling on densely sampled forest biomass data to analyze the relative importance of ecological and physiographic variables as determinants of spatial variation of forest biomass in the environmentally heterogeneous region of the Big Sur, California. We estimated biomass in 280 forest plots (one plot per 2.85 km2) and measured an array of ecological (vegetation community type, distance to edge, amount of surrounding non-forest vegetation, soil properties, fire history) and physiographic drivers (elevation, potential soil moisture and solar radiation, proximity to the coast) of tree growth at each plot location. Our geostatistical analyses revealed that biomass distribution is spatially structured and autocorrelated up to 3.1 km. Regression tree (RT) models showed that both physiographic and ecological factors influenced biomass distribution. Across randomly selected sample densities (sample size 112 to 280), ecological effects of vegetation community type and distance to forest edge, and physiographic effects of elevation, potential soil moisture and solar radiation were the most consistent predictors of biomass. Topographic moisture index and potential solar radiation had a positive effect on biomass, indicating the importance of topographically mediated energy and moisture on plant growth and biomass accumulation. The RT model explained 35% of the variation in biomass, and spatially autocorrelated variation was retained in regression residuals. A regression kriging model, developed from the RT model combined with kriging of regression residuals, was used to map biomass across the Big Sur. This study demonstrates how statistical and geospatial modeling can be used to discriminate the relative importance of physiographic and ecological effects on forest biomass and develop spatially explicit biomass maps.

  1. Environmental and geographic variables are effective surrogates for genetic variation in conservation planning.

    Science.gov (United States)

    Hanson, Jeffrey O; Rhodes, Jonathan R; Riginos, Cynthia; Fuller, Richard A

    2017-11-28

    Protected areas buffer species from anthropogenic threats and provide places for the processes that generate and maintain biodiversity to continue. However, genetic variation, the raw material for evolution, is difficult to capture in conservation planning, not least because genetic data require considerable resources to obtain and analyze. Here we show that freely available environmental and geographic distance variables can be highly effective surrogates in conservation planning for representing adaptive and neutral intraspecific genetic variation. We obtained occurrence and genetic data from the IntraBioDiv project for 27 plant species collected over the European Alps using a gridded sampling scheme. For each species, we identified loci that were potentially under selection using outlier loci methods, and mapped their main gradients of adaptive and neutral genetic variation across the grid cells. We then used the cells as planning units to prioritize protected area acquisitions. First, we verified that the spatial patterns of environmental and geographic variation were correlated, respectively, with adaptive and neutral genetic variation. Second, we showed that these surrogates can predict the proportion of genetic variation secured in randomly generated solutions. Finally, we discovered that solutions based only on surrogate information secured substantial amounts of adaptive and neutral genetic variation. Our work paves the way for widespread integration of surrogates for genetic variation into conservation planning.

  2. Diurnal Variation and Spatial Distribution Effects on Sulfur Speciation in Aerosol Samples as Assessed by X-Ray Absorption Near-Edge Structure (XANES)

    Directory of Open Access Journals (Sweden)

    Siwatt Pongpiachan

    2012-01-01

    Full Text Available This paper focuses on providing new results relating to the impacts of Diurnal variation, Vertical distribution, and Emission source on the sulfur K-edge XANES spectrum of aerosol samples. All aerosol samples used in the diurnal variation experiment were preserved using anoxic preservation stainless cylinders (APSCs) and pressure-controlled glove boxes (PCGBs), which were specially designed to prevent oxidation of the sulfur states in PM10. Further investigation of sulfur K-edge XANES spectra revealed that PM10 samples were dominated by S(VI), even when preserved in anoxic conditions. The “Emission source effect” on the sulfur oxidation state of PM10 was examined by comparing sulfur K-edge XANES spectra collected from various emission sources in southern Thailand, while “Vertical distribution effects” on the sulfur oxidation state of PM10 were investigated using samples collected from three different altitudes from rooftops of the highest buildings in three major cities in Thailand. The analytical results have demonstrated that neither “Emission source” nor “Vertical distribution” appreciably contribute to the characteristic fingerprint of the sulfur K-edge XANES spectrum in PM10.

  3. Significant performance variation among PCR systems in diagnosing congenital toxoplasmosis in São Paulo, Brazil: analysis of 467 amniotic fluid samples

    Directory of Open Access Journals (Sweden)

    Thelma Suely Okay

    2009-03-01

    Full Text Available INTRODUCTION: Performance variation among PCR systems in detecting Toxoplasma gondii has been extensively reported and associated with target genes, primer composition, amplification parameters, treatment during pregnancy, host genetic susceptibility and genotypes of different parasites according to geographical characteristics. PATIENTS: A total of 467 amniotic fluid samples from T. gondii IgM- and IgG-positive Brazilian pregnant women being treated for 1 to 6 weeks at the time of amniocentesis (gestational ages of 14 to 25 weeks). METHODS: One nested-B1-PCR and three one-round amplification systems targeted to rDNA, AF146527 and the B1 gene were employed. RESULTS: Of the 467 samples, 189 (40.47%) were positive for one-round amplifications: 120 (63.49%) for the B1 gene, 24 (12.69%) for AF146527, 45 (23.80%) for both AF146527 and the B1 gene, and none for rDNA. Fifty previously negative one-round PCR samples were chosen by computer-assisted randomization analysis and re-tested (nested-B1-PCR), during which nine additional cases were detected (9/50, or 18%). DISCUSSION: The B1 gene PCR was far more sensitive than the AF146527 PCR, and the rDNA PCR was the least effective even though the rDNA had the most repetitive sequence. Considering that the four amplification systems were equally affected by treatment, that the amplification conditions were optimized for the target genes and that most of the primers have already been reported, it is plausible that the striking differences found among PCR performances could be associated with genetic diversity in patients and/or with different Toxoplasma gondii genotypes occurring in Brazil. CONCLUSION: The use of PCR for the diagnosis of fetal Toxoplasma infections in Brazil should be targeted to the B1 gene when only one gene can be amplified, preferably by nested amplification with primers B22/B23.

  4. Annual variation in polychlorinated biphenyl (PCB) exposure in tree swallow (Tachycineta bicolor) eggs and nestlings at Great Lakes Restoration Initiative (GLRI) study sites

    Science.gov (United States)

    Custer, Christine M.; Custer, Thomas W.; Dummer, Paul; Goldberg, Diana R.; Franson, J. Christian

    2018-01-01

    Tree swallow (Tachycineta bicolor) eggs and nestlings were collected from 16 sites across the Great Lakes to quantify normal annual variation in total polychlorinated biphenyl (PCB) exposure and to validate the sample size choice in earlier work. A sample size of five eggs or five nestlings per site was adequate to quantify exposure to PCBs in tree swallows given the current exposure levels and variation. There was no difference in PCB exposure in two randomly selected sets of five eggs collected in the same year, but analyzed in different years. Additionally, there was only modest annual variation in exposure, with between 69% (nestlings) and 73% (eggs) of sites having no differences between years. There was a tendency, both statistically and qualitatively, for there to be less exposure in the second year compared to the first year.

  5. Particulate organic nitrates: Sampling and night/day variation

    DEFF Research Database (Denmark)

    Nielsen, T.; Platz, J.; Granby, K.

    1998-01-01

    Atmospheric day and night concentrations of particulate organic nitrates (PON) and several other air pollutants were measured in the summer 1995 over an open-land area in Denmark. The sampling of PON was evaluated comparing 24 h samples with two sets of 12 h samples. These results indicate that the observed low contribution of PON to NOy is real and not the result of an extensive loss during the sampling. Empirical relationships between the vapour pressure and chemical formula of organic compounds were established in order to evaluate the gas/particle distribution of organic nitrates.

  6. Random Fields

    Science.gov (United States)

    Vanmarcke, Erik

    1983-03-01

    Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.
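
    The "local averages" framework mentioned above is usually summarized by the variance function. Stating the standard formula explicitly (notation assumed): for a stationary process $X(t)$ with point variance $\sigma^2$ and correlation function $\rho(\tau)$, the local average $X_T = \frac{1}{T}\int_0^T X(t)\,dt$ satisfies

        \mathrm{Var}[X_T] = \sigma^2\,\gamma(T), \qquad
        \gamma(T) = \frac{2}{T}\int_0^T \left(1 - \frac{\tau}{T}\right)\rho(\tau)\,d\tau,

    so $\gamma(T) \to 1$ as $T \to 0$, while for large windows $\gamma(T) \approx \theta/T$, where $\theta$ is the scale of fluctuation summarizing the disorder of the field.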

  7. Unsupervised Ensemble Anomaly Detection Using Time-Periodic Packet Sampling

    Science.gov (United States)

    Uchida, Masato; Nawata, Shuichi; Gu, Yu; Tsuru, Masato; Oie, Yuji

    We propose an anomaly detection method for finding patterns in network traffic that do not conform to legitimate (i.e., normal) behavior. The proposed method trains a baseline model describing the normal behavior of network traffic without using manually labeled traffic data. The trained baseline model is used as the basis for comparison with the audit network traffic. This anomaly detection works in an unsupervised manner through the use of time-periodic packet sampling, which is used in a manner that differs from its intended purpose — the lossy nature of packet sampling is used to extract normal packets from the unlabeled original traffic data. Evaluation using actual traffic traces showed that the proposed method has false positive and false negative rates in the detection of anomalies regarding TCP SYN packets comparable to those of a conventional method that uses manually labeled traffic data to train the baseline model. Performance variation due to the probabilistic nature of sampled traffic data is mitigated by using ensemble anomaly detection that collectively exploits multiple baseline models in parallel. Alarm sensitivity is adjusted for the intended use by using maximum- and minimum-based anomaly detection that effectively take advantage of the performance variations among the multiple baseline models. Testing using actual traffic traces showed that the proposed anomaly detection method performs as well as one using manually labeled traffic data and better than one using randomly sampled (unlabeled) traffic data.
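
    A loose sketch of the ensemble idea with maximum- and minimum-based alarms; the traffic model, thresholds, and function names are invented for illustration and do not reproduce the paper's detector.

        import numpy as np

        rng = np.random.default_rng(0)

        def periodic_sample(packets, period):
            """Time-periodic packet sampling: keep every `period`-th packet."""
            return packets[::period]

        def baseline_model(counts):
            """Baseline of 'normal' traffic: mean and SD of per-interval SYN counts."""
            return counts.mean(), counts.std(ddof=1)

        # Five baseline models trained on periodically sampled, unlabeled traffic.
        train = np.stack([periodic_sample(rng.poisson(50, 2000), 10) for _ in range(5)])
        models = [baseline_model(t) for t in train]

        audit = rng.poisson(50, 200)
        audit[120] = 400                                   # injected anomaly
        scores = np.array([(audit - m) / s for m, s in models])   # ensemble z-scores

        sensitive = np.flatnonzero(scores.max(axis=0) > 3)        # max-based: more alarms
        conservative = np.flatnonzero(scores.min(axis=0) > 3)     # min-based: fewer alarms
        print(sensitive, conservative)

    Requiring all baseline models to agree (the minimum rule) suppresses alarms caused by sampling noise in any single model, while the maximum rule trades false positives for sensitivity, matching the adjustable-sensitivity behavior described above.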

  8. Importance sampling of heavy-tailed iterated random functions

    NARCIS (Netherlands)

    B. Chen (Bohan); C.H. Rhee (Chang-Han); A.P. Zwart (Bert)

    2016-01-01

    We consider a stochastic recurrence equation of the form $Z_{n+1} = A_{n+1} Z_n + B_{n+1}$, where $\mathbb{E}[\log A_1] < 0$, $\mathbb{E}[\log^+ B_1] < \infty$ and $\{(A_n,B_n)\}_{n\in\mathbb{N}}$ is an i.i.d. sequence of positive random vectors. The stationary distribution of this Markov chain...
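
    A quick simulation of the recurrence (my own illustration, not the paper's estimator) shows why importance sampling is needed; the distributions of A and B are illustrative choices satisfying the stated moment conditions.

        import numpy as np

        rng = np.random.default_rng(0)

        def simulate_Z(n_steps, n_paths):
            """Iterate Z_{k+1} = A_{k+1} Z_k + B_{k+1}; with E[log A] < 0 a
            stationary distribution exists, and its tail is heavy."""
            Z = np.zeros(n_paths)
            for _ in range(n_steps):
                A = np.exp(rng.normal(-0.1, 0.5, n_paths))   # E[log A] = -0.1 < 0
                B = rng.exponential(1.0, n_paths)            # E[log^+ B] < infinity
                Z = A * Z + B
            return Z

        Z = simulate_Z(2000, 100_000)
        for z in (10, 100, 1000):
            print(z, (Z > z).mean())   # tail probabilities decay roughly like a power law

    Plain Monte Carlo like this needs enormous sample sizes to resolve the far tail, which is exactly the regime the paper's importance sampling scheme targets.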

  9. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Directory of Open Access Journals (Sweden)

    Rosa Catarino

    Full Text Available Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
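
    Agreement in this record is quantified with the kappa statistic, which is simple to compute for paired binary test results. A minimal sketch with toy data (not the study's data; the function name is mine). The toy pair below happens to give 70% observed agreement and kappa ≈ 0.35, similar in spirit to the 70.8%/0.34 reported.

        def cohens_kappa(a, b):
            """Cohen's kappa for two paired binary test results (1 = HPV detected)."""
            n = len(a)
            po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
            pa = sum(a) / n
            pb = sum(b) / n
            pe = pa * pb + (1 - pa) * (1 - pb)                    # chance agreement
            return (po - pe) / (1 - pe)

        s_dry = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
        s_fta = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
        print(round(cohens_kappa(s_dry, s_fta), 2))   # 0.35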

  10. Much of the variation in breast pathology quality assurance data in the UK can be explained by the random order in which cases arrive at individual centres, but some true outliers do exist.

    Science.gov (United States)

    Cross, Simon S; Stephenson, Timothy J; Harrison, Robert F

    2011-10-01

    To investigate the role of random temporal order of patient arrival at screening centres in the variability seen in rates of node positivity and breast cancer grade between centres in the NHS Breast Screening Programme. Computer simulations, based on national UK audit data, were performed of the variation in node positivity and breast cancer grade arising from the random temporal arrival of patients at screening centres. Cumulative mean graphs of these data were plotted. Confidence intervals for the parameters were generated, using the binomial distribution. UK audit data were plotted on these control limit graphs. The results showed that much of the variability in the audit data could be accounted for by the effects of random order of arrival of cases at the screening centres. Confidence intervals of 99.7% identified true outliers in the data. Much of the variation in breast pathology quality assurance data in the UK can be explained by the random order in which cases arrive at individual centres. Control charts with confidence intervals of 99.7% plotted against the number of reported cases are useful tools for identification of true outliers. 2011 Blackwell Publishing Limited.
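
    A sketch of the control-limit construction described above, assuming the binomial normal approximation; the national rate, case counts, and variable names are hypothetical.

        import numpy as np

        rng = np.random.default_rng(2)

        p_nat = 0.25                    # hypothetical national node-positivity rate
        n = np.arange(1, 501)           # cumulative number of reported cases
        half = 3.0 * np.sqrt(p_nat * (1 - p_nat) / n)   # z = 3 gives ~99.7% coverage
        lo, hi = p_nat - half, p_nat + half             # (normal approximation; crude at small n)

        cases = rng.random(500) < p_nat                 # one simulated in-control centre
        cum_rate = np.cumsum(cases) / n
        flagged = np.flatnonzero((cum_rate < lo) | (cum_rate > hi))
        print(flagged[:10])   # an in-control centre should rarely cross the limits

    Plotting a centre's cumulative rate against these case-count-dependent limits is what lets early, small-sample wobble be distinguished from a genuinely outlying centre.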

  11. Efficient inference of population size histories and locus-specific mutation rates from large-sample genomic variation data.

    Science.gov (United States)

    Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S

    2015-02-01

    With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.

  12. Adaptive geostatistical sampling enables efficient identification of malaria hotspots in repeated cross-sectional surveys in rural Malawi.

    Directory of Open Access Journals (Sweden)

    Alinune N Kabaghe

    Full Text Available In the context of malaria elimination, interventions will need to target high burden areas to further reduce transmission. Current tools to monitor and report disease burden lack the capacity to continuously detect fine-scale spatial and temporal variations of disease distribution exhibited by malaria. These tools use random sampling techniques that are inefficient for capturing underlying heterogeneity, while health facility data in resource-limited settings are inaccurate. Continuous community surveys of malaria burden provide real-time results of local spatio-temporal variation. Adaptive geostatistical design (AGD) improves prediction of the outcome of interest compared to current random sampling techniques. We present findings of continuous malaria prevalence surveys using an adaptive sampling design. We conducted repeated cross sectional surveys guided by an adaptive sampling design to monitor the prevalence of malaria parasitaemia and anaemia in children below five years old in the communities living around Majete Wildlife Reserve in Chikwawa district, Southern Malawi. AGD sampling uses previously collected data to sample new locations of high prediction variance or where prediction exceeds a set threshold. We fitted a geostatistical model to predict malaria prevalence in the area. We conducted five rounds of sampling, and tested 876 children aged 6-59 months from 1377 households over a 12-month period. Malaria prevalence prediction maps showed spatial heterogeneity and presence of hotspots, where predicted malaria prevalence was above 30%; predictors of malaria included age, socio-economic status and ownership of insecticide-treated mosquito nets. Continuous malaria prevalence surveys using adaptive sampling increased malaria prevalence prediction accuracy. Results from the surveys were readily available after data collection. The tool can assist local managers to target malaria control interventions in areas with the greatest health impact.

  13. Stochastic process variation in deep-submicron CMOS circuits and algorithms

    CERN Document Server

    Zjajo, Amir

    2014-01-01

    One of the most notable features of nanometer scale CMOS technology is the increasing magnitude of variability of the key device parameters affecting performance of integrated circuits. The growth of variability can be attributed to multiple factors, including the difficulty of manufacturing control, the emergence of new systematic variation-generating mechanisms, and most importantly, the increase in atomic-scale randomness, where device operation must be described as a stochastic process. In addition to wide-sense stationary stochastic device variability and temperature variation, existence of non-stationary stochastic electrical noise associated with fundamental processes in integrated-circuit devices represents an elementary limit on the performance of electronic circuits. In an attempt to address these issues, Stochastic Process Variation in Deep-Submicron CMOS: Circuits and Algorithms offers a unique combination of mathematical treatment of random process variation, electrical noise and temperature and ne...

  14. Daily variations of delta 18O and delta D in daily samplings of air water vapour and rain water in the Amazon Basin

    International Nuclear Information System (INIS)

    Matsui, E.; Salati, E.; Ribeiro, M.N.G.; Tancredi, A.C.F.N.S.; Reis, C.M. dos

    1984-01-01

    The movement of rain water in the soil from 0 to 120 cm depth is studied using weekly variations of delta 18O. A study of the delta D variability in water vapour and rain water samples during precipitation was also done, the samples being collected at 3-minute intervals from the beginning to the end of precipitation. (M.A.C.) [pt

  15. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, C. M.; Buchhave, P.; K. George, W.

    ...algorithms: sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude and consists of well-defined frequency content, which makes bias easy to spot. The idea...

  16. Long-term sampling of CO2 from waste-to-energy plants: 14C determination methodology, data variation and uncertainty

    DEFF Research Database (Denmark)

    Fuglsang, Karsten; Pedersen, Niels Hald; Larsen, Anna Warberg

    2014-01-01

    A dedicated sampling and measurement method was developed for long-term measurements of biogenic and fossil-derived CO2 from thermal waste-to-energy processes. Based on long-term sampling of CO2 and 14C determination, plant-specific emission factors can be determined more accurately, and the annual emission of fossil CO2 from waste-to-energy plants can be monitored according to carbon trading schemes and renewable energy certificates. Weekly and monthly measurements were performed at five Danish waste incinerators. Significant variations between fractions of biogenic CO2 emitted were observed. The overall measurement uncertainty was ± 4.0 pmC (95% confidence interval) at 62 pmC. The long-term sampling method was found to be useful for waste incinerators for determination of annual fossil and biogenic CO2 emissions with relatively low uncertainty.
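
    The conversion from a 14C measurement to a biogenic fraction is a one-line calculation. A sketch follows; the reference pmC value for fully biogenic waste carbon is an assumption for illustration (contemporary biomass sits somewhat above 100 pmC), and the function name is mine.

        REF_BIOGENIC_PMC = 108.0   # assumed pmC of fully biogenic waste carbon

        def biogenic_fraction(sample_pmc, ref=REF_BIOGENIC_PMC):
            """Fraction of CO2 of biogenic origin from a 14C measurement in pmC;
            fossil carbon contributes 0 pmC, biogenic carbon contributes `ref`."""
            return sample_pmc / ref

        pmc, u95 = 62.0, 4.0   # measurement and its 95% uncertainty, from the record
        print(f"{biogenic_fraction(pmc):.2f} +/- {biogenic_fraction(u95):.2f}")   # ~0.57 +/- 0.04

    Because the relation is linear, the ± 4.0 pmC measurement uncertainty propagates directly to about ± 0.04 on the biogenic fraction.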

  17. Seasonal and temporal variation in release of antibiotics in hospital wastewater: estimation using continuous and grab sampling.

    Science.gov (United States)

    Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J

    2013-01-01

    The presence of antibiotics in the environment and their subsequent impact on resistance development has raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions was divided into three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed lower concentrations of antibiotics in wastewater than grab sampling. It is suggested that continuous sampling should be the method of choice, as grab sampling gives erroneous results, being indicative only of the quantities of antibiotics present in wastewater at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluoroquinolones, metronidazole and sulfamethoxazole, respectively, might be getting released into the environment.

  18. Sampling Strategies and Processing of Biobank Tissue Samples from Porcine Biomedical Models.

    Science.gov (United States)

    Blutke, Andreas; Wanke, Rüdiger

    2018-03-06

    In translational medical research, porcine models have steadily become more popular. Considering the high value of individual animals, particularly of genetically modified pig models, and the often-limited number of available animals of these models, establishment of (biobank) collections of adequately processed tissue samples suited for a broad spectrum of subsequent analysis methods, including analyses not specified at the time point of sampling, represent meaningful approaches to take full advantage of the translational value of the model. With respect to the peculiarities of porcine anatomy, comprehensive guidelines have recently been established for standardized generation of representative, high-quality samples from different porcine organs and tissues. These guidelines are essential prerequisites for the reproducibility of results and their comparability between different studies and investigators. The recording of basic data, such as organ weights and volumes, the determination of the sampling locations and of the numbers of tissue samples to be generated, as well as their orientation, size, processing and trimming directions, are relevant factors determining the generalizability and usability of the specimen for molecular, qualitative, and quantitative morphological analyses. Here, an illustrative, practical, step-by-step demonstration of the most important techniques for generation of representative, multi-purpose biobank specimen from porcine tissues is presented. The methods described here include determination of organ/tissue volumes and densities, the application of a volume-weighted systematic random sampling procedure for parenchymal organs by point-counting, determination of the extent of tissue shrinkage related to histological embedding of samples, and generation of randomly oriented samples for quantitative stereological analyses, such as isotropic uniform random (IUR) sections generated by the "Orientator" and "Isector" methods, and vertical uniform random (VUR) sections.
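
    One of the steps named above, volume estimation by point counting on systematically sampled slabs, reduces to the classical Cavalieri estimator. A sketch with made-up counts (the guideline's actual protocols are more detailed, and the function name is mine):

        import numpy as np

        def cavalieri_volume(point_counts, slab_thickness, area_per_point):
            """Volume estimate by point counting on systematically sampled slabs:
            V ≈ t * a(p) * sum(P_i), with t the slab thickness, a(p) the area
            associated with one grid point, and P_i the points hitting tissue."""
            return slab_thickness * area_per_point * np.sum(point_counts)

        # Toy example: 8 systematic slabs cut 2 mm apart, counting grid with
        # 5 mm point spacing in both directions, so a(p) = 25 mm^2.
        counts = [31, 44, 52, 57, 49, 40, 28, 12]
        print(cavalieri_volume(counts, slab_thickness=2.0, area_per_point=25.0), "mm^3")

    The systematic random element enters by giving the first slab a random position within the first sampling interval, which keeps the estimator unbiased.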

  19. Assessing differences in groups randomized by recruitment chain in a respondent-driven sample of Seattle-area injection drug users.

    Science.gov (United States)

    Burt, Richard D; Thiede, Hanne

    2014-11-01

    Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%). The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Frequency variations of discrete cranial traits in major human populations. I. Supernumerary ossicle variations.

    Science.gov (United States)

    Hanihara, T; Ishida, H

    2001-06-01

    Four supernumerary ossicle variations (the ossicle at the lambda, the parietal notch bone, the asterionic bone, and the occipitomastoid bone) were examined for laterality differences, intertrait correlations, sex differences, and between-group variations in samples from around the world. Significant laterality differences were not detected in almost all samples. In some pairs of traits, significant association of occurrence were found. Several geographic samples were sexually dimorphic with respect to the asterionic bone and to a lesser extent for the parietal notch bone. East/Northeast Asians including the Arctic populations in general had lower frequencies of the 4 accessory ossicles. Australians, Melanesians and the majority of the New World peoples, on the other hand, generally had high frequencies. In the western hemisphere of the Old World, Subsaharan Africans had relatively high frequencies. Except for the ossicle at the lambda, the distribution pattern in incidence showed clinal variation from south to north. Any identifiable adaptive value related to environmental or subsistence factors may be expressed in such clinal variation. This may allow us to hypothesise that not only mechanical factors but a founder effect, genetic drift, and population structure could have been the underlying causes for interregional variation and possible clines in the incidences of the accessory ossicles.

  1. A random cluster survey and a convenience sample give comparable estimates of immunity to vaccine preventable diseases in children of school age in Victoria, Australia.

    Science.gov (United States)

    Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L

    2002-08-19

    We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.

  2. Using Check-All-That-Apply (CATA) method for determining product temperature-dependent sensory-attribute variations: A case study of cooked rice.

    Science.gov (United States)

    Pramudya, Ragita C; Seo, Han-Seok

    2018-03-01

    Temperatures of most hot or cold meal items change over the period of consumption, possibly influencing sensory perception of those items. Unlike temporal variations in sensory attributes, product temperature-induced variations have not received much attention. Using a Check-All-That-Apply (CATA) method, this study aimed to characterize variations in sensory attributes over a wide range of temperatures at which hot or cold foods and beverages may be consumed. Cooked milled rice, typically consumed at temperatures between 70 and 30°C in many rice-eating countries, was used as a target sample in this study. Two brands of long-grain milled rice were cooked and randomly presented at 70, 60, 50, 40, and 30°C. Thirty-five CATA terms for cooked milled rice were generated. Eighty-eight untrained panelists were asked to quickly select all the CATA terms that they considered appropriate to characterize sensory attributes of cooked rice samples presented at each temperature. Proportions of selection by panelists for 13 attributes significantly differed among the five temperature conditions. "Product temperature-dependent sensory-attribute variations" differed between the two brands of milled rice. Such variations in sensory attributes, resulting from both product temperature and rice brand, were more pronounced among panelists who more frequently consumed rice. In conclusion, the CATA method can be useful for characterizing "product temperature-dependent sensory attribute variations" in cooked milled-rice samples. Further study is needed to examine whether the CATA method is also effective in capturing "product temperature-dependent sensory-attribute variations" in other hot or cold foods and beverages. Published by Elsevier Ltd.

  3. A tale of two "forests": random forest machine learning AIDS tropical forest carbon mapping.

    Science.gov (United States)

    Mascaro, Joseph; Asner, Gregory P; Knapp, David E; Kennedy-Bowdoin, Ty; Martin, Roberta E; Anderson, Christopher; Higgins, Mark; Chadwick, K Dana

    2014-01-01

    Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, both with and without spatial contextual modeling by including--in the latter case--x, and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (i.e., called "out-of-bag"), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best-performing run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% by Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha(-1) when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.

  4. A tale of two "forests": random forest machine learning AIDS tropical forest carbon mapping.

    Directory of Open Access Journals (Sweden)

    Joseph Mascaro

    Full Text Available Accurate and spatially-explicit maps of tropical forest carbon stocks are needed to implement carbon offset mechanisms such as REDD+ (Reduced Deforestation and Degradation Plus). The Random Forest machine learning algorithm may aid carbon mapping applications using remotely-sensed data. However, Random Forest has never been compared to traditional and potentially more reliable techniques such as regionally stratified sampling and upscaling, and it has rarely been employed with spatial data. Here, we evaluated the performance of Random Forest in upscaling airborne LiDAR (Light Detection and Ranging)-based carbon estimates compared to the stratification approach over a 16-million hectare focal area of the Western Amazon. We considered two runs of Random Forest, both with and without spatial contextual modeling by including--in the latter case--x, and y position directly in the model. In each case, we set aside 8 million hectares (i.e., half of the focal area) for validation; this rigorous test of Random Forest went above and beyond the internal validation normally compiled by the algorithm (i.e., called "out-of-bag"), which proved insufficient for this spatial application. In this heterogeneous region of Northern Peru, the model with spatial context was the best-performing run of Random Forest, and explained 59% of LiDAR-based carbon estimates within the validation area, compared to 37% for stratification or 43% by Random Forest without spatial context. With the 60% improvement in explained variation, RMSE against validation LiDAR samples improved from 33 to 26 Mg C ha(-1) when using Random Forest with spatial context. Our results suggest that spatial context should be considered when using Random Forest, and that doing so may result in substantially improved carbon stock modeling for purposes of climate change mitigation.
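
    A sketch of the "Random Forest with spatial context" idea using scikit-learn: map coordinates are appended to the environmental predictors, and half the data is held out for external validation, mirroring the study design. The data are synthetic and all parameters are illustrative.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(0)

        # Hypothetical LiDAR-calibrated carbon samples: predictors plus coordinates.
        n = 5000
        X_env = rng.normal(size=(n, 4))               # e.g., elevation, moisture, ...
        xy = rng.uniform(0, 100, size=(n, 2))         # map coordinates, km
        carbon = (30 + 10 * X_env[:, 0]               # environmental effect
                  + 0.2 * xy[:, 0]                    # a spatial trend in carbon
                  + rng.normal(0, 5, n))              # noise

        X = np.hstack([X_env, xy])                    # "spatial context": x, y as features
        train = np.arange(n) < n // 2
        test = ~train                                 # held-out half for validation

        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        rf.fit(X[train], carbon[train])
        print("R^2 on held-out half:", rf.score(X[test], carbon[test]))

    Dropping the xy columns from X and refitting shows how much of the explained variation comes from the spatial trend alone, which is the comparison the study makes at much larger scale.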

  5. Variations in reporting of outcomes in randomized trials on diet and physical activity in pregnancy: A systematic review.

    Science.gov (United States)

    Rogozińska, Ewelina; Marlin, Nadine; Yang, Fen; Dodd, Jodie M; Guelfi, Kym; Teede, Helena; Surita, Fernanda; Jensen, Dorte M; Geiker, Nina R W; Astrup, Arne; Yeo, SeonAe; Kinnunen, Tarja I; Stafne, Signe N; Cecatti, Jose G; Bogaerts, Annick; Hauner, Hans; Mol, Ben W; Scudeller, Tânia T; Vinter, Christina A; Renault, Kristina M; Devlieger, Roland; Thangaratinam, Shakila; Khan, Khalid S

    2017-07-01

    Trials on diet and physical activity in pregnancy report on various outcomes. We aimed to assess the variations in outcomes reported and their quality in trials on lifestyle interventions in pregnancy. We searched major databases without language restrictions for randomized controlled trials on diet and physical activity-based interventions in pregnancy up to March 2015. Two independent reviewers undertook study selection and data extraction. We estimated the percentage of papers reporting 'critically important' and 'important' outcomes. We defined the quality of reporting as a proportion using a six-item questionnaire. Regression analysis was used to identify factors affecting this quality. Sixty-six randomized controlled trials were published in 78 papers (66 main, 12 secondary). Gestational diabetes (57.6%, 38/66), preterm birth (48.5%, 32/66) and cesarean section (60.6%, 40/66) were the commonly reported 'critically important' outcomes. Gestational weight gain (84.5%, 56/66) and birth weight (87.9%, 58/66) were reported in most papers, although not considered critically important. The median quality of reporting was 0.60 (interquartile range 0.25, 0.83) for a maximum score of one. Study and journal characteristics did not affect quality. Many studies on lifestyle interventions in pregnancy do not report critically important outcomes, highlighting the need for core outcome set development. © 2017 Japan Society of Obstetrics and Gynecology.

  6. EARLY HEAD START FAMILIES' EXPERIENCES WITH STRESS: UNDERSTANDING VARIATIONS WITHIN A HIGH-RISK, LOW-INCOME SAMPLE.

    Science.gov (United States)

    Hustedt, Jason T; Vu, Jennifer A; Bargreen, Kaitlin N; Hallam, Rena A; Han, Myae

    2017-09-01

    The federal Early Head Start program provides a relevant context to examine families' experiences with stress since participants qualify on the basis of poverty and risk. Building on previous research that has shown variations in demographic and economic risks even among qualifying families, we examined possible variations in families' perceptions of stress. Family, parent, and child data were collected to measure stressors and risk across a variety of domains in families' everyday lives, primarily from self-report measures, but also including assay results from child cortisol samples. A cluster analysis was employed to examine potential differences among groups of Early Head Start families. Results showed that there were three distinct subgroups of families, with some families perceiving that they experienced very high levels of stress while others perceived much lower levels of stress despite also experiencing poverty and heightened risk. These findings have important implications in that they provide an initial step toward distinguishing differences in low-income families' experiences with stress, thereby informing interventions focused on promoting responsive caregiving as a possible mechanism to buffer the effects of family and social stressors on young children. © 2017 Michigan Association for Infant Mental Health.

  7. 42 CFR 431.814 - Sampling plan and procedures.

    Science.gov (United States)

    2010-10-01

    ... reliability of the reduced sample. (4) The sample selection procedure. Systematic random sampling is... sampling, and yield estimates with the same or better precision than achieved in systematic random sampling...

  8. Searching for the Optimal Sampling Solution: Variation in Invertebrate Communities, Sample Condition and DNA Quality.

    Directory of Open Access Journals (Sweden)

    Martin M Gossner

    Full Text Available There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive to beetles and repellent to true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species from ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when

  9. Stratified random sampling plans designed to assist in the determination of radon and radon daughter concentrations in underground uranium mine atmosphere

    International Nuclear Information System (INIS)

    Makepeace, C.E.

    1981-01-01

    Sampling strategies for the monitoring of deleterious agents present in uranium mine air in underground and surface mining areas are described. These methods are designed to prevent overexposure of the lining of the respiratory system of uranium miners to ionizing radiation from radon and radon daughters, and whole body overexposure to external gamma radiation. A detailed description is provided of stratified random sampling monitoring methodology for obtaining baseline data to be used as a reference for subsequent compliance assessment

  10. Effect of Mechanical Impact Energy on the Sorption and Diffusion of Moisture in Reinforced Polymer Composite Samples on Variation of Their Sizes

    Science.gov (United States)

    Startsev, V. O.; Il'ichev, A. V.

    2018-05-01

    The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying sizes was investigated. Square samples, with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples in relation to their sizes and impact energy were analyzed. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, accounting for impact energy and sample size.
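
    The Fickian moisture-uptake model mentioned above can be written down directly; the sketch below evaluates the standard plane-sheet series solution of Fick's second law, M(t)/Minf = 1 - (8/pi^2) * sum over odd n of n^-2 * exp(-n^2 * pi^2 * D * t / h^2). The diffusivity D and sheet thickness h are illustrative assumptions, not the paper's measured values.

        # Sketch: relative moisture uptake of a plane sheet per Fick's second law.
        # D (mm^2/s) and h (mm) below are illustrative, not the paper's values.
        import numpy as np

        def fick_uptake(t_days, D=5e-7, h=2.0, nterms=100):
            t = float(t_days) * 86400.0                    # days -> seconds
            n = np.arange(1, 2 * nterms, 2, dtype=float)   # odd terms 1, 3, 5, ...
            series = (8.0 / (np.pi * n) ** 2) * np.exp(-(n * np.pi / h) ** 2 * D * t)
            return 1.0 - series.sum()

        for d in (1, 10, 50, 200):
            print("day %4d: M/Minf = %.3f" % (d, fick_uptake(d)))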

  11. Randomized controlled trial of attention bias modification in a racially diverse, socially anxious, alcohol dependent sample.

    Science.gov (United States)

    Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P

    2016-12-01

    Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Randomized Controlled Trial of Attention Bias Modification in a Racially Diverse, Socially Anxious, Alcohol Dependent Sample

    Science.gov (United States)

    Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.

    2016-01-01

    Objective Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918

  13. Comparative study of microfacies variation in two samples from the Chittenango member, Marcellus shale subgroup, western New York state, USA

    Energy Technology Data Exchange (ETDEWEB)

    Balulla, Shama, E-mail: shamamohammed77@outlook.com; Padmanabhan, E., E-mail: eswaran-padmanabhan@petronas.com.my [Department of Geoscience, Faculty of Geosciencs and Petroleum Engineering Universiti Teknologi PETRONAS, Tronoh (Malaysia); Over, Jeffrey, E-mail: over@geneseo.edu [Department of geological sciences, Geneseo, NY (United States)

    2015-07-22

    This study demonstrates the significant lithologic variations that occur within two shale samples from the Chittenango member of the Marcellus shale formation from western New York State, in terms of mineralogical composition, type of lamination, pyrite occurrence and fossil content, using detailed thin-section description and field emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDX). The samples are classified as laminated clayshale and fossiliferous carbonaceous shale. The most important detrital constituents of these shales are the clay minerals illite and chlorite, quartz, organic matter, carbonate minerals, and pyrite. The laminated clayshale has a lower amount of quartz and carbonate minerals than the fossiliferous carbonaceous shale, while it has a higher amount of clay minerals (chlorite and illite) and organic matter. FESEM analysis confirms the presence of chlorite and illite. The fossil content in the laminated clayshale is much lower than in the fossiliferous carbonaceous shale. This provides greater insight into the variations in the depositional and environmental factors that influenced deposition. These results, compiled with sufficient additional data, can be helpful for designing horizontal wells and placing hydraulic fracturing stages in shale gas exploration and production.

  14. Determining individual variation in growth and its implication for life-history and population processes using the empirical Bayes method.

    Directory of Open Access Journals (Sweden)

    Simone Vincenzi

    2014-09-01

    Full Text Available The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.
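
    A minimal sketch of the growth model at the core of this record: fitting the von Bertalanffy growth function L(t) = Linf * (1 - exp(-k * (t - t0))) to size-at-age data with scipy. The paper's Empirical Bayes random-effects estimation is much richer; here simulated tagged-fish data (an assumption) illustrate only the underlying curve fit.

        # Sketch: least-squares fit of the von Bertalanffy growth function.
        # Simulated data with individual variation in Linf (assumptions throughout).
        import numpy as np
        from scipy.optimize import curve_fit

        def vbgf(t, linf, k, t0):
            return linf * (1.0 - np.exp(-k * (t - t0)))

        rng = np.random.default_rng(1)
        age = np.tile(np.arange(1, 9), 20).astype(float)  # 20 fish, ages 1-8
        linf_i = rng.normal(300.0, 25.0, 20).repeat(8)    # individual asymptotic sizes
        length = vbgf(age, linf_i, 0.35, 0.0) + rng.normal(0, 5, age.size)

        popt, _ = curve_fit(vbgf, age, length, p0=(250.0, 0.3, 0.0))
        print("Linf=%.1f  k=%.3f  t0=%.2f" % tuple(popt))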

  15. Determining individual variation in growth and its implication for life-history and population processes using the empirical Bayes method.

    Science.gov (United States)

    Vincenzi, Simone; Mangel, Marc; Crivelli, Alain J; Munch, Stephan; Skaug, Hans J

    2014-09-01

    The differences in demographic and life-history processes between organisms living in the same population have important consequences for ecological and evolutionary dynamics. Modern statistical and computational methods allow the investigation of individual and shared (among homogeneous groups) determinants of the observed variation in growth. We use an Empirical Bayes approach to estimate individual and shared variation in somatic growth using a von Bertalanffy growth model with random effects. To illustrate the power and generality of the method, we consider two populations of marble trout Salmo marmoratus living in Slovenian streams, where individually tagged fish have been sampled for more than 15 years. We use year-of-birth cohort, population density during the first year of life, and individual random effects as potential predictors of the von Bertalanffy growth function's parameters k (rate of growth) and L∞ (asymptotic size). Our results showed that size ranks were largely maintained throughout marble trout lifetime in both populations. According to the Akaike Information Criterion (AIC), the best models showed different growth patterns for year-of-birth cohorts as well as the existence of substantial individual variation in growth trajectories after accounting for the cohort effect. For both populations, models including density during the first year of life showed that growth tended to decrease with increasing population density early in life. Model validation showed that predictions of individual growth trajectories using the random-effects model were more accurate than predictions based on mean size-at-age of fish.

  16. A simplified method for random vibration analysis of structures with random parameters

    International Nuclear Information System (INIS)

    Ghienne, Martin; Blanzé, Claude

    2016-01-01

    Piezoelectric patches with adapted electrical circuits or viscoelastic dissipative materials are two solutions particularly suited to reducing vibration of light structures. To accurately design these solutions, it is necessary to describe precisely the dynamical behaviour of the structure. It may quickly become computationally intensive to robustly describe this behaviour for a structure with nonlinear phenomena, such as contact or friction for bolted structures, and uncertain variations of its parameters. The aim of this work is to propose a non-intrusive reduced stochastic method to robustly characterize the vibrational response of a structure with random parameters. Our goal is to characterize the eigenspace of linear systems with dynamic properties considered as random variables. This method is based on a separation of random aspects from deterministic aspects and allows us to estimate the first central moments of each random eigenfrequency with a single deterministic finite element computation. The method is applied to a frame with several Young's moduli modeled as random variables. This example could be expanded to a bolted structure including piezoelectric devices. The method needs to be enhanced when random eigenvalues are closely spaced. An indicator with no additional computational cost is proposed to characterize the 'proximity' of two random eigenvalues.

  17. Error variation in OSL palaeodose estimates from single aliquots of quartz: a factorial experiment

    International Nuclear Information System (INIS)

    Galbraith, R.F.; Roberts, R.G.; Yoshida, H.

    2005-01-01

    We use a factorial experiment to study systematic and random differences between measured OSL palaeodoses for a variety of quartz samples. These include samples that have absorbed either a large or small natural or laboratory-induced radiation dose, either with or without prior heating or bleaching. The systematic factors studied are the size of the test dose, the preheat temperature and the number of quartz grains in each multi-grain aliquot. Palaeodoses were estimated using a single-aliquot regenerative-dose protocol. The main parameter of interest is the amount of random variation, over and above that due to photon counting statistics, to be expected between estimates from aliquots that have received the same radiation dose. This over-dispersion is generally larger for natural samples than for artificially bleached ones, and it varies from about 1% in the most favourable cases to about 18% for small aliquots of a sample that had received a natural dose of about 46 Gy. The latter is comparable to the over-dispersion reported for single grains of natural quartz that are thought to have been well-bleached at the time of deposition. The factorial experiment also revealed a number of systematic effects. In particular, measured palaeodoses using a preheat temperature of 260 deg. C were systematically lower than those using 180 deg. C, by up to about 5% in some cases

  18. Random field assessment of nanoscopic inhomogeneity of bone.

    Science.gov (United States)

    Dong, X Neil; Luo, Qing; Sparkman, Daniel M; Millwater, Harry R; Wang, Xiaodu

    2010-12-01

    Bone quality is significantly correlated with the inhomogeneous distribution of material and ultrastructural properties (e.g., modulus and mineralization) of the tissue. Current techniques for quantifying inhomogeneity consist of descriptive statistics such as mean, standard deviation and coefficient of variation. However, these parameters do not describe the spatial variations of bone properties. The objective of this study was to develop a novel statistical method to characterize and quantitatively describe the spatial variation of bone properties at ultrastructural levels. To do so, a random field defined by an exponential covariance function was used to represent the spatial uncertainty of elastic modulus by delineating the correlation of the modulus at different locations in bone lamellae. The correlation length, a characteristic parameter of the covariance function, was employed to estimate the fluctuation of the elastic modulus in the random field. Using this approach, two distribution maps of the elastic modulus within bone lamellae were generated using simulation and compared with those obtained experimentally by a combination of atomic force microscopy and nanoindentation techniques. The simulation-generated maps of elastic modulus were in close agreement with the experimental ones, thus validating the random field approach in defining the inhomogeneity of elastic modulus in lamellae of bone. Indeed, generation of such random fields will facilitate multi-scale modeling of bone in more pragmatic detail. Copyright © 2010 Elsevier Inc. All rights reserved.
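
    A minimal sketch of the random-field construction described above: a 1D Gaussian field of elastic moduli with exponential covariance C(d) = sigma^2 * exp(-d/ell), generated by Cholesky factorization. The grid, mean modulus, and parameter values (including the correlation length ell) are illustrative assumptions.

        # Sketch: 1D Gaussian random field with exponential covariance via Cholesky.
        import numpy as np

        npts, ell, sigma, mean_E = 200, 5.0, 2.0, 20.0   # grid size, corr. length, GPa
        xs = np.linspace(0.0, 50.0, npts)
        d = np.abs(xs[:, None] - xs[None, :])            # pairwise distances
        cov = sigma ** 2 * np.exp(-d / ell)
        L = np.linalg.cholesky(cov + 1e-10 * np.eye(npts))  # jitter for stability

        rng = np.random.default_rng(2)
        field = mean_E + L @ rng.standard_normal(npts)   # one realization of the field
        print(field[:5].round(2))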

  19. Assessment of sampling strategies for estimation of site mean concentrations of stormwater pollutants.

    Science.gov (United States)

    McCarthy, David T; Zhang, Kefeng; Westerlund, Camilla; Viklander, Maria; Bertrand-Krajewski, Jean-Luc; Fletcher, Tim D; Deletic, Ana

    2018-02-01

    The estimation of stormwater pollutant concentrations is a primary requirement of integrated urban water management. In order to determine effective sampling strategies for estimating pollutant concentrations, data from extensive field measurements at seven different catchments were used. At all sites, 1-min resolution continuous flow measurements, as well as flow-weighted samples, were taken and analysed for total suspended solids (TSS), total nitrogen (TN) and Escherichia coli (E. coli). For each of these parameters, the data were used to calculate the Event Mean Concentrations (EMCs) for each event. The measured Site Mean Concentrations (SMCs) were taken as the volume-weighted average of these EMCs for each parameter, at each site. Seventeen different sampling strategies, including random and fixed strategies, were tested to estimate SMCs, which were compared with the measured SMCs. The ratios of estimated/measured SMCs were further analysed to determine the most effective sampling strategies. Results indicate that the random sampling strategies were the most promising method in reproducing SMCs for TSS and TN, while some fixed sampling strategies were better for estimating the SMC of E. coli. The differences in taking one, two or three random samples were small (up to 20% for TSS, and 10% for TN and E. coli), indicating that there is little benefit in investing in collection of more than one sample per event if attempting to estimate the SMC through monitoring of multiple events. It was estimated that an average of 27 events across the studied catchments are needed for characterising SMCs of TSS with a 90% confidence interval (CI) width of 1.0, followed by E. coli (average 12 events) and TN (average 11 events). The coefficient of variation of pollutant concentrations was linearly and significantly correlated with the 90% confidence interval ratio of the estimated/measured SMCs (R² = 0.49; P < 0.05), suggesting that it could be used to estimate the sampling frequency needed to accurately estimate SMCs of pollutants.
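
    The volume-weighted quantities this record relies on are simple to compute; a sketch with hypothetical concentrations and volumes (the study's definitions, but my numbers):

        # Sketch: Event Mean Concentration (EMC) and Site Mean Concentration (SMC)
        # as volume-weighted averages. All numbers below are hypothetical.
        import numpy as np

        def emc(conc, vol):
            """EMC = sum(C_i * V_i) / sum(V_i) over the samples of one event."""
            conc, vol = np.asarray(conc, float), np.asarray(vol, float)
            return (conc * vol).sum() / vol.sum()

        # Three events: flow-weighted TSS samples (mg/L) and sample volumes (m^3).
        events = [([120, 95, 60], [500, 800, 300]),
                  ([210, 150], [400, 900]),
                  ([80, 70, 65, 50], [200, 350, 500, 250])]

        emcs = np.array([emc(c, v) for c, v in events])
        vols = np.array([sum(v) for _, v in events])
        smc = (emcs * vols).sum() / vols.sum()   # SMC: volume-weighted mean of EMCs
        print("EMCs:", emcs.round(1), " SMC: %.1f mg/L" % smc)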

  20. Estimating Cross-Site Impact Variation in the Presence of Heteroscedasticity

    Science.gov (United States)

    Bloom, Howard S.; Porter, Kristin E.; Weiss, Michael J.; Raudenbush, Stephen

    2013-01-01

    To date, evaluation research and policy analysis have focused mainly on average program impacts and paid little systematic attention to their variation. Recently, the growing number of multi-site randomized trials that are being planned and conducted make it increasingly feasible to study "cross-site" variation in impacts. Important…

  1. Does the hybrid light source (LED/laser) influence temperature variation on the enamel surface during 35% hydrogen peroxide bleaching? A randomized clinical trial.

    Science.gov (United States)

    de Freitas, Patricia Moreira; Menezes, Andressa Nery; da Mota, Ana Carolina Costa; Simões, Alyne; Mendes, Fausto Medeiros; Lago, Andrea Dias Neves; Ferreira, Leila Soares; Ramos-Oliveira, Thayanne Monteiro

    2016-01-01

    The present study investigated how a hybrid light source (LED/laser) influences temperature variation on the enamel surface during 35% hydrogen peroxide (HP) bleaching. Effects on whitening effectiveness and tooth sensitivity were analyzed. Twenty-two volunteers were randomly assigned to two different treatments in a split-mouth experimental model: group 1 (control), 35% HP; group 2 (experimental), 35% HP + LED/laser. Color evaluation was performed before treatment, and 7 and 14 days after completion of bleaching, using a color shade scale. Tooth sensitivity was assessed using a visual analog scale (VAS; before, immediately after, and 24 hours after bleaching). During the bleaching treatment, thermocouple channels positioned on the tooth surfaces recorded the temperature. Data on color and temperature changes were subjected to statistical analysis (α = 5%). Tooth sensitivity data were evaluated descriptively. Groups 1 and 2 showed mean temperatures (± standard deviation) of 30.7 ± 1.2 °C and 34.1 ± 1.3 °C, respectively. There were statistically significant differences between the groups, with group 2 showing a significantly higher mean temperature variation on the enamel surface. The color change results showed no differences in bleaching between the two treatment groups (P = .177). The variation of the average temperature during the treatments was not statistically associated with color variation (P = .079). Immediately after bleaching, 36.4% of the subjects in group 2 had mild to moderate sensitivity. In group 1, 45.5% showed moderate sensitivity. In both groups, the sensitivity ceased within 24 hours. The hybrid light source (LED/laser) influences temperature variation on the enamel surface during 35% HP bleaching and is not related to greater tooth sensitivity.

  2. Pressure Stimulated Currents (PSC) in marble samples

    Directory of Open Access Journals (Sweden)

    F. Vallianatos

    2004-06-01

    Full Text Available The electrical behaviour of marble samples from Penteli Mountain was studied while they were subjected to uniaxial stress. The application of consecutive impulsive variations of uniaxial stress to thirty connatural samples produced Pressure Stimulated Currents (PSC). The linear relationship between the recorded PSC and the applied variation rate was investigated. The main results are the following: as long as the samples were under pressure corresponding to their elastic region, the maximum PSC value obeyed a linear law with respect to the pressure variation. In the plastic region, deviations were observed, which were due to variations of Young's modulus. Furthermore, a special burst form of PSC recordings during failure is presented. The latter is emitted when irregular longitudinal splitting is observed during failure.

  3. Temporal changes in randomness of bird communities across Central Europe.

    Science.gov (United States)

    Renner, Swen C; Gossner, Martin M; Kahl, Tiemo; Kalko, Elisabeth K V; Weisser, Wolfgang W; Fischer, Markus; Allan, Eric

    2014-01-01

    Many studies have examined whether communities are structured by random or deterministic processes, and both are likely to play a role, but relatively few studies have attempted to quantify the degree of randomness in species composition. We quantified, for the first time, the degree of randomness in forest bird communities based on an analysis of spatial autocorrelation in three regions of Germany. The compositional dissimilarity between pairs of forest patches was regressed against the distance between them. We then calculated the y-intercept of the curve, i.e. the 'nugget', which represents the compositional dissimilarity at zero spatial distance. We therefore assume, following similar work on plant communities, that this represents the degree of randomness in species composition. We then analysed how the degree of randomness in community composition varied over time and with forest management intensity, which we expected to reduce the importance of random processes by increasing the strength of environmental drivers. We found that a high portion of the bird community composition could be explained by chance (overall mean of 0.63), implying that most of the variation in local bird community composition is driven by stochastic processes. Forest management intensity did not consistently affect the mean degree of randomness in community composition, perhaps because the bird communities were relatively insensitive to management intensity. We found a high temporal variation in the degree of randomness, which may indicate temporal variation in assembly processes and in the importance of key environmental drivers. We conclude that the degree of randomness in community composition should be considered in bird community studies, and the high values we find may indicate that bird community composition is relatively hard to predict at the regional scale.
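
    The nugget estimate described above amounts to the intercept of a dissimilarity-versus-distance regression; a sketch with simulated pairwise data (the 0.63 baseline and the decay rate are planted for illustration, not the study's data):

        # Sketch: estimate the 'nugget' (dissimilarity at zero distance) as the
        # y-intercept of a dissimilarity-vs-distance regression. Simulated data.
        import numpy as np

        rng = np.random.default_rng(3)
        dist = rng.uniform(0.1, 50.0, 300)   # distances (km) between patch pairs
        dissim = np.clip(0.63 + 0.004 * dist + rng.normal(0, 0.05, 300), 0, 1)

        slope, intercept = np.polyfit(dist, dissim, 1)
        print("nugget (intercept) = %.2f, slope = %.4f per km" % (intercept, slope))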

  4. Temporal changes in randomness of bird communities across Central Europe.

    Directory of Open Access Journals (Sweden)

    Swen C Renner

    Full Text Available Many studies have examined whether communities are structured by random or deterministic processes, and both are likely to play a role, but relatively few studies have attempted to quantify the degree of randomness in species composition. We quantified, for the first time, the degree of randomness in forest bird communities based on an analysis of spatial autocorrelation in three regions of Germany. The compositional dissimilarity between pairs of forest patches was regressed against the distance between them. We then calculated the y-intercept of the curve, i.e. the 'nugget', which represents the compositional dissimilarity at zero spatial distance. We therefore assume, following similar work on plant communities, that this represents the degree of randomness in species composition. We then analysed how the degree of randomness in community composition varied over time and with forest management intensity, which we expected to reduce the importance of random processes by increasing the strength of environmental drivers. We found that a high portion of the bird community composition could be explained by chance (overall mean of 0.63), implying that most of the variation in local bird community composition is driven by stochastic processes. Forest management intensity did not consistently affect the mean degree of randomness in community composition, perhaps because the bird communities were relatively insensitive to management intensity. We found a high temporal variation in the degree of randomness, which may indicate temporal variation in assembly processes and in the importance of key environmental drivers. We conclude that the degree of randomness in community composition should be considered in bird community studies, and the high values we find may indicate that bird community composition is relatively hard to predict at the regional scale.

  5. Accounting for medical variation: the case of prescribing activity in a New Zealand general practice sample.

    Science.gov (United States)

    Davis, P B; Yee, R L; Millar, J

    1994-08-01

    Medical practice variation is extensive and well documented, particularly for surgical interventions, and raises important questions for health policy. To date, however, little work has been carried out on interpractitioner variation in prescribing activity in the primary care setting. An analytical model of medical variation is derived from the literature and relevant indicators are identified from a study of New Zealand general practice. The data are based on nearly 9,500 completed patient encounter records drawn from over a hundred practitioners in the Waikato region of the North Island, New Zealand. The data set represents a 1% sample of all weekday general practice office encounters in the Hamilton Health District recorded over a 12-month period. Overall levels of prescribing, and the distribution of drug mentions across diagnostic groupings, are broadly comparable to results drawn from international benchmark data. A multivariate analysis is carried out on seven measures of activity in the areas of prescribing volume, script detail, and therapeutic choice. The analysis indicates that patient, practitioner and practice attributes exert little systematic influence on the prescribing task. The principal influences are diagnosis, followed by practitioner identity. The pattern of findings suggests also that the prescribing task cannot be viewed as an undifferentiated activity. It is more usefully considered as a process of decision-making in which 'core' judgements--such as the decision to prescribe and the choice of drug--are highly predictable and strongly influenced by diagnosis, while 'peripheral' features of the task--such as choosing a combination drug or prescribing generically--are less determinate and more subject to the exercise of clinical discretion.(ABSTRACT TRUNCATED AT 250 WORDS)

  6. Variational data assimilation using targetted random walks

    KAUST Repository

    Cotter, S. L.; Dashti, M.; Stuart, A. M.

    2011-01-01

    Markov chain Monte Carlo (MCMC) method which enables us to directly sample from the Bayesian posterior distribution on the unknown functions of interest given observations. Since we are aware that these methods are currently too computationally expensive...

  7. Phenotypic and molecular variation in the green and black poison-dart frog Dendrobates auratus (Anura: Dendrobatidae) from Costa Rica

    Directory of Open Access Journals (Sweden)

    Lisa D Patrick

    2009-11-01

    Full Text Available The green and black poison-dart frog Dendrobates auratus exhibits high intraspecific variation in color hue and pattern throughout its range, making it a very popular species in the pet trade. We analyzed the correspondence between color variation and molecular variation of D. auratus from Costa Rica using RAPD analysis. Twenty-six random primers were analyzed for variation in 99 individuals from seven populations. Color pattern was scored from digital images of the dorsal and ventral views. In general, frogs from the Caribbean coast had significantly more light coloration than black color, but cannot be grouped by population based only on hue pattern. Only 3 RAPD primers were found to be polymorphic, representing a total of 16 loci. Most of the molecular variation encountered here occurs within populations, thus making unclear the degree of population structure and differentiation. Further examination of COI mtDNA sequences from our samples also supports these results. Partial Mantel correlations suggested that the pattern of molecular variation is not congruent with the variation in color pattern in this species, an outcome that is discussed in terms of phenotypic evolution. Rev. Biol. Trop. 57 (Suppl. 1): 313-321. Epub 2009 November 30.

  8. A probabilistic Hu-Washizu variational principle

    Science.gov (United States)

    Liu, W. K.; Belytschko, T.; Besterfield, G. H.

    1987-01-01

    A Probabilistic Hu-Washizu Variational Principle (PHWVP) for the Probabilistic Finite Element Method (PFEM) is presented. This formulation is developed for both linear and nonlinear elasticity. The PHWVP allows incorporation of the probabilistic distributions for the constitutive law, compatibility condition, equilibrium, domain and boundary conditions into the PFEM. Thus, a complete probabilistic analysis can be performed where all aspects of the problem are treated as random variables and/or fields. The Hu-Washizu variational formulation is available in many conventional finite element codes thereby enabling the straightforward inclusion of the probabilistic features into present codes.

  9. Revisiting sample size: are big trials the answer?

    Science.gov (United States)

    Lurati Buse, Giovanna A L; Botto, Fernando; Devereaux, P J

    2012-07-18

    The superiority of the evidence generated in randomized controlled trials over observational data is conditional not only on randomization. Randomized controlled trials require proper design and implementation to provide a reliable effect estimate. Adequate random sequence generation, allocation implementation, analyses based on the intention-to-treat principle, and sufficient power are crucial to the quality of a randomized controlled trial. Power, or the probability of the trial detecting a difference when a real difference between treatments exists, strongly depends on sample size. The quality of orthopaedic randomized controlled trials is frequently threatened by a limited sample size. This paper reviews basic concepts and pitfalls in sample-size estimation and focuses on the importance of large trials in the generation of valid evidence.
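
    The power-sample size relationship the authors stress can be made concrete; a sketch using statsmodels' two-proportion power solver, where the 10% vs. 7% event rates, alpha and power are illustrative assumptions:

        # Sketch: patients per arm needed to detect a 10% vs 7% event-rate difference
        # with 80% power at alpha = 0.05 (illustrative numbers).
        from statsmodels.stats.power import NormalIndPower
        from statsmodels.stats.proportion import proportion_effectsize

        effect = proportion_effectsize(0.10, 0.07)
        n_per_arm = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                                 power=0.80, alternative='two-sided')
        print("about %d patients per arm" % round(n_per_arm))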

  10. Distribution of peak expiratory flow variability by age, gender and smoking habits in a random population sample aged 20-70 yrs

    NARCIS (Netherlands)

    Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B

    1994-01-01

    Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),

  11. Time dependent variation of carrying capacity of prestressed precast beam

    Science.gov (United States)

    Le, Tuan D.; Konečný, Petr; Matečková, Pavlína

    2018-04-01

    The article deals with the evaluation of the time-dependent carrying capacity of a precast concrete element. Variation of the resistance is an inherent property of laboratory as well as in-situ members. Thus, specifying the highest plausible laboratory sample resistance is important with respect to the evaluation of laboratory experiments, given the loading capacity of the test machine. The ultimate capacity is evaluated through the bending moment resistance of a simply supported prestressed concrete beam. A probabilistic assessment is applied, considering the scatter of the random variables of concrete compressive strength and effective height of the cross section. The Monte Carlo simulation technique is used to investigate the performance of the beam's cross section under changes in tendon positions and concrete compressive strength.
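
    A minimal Monte Carlo sketch in the spirit of this record: scatter in a beam's bending resistance when concrete strength and effective depth are random. The simplified rectangular-stress-block formula and every parameter value are illustrative assumptions, not the paper's model.

        # Sketch: Monte Carlo scatter of bending resistance M_R = A_p*f_p*(d - a/2),
        # with stress-block depth a = A_p*f_p / (0.85*f_c*b). Illustrative values.
        import numpy as np

        rng = np.random.default_rng(4)
        n = 100_000
        f_c = rng.normal(45e6, 4e6, n)      # concrete compressive strength (Pa)
        d = rng.normal(0.45, 0.005, n)      # effective depth of tendons (m)
        A_p, f_p, b = 1.2e-3, 1.6e9, 0.30   # tendon area (m^2), stress (Pa), width (m)

        a = A_p * f_p / (0.85 * f_c * b)    # depth of the compressive stress block
        M_R = A_p * f_p * (d - a / 2.0)     # bending moment resistance (N*m)
        print("mean %.0f kNm, 5%% fractile %.0f kNm"
              % (M_R.mean() / 1e3, np.percentile(M_R, 5) / 1e3))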

  12. Dynamical implications of sample shape for avalanches in 2-dimensional random-field Ising model with saw-tooth domain wall

    Science.gov (United States)

    Tadić, Bosiljka

    2018-03-01

    We study dynamics of a built-in domain wall (DW) in 2-dimensional disordered ferromagnets with different sample shapes using the random-field Ising model on a square lattice rotated by 45 degrees. The saw-tooth DW of length Lx is created along one side and swept through the sample by slow ramping of the external field until the complete magnetisation reversal and the wall annihilation at the open top boundary at a distance Ly. By fixing the number of spins N = Lx × Ly = 10^6 and the random-field distribution at a value above the critical disorder, we vary the ratio of the DW length to the annihilation distance in the range Lx/Ly ∈ [1/16, 16]. Periodic boundary conditions are applied in the y-direction so that these ratios comprise different samples, i.e., surfaces of cylinders with the changing perimeter Lx and height Ly. We analyse the avalanches of the DW slips between successive field updates, and the multifractal structure of the magnetisation fluctuation time series. Our main findings are that the domain-wall lengths materialised in different sample shapes have an impact on the dynamics at all scales. Moreover, the domain-wall motion at the beginning of the hysteresis loop (HLB) probes the disorder effects, resulting in fluctuations that are significantly different from the large avalanches in the central part of the loop (HLC), where the strong fields dominate. Specifically, the fluctuations in HLB exhibit a wide multifractal spectrum, which shifts towards higher values of the exponents when the DW length is reduced. The distributions of the avalanches in these segments of the loops obey power-law decay with exponential cutoffs, with exponents firmly in the mean-field universality class for long DWs. In contrast, the avalanches in the HLC obey a Tsallis density distribution with power-law tails, which indicates new categories of scale-invariant behaviour for different ratios Lx/Ly. The large fluctuations in the HLC, on the other

  13. Simultaneous escaping of explicit and hidden free energy barriers: application of the orthogonal space random walk strategy in generalized ensemble based conformational sampling.

    Science.gov (United States)

    Zheng, Lianqing; Chen, Mengen; Yang, Wei

    2009-06-21

    To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures, etc. As usually observed, in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction and these residual free energy barriers could greatly abolish the sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimension subset of the target system; then the "Hamiltonian lagging" problem, which reveals the fact that necessary structural relaxation falls behind the move of the collective variable, may be likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.

  14. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    International Nuclear Information System (INIS)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
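
    A toy version of the sparse recovery problem described above, with scikit-learn's Lasso standing in for the paper's ℓ1-minimization solver; the Legendre basis, planted sparsity pattern, and problem sizes are assumptions for illustration.

        # Sketch: recover a sparse Legendre (PC-like) expansion from few random
        # samples via l1-regularized regression. Sizes and sparsity are assumptions.
        import numpy as np
        from numpy.polynomial.legendre import legvander
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(5)
        order, n_samples = 20, 60
        c_true = np.zeros(order + 1)
        c_true[[0, 3, 7]] = [1.0, 0.5, -0.8]   # sparse true coefficients

        xs = rng.uniform(-1, 1, n_samples)     # natural sampling for Legendre
        Phi = legvander(xs, order)             # matrix of basis evaluations
        ys = Phi @ c_true + rng.normal(0, 0.01, n_samples)

        c_hat = Lasso(alpha=1e-3, fit_intercept=False,
                      max_iter=50_000).fit(Phi, ys).coef_
        print("recovered support:", np.nonzero(np.abs(c_hat) > 0.05)[0])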

  15. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    Science.gov (United States)

    Hampton, Jerrad; Doostan, Alireza

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  16. Methods for identifying SNP interactions: a review on variations of Logic Regression, Random Forest and Bayesian logistic regression.

    Science.gov (United States)

    Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula

    2011-01-01

    Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets comes the inherent challenges of new methods of statistical analysis and modeling. Considering a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.

  17. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according... ...implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful...

  18. A partially reflecting random walk on spheres algorithm for electrical impedance tomography

    Energy Technology Data Exchange (ETDEWEB)

    Maire, Sylvain, E-mail: maire@univ-tln.fr [Laboratoire LSIS Equipe Signal et Image, Université du Sud Toulon-Var, Av. Georges Pompidou, BP 56, 83162 La Valette du Var Cedex (France); Simon, Martin, E-mail: simon@math.uni-mainz.de [Institute of Mathematics, Johannes Gutenberg University, 55099 Mainz (Germany)

    2015-12-15

    In this work, we develop a probabilistic estimator for the voltage-to-current map arising in electrical impedance tomography. This novel so-called partially reflecting random walk on spheres estimator enables Monte Carlo methods to compute the voltage-to-current map in an embarrassingly parallel manner, which is an important issue with regard to the corresponding inverse problem. Our method uses the well-known random walk on spheres algorithm inside subdomains where the diffusion coefficient is constant and employs replacement techniques motivated by finite difference discretization to deal with both mixed boundary conditions and interface transmission conditions. We analyze the global bias and the variance of the new estimator both theoretically and experimentally. Subsequently, the variance of the new estimator is considerably reduced via a novel control variate conditional sampling technique which yields a highly efficient hybrid forward solver coupling probabilistic and deterministic algorithms.
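
    For orientation, the classical (fully absorbing) walk-on-spheres estimator is easy to sketch for the Laplace equation on the unit disk; the paper's partially reflecting variant for mixed boundary and transmission conditions is considerably more involved. The geometry and boundary data below are assumptions.

        # Sketch: classical walk on spheres for Laplace's equation on the unit disk
        # with Dirichlet data g; g is chosen harmonic, so u(x) = g(x) exactly.
        import numpy as np

        def g(p):                                  # boundary data on the unit circle
            return p[0] ** 2 - p[1] ** 2

        def walk_on_spheres(x, eps=1e-4, rng=np.random.default_rng(6)):
            p = np.array(x, float)
            while True:
                r = 1.0 - np.linalg.norm(p)        # distance to the boundary
                if r < eps:                        # absorbed: score boundary value
                    return g(p / np.linalg.norm(p))
                theta = rng.uniform(0, 2 * np.pi)  # jump uniformly on largest sphere
                p += r * np.array([np.cos(theta), np.sin(theta)])

        x0 = (0.3, 0.4)
        u_hat = np.mean([walk_on_spheres(x0) for _ in range(20_000)])
        print("estimate %.4f, exact %.4f" % (u_hat, x0[0] ** 2 - x0[1] ** 2))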

  19. Comparison of Address-based Sampling and Random-digit Dialing Methods for Recruiting Young Men as Controls in a Case-Control Study of Testicular Cancer Susceptibility

    OpenAIRE

    Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.

    2013-01-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-...

  20. Non-stationary random vibration analysis of a 3D train-bridge system using the probability density evolution method

    Science.gov (United States)

    Yu, Zhi-wu; Mao, Jian-feng; Guo, Feng-qi; Guo, Wei

    2016-03-01

    Rail irregularity is one of the main sources of train-bridge random vibration. A new random vibration theory for coupled train-bridge systems is proposed in this paper. First, the number theory method (NTM) with 2N-dimensional vectors for the stochastic harmonic function (SHF) of the rail irregularity power spectral density was adopted to determine the representative points of spatial frequencies and phases to generate the random rail irregularity samples, and the non-stationary rail irregularity samples were modulated with a slowly varying function. Second, the probability density evolution method (PDEM) was employed to calculate the random dynamic vibration of the three-dimensional (3D) train-bridge system by a program compiled on the MATLAB® software platform. Eventually, the Newmark-β integration method and the double edge difference method of total variation diminishing (TVD) format were adopted to obtain the mean value curve, the standard deviation curve and the time-history probability density information of the responses. A case study was presented in which the ICE-3 train travels on a three-span simply-supported high-speed railway bridge with excitation of random rail irregularity. The results showed that, compared to the Monte Carlo simulation, the PDEM has higher computational efficiency for the same accuracy, i.e., an improvement by 1-2 orders of magnitude. Additionally, the influences of rail irregularity and train speed on the random vibration of the coupled train-bridge system were discussed.
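
    The Newmark-β step used above is standard and compact enough to sketch for a single-DOF oscillator under a random excitation; the paper of course applies it to a full 3D train-bridge model within PDEM, and all parameters below are assumptions.

        # Sketch: Newmark-beta (average acceleration, beta=1/4, gamma=1/2) for a
        # single-DOF oscillator under white-noise forcing (illustrative parameters).
        import numpy as np

        m, c, k = 1.0, 0.05, 4.0 * np.pi ** 2   # mass, damping, stiffness (f = 1 Hz)
        beta, gamma, dt, nsteps = 0.25, 0.5, 0.01, 2000
        rng = np.random.default_rng(7)
        f = rng.normal(0.0, 1.0, nsteps + 1)    # one sample of the random forcing

        u = v = 0.0
        a = (f[0] - c * v - k * u) / m
        keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)
        for i in range(nsteps):
            # effective load from the current state (standard Newmark form)
            rhs = (f[i + 1]
                   + m * (u / (beta * dt ** 2) + v / (beta * dt)
                          + (0.5 / beta - 1) * a)
                   + c * (gamma * u / (beta * dt) + (gamma / beta - 1) * v
                          + dt * (gamma / (2 * beta) - 1) * a))
            u_new = rhs / keff
            a_new = ((u_new - u) / (beta * dt ** 2) - v / (beta * dt)
                     - (0.5 / beta - 1) * a)
            v = v + dt * ((1 - gamma) * a + gamma * a_new)
            u, a = u_new, a_new
        print("final displacement %.4e" % u)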

  1. Dynamic Reliability Analysis of Gear Transmission System of Wind Turbine in Consideration of Randomness of Loadings and Parameters

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2014-01-01

    Full Text Available A dynamic model of the gear transmission system of a wind turbine is built with consideration of the randomness of loads and parameters. The dynamic response of the system is obtained using the theory of random sampling and the Runge-Kutta method. According to the rain-flow counting principle, the dynamic meshing forces are converted into a series of luffing fatigue load spectra. The amplitude and frequency of the equivalent stress are obtained using the equivalent method of the Gerber quadratic curve. Moreover, the dynamic reliability model of the components and system is built according to the theory of probability of cumulative fatigue damage. The system reliability with the random variation of parameters is calculated and the influence of random parameters on the dynamic reliability of components is analyzed. In the end, the results of the proposed method are compared with those of the Monte Carlo method. This paper can be instrumental in the design of wind turbine gear transmission systems with more advantageous dynamic reliability.

  2. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution

    Directory of Open Access Journals (Sweden)

    Xavier A. Harrison

    2015-07-01

    Full Text Available Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data.
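
    The Beta-Binomial overdispersion at issue here is easy to exhibit by simulation; a sketch (not the paper's code) comparing the variance of Beta-Binomial draws with the plain-Binomial expectation:

        # Sketch: simulate Beta-Binomial (overdispersed) proportion data and compare
        # the observed variance with the plain-Binomial value n*p*(1-p).
        import numpy as np

        rng = np.random.default_rng(8)
        n_trials, n_obs, p, rho = 20, 2000, 0.3, 0.1  # rho: intra-class correlation
        a = p * (1 - rho) / rho                       # Beta(a, b) with mean p
        b = (1 - p) * (1 - rho) / rho

        p_i = rng.beta(a, b, n_obs)                   # observation-level probabilities
        counts = rng.binomial(n_trials, p_i)          # Beta-Binomial draws

        var_binom = n_trials * p * (1 - p)
        print("observed var %.2f vs Binomial %.2f (ratio %.2f)"
              % (counts.var(), var_binom, counts.var() / var_binom))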

  3. Effects of gustatory stimulants of salivary secretion on salivary pH and flow: a randomized controlled trial.

    Science.gov (United States)

    da Mata, A D S P; da Silva Marques, D N; Silveira, J M L; Marques, J R O F; de Melo Campos Felino, E T; Guilherme, N F R P M

    2009-04-01

    To compare salivary pH changes and stimulation efficacy of two different gustatory stimulants of salivary secretion (GSSS). Portuguese Dental Faculty Clinic. Double-blind randomized controlled trial. One hundred and twenty volunteers were randomized to two intervention groups. Sample size was calculated using an alpha error of 0.05 and a beta of 0.20. Participants were randomly assigned to receive either a new gustatory stimulant of salivary secretion containing a weaker malic acid, fluoride and xylitol, or a traditional citric acid-based one. Saliva collection was obtained by established methods at different times. The salivary pH of the samples was determined with a pH meter and a microelectrode. Salivary pH variations, counts of subjects with pH below 5.5 for over 1 min, and stimulated salivary flow were the main outcome measures. Both GSSS significantly stimulated salivary output, without significant differences between the two groups. The new gustatory stimulant of salivary secretion presented a risk reduction of 80 +/- 10.6% (95% CI) when compared with the traditional one. Gustatory stimulants of salivary secretion with fluoride, xylitol and lower acid content maintain similar salivary stimulation capacity while significantly reducing the dental erosion predictive potential.

  4. A comparison of observation-level random effect and Beta-Binomial models for modelling overdispersion in Binomial data in ecology & evolution.

    Science.gov (United States)

    Harrison, Xavier A

    2015-01-01

    Overdispersion is a common feature of models of biological data, but researchers often fail to model the excess variation driving the overdispersion, resulting in biased parameter estimates and standard errors. Quantifying and modeling overdispersion when it is present is therefore critical for robust biological inference. One means to account for overdispersion is to add an observation-level random effect (OLRE) to a model, where each data point receives a unique level of a random effect that can absorb the extra-parametric variation in the data. Although some studies have investigated the utility of OLRE to model overdispersion in Poisson count data, studies doing so for Binomial proportion data are scarce. Here I use a simulation approach to investigate the ability of both OLRE models and Beta-Binomial models to recover unbiased parameter estimates in mixed effects models of Binomial data under various degrees of overdispersion. In addition, as ecologists often fit random intercept terms to models when the random effect sample size is low (<5 levels), I investigate the performance of both model types under a range of random effect sample sizes when overdispersion is present. Simulation results revealed that the efficacy of OLRE depends on the process that generated the overdispersion; OLRE failed to cope with overdispersion generated from a Beta-Binomial mixture model, leading to biased slope and intercept estimates, but performed well for overdispersion generated by adding random noise to the linear predictor. Comparison of parameter estimates from an OLRE model with those from its corresponding Beta-Binomial model readily identified when OLRE were performing poorly due to disagreement between effect sizes, and this strategy should be employed whenever OLRE are used for Binomial data to assess their reliability. Beta-Binomial models performed well across all contexts, but showed a tendency to underestimate effect sizes when modelling non-Beta-Binomial data. Finally, both OLRE and Beta-Binomial models performed

  5. Process variation in electron beam sterilization

    International Nuclear Information System (INIS)

    Beck, Jeffrey A.

    2012-01-01

    The qualification and control of electron beam sterilization can be improved by the application of proven statistical analysis techniques such as Analysis of Variance (ANOVA) and Statistical Tolerance Limits. These statistical techniques can be useful tools in: • locating and quantifying the minimum and maximum absorbed dose in a product; • estimating the expected process maximum dose, given a minimum sterilizing dose; • setting a process minimum dose target, based on an allowance for random measurement and process variation; • determining the dose relationship between a reference dosimeter and process minimum and maximum doses. This study investigates and demonstrates the application of these tools in qualifying electron beam sterilization, and compares the conclusions obtained with those obtained using practices recommended in the Guide for Process Control in Radiation Sterilization. The study supports the following conclusions for electron beam processes: 1. ANOVA is a more effective tool for evaluating the equivalency of absorbed doses than the methods suggested in the Guide. 2. Process limits computed using statistical tolerance limits more accurately reflect actual process variability than the AAMI method, which applies ±2 sample standard deviations (s) regardless of sample size. 3. The use of reference dose ratios lends itself to qualification using statistical tolerance limits. The current AAMI recommended approach may result in an overly optimistic estimate of the reference dose adjustment factor, as it is based on application of ±2(s) tolerances regardless of sample size.

  6. Unit-specific calibration of Actigraph accelerometers in a mechanical setup - is it worth the effort? The effect on random output variation caused by technical inter-instrument variability in the laboratory and in the field

    DEFF Research Database (Denmark)

    Moeller, Niels C; Korsholm, Lars; Kristensen, Peter L

    2008-01-01

    BACKGROUND: Potentially, unit-specific in-vitro calibration of accelerometers could increase field data quality and study power. However, reduced inter-unit variability would only be important if random instrument variability contributes considerably to the total variation in field data. Therefor...

  7. Variation in rank abundance replicate samples and impact of clustering

    NARCIS (Netherlands)

    Neuteboom, J.H.; Struik, P.C.

    2005-01-01

    Calculating a single-sample rank abundance curve by using the negative-binomial distribution provides a way to investigate the variability within rank abundance replicate samples and yields a measure of the degree of heterogeneity of the sampled community. The calculation of the single-sample rank

  8. Evaluating effectiveness of down-sampling for stratified designs and unbalanced prevalence in Random Forest models of tree species distributions in Nevada

    Science.gov (United States)

    Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino

    2012-01-01

    Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...

  9. Aspects of Students' Reasoning about Variation in Empirical Sampling Distributions

    Science.gov (United States)

    Noll, Jennifer; Shaughnessy, J. Michael

    2012-01-01

    Sampling tasks and sampling distributions provide a fertile realm for investigating students' conceptions of variability. A project-designed teaching episode on samples and sampling distributions was team-taught in 6 research classrooms (2 middle school and 4 high school) by the investigators and regular classroom mathematics teachers. Data…

  10. Random amplified polymorphic DNA analysis of Anopheles nuneztovari (Diptera: Culicidae from Western and Northeastern Colombia

    Directory of Open Access Journals (Sweden)

    Carmen Elisa Posso

    2003-06-01

    Random amplified polymorphic DNA (RAPD) markers were used to analyze 119 DNA samples of three Colombian Anopheles nuneztovari populations to study genetic variation and structure. Genetic diversity, estimated from heterozygosity, averaged 0.34. Genetic flow was greater between the two populations located in Western Colombia (FST: 0.035; Nm: 6.8) but lower between these two and the northeastern population (FST: 0.08; Nm: 2.8). According to molecular variance analysis, the genetic distance between populations was significant (phiST 0.1131, P < 0.001). The variation among individuals within populations (phiST 0.8869, P < 0.001) was also significant, suggesting a greater degree of population subdivision, not considered in this study. Both the parameters evaluated and the genetic flow suggest that Colombian An. nuneztovari populations are co-specific.

  11. Temporal and Spatial Variation of Soil Bacteria Richness, Composition, and Function in a Neotropical Rainforest.

    Science.gov (United States)

    Kivlin, Stephanie N; Hawkes, Christine V

    2016-01-01

    The high diversity of tree species has traditionally been considered an important controller of belowground processes in tropical rainforests. However, soil water availability and resources are also primary regulators of soil bacteria in many ecosystems. Separating the effects of these biotic and abiotic factors in the tropics is challenging because of their high spatial and temporal heterogeneity. To determine the drivers of tropical soil bacteria, we examined tree species effects using experimental tree monocultures and secondary forests at La Selva Biological Station in Costa Rica. A randomized block design captured spatial variation and we sampled at four dates across two years to assess temporal variation. We measured bacteria richness, phylogenetic diversity, community composition, biomass, and functional potential. All bacteria parameters varied significantly across dates. In addition, bacteria richness and phylogenetic diversity were affected by the interaction of vegetation type and date, whereas bacteria community composition was affected by the interaction of vegetation type and block. Shifts in bacteria community richness and composition were unrelated to shifts in enzyme function, suggesting physiological overlap among taxa. Based on the observed temporal and spatial heterogeneity, our understanding of tropical soil bacteria will benefit from additional work to determine the optimal temporal and spatial scales for sampling. Understanding spatial and temporal variation will facilitate prediction of how tropical soil microbes will respond to future environmental change.

  12. Interpretation of variations in MODIS-measured greenness levels of Amazon forests during 2000 to 2009

    International Nuclear Information System (INIS)

    Samanta, Arindam; Myneni, Ranga B; Ganguly, Sangram; Vermote, Eric; Nemani, Ramakrishna R

    2012-01-01

    This work investigates variations in satellite-measured greenness of Amazon forests using ten years of NASA Moderate Resolution Imaging Spectroradiometer (MODIS) enhanced vegetation index (EVI) data. Corruption of optical remote sensing data with clouds and aerosols is prevalent in this region; filtering corrupted data causes spatial sampling constraints, as well as reducing the record length, which introduces large biases in estimates of greenness anomalies. The EVI data, analyzed in multiple ways and taking into account EVI accuracy, consistently show a pattern of negligible changes in the greenness levels of forests both in the area affected by drought in 2005 and outside it. Small random patches of anomalous greening and browning—especially prominent in 2009—appear in all ten years, irrespective of contemporaneous variations in precipitation, but with no persistence over time. The fact that over 90% of the EVI anomalies are insignificantly small—within the envelope of error (95% confidence interval) in EVI—warrants cautious interpretation of these results: there were no changes in the greenness of these forests, or if there were changes, the EVI data failed to capture these either because the constituent reflectances were saturated or the moderate resolution precluded viewing small-scale variations. This suggests a need for more accurate and spatially resolved synoptic views from satellite data and corroborating comprehensive ground sampling to understand the greenness dynamics of these forests. (letter)

  13. Interpretation of Variations in Modis-Measured Greenness Levels of Amazon Forests During 2000 to 2009

    Science.gov (United States)

    Samanta, Arindam; Ganguly, Sangram; Vermote, Eric; Nemani, Ramakrishna R.; Myneni, Ranga B.

    2012-01-01

    This work investigates variations in satellite-measured greenness of Amazon forests using ten years of NASA Moderate Resolution Imaging Spectroradiometer (MODIS) enhanced vegetation index (EVI) data. Corruption of optical remote sensing data with clouds and aerosols is prevalent in this region; filtering corrupted data causes spatial sampling constraints, as well as reducing the record length, which introduces large biases in estimates of greenness anomalies. The EVI data, analyzed in multiple ways and taking into account EVI accuracy, consistently show a pattern of negligible changes in the greenness levels of forests both in the area affected by drought in 2005 and outside it. Small random patches of anomalous greening and browning-especially prominent in 2009-appear in all ten years, irrespective of contemporaneous variations in precipitation, but with no persistence over time. The fact that over 90% of the EVI anomalies are insignificantly small-within the envelope of error (95% confidence interval) in EVI-warrants cautious interpretation of these results: there were no changes in the greenness of these forests, or if there were changes, the EVI data failed to capture these either because the constituent reflectances were saturated or the moderate resolution precluded viewing small-scale variations. This suggests a need for more accurate and spatially resolved synoptic views from satellite data and corroborating comprehensive ground sampling to understand the greenness dynamics of these forests.

  14. Statistical analysis of random pulse trains

    International Nuclear Information System (INIS)

    Da Costa, G.

    1977-02-01

    Some experimental and theoretical results concerning the statistical properties of optical beams formed by a finite number of independent pulses are presented. The considered waves (corresponding to each pulse) present important spatial variations of the illumination distribution in a cross-section of the beam, due to the time-varying random refractive index distribution in the active medium. Some examples of this kind of emission are: (a) Free-running ruby laser emission; (b) Mode-locked pulse trains; (c) Randomly excited nonlinear media

  15. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
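
    The random-block idea described above is easy to sketch in code. The following Python function is a minimal illustration (the function name, arm labels and candidate block sizes are invented for the example, not taken from the paper); each block contains every arm equally often, so allocation stays balanced while the randomly drawn block size keeps the sequence hard to predict.

```python
import random

def blocked_randomization(n_participants, arms=("treatment", "control"),
                          block_sizes=(2, 4, 6), seed=None):
    """Allocate participants to arms in blocks of randomly chosen size.

    Each block size must be a multiple of the number of arms so that
    every arm appears equally often within a block.
    """
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_participants:
        size = rng.choice(block_sizes)      # randomly selected block size
        block = list(arms) * (size // len(arms))
        rng.shuffle(block)                  # random order within the block
        allocation.extend(block)
    return allocation[:n_participants]

print(blocked_randomization(10, seed=42))
```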

  16. Genetic variations of Lansium domesticum Corr. accessions from Java, Sumatra and Ceram based on Random Amplified Polymorphic DNA fingerprints

    Directory of Open Access Journals (Sweden)

    KUSUMADEWI SRI YULITA

    2011-07-01

    Yulita KS (2011) Genetic variations of Lansium domesticum Corr. accessions from Java, Bengkulu and Ceram based on Random Amplified Polymorphic DNA fingerprints. Biodiversitas 12: 125-130. Duku (Lansium domesticum Corr.) is one of the popular tropical fruits in SE Asia. The species has three varieties, known as duku, langsat and kokosan; duku is the most popular one, being the sweetest fruit. Indonesia has several local varieties of duku, such as duku Condet, duku Sumber and duku Palembang. The present study aimed to assess the genetic diversity of 47 accessions of duku from Java, Sumatra, and Ceram based on RAPD fingerprints. Ten RAPD primers were initially screened and five were selected for the analysis. These five primers (OPA 7, 13, 18, OPB 7, and OPN 12) generated 53 scorable bands with an average of 10.6 polymorphic fragments per primer. The percentage of polymorphism ranged from 16.89% (OPA 7 and OPN 12) to 24.54% (OPB 7), with an average of 20.16%. OPB 7 at 450 bp was exclusively possessed by accession 20 (Java), OPA 18 at 500 bp by accession 6 (Java), and at 550 bp by 3 clones from Bengkulu, while OPN 12 at 300 bp and OPA 13 at 450 bp were shared among the accessions. Clustering analysis was performed on the RAPD profiles using the UPGMA method. The genetic similarity values among accessions ranged from 0.02 to 0.65, suggesting high variation in the gene pool among accessions.

  17. Variable screening and ranking using sampling-based sensitivity measures

    International Nuclear Information System (INIS)

    Wu, Y-T.; Mohanty, Sitakanta

    2006-01-01

    This paper presents a methodology for screening insignificant random variables and ranking significant random variables using sensitivity measures, including two cumulative distribution function (CDF)-based and two mean-response-based measures. The methodology features (1) using random samples to compute sensitivities and (2) using acceptance limits, derived from the test-of-hypothesis, to classify significant and insignificant random variables. Because no approximation is needed in either the form of the performance functions or the type of continuous distribution functions representing input variables, the sampling-based approach can handle highly nonlinear functions with non-normal variables. The main characteristics and effectiveness of the sampling-based sensitivity measures are investigated using both simple and complex examples. Because the number of samples needed does not depend on the number of variables, the methodology appears to be particularly suitable for problems with large, complex models that have large numbers of random variables but relatively few significant random variables
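
    A minimal sketch of this style of sampling-based screening is shown below. It uses a simple mean-response measure with a permutation-based acceptance limit, which only loosely stands in for the CDF-based measures and test-of-hypothesis limits of the paper; all names and the toy model are invented for illustration.

```python
import numpy as np

def mean_response_sensitivity(x, y, alpha=0.05, n_perm=2000, seed=0):
    """Screen inputs with a mean-response-based sensitivity measure.

    For each input, the sample is split at that input's median and the
    difference in mean response between the two halves is compared with
    an acceptance limit obtained from a permutation test.
    """
    rng = np.random.default_rng(seed)
    n, d = x.shape
    significant = []
    for j in range(d):
        upper = x[:, j] > np.median(x[:, j])
        stat = abs(y[upper].mean() - y[~upper].mean())
        # Permutation null: shuffle y and recompute the statistic.
        null = np.empty(n_perm)
        for k in range(n_perm):
            yp = rng.permutation(y)
            null[k] = abs(yp[upper].mean() - yp[~upper].mean())
        if stat > np.quantile(null, 1 - alpha):
            significant.append(j)
    return significant

# Toy model: only the first two of five inputs matter.
rng = np.random.default_rng(1)
x = rng.normal(size=(500, 5))
y = 3 * x[:, 0] + np.sin(x[:, 1]) + rng.normal(scale=0.1, size=500)
print(mean_response_sensitivity(x, y))   # typically [0, 1]
```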

  18. The interpolation method of stochastic functions and the stochastic variational principle

    International Nuclear Information System (INIS)

    Liu Xianbin; Chen Qiu

    1993-01-01

    Uncertainties have attracted increasing attention in modern engineering structural design. Viewed on an appropriate scale, the inherent physical attributes (material properties) of many structural systems always exhibit some patterns of random variation in space and time; generally the random variation shows a small parameter fluctuation. For a linear mechanical system, the random variation is modeled as a random variation of a linear partial differential operator and, in the stochastic finite element method, as a random variation of a stiffness matrix. Besides the stochasticity of the structural physical properties, the influences of random loads, which always represent themselves as random boundary conditions, bring about much more complexity in structural analysis. Now the stochastic finite element method, or the probabilistic finite element method, is used to study structural systems with random physical parameters, whether or not the loads are random. Differing from the general finite element theory, the main difficulty which the stochastic finite element method faces is the inverse operation of stochastic operators and stochastic matrices, since the inverse operators and the inverse matrices are statistically correlated to the random parameters and random loads. So far, many efforts have been made to obtain reasonably approximate expressions of the inverse operators and inverse matrices, such as the Perturbation Method, Neumann Expansion Method, Galerkin Method (in appropriate Hilbert spaces defined for random functions), and Orthogonal Expansion Method. Among these methods, the Perturbation Method appears to be the most readily applicable. The advantage of these methods is that fairly accurate response statistics can be obtained under the condition of finite information on the input. However, the second-order statistics obtained by use of the Perturbation Method and Neumann Expansion Method are not always the appropriate ones, because the relevant second

  19. A census-weighted, spatially-stratified household sampling strategy for urban malaria epidemiology

    Directory of Open Access Journals (Sweden)

    Slutsker Laurence

    2008-02-01

    Background: Urban malaria is likely to become increasingly important as a consequence of the growing proportion of Africans living in cities. A novel sampling strategy was developed for urban areas to generate a sample simultaneously representative of population and inhabited environments. Such a strategy should facilitate analysis of important epidemiological relationships in this ecological context. Methods: Census maps and summary data for Kisumu, Kenya, were used to create a pseudo-sampling frame using the geographic coordinates of census-sampled structures. For every enumeration area (EA) designated as urban by the census (n = 535), a sample of structures equal to one-tenth the number of households was selected. In EAs designated as rural (n = 32), a geographically random sample totalling one-tenth the number of households was selected from a grid of points at 100 m intervals. The selected samples were cross-referenced to a geographic information system, and coordinates transferred to handheld global positioning units. Interviewers found the closest eligible household to the sampling point and interviewed the caregiver of an eligible child. Results: 4,336 interviews were completed in 473 of the 567 study area EAs from June 2002 through February 2003. EAs without completed interviews were randomly distributed, and non-response was approximately 2%. Mean distance from the assigned sampling point to the completed interview was 74.6 m, and was significantly less in urban than rural EAs, even when controlling for number of households. The selected sample had significantly more children and females of childbearing age than the general population, and fewer older individuals. Conclusion: This method selected a sample that was simultaneously population-representative and inclusive of important environmental variation. The use of a pseudo-sampling frame and pre-programmed handheld GPS units is more efficient and may yield a more complete sample than

  20. An antithetic variate to facilitate upper-stem height measurements for critical height sampling with importance sampling

    Science.gov (United States)

    Thomas B. Lynch; Jeffrey H. Gove

    2013-01-01

    Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...

  1. Variation in Physician Practice Styles within and across Emergency Departments.

    Directory of Open Access Journals (Sweden)

    Jessica Van Parys

    Despite the significant responsibility that physicians have in healthcare delivery, we know surprisingly little about why physician practice styles vary within or across institutions. Estimating variation in physician practice styles is complicated by the fact that patients are rarely randomly assigned to physicians. This paper uses the quasi-random assignment of patients to physicians in emergency departments (EDs) to show how physicians vary in their treatment of patients with minor injuries. The results reveal a considerable degree of variation in practice styles within EDs; physicians at the 75th percentile of the spending distribution spend 20% more than physicians at the 25th percentile. Observable physician characteristics do not explain much of the variation across physicians, but there is a significant degree of sorting between physicians and EDs over time, with high-cost physicians sorting into high-cost EDs as they gain experience. The results may shed light on why some EDs remain persistently higher-cost than others.

  2. Sampling intraspecific variability in leaf functional traits: Practical suggestions to maximize collected information.

    Science.gov (United States)

    Petruzzellis, Francesco; Palandrani, Chiara; Savi, Tadeja; Alberti, Roberto; Nardini, Andrea; Bacaro, Giovanni

    2017-12-01

    The choice of the best sampling strategy to capture mean values of functional traits for a species/population, while maintaining information about traits' variability and minimizing the sampling size and effort, is an open issue in functional trait ecology. Intraspecific variability (ITV) of functional traits strongly influences sampling size and effort. However, while adequate information is available about intraspecific variability between individuals (ITVBI) and among populations (ITVPOP), relatively few studies have analyzed intraspecific variability within individuals (ITVWI). Here, we provide an analysis of ITVWI of two foliar traits, namely specific leaf area (SLA) and osmotic potential (π), in a population of Quercus ilex L. We assessed the baseline ITVWI level of variation between the two traits and provided the minimum and optimal sampling size in order to take into account ITVWI, comparing sampling optimization outputs with those previously proposed in the literature. Different factors accounted for different amounts of variance of the two traits. SLA variance was mostly spread within individuals (43.4% of the total variance), while π variance was mainly spread between individuals (43.2%). Strategies that did not account for all the canopy strata produced mean values not representative of the sampled population. The minimum size to adequately capture the studied functional traits corresponded to 5 leaves taken randomly from 5 individuals, while the most accurate and feasible sampling size was 4 leaves taken randomly from 10 individuals. We demonstrate that the spatial structure of the canopy can significantly affect trait variability. Moreover, different strategies for different traits could be implemented during sampling surveys. We partially confirm sampling sizes previously proposed in the recent literature and encourage future analyses involving different traits.

  3. High-speed true random number generation based on paired memristors for security electronics

    Science.gov (United States)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    True random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and internet of things. Here we demonstrate a TRNG using intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum oxide based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically-implemented random number generation.
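
    The compare-and-alternate idea can be mimicked with a short simulation. In the sketch below the log-normal resistance distributions and the mismatch factor are invented stand-ins for real device behavior; flipping every other comparison emulates the alternating read scheme that cancels systematic bias between the paired devices.

```python
import numpy as np

rng = np.random.default_rng()  # stands in for physical device noise

def read_off_resistance(mean, spread, size):
    # Off-state resistance with pronounced cycle-to-cycle variation.
    return rng.lognormal(mean=np.log(mean), sigma=spread, size=size)

def trng_bits(n_bits, mismatch=1.05, spread=0.3):
    """Generate bits by comparing two memristors' off-state resistances.

    `mismatch` models a systematic offset between the paired devices;
    alternating the comparison order on successive reads cancels the
    resulting bias.
    """
    r_a = read_off_resistance(1e5 * mismatch, spread, n_bits)
    r_b = read_off_resistance(1e5, spread, n_bits)
    bits = (r_a > r_b).astype(int)
    bits[1::2] ^= 1          # flip every other bit = swapped read order
    return bits

bits = trng_bits(100_000)
print(f"fraction of ones: {bits.mean():.4f}")  # close to 0.5
```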

  4. Visualizing the Sample Standard Deviation

    Science.gov (United States)

    Sarkar, Jyotirmoy; Rashid, Mamunur

    2017-01-01

    The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
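
    The pairwise interpretation is easy to verify numerically: for any sample, the square root of twice the mean squared half-deviation over all distinct pairs equals the usual sample SD (with the n-1 divisor). A quick check on an arbitrary simulated sample:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
x = rng.normal(loc=10, scale=3, size=50)

# Usual definition: root of the mean squared deviation from the mean.
sd_usual = x.std(ddof=1)

# Pairwise view: root of twice the mean square of all pairwise
# half-deviations (x_i - x_j)/2 over the n(n-1)/2 distinct pairs.
half_devs = np.array([(a - b) / 2 for a, b in combinations(x, 2)])
sd_pairwise = np.sqrt(2 * np.mean(half_devs**2))

print(sd_usual, sd_pairwise)   # identical up to floating-point error
```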

  5. Network Sampling with Memory: A proposal for more efficient sampling from social networks

    Science.gov (United States)

    Mouw, Ted; Verdery, Ashton M.

    2013-01-01

    Techniques for sampling from networks have grown into an important area of research across several fields. For sociologists, the possibility of sampling from a network is appealing for two reasons: (1) A network sample can yield substantively interesting data about network structures and social interactions, and (2) it is useful in situations where study populations are difficult or impossible to survey with traditional sampling approaches because of the lack of a sampling frame. Despite its appeal, methodological concerns about the precision and accuracy of network-based sampling methods remain. In particular, recent research has shown that sampling from a network using a random walk based approach such as Respondent Driven Sampling (RDS) can result in high design effects (DE)—the ratio of the sampling variance to the sampling variance of simple random sampling (SRS). A high design effect means that more cases must be collected to achieve the same level of precision as SRS. In this paper we propose an alternative strategy, Network Sampling with Memory (NSM), which collects network data from respondents in order to reduce design effects and, correspondingly, the number of interviews needed to achieve a given level of statistical power. NSM combines a “List” mode, where all individuals on the revealed network list are sampled with the same cumulative probability, with a “Search” mode, which gives priority to bridge nodes connecting the current sample to unexplored parts of the network. We test the relative efficiency of NSM compared to RDS and SRS on 162 school and university networks from Add Health and Facebook that range in size from 110 to 16,278 nodes. The results show that the average design effect for NSM on these 162 networks is 1.16, which is very close to the efficiency of a simple random sample (DE=1), and 98.5% lower than the average DE we observed for RDS. PMID:24159246
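
    The design effect at the heart of this comparison is simply the ratio of the estimator's sampling variance under a given design to its variance under SRS of the same size. The sketch below estimates a DE by simulation on an invented clustered population (a crude stand-in for a network sample; the Add Health and Facebook data are not used here).

```python
import numpy as np

def design_effect(sample_means, population, n):
    """DE = sampling variance of a design's estimator divided by the
    sampling variance of a simple random sample of the same size."""
    var_srs = population.var(ddof=0) / n        # theoretical SRS variance
    return sample_means.var(ddof=1) / var_srs

# Clustered population: cluster sampling typically inflates DE above 1.
rng = np.random.default_rng(3)
clusters = [rng.normal(mu, 1, size=50) for mu in rng.normal(0, 2, size=40)]
population = np.concatenate(clusters)
n = 100
means = []
for _ in range(4000):
    picked = rng.choice(len(clusters), size=2, replace=False)
    sample = np.concatenate([clusters[i] for i in picked])  # n = 100
    means.append(sample.mean())
print(f"DE ≈ {design_effect(np.array(means), population, n):.2f}")
```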

  6. Fast integration using quasi-random numbers

    International Nuclear Information System (INIS)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-01-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples

  7. Fast integration using quasi-random numbers

    Science.gov (United States)

    Bossert, J.; Feindt, M.; Kerzel, U.

    2006-04-01

    Quasi-random numbers are specially constructed series of numbers optimised to evenly sample a given s-dimensional volume. Using quasi-random numbers in numerical integration converges faster with a higher accuracy compared to the case of pseudo-random numbers. The basic properties of quasi-random numbers are introduced, various generators are discussed and the achieved gain is illustrated by examples.
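
    The advertised gain is easy to reproduce, assuming SciPy's scipy.stats.qmc module is available: integrating a smooth function over the unit square with scrambled Sobol points usually gives a markedly smaller error than plain pseudo-random Monte Carlo at the same sample size.

```python
import numpy as np
from scipy.stats import qmc

# Integrate f over the unit square; the exact value is (e-1)^2.
f = lambda u: np.exp(u[:, 0] + u[:, 1])
n, dim = 2**12, 2

pseudo = np.random.default_rng(0).random((n, dim))
sobol = qmc.Sobol(d=dim, scramble=True, seed=0).random(n)

exact = (np.e - 1) ** 2
print("pseudo-random error:", abs(f(pseudo).mean() - exact))
print("quasi-random error: ", abs(f(sobol).mean() - exact))
```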

  8. Positive-definite matrix processes of finite variation

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Stelzer, Robert

    2007-01-01

    Processes of finite variation, which take values in the positive semidefinite matrices and are representable as the sum of an integral with respect to time and one with respect to an extended Poisson random measure, are considered. For such processes we derive conditions for the square root (and ...

  9. Positive-Definite Matrix Processes of Finite Variation

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Stelzer, Robert

    Processes of finite variation, which take values in the positive semidefinite matrices and are representable as the sum of an integral with respect to time and one with respect to an extended Poisson random measure, are considered. For such processes we derive conditions for the square root (and ...

  10. Random and systematic sampling error when hooking fish to monitor skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden in Australian farmed yellowtail kingfish (Seriola lalandi).

    Science.gov (United States)

    Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B

    2018-05-01

    The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke counts of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess its potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (the expected difference in parasite count between 2 sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK had on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population. Hooking therefore biased sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, since an underestimation of parasite burden could have had a production impact on the study population. In instances where fish populations and parasite burdens are more homogenous, sampling error may be less severe. Sampling error when capturing fish

  11. Circadian and longitudinal variation of serum C-telopeptide, osteocalcin, and skeletal alkaline phosphatase in C3H/HeJ mice.

    Science.gov (United States)

    Srivastava, A K; Bhattacharyya, S; Li, X; Mohan, S; Baylink, D J

    2001-10-01

    Inbred strains of mice are increasingly being used as an animal model to investigate skeletal disorders relevant to humans. In the bone field, one of the most convenient endpoints for evaluating genetic, physiological, or pharmaceutical perturbations is the use of biochemical markers. To apply biochemical markers in an effective manner, it is of key importance to establish the biological variation and appropriate sampling time. In this study, we evaluate two components: (i) circadian changes, and (ii) longitudinal variation for three serum markers, osteocalcin, C-telopeptide, and skeletal alkaline phosphatase (sALP), using 6-week-old C3H/HeJ (C3H) mice. To study circadian rhythms, the mice were randomly divided into eight groups of 15 mice each. Blood was collected at 3 h intervals, starting at 09:00 and continuing until 06:00 the next day. To determine whether the circadian rhythm is intrinsically regulated or influenced by restricted food intake, it was also studied after a 12 h fasting period. Serum osteocalcin and C-telopeptide levels were measured by enzyme-linked immunoassay (ELISA) and skeletal alkaline phosphatase by a kinetic assay. The results demonstrated significant circadian variations in osteocalcin and C-telopeptide levels, with a peak value between 09:00 and 12:00 h during daytime and a nadir between 15:00 and 18:00 h. The peak levels of C-telopeptide and osteocalcin were 26%-66% higher than the 24 h mean values. The pattern of the circadian variation of C-telopeptide and osteocalcin was similar in female and male animals and was not significantly affected by restricted food intake. The sALP levels were only marginally affected by the circadian rhythm. Longitudinal variations, expressed as coefficient of variation (CV), for osteocalcin, C-telopeptide, and sALP concentrations were 17%, 14%, and 16%, respectively. In addition, the longitudinal variations were not significantly influenced by the time of blood collection in sALP and osteocalcin

  12. General stochastic variational formulation for the oligopolistic market equilibrium problem with excesses

    Science.gov (United States)

    Barbagallo, Annamaria; Di Meglio, Guglielmo; Mauro, Paolo

    2017-07-01

    The aim of the paper is to study, in a Hilbert space setting, a general random oligopolistic market equilibrium problem in presence of both production and demand excesses and to characterize the random Cournot-Nash equilibrium principle by means of a stochastic variational inequality. Some existence results are presented.

  13. An improved sampling method of complex network

    Science.gov (United States)

    Gao, Qi; Ding, Xintong; Pan, Feng; Li, Weixing

    2014-12-01

    Sampling a subnet is an important topic of complex network research. Sampling methods influence the structure and characteristics of the subnet. Random multiple snowball with Cohen (RMSC) process sampling, which combines the advantages of random sampling and snowball sampling, is proposed in this paper. It has the ability to explore global information and discover the local structure at the same time. The experiments indicate that this novel sampling method can keep the similarity between the sampled subnet and the original network in terms of degree distribution, connectivity rate and average shortest path. This method is applicable to situations where prior knowledge about the degree distribution of the original network is not sufficient.
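
    For flavor, a plain multiple-seed snowball sampler in the same spirit is sketched below, assuming the networkx library; it does not reproduce the Cohen process of the proposed RMSC method, and the graph and parameters are arbitrary.

```python
import random
import networkx as nx

def multiple_snowball(g, n_seeds=5, target=300, seed=0):
    """Breadth-first snowball sampling started from several random seeds.

    Random seed selection supplies global coverage of the network, while
    the snowball waves preserve local structure around each seed.
    """
    rng = random.Random(seed)
    frontier = rng.sample(list(g.nodes), n_seeds)
    sampled = set(frontier)
    while frontier and len(sampled) < target:
        node = frontier.pop(0)
        for nbr in g.neighbors(node):
            if nbr not in sampled and len(sampled) < target:
                sampled.add(nbr)
                frontier.append(nbr)
    return g.subgraph(sampled).copy()

g = nx.barabasi_albert_graph(5000, 3, seed=1)
sub = multiple_snowball(g)
print(sub.number_of_nodes(), "nodes,", sub.number_of_edges(), "edges")
print("avg clustering: full", round(nx.average_clustering(g), 3),
      "vs sample", round(nx.average_clustering(sub), 3))
```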

  14. Random Photon Absorption Model Elucidates How Early Gain Control in Fly Photoreceptors Arises from Quantal Sampling

    Science.gov (United States)

    Song, Zhuoyi; Zhou, Yu; Juusola, Mikko

    2016-01-01

    Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779

  15. On the efficient simulation of the left-tail of the sum of correlated log-normal variates

    KAUST Repository

    Alouini, Mohamed-Slim

    2018-04-04

    The sum of log-normal variates is encountered in many challenging applications such as performance analysis of wireless communication systems and financial engineering. Several approximation methods have been reported in the literature. However, these methods are not accurate in the tail regions. These regions are of primordial interest as small probability values have to be evaluated with high precision. Variance reduction techniques are known to yield accurate, yet efficient, estimates of small probability values. Most of the existing approaches have focused on estimating the right-tail of the sum of log-normal random variables (RVs). Here, we instead consider the left-tail of the sum of correlated log-normal variates with Gaussian copula, under a mild assumption on the covariance matrix. We propose an estimator combining an existing mean-shifting importance sampling approach with a control variate technique. This estimator has an asymptotically vanishing relative error, which represents a major finding in the context of the left-tail simulation of the sum of log-normal RVs. Finally, we perform simulations to evaluate the performances of the proposed estimator in comparison with existing ones.
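
    A stripped-down version of the mean-shifting importance-sampling step (independent log-normals only, without the Gaussian copula or the control variate developed in the paper) can be written as follows; the dimension, shift and threshold are arbitrary illustrative choices.

```python
import numpy as np

def left_tail_is(gamma, dim=4, shift=-3.0, n=200_000, seed=0):
    """Estimate P(sum of i.i.d. standard log-normals < gamma) by
    mean-shifting importance sampling.

    Sampling Z ~ N(shift, I) pushes the sum toward the rare left tail;
    the likelihood ratio exp(-shift*sum(z) + dim*shift^2/2) corrects
    for the change of measure.
    """
    rng = np.random.default_rng(seed)
    z = rng.normal(loc=shift, scale=1.0, size=(n, dim))
    weights = np.exp(-shift * z.sum(axis=1) + dim * shift**2 / 2)
    hits = np.exp(z).sum(axis=1) < gamma
    est = np.mean(hits * weights)
    err = np.std(hits * weights, ddof=1) / np.sqrt(n)
    return est, err

est, err = left_tail_is(gamma=0.25)
print(f"P(S < 0.25) ≈ {est:.3e} (std. error {err:.1e})")
```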

  16. Health indicators: eliminating bias from convenience sampling estimators.

    Science.gov (United States)

    Hedt, Bethany L; Pagano, Marcello

    2011-02-28

    Public health practitioners are often called upon to make inference about a health indicator for a population at large when the sole available information are data gathered from a convenience sample, such as data gathered on visitors to a clinic. These data may be of the highest quality and quite extensive, but the biases inherent in a convenience sample preclude the legitimate use of powerful inferential tools that are usually associated with a random sample. In general, we know nothing about those who do not visit the clinic beyond the fact that they do not visit the clinic. An alternative is to take a random sample of the population. However, we show that this solution would be wasteful if it excluded the use of available information. Hence, we present a simple annealing methodology that combines a relatively small, and presumably far less expensive, random sample with the convenience sample. This allows us to not only take advantage of powerful inferential tools, but also provides more accurate information than that available from just using data from the random sample alone. Copyright © 2011 John Wiley & Sons, Ltd.

  17. Computed tomography of the brain, hepatotoxic drugs and high alcohol consumption in male alcoholic patients and a random sample from the general male population

    Energy Technology Data Exchange (ETDEWEB)

    Muetzell, S. (Univ. Hospital of Uppsala (Sweden). Dept. of Family Medicine)

    1992-01-01

    Computed tomography (CT) of the brain was performed in a random sample of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years, from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption with use of hepatotoxic drugs; IIA, alcoholic inpatients with use of alcohol and no drugs; and IIB, alcoholic inpatients with use of alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and they differed only in drug use. Groups IIB and IIA only differed in drug use, and IIB had a higher incidence of brain damage except for anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD, and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut high-grade cortical changes) and also of subcortical aberrations, expressed as an increased widening of the third ventricle.

  18. Computed tomography of the brain, hepatotoxic drugs and high alcohol consumption in male alcoholic patients and a random sample from the general male population

    International Nuclear Information System (INIS)

    Muetzell, S.

    1992-01-01

    Computed tomography (CT) of the brain was performed in a random sample of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years, from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption with use of hepatotoxic drugs; IIA, alcoholic inpatients with use of alcohol and no drugs; and IIB, alcoholic inpatients with use of alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and they differed only in drug use. Groups IIB and IIA only differed in drug use, and IIB had a higher incidence of brain damage except for anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD, and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut high-grade cortical changes) and also of subcortical aberrations, expressed as an increased widening of the third ventricle.

  19. A high sensitivity process variation sensor utilizing sub-threshold operation

    OpenAIRE

    Meterelliyoz, Mesut; Song, Peilin; Stellari, Franco; Kulkarni, Jaydeep P.; Roy, Kaushik

    2008-01-01

    In this paper, we propose a novel low-power, bias-free, high-sensitivity process variation sensor for monitoring random variations in the threshold voltage. The proposed sensor design utilizes the exponential current-voltage relationship of sub-threshold operation thereby improving the sensitivity by 2.3X compared to the above-threshold operation. A test-chip containing 128 PMOS and 128 NMOS devices has been fabri...

  20. Luminosity Variations in Post-AGB Stars

    Science.gov (United States)

    Mesler, Robert; Henson, G.

    2007-12-01

    Although much is known about AGB stars and planetary nebulae, relatively little is known about the phase of a star's life in which it transitions between those two states. We have measured the variations in luminosity of a sample of known Post-AGB stars (as well as several candidates) relative to nearby, non-variable stars in order to compare them with theoretical models. The typical behavior of the observed variations is described and an attempt is made to discern whether any periodicity might be present. Luminosity variations were found to be on the order of a few hundredths to a few tenths of a magnitude for the stars that were surveyed, with occasional fluctuations of up to a magnitude. This agrees with current models of Post-AGB stars. Each star fell into one of three categories, which were termed groups 1, 2, and 3. Group 1 stars showed long term, non-periodic luminosity variations on the scale of weeks or longer and were most likely to display some sort of short term, coherent luminosity oscillation (each of which lasted for only a few cycles). Group 2 stars showed erratic, short-term magnitude variations occurring on scales of several days. Group 3 stars showed little or no variation in magnitude. Of the 27 Post-AGB stars that were sampled, five fell into group 1, fifteen fell into group 2, and seven fell into group 3. The luminosity variations tended to be color-independent, and occurred on timescales ranging nearly continuously from a few days to more than a year. No clear periodic behavior was found in any star in our sample. This project was funded by a partnership between the National Science Foundation (NSF AST-0552798), Research Experiences for Undergraduates (REU), and the Department of Defense (DoD) ASSURE (Awards to Stimulate and Support Undergraduate Research Experiences) programs.

  1. Loss of genetic variation in Greek hatchery populations of the European sea bass (Dicentrarchus labrax L. as revealed by microsatellite DNA analysis

    Directory of Open Access Journals (Sweden)

    D. LOUKOVITIS

    2014-10-01

    Genetic variation in four reared stocks of European sea bass Dicentrarchus labrax L., originating from Greek commercial farms, was assessed using five polymorphic microsatellite markers and was compared with that of three natural populations from Greece and France. The total number of alleles per marker ranged from 8 to 22, and hatchery samples showed the same levels of observed heterozygosity as samples from the wild but substantially smaller allelic richness and expected heterozygosity. The genetic differentiation of the cultivated samples, both among themselves and from the wild-origin fish, was significant as indicated by FST analysis. All pairwise population comparisons were statistically significant, except for the pair of the two natural Greek populations. The microsatellite DNA results showed a 37% reduction in mean allele number in the hatchery samples compared to the wild ones, suggesting random genetic drift and inbreeding events operating in the hatcheries. Knowledge of the genetic variation in cultured D. labrax populations compared with that in the wild ones is essential for setting up appropriate guidelines for proper monitoring and management of the stocks, either under traditional practices or for the implementation of selective breeding programmes.

  2. Toward cost-efficient sampling methods

    Science.gov (United States)

    Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie

    2015-09-01

    The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small set of vertices with high node degree can carry most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so that they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second one improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with the existing sampling methods in three commonly used simulation networks, namely a scale-free network, a random network and a small-world network, and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.

  3. Investigating causal associations between use of nicotine, alcohol, caffeine and cannabis: a two-sample bidirectional Mendelian randomization study.

    Science.gov (United States)

    Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M

    2018-07-01

    Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine and cannabis use. Two-sample MR was employed to estimate bidirectional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week) and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these were not supported by the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine and cannabis use. © 2018 Society for the Study of Addiction.

  4. A human immunodeficiency virus risk reduction intervention for incarcerated youth: a randomized controlled trial.

    Science.gov (United States)

    Goldberg, Eudice; Millson, Peggy; Rivers, Stephen; Manning, Stephanie Jeanneret; Leslie, Karen; Read, Stanley; Shipley, Caitlin; Victor, J Charles

    2009-02-01

    To evaluate, by gender, the impact of a structured, comprehensive risk reduction intervention with and without boosters on human immunodeficiency virus (HIV) knowledge, attitudes and behaviors in incarcerated youth; and to determine predictors of increasing HIV knowledge and reducing high-risk attitudes and behaviors. This randomized controlled trial involved participants completing structured interviews at 1, 3, and 6 months. Repeated measures analysis of variance was used to analyze changes over time. The study was conducted in secure custody facilities and in the community. The voluntary study sample comprised 391 incarcerated youth (102 female and 289 male) aged 12-18. Participants were randomly assigned to one of three conditions: education intervention; education intervention with booster; or no systematic intervention. The outcome and predictor measures included the Rosenberg Self-Esteem Scale, Youth Self Report, Drug Use Inventory, and HIV Knowledge, Attitudes and Behavior Scale. The 6-month retention rate was 59.6%. At 6 months, males in the education and booster groups sustained increases in knowledge scores. The variations by gender underline the importance of gender issues in prevention interventions. Predictors of success were identified to inform future HIV education interventions.

  5. Sampling soils for 137Cs using various field-sampling volumes

    International Nuclear Information System (INIS)

    Nyhan, J.W.; Schofield, T.G.; White, G.C.; Trujillo, G.

    1981-10-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500-, and 12,500-cm³ field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of the soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils ranged from 0.38 to 0.57. Spatial variance components of the 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates, and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2 to 4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  6. Generating log-normally distributed random numbers by using the Ziggurat algorithm

    International Nuclear Information System (INIS)

    Choi, Jong Soo

    2016-01-01

    Uncertainty analyses are usually based on the Monte Carlo method. Using an efficient random number generator (RNG) is a key element in the success of Monte Carlo simulations. Log-normally distributed variates are very typical in NPP PSAs. This paper proposes an approach to generate log-normally distributed variates based on the Ziggurat algorithm and evaluates the efficiency of the proposed Ziggurat RNG. The proposed RNG can be helpful to improve the uncertainty analysis of NPP PSAs. This paper focuses on evaluating the efficiency of the Ziggurat algorithm from an NPP PSA point of view. From this study, we can draw the following conclusions. - The Ziggurat algorithm is an excellent random number generator for producing normally distributed variates. - The Ziggurat algorithm is computationally much faster than the most commonly used method, the Marsaglia polar method.
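
    As a rough, hedged illustration of the idea in this record: a log-normally distributed variate can be produced by exponentiating a normally distributed one drawn from a ziggurat-type generator. The sketch below is not the author's implementation; it leans on NumPy, whose Generator.standard_normal is documented to use a ziggurat algorithm, and the mu/sigma values are illustrative.

        # Minimal sketch: log-normal variates via a ziggurat-based normal RNG.
        # Assumes NumPy's ziggurat-based standard_normal as a stand-in for the
        # paper's generator; mu and sigma are illustrative parameters.
        import numpy as np

        rng = np.random.default_rng(seed=42)

        def lognormal_ziggurat(mu, sigma, size):
            """Log-normal draws: exponentiate ziggurat-sampled N(0,1) variates."""
            z = rng.standard_normal(size)      # ziggurat-based N(0,1) draws
            return np.exp(mu + sigma * z)

        samples = lognormal_ziggurat(mu=0.0, sigma=1.0, size=100_000)
        print(samples.mean(), np.exp(0.5))     # sample mean vs theoretical mean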

  7. A general product measurability theorem with applications to variational inequalities

    Directory of Open Access Journals (Sweden)

    Kenneth L. Kuttler

    2016-03-01

    Full Text Available This work establishes the existence of measurable weak solutions to evolution problems with randomness by proving and applying a novel theorem on product measurability of limits of sequences of functions. The measurability theorem is used to show that many important existence theorems within the abstract theory of evolution inclusions or equations have straightforward generalizations to settings that include random processes or coefficients. Moreover, the convex set where the solutions are sought is not fixed but may depend on the random variables. The importance of adding randomness lies in the fact that real world processes invariably involve randomness and variability. Thus, this work expands substantially the range of applications of models with variational inequalities and differential set-inclusions.

  8. Biological variation of cystatin C

    DEFF Research Database (Denmark)

    Reinhard, Mark; Erlandsen, Erland; Randers, Else

    2009-01-01

    Introduction: Cystatin C has been investigated as a marker of the glomerular filtration rate. However, previous studies have reported conflicting results concerning the biological variation of cystatin C. The aim of the present study was to evaluate the biological variation of cystatin C...... in comparison to creatinine. Methods: Eight weekly morning blood samples were taken from twenty healthy volunteers (13 females, 7 males) aged 25-61 years. Mean creatinine clearance was 99.7 ml/min/1.73 m2 (range 61.8-139.5) and mean body mass index 23.9 kg/m2 (range 20.3-28.7). A total of 155 samples were...

  9. Sample size methodology

    CERN Document Server

    Desu, M M

    2012-01-01

    One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria

  10. Sampling Methods in Cardiovascular Nursing Research: An Overview.

    Science.gov (United States)

    Kandola, Damanpreet; Banner, Davina; O'Keefe-McCarthy, Sheila; Jassal, Debbie

    2014-01-01

    Cardiovascular nursing research covers a wide array of topics from health services to psychosocial patient experiences. The selection of specific participant samples is an important part of the research design and process. The sampling strategy employed is of utmost importance to ensure that a representative sample of participants is chosen. There are two main categories of sampling methods: probability and non-probability. Probability sampling is the random selection of elements from the population, where each element of the population has an equal and independent chance of being included in the sample. There are five main types of probability sampling including simple random sampling, systematic sampling, stratified sampling, cluster sampling, and multi-stage sampling. Non-probability sampling methods are those in which elements are chosen through non-random methods for inclusion into the research study and include convenience sampling, purposive sampling, and snowball sampling. Each approach offers distinct advantages and disadvantages and must be considered critically. In this research column, we provide an introduction to these key sampling techniques and draw on examples from cardiovascular research. Understanding the differences in sampling techniques may aid nurses in effective appraisal of research literature and provide a reference point for nurses who engage in cardiovascular research.
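
    A minimal sketch of three of the probability methods named above (simple random, systematic, and stratified sampling) follows; the population of 1000 indices and the five equal strata are synthetic placeholders, not data from any cardiovascular study.

        # Minimal sketch: three probability sampling methods on a toy population.
        import numpy as np

        rng = np.random.default_rng(0)
        population = np.arange(1000)               # element indices 0..999

        # Simple random sampling: equal, independent chance for every element.
        srs = rng.choice(population, size=50, replace=False)

        # Systematic sampling: random start, then every k-th element.
        k = len(population) // 50
        start = rng.integers(k)
        systematic = population[start::k][:50]

        # Stratified sampling: split into strata, draw the same share from each.
        strata = np.array_split(population, 5)
        stratified = np.concatenate(
            [rng.choice(s, size=10, replace=False) for s in strata])
        print(len(srs), len(systematic), len(stratified))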

  11. Engineering practice variation through provider agreement: a cluster-randomized feasibility trial.

    Science.gov (United States)

    McCarren, Madeline; Twedt, Elaine L; Mansuri, Faizmohamed M; Nelson, Philip R; Peek, Brian T

    2014-01-01

    Minimal-risk randomized trials that can be embedded in practice could facilitate learning health-care systems. A cluster-randomized design was proposed to compare treatment strategies by assigning clusters (eg, providers) to "favor" a particular drug, with providers retaining autonomy for specific patients. Patient informed consent might be waived, broadening inclusion. However, it is not known if providers will adhere to the assignment or whether institutional review boards will waive consent. We evaluated the feasibility of this trial design. Agreeable providers were randomized to "favor" either hydrochlorothiazide or chlorthalidone when starting patients on thiazide-type therapy for hypertension. The assignment applied when the provider had already decided to start a thiazide, and providers could deviate from the strategy as needed. Prescriptions were aggregated to produce a provider strategy-adherence rate. All four institutional review boards waived documentation of patient consent. Providers (n=18) followed their assigned strategy for most of their new thiazide prescriptions (n=138 patients). In the "favor hydrochlorothiazide" group, there was 99% adherence to that strategy. In the "favor chlorthalidone" group, chlorthalidone comprised 77% of new thiazide starts, up from 1% in the pre-study period. When the assigned strategy was followed, dosing in the recommended range was 48% for hydrochlorothiazide (25-50 mg/day) and 100% for chlorthalidone (12.5-25.0 mg/day). Providers were motivated to participate by a desire to contribute to a comparative effectiveness study. A study promotional mug, provider information letter, and interactions with the site investigator were identified as most helpful in reminding providers of their study drug strategy. Providers prescribed according to an assigned drug-choice strategy most of the time for the purpose of a comparative effectiveness study. This simple design could facilitate research participation and behavior change

  12. Visual signal detection in structured backgrounds. II. Effects of contrast gain control, background variations, and white noise

    Science.gov (United States)

    Eckstein, M. P.; Ahumada, A. J. Jr; Watson, A. B.

    1997-01-01

    Studies of visual detection of a signal superimposed on one of two identical backgrounds show performance degradation when the background has high contrast and is similar in spatial frequency and/or orientation to the signal. To account for this finding, models include a contrast gain control mechanism that pools activity across spatial frequency, orientation and space to inhibit (divisively) the response of the receptor sensitive to the signal. In tasks in which the observer has to detect a known signal added to one of M different backgrounds due to added visual noise, the main sources of degradation are the stochastic noise in the image and the suboptimal visual processing. We investigate how these two sources of degradation (contrast gain control and variations in the background) interact in a task in which the signal is embedded in one of M locations in a complex spatially varying background (structured background). We use backgrounds extracted from patient digital medical images. To isolate effects of the fixed deterministic background (the contrast gain control) from the effects of the background variations, we conduct detection experiments with three different background conditions: (1) uniform background, (2) a repeated sample of structured background, and (3) different samples of structured background. Results show that human visual detection degrades from the uniform background condition to the repeated background condition and degrades even further in the different backgrounds condition. These results suggest that both the contrast gain control mechanism and the background random variations degrade human performance in detection of a signal in a complex, spatially varying background. A filter model and added white noise are used to generate estimates of sampling efficiencies, an equivalent internal noise, an equivalent contrast-gain-control-induced noise, and an equivalent noise due to the variations in the structured background.

  13. Mitochondrial DNA D-loop sequence variation among 5 maternal lines of the Zemaitukai horse breed

    Directory of Open Access Journals (Sweden)

    E. Gus Cothran

    2005-12-01

    Full Text Available Genetic variation in Zemaitukai horses was investigated using mitochondrial DNA (mtDNA) sequencing. The study was performed on 421 bp of the mitochondrial DNA control region, which is known to be more variable than other sections of the mitochondrial genome. Samples from each of the remaining maternal family lines of Zemaitukai horses and three random samples for other Lithuanian (Lithuanian Heavy Draught, Zemaitukai large type) and ten European horse breeds were sequenced. Five distinct haplotypes were obtained for the five Zemaitukai maternal families, supporting the pedigree data. The minimal difference between two different sequence haplotypes was 6 nucleotides and the maximal 11 in the Zemaitukai horse breed. A total of 20 nucleotide differences compared to the reference sequence were found in Lithuanian horse breeds. Genetic cluster analysis did not show any clear pattern of relationship among breeds of different type.

  14. Shape dependency of the extinction and absorption cross sections of dust aerosols modeled as randomly oriented spheroids

    Directory of Open Access Journals (Sweden)

    R. Wagner

    2011-09-01

    Full Text Available We present computational results on the shape dependency of the extinction and absorption cross sections of dustlike aerosol particles that were modeled as randomly oriented spheroids. Shape dependent variations in the extinction cross sections are largest in the size regime that is governed by the interference structure. Elongated spheroids best fitted measured extinction spectra of re-dispersed Saharan dust samples. For dust particles smaller than 1.5 μm in diameter and low absorption potential, shape effects on the absorption cross sections are very small.

  15. Micro-organism distribution sampling for bioassays

    Science.gov (United States)

    Nelson, B. A.

    1975-01-01

    The purpose of the sampling distribution is to characterize sample-to-sample variation so that statistical tests may be applied, to estimate error due to sampling (confidence limits), and to evaluate observed differences between samples. The distribution could be used for bioassays taken in hospitals, breweries, food-processing plants, and pharmaceutical plants.

  16. Sampling designs and methods for estimating fish-impingement losses at cooling-water intakes

    International Nuclear Information System (INIS)

    Murarka, I.P.; Bodeau, D.J.

    1977-01-01

    Several systems for estimating fish impingement at power plant cooling-water intakes are compared to determine the most statistically efficient sampling designs and methods. Compared to a simple random sampling scheme, the stratified systematic random sampling scheme, the systematic random sampling scheme, and the stratified random sampling scheme yield higher efficiencies and better estimators for the parameters in two models of fish impingement as a time-series process. Mathematical results and illustrative examples of the applications of the sampling schemes to simulated and real data are given. Some sampling designs applicable to fish-impingement studies are presented in appendixes

  17. Variation in levels of serum inhibin B, testosterone, estradiol, luteinizing hormone, follicle-stimulating hormone, and sex hormone-binding globulin in monthly samples from healthy men during a 17-month period

    DEFF Research Database (Denmark)

    Andersson, Anna-Maria; Carlsen, Elisabeth; Petersen, Jørgen Holm

    2003-01-01

    To obtain information on the scale of the intraindividual variation in testicular hormone, blood samples for inhibin B determination were collected monthly in 27 healthy male volunteers during a 17-month period. In addition, the traditional reproductive hormones FSH, LH, testosterone, estradiol....... A seasonal variation was observed in LH and testosterone levels, but not in the levels of the other hormones. The seasonal variation in testosterone levels could be explained by the variation in LH levels. The seasonal variation in LH levels seemed to be related to the mean air temperature during the month...... levels in men. The peak levels of both LH and testosterone were observed during June-July, with minimum levels present during winter-early spring. Air temperature, rather than light exposure, seems to be a possible climatic variable explaining the seasonal variation in LH levels....

  18. Experimental phase diagram for random laser spectra

    International Nuclear Information System (INIS)

    El-Dardiry, Ramy G S; Mooiweer, Ronald; Lagendijk, Ad

    2012-01-01

    We systematically study the presence of narrow spectral features in a wide variety of random laser samples. Less gain or stronger scattering are shown to lead to a crossover from spiky to smooth spectra. A decomposition of random laser spectra into a set of Lorentzians provides unprecedented detail in the analysis of random laser spectra. We suggest an interpretation in terms of mode competition that enables an understanding of the observed experimental trends. In this interpretation, smooth random laser spectra are a consequence of competing modes for which the loss and gain are proportional. Spectral spikes are associated with modes that are uncoupled from the mode competition in the bulk of the sample. (paper)

  19. [A comparison of convenience sampling and purposive sampling].

    Science.gov (United States)

    Suen, Lee-Jen Wu; Huang, Hui-Man; Lee, Hao-Hsien

    2014-06-01

    Convenience sampling and purposive sampling are two different sampling methods. This article first explains sampling terms such as target population, accessible population, simple random sampling, intended sample, actual sample, and statistical power analysis. These terms are then used to explain the difference between "convenience sampling" and "purposive sampling". Convenience sampling is a non-probabilistic sampling technique applicable to qualitative or quantitative studies, although it is most frequently used in quantitative studies. In convenience samples, subjects more readily accessible to the researcher are more likely to be included. Thus, in quantitative studies, opportunity to participate is not equal for all qualified individuals in the target population and study results are not necessarily generalizable to this population. As in all quantitative studies, increasing the sample size increases the statistical power of the convenience sample. In contrast, purposive sampling is typically used in qualitative studies. Researchers who use this technique carefully select subjects based on study purpose with the expectation that each participant will provide unique and rich information of value to the study. As a result, members of the accessible population are not interchangeable and sample size is determined by data saturation, not by statistical power analysis.

  20. Outdoor radon variation in Romania

    International Nuclear Information System (INIS)

    Simion, Elena; Simion, Florin

    2008-01-01

    Full text: The results of a long-term survey (1992 - 2006) of the variations of outdoor radon concentrations in semi-natural locations in Romania are reported in the present paper. Measurements, covering between two and four sessions of the day (morning, afternoon, evening and night), were performed on a daily basis by 37 Environmental Radioactivity Monitoring Stations from the National Environmental Radioactivity Survey Network. The method used was based on indirect determination of outdoor radon from aerosol samples collected on glass micro-fibre filters by drawing the air through the filters. The sampling was performed in a fixed place at a height of 2 m above the ground surface. Total beta counting of aerosol samples collected was performed immediately and after 20 hours. Values recorded during the years of continuous measurement indicated the presence of several patterns in the long-term variation of outdoor radon concentration: diurnal, seasonal and annual variation. For diurnal variation, outdoor radon concentration shows maximum values in the night (early hours) and minimum values by day (in the afternoon). On average, this maximum is a factor of 2 higher than the minimum. A late autumn/early winter maximum and an early spring minimum are characteristic of the seasonal pattern. In the long term a seasonal pattern was observed for diurnal variation, with an average diurnal maximum to minimum ratio of 1.33 in winter compared with 3.0 in the summer months. The variations of outdoor radon levels showed little correlation with the uranium concentration of the ground and were attributed to changes in soil moisture content. In dry seasons, because of the low precipitation, the soil was drying out in the summer allowing fractures to develop and radon to migrate easily through the ground. Depending on micro-climatic and geological conditions, outdoor radon average concentrations in different regions of Romania range from 1200 mBq/m3 to 13065 mBq/m3.

  1. Axially perpendicular offset Raman scheme for reproducible measurement of housed samples in a noncircular container under variation of container orientation.

    Science.gov (United States)

    Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil

    2015-03-17

    An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.

  2. Quenched Large Deviations for Simple Random Walks on Percolation Clusters Including Long-Range Correlations

    Science.gov (United States)

    Berger, Noam; Mukherjee, Chiranjib; Okamura, Kazuki

    2018-03-01

    We prove a quenched large deviation principle (LDP) for a simple random walk on a supercritical percolation cluster (SRWPC) on {Z^d} ({d ≥ 2}). The models under interest include classical Bernoulli bond and site percolation as well as models that exhibit long range correlations, like the random cluster model, the random interlacement and the vacant set of random interlacements (for {d ≥ 3}) and the level sets of the Gaussian free field ({d≥ 3}). Inspired by the methods developed by Kosygina et al. (Commun Pure Appl Math 59:1489-1521, 2006) for proving quenched LDP for elliptic diffusions with a random drift, and by Yilmaz (Commun Pure Appl Math 62(8):1033-1075, 2009) and Rosenbluth (Quenched large deviations for multidimensional random walks in a random environment: a variational formula. Ph.D. thesis, NYU, arXiv:0804.1444v1) for similar results regarding elliptic random walks in random environment, we take the point of view of the moving particle and prove a large deviation principle for the quenched distribution of the pair empirical measures of the environment Markov chain in the non-elliptic case of SRWPC. Via a contraction principle, this reduces easily to a quenched LDP for the distribution of the mean velocity of the random walk and both rate functions admit explicit variational formulas. The main difficulty in our set up lies in the inherent non-ellipticity as well as the lack of translation-invariance stemming from conditioning on the fact that the origin belongs to the infinite cluster. We develop a unifying approach for proving quenched large deviations for SRWPC based on exploiting coercivity properties of the relative entropies in the context of convex variational analysis, combined with input from ergodic theory and invoking geometric properties of the supercritical percolation cluster.

  3. Scalability on LHS (Latin Hypercube Sampling) samples for use in uncertainty analysis of large numerical models

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Nunez Mac Leod, J.E.

    2000-01-01

    The present paper deals with the utilization of advanced sampling statistical methods to perform uncertainty and sensitivity analysis on numerical models. Such models may represent physical phenomena, logical structures (such as boolean expressions) or other systems, and several of their intrinsic parameters and/or input variables are usually treated simultaneously as random variables. In the present paper a simple method to scale-up Latin Hypercube Sampling (LHS) samples is presented, starting with a small sample and duplicating its size at each step, making it possible to re-use the numerical model results already obtained with the smaller sample. The method does not distort the statistical properties of the random variables and does not add any bias to the samples. The result is that a significant reduction in numerical model running time can be achieved (by re-using the previously run samples), keeping all the advantages of LHS, until an acceptable representation level is achieved in the output variables. (author)
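
    For orientation, a minimal sketch of the basic Latin hypercube construction that the scale-up method builds on is shown below; the paper's actual contribution, doubling the sample while re-using earlier runs, is not reproduced here, and the sample size and dimension are illustrative.

        # Minimal sketch: basic LHS in [0,1)^d, one point per row/column stratum.
        import numpy as np

        def latin_hypercube(n, d, rng):
            # Stratify [0,1) into n equal cells per dimension, jitter within cells.
            u = (np.arange(n)[:, None] + rng.random((n, d))) / n
            # Independently permute each column so strata pair up at random.
            for j in range(d):
                u[:, j] = u[rng.permutation(n), j]
            return u

        rng = np.random.default_rng(1)
        sample = latin_hypercube(n=8, d=3, rng=rng)
        print(sample.round(2))   # each column hits all 8 strata exactly once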

  4. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target.

    Science.gov (United States)

    Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M

    2014-01-20

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.

  5. Computation of mean and variance of the radiotherapy dose for PCA-modeled random shape and position variations of the target

    International Nuclear Information System (INIS)

    Budiarto, E; Keijzer, M; Heemink, A W; Storchi, P R M; Breedveld, S; Heijmen, B J M

    2014-01-01

    Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements. (paper)

  6. Lotka-Volterra systems in environments with randomly disordered temporal periodicity

    Science.gov (United States)

    Naess, Arvid; Dimentberg, Michael F.; Gaidai, Oleg

    2008-08-01

    A generalized Lotka-Volterra model for a pair of interacting populations of predators and prey is studied. The model accounts for the prey’s interspecies competition and therefore is asymptotically stable, whereas its oscillatory behavior is induced by temporal variations in environmental conditions simulated by those in the prey’s reproduction rate. Two models of the variations are considered, each of them combining randomness with “hidden” periodicity. The stationary joint probability density function (PDF) of the number of predators and prey is calculated numerically by the path integration (PI) method based on the use of characteristic functions and the fast Fourier transform. The numerical results match those for the asymptotic case of white-noise variations for which an analytical solution is available. Several examples are studied, with calculations of important characteristics of oscillations, for example the expected rate of up-crossings given the level of the predator number. The calculated PDFs may be of predominantly random (unimodal) or predominantly periodic nature (bimodal). Thus, the PI method has been demonstrated to be a powerful tool for studies of the dynamics of predator-prey pairs. The method captures the random oscillations as observed in nature, taking into account potential periodicity in the environmental conditions.
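
    A minimal simulation sketch of the model class studied is given below: a predator-prey pair whose prey reproduction rate combines a periodic term with white noise, integrated by Euler-Maruyama. All coefficients are invented for illustration, and plain time-stepping stands in for the paper's path integration method.

        # Minimal sketch: Lotka-Volterra prey-predator pair with a randomly
        # perturbed, periodically varying prey reproduction rate.
        import numpy as np

        rng = np.random.default_rng(2)
        dt, steps = 1e-3, 200_000
        a0, eps, omega, sigma = 1.0, 0.3, 2.0, 0.2  # base rate, periodicity, noise
        b, c, d, kc = 0.5, 0.5, 1.0, 0.1            # interaction/competition terms

        x, y = 1.0, 1.0                             # prey, predator numbers
        for i in range(steps):
            t = i * dt
            a = a0 + eps * np.sin(omega * t)        # periodic reproduction rate
            dW = rng.normal(0.0, np.sqrt(dt))       # white-noise increment
            x += x * (a - b * y - kc * x) * dt + sigma * x * dW
            y += y * (c * x - d) * dt
            x = max(x, 1e-9)                        # keep populations positive
        print(x, y)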

  7. Coupling methods for multistage sampling

    OpenAIRE

    Chauvet, Guillaume

    2015-01-01

    Multistage sampling is commonly used for household surveys when there exists no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces a complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random sampling without replacement at the first stage, and with an arbitrary sampling design for further stages. We consider coupling ...

  8. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  9. Isotope dilution analysis of environmental samples

    International Nuclear Information System (INIS)

    Tolgyessy, J.; Lesny, J.; Korenova, Z.; Klas, J.; Klehr, E.H.

    1986-01-01

    Isotope dilution analysis has been used for the determination of several trace elements - especially metals - in a variety of environmental samples, including aerosols, water, soils, biological materials and geological materials. Variations of the basic concept include classical IDA, substoichiometric IDA, and more recently, sub-superequivalence IDA. Each variation has its advantages and limitations. A periodic chart has been used to identify those elements which have been measured in environmental samples using one or more of these methods. (author)

  10. Random thermal stress in concrete containments

    International Nuclear Information System (INIS)

    Singh, M.P.; Heller, R.A.

    1980-01-01

    Currently, the overly conservative thermal design forces are obtained on the basis of simplified assumptions made about the temperature gradient across the containment wall. Using the method presented in this paper, a more rational and better estimate of the design forces can be obtained. Herein, the outside temperature is considered to consist of a constant mean on which yearly and daily harmonic changes plus a randomly varying part are superimposed. The random part is modeled as a stationary random process. To obtain the stresses due to random and harmonic temperatures, the complex frequency response function approach has been used. Numerical results obtained for a typical containment show that the higher frequency temperature variations, though of large magnitude, induce relatively small forces in a containment. Therefore, in a containment design, a rational separation of more effective, slowly varying temperatures, such as the seasonal cycle, from less effective but more frequently occurring daily and hourly changes is desirable to obtain rational design forces. 7 refs

  11. Generating Realistic Labelled, Weighted Random Graphs

    Directory of Open Access Journals (Sweden)

    Michael Charles Davis

    2015-12-01

    Full Text Available Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.

  12. Influence of population versus convenience sampling on sample characteristics in studies of cognitive aging.

    Science.gov (United States)

    Brodaty, Henry; Mothakunnel, Annu; de Vel-Palumbo, Melissa; Ames, David; Ellis, Kathryn A; Reppermund, Simone; Kochan, Nicole A; Savage, Greg; Trollor, Julian N; Crawford, John; Sachdev, Perminder S

    2014-01-01

    We examined whether differences in findings of studies examining mild cognitive impairment (MCI) were associated with recruitment methods by comparing sample characteristics in two contemporaneous Australian studies, using population-based and convenience sampling. The Sydney Memory and Aging Study invited participants randomly from the electoral roll in defined geographic areas in Sydney. The Australian Imaging, Biomarkers and Lifestyle Study of Ageing recruited cognitively normal (CN) individuals via media appeals and MCI participants via referrals from clinicians in Melbourne and Perth. Demographic and cognitive variables were harmonized, and similar diagnostic criteria were applied to both samples retrospectively. CN participants recruited via convenience sampling were younger, better educated, more likely to be married and have a family history of dementia, and performed better cognitively than those recruited via population-based sampling. MCI participants recruited via population-based sampling had better memory performance and were less likely to carry the apolipoprotein E ε4 allele than clinically referred participants but did not differ on other demographic variables. A convenience sample of normal controls is likely to be younger and better functioning and that of an MCI group likely to perform worse than a purportedly random sample. Sampling bias should be considered when interpreting findings. Copyright © 2014 Elsevier Inc. All rights reserved.

  13. Procedures for sampling and sample-reduction within quality assurance systems for solid biofuels

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2005-04-15

    The bias introduced when sampling solid biofuels from stockpiles or containers instead of from moving streams is assessed, as well as the number and size of samples required to accurately represent the bulk sample, the variations introduced when reducing bulk samples into samples for testing, and the usefulness of sample reduction methods. Details are given of the experimental work carried out in Sweden and Denmark using sawdust, wood chips, wood pellets, forestry residues and straw. The production of a model European Standard for quality assurance of solid biofuels is examined.

  14. Size variation in Middle Pleistocene humans.

    Science.gov (United States)

    Arsuaga, J L; Carretero, J M; Lorenzo, C; Gracia, A; Martínez, I; Bermúdez de Castro, J M; Carbonell, E

    1997-08-22

    It has been suggested that European Middle Pleistocene humans, Neandertals, and prehistoric modern humans had a greater sexual dimorphism than modern humans. Analysis of body size variation and cranial capacity variation in the large sample from the Sima de los Huesos site in Spain showed instead that the sexual dimorphism is comparable in Middle Pleistocene and modern populations.

  15. Attenuation of species abundance distributions by sampling

    Science.gov (United States)

    Shimadzu, Hideyasu; Darnell, Ross

    2015-01-01

    Quantifying biodiversity aspects such as species presence/absence, richness and abundance is an important challenge to answer scientific and resource management questions. In practice, biodiversity can only be assessed from biological material taken by surveys, a difficult task given limited time and resources. A type of random sampling, often called sub-sampling, is a commonly used technique to reduce the amount of time and effort for investigating large quantities of biological samples. However, it is not immediately clear how (sub-)sampling affects the estimate of biodiversity aspects from a quantitative perspective. This paper specifies the effect of (sub-)sampling as attenuation of the species abundance distribution (SAD), and articulates how the sampling bias is induced in the SAD by random sampling. The framework presented also reveals some confusion in previous theoretical studies. PMID:26064626
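
    The attenuation effect lends itself to a compact illustration: examining a fraction p of the material gives each individual an independent chance p of being observed, so every species' count is thinned binomially and rare species drop out of the observed SAD. The lognormal community below is synthetic, not one of the paper's datasets.

        # Minimal sketch: binomial thinning of a synthetic community's SAD.
        import numpy as np

        rng = np.random.default_rng(3)
        abundances = rng.lognormal(mean=3.0, sigma=1.5, size=500).astype(int)
        abundances = abundances[abundances > 0]    # true species counts

        p = 0.1                                    # fraction of material examined
        observed = rng.binomial(abundances, p)     # thinned count per species

        print("species in community:", len(abundances))
        print("species detected in sub-sample:", np.count_nonzero(observed))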

  16. Ensembl variation resources

    Directory of Open Access Journals (Sweden)

    Marin-Garcia Pablo

    2010-05-01

    Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.

  17. 14C measurement: effect of variations in sample preparation and storage on the counting efficiency for 14C using a carbo-sorb/permafluor E+ liquid scintillation cocktail

    International Nuclear Information System (INIS)

    Kramer, S.J.; Milton, G.M.; Repta, C.J.W.

    1995-06-01

    The effect of variations in sample preparation and storage on the counting efficiency for 14 C using a Carbo-Sorb/PermafluorE+ liquid scintillation cocktail has been studied, and optimum conditions are recommended. (author). 2 refs., 2 tabs., 4 figs

  18. Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA

    Science.gov (United States)

    Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz

    2018-04-01

    External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation in estimating the more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare the performances of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF, error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF, error); (b) AP on the training set (APS, error); and (c) ET on the respective test set (ETS, error). A good PLS2-DA model is expected to produce APS, error and ETS, error similar to the APF, error. Bearing that in mind, the similarities between (a) APS, error vs. APF, error; (b) ETS, error vs. APF, error; and (c) APS, error vs. ETS, error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests), using series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
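
    For reference, a minimal sketch of the Kennard-Stone selection used here as the baseline: starting from the two most distant samples, it greedily adds the candidate farthest from the set already chosen, so the training set spans the data space. The random matrix below stands in for the ATR-FTIR spectra.

        # Minimal sketch: Kennard-Stone selection of a training set.
        import numpy as np

        def kennard_stone(X, n_train):
            d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise sq. distances
            i, j = np.unravel_index(d.argmax(), d.shape)
            chosen = [i, j]                    # seed with the two farthest points
            while len(chosen) < n_train:
                rest = [m for m in range(len(X)) if m not in chosen]
                # Distance from each candidate to its nearest chosen point...
                nearest = d[np.ix_(rest, chosen)].min(axis=1)
                chosen.append(rest[int(nearest.argmax())])  # ...keep the farthest
            return np.array(chosen)

        X = np.random.default_rng(4).random((40, 6))   # stand-in spectra
        train_idx = kennard_stone(X, n_train=28)       # roughly a 7:3 split
        print(sorted(train_idx)[:10])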

  19. On Angular Sampling Methods for 3-D Spatial Channel Models

    DEFF Research Database (Denmark)

    Fan, Wei; Jämsä, Tommi; Nielsen, Jesper Ødum

    2015-01-01

    This paper discusses generating three dimensional (3D) spatial channel models with emphasis on the angular sampling methods. Three angular sampling methods, i.e. modified uniform power sampling, modified uniform angular sampling, and random pairing methods are proposed and investigated in detail....... The random pairing method, which uses only twenty sinusoids in the ray-based model for generating the channels, presents good results if the spatial channel cluster is with a small elevation angle spread. For spatial clusters with large elevation angle spreads, however, the random pairing method would fail...... and the other two methods should be considered....

  20. Statistical sampling strategies

    International Nuclear Information System (INIS)

    Andres, T.H.

    1987-01-01

    Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized

  1. Effects of changing the random number stride in Monte Carlo calculations

    International Nuclear Information System (INIS)

    Hendricks, J.S.

    1991-01-01

    A common practice in Monte Carlo radiation transport codes is to start each random walk a specified number of steps up the random number sequence from the previous one. This is called the stride in the random number sequence between source particles. It is used for correlated sampling or to provide tree-structured random numbers. A new random number generator algorithm for the major Monte Carlo code MCNP has been written to allow adjustment of the random number stride. This random number generator is machine portable. The effects of varying the stride for several sample problems are examined
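
    A minimal sketch of the stride idea follows, assuming a modern counter-based generator is an acceptable stand-in for MCNP's own: NumPy's PCG64 bit generator exposes advance(), so each particle history can start a fixed number of draws up the stream. The stride value is illustrative.

        # Minimal sketch: a fixed random-number stride between source particles.
        import numpy as np

        STRIDE = 152_917      # draws reserved per history (illustrative value)

        def history_rng(seed, history_index):
            """Generator positioned STRIDE * history_index draws up the stream."""
            bitgen = np.random.PCG64(seed)
            bitgen.advance(STRIDE * history_index)  # jump ahead in the sequence
            return np.random.Generator(bitgen)

        # Histories 0 and 1 use disjoint stretches of one stream, so however
        # many numbers history 0 consumes, history 1 is unaffected.
        r0 = history_rng(seed=12345, history_index=0)
        r1 = history_rng(seed=12345, history_index=1)
        print(r0.random(3), r1.random(3))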

  2. Application of random amplified polymorphic DNA (RAPD) markers ...

    African Journals Online (AJOL)

    SAM

    2014-06-11

    Jun 11, 2014 ... variety share an identical genome. In this field one of the most successful techniques is random ... To each minced sample, 350 µL of the same extraction buffer was added and the samples were ..... using fingerprints produced by random primers. J. Hort. Sci. 69:123-130. Levi A, Rowland LJ, Hartung JS ...

  3. Micro-Texture Synthesis by Phase Randomization

    Directory of Open Access Journals (Sweden)

    Bruno Galerne

    2011-09-01

    Full Text Available This contribution is concerned with texture synthesis by example, the process of generating new texture images from a given sample. The Random Phase Noise algorithm presented here synthesizes a texture from an original image by simply randomizing its Fourier phase. It is able to reproduce textures which are characterized by their Fourier modulus, namely the random phase textures (or micro-textures.
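
    A minimal sketch of the phase-randomization step follows: keep the Fourier modulus of the sample texture, draw a fresh uniform phase, and invert the transform. Taking the real part is a shortcut; the published algorithm imposes a symmetric (Hermitian) random phase so the inverse is exactly real. The random input image stands in for a real texture sample.

        # Minimal sketch: texture synthesis by randomizing the Fourier phase.
        import numpy as np

        rng = np.random.default_rng(5)
        texture = rng.random((64, 64))          # placeholder for a real texture

        spectrum = np.fft.fft2(texture - texture.mean())
        phase = np.exp(2j * np.pi * rng.random(spectrum.shape))
        randomized = np.abs(spectrum) * phase   # same modulus, random phase
        synthesis = np.real(np.fft.ifft2(randomized)) + texture.mean()
        print(synthesis.shape, synthesis.min(), synthesis.max())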

  4. Sample size of the reference sample in a case-augmented study.

    Science.gov (United States)

    Ghosh, Palash; Dewanji, Anup

    2017-05-01

    The case-augmented study, in which a case sample is augmented with a reference (random) sample from the source population with only covariates information known, is becoming popular in different areas of applied science such as pharmacovigilance, ecology, and econometrics. In general, the case sample is available from some source (for example, hospital database, case registry, etc.); however, the reference sample is required to be drawn from the corresponding source population. The required minimum size of the reference sample is an important issue in this regard. In this work, we address the minimum sample size calculation and discuss related issues. Copyright © 2017 John Wiley & Sons, Ltd.

  5. A variation of the housing unit method for estimating the age and gender distribution of small, rural areas: A case study of the local expert procedure

    International Nuclear Information System (INIS)

    Carlson, J.F.; Roe, L.K.; Williams, C.A.; Swanson, D.A.

    1993-01-01

    This paper describes the methodologies used in the development of a demographic data base established in support of the Yucca Mountain Site Characterization Project Radiological Monitoring Plan (RadMP). It also examines the suitability of a survey-based procedure for estimating population in small, rural areas. The procedure is a variation of the Housing Unit Method. It employs the use of local experts enlisted to provide information about the demographic characteristics of households randomly selected from residential units sample frames developed from utility records. The procedure is nonintrusive and less costly than traditional survey data collection efforts. Because the procedure is based on random sampling, confidence intervals can be constructed around the population estimated by the technique. The results of a case study are provided in which the total population, and age and gender of the population, is estimated for three unincorporated communities in rural, southern Nevada

  6. Genetic variation in the CYP1A1 gene is related to circulating PCB118 levels in a population-based sample

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Lars [Department of Medical Sciences, Cardiovascular Epidemiology, Uppsala University, Uppsala (Sweden); Penell, Johanna [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden); Syvänen, Anne-Christine; Axelsson, Tomas [Department of Medical Sciences, Molecular Medicine and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Ingelsson, Erik [Department of Medical Sciences, Molecular Epidemiology and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Morris, Andrew P.; Lindgren, Cecilia [Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Salihovic, Samira; Bavel, Bert van [MTM Research Centre, School of Science and Technology, Örebro University, Örebro (Sweden); Lind, P. Monica, E-mail: monica.lind@medsci.uu.se [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden)

    2014-08-15

    Several of the polychlorinated biphenyls (PCBs), i.e. the dioxin-like PCBs, are known to induce the P450 enzymes CYP1A1, CYP1A2 and CYP1B1 by activating the aryl hydrocarbon (Ah) receptor. We evaluated if circulating levels of PCBs in a population sample were related to genetic variation in the genes encoding these CYPs. In the population-based Prospective Investigation of the Vasculature in Uppsala Seniors (PIVUS) study (1016 subjects all aged 70), 21 SNPs in the CYP1A1, CYP1A2 and CYP1B1 genes were genotyped. Sixteen PCB congeners were analysed by high-resolution gas chromatography coupled to high-resolution mass spectrometry (HRGC/HRMS). Of the investigated relationships between SNPs in CYP1A1, CYP1A2 and CYP1B1 and six PCBs (congeners 118, 126, 156, 169, 170 and 206) that capture >80% of the variation of all PCBs measured, only the relationship between CYP1A1 rs2470893 and PCB118 levels remained significant following strict adjustment for multiple testing (p=0.00011). However, there were several additional SNPs in CYP1A2 and CYP1B1 that showed nominally significant associations with PCB118 levels (p-values in the 0.003–0.05 range). Further, several SNPs in the CYP1B1 gene were related to both PCB156 and PCB206, with p-values in the 0.005–0.05 range. Very few associations with p<0.05 were seen for PCB126, PCB169 or PCB170. Genetic variation in CYP1A1 was related to circulating PCB118 levels in the general elderly population. Genetic variation in CYP1A2 and CYP1B1 might also be associated with other PCBs. - Highlights: • We studied the relationship between PCBs and the genetic variation in the CYP genes. • Cross-sectional data from a cohort of elderly were analysed. • The PCB levels were evaluated versus 21 SNPs in three CYP genes. • PCB 118 was related to variation in the CYP1A1 gene.

  7. Individual Variation in Hunger, Energy Intake, and Ghrelin Responses to Acute Exercise.

    Science.gov (United States)

    King, James A; Deighton, Kevin; Broom, David R; Wasse, Lucy K; Douglas, Jessica A; Burns, Stephen F; Cordery, Philip A; Petherick, Emily S; Batterham, Rachel L; Goltz, Fernanda R; Thackray, Alice E; Yates, Thomas; Stensel, David J

    2017-06-01

    This study aimed to characterize the immediate and extended effect of acute exercise on hunger, energy intake, and circulating acylated ghrelin concentrations using a large data set of homogeneous experimental trials and to describe the variation in responses between individuals. Data from 17 of our group's experimental crossover trials were aggregated, yielding a total sample of 192 young, healthy males. In these studies, single bouts of moderate- to high-intensity aerobic exercise (69% ± 5% VO2 peak; mean ± SD) were completed with detailed participant assessments occurring during and for several hours postexercise. Mean hunger ratings were determined during (n = 178) and after (n = 118) exercise from visual analog scales completed at 30-min intervals, whereas ad libitum energy intake was measured within the first hour after exercise (n = 60) and at multiple meals (n = 128) during the remainder of trials. Venous concentrations of acylated ghrelin were determined at strategic time points during (n = 118) and after (n = 89) exercise. At group level, exercise transiently suppressed hunger and circulating acylated ghrelin concentrations, with notable diversity between individuals. Care must be taken to distinguish true interindividual variation from random differences within normal limits.

  8. Probabilistic, multi-variate flood damage modelling using random forests and Bayesian networks

    Science.gov (United States)

    Kreibich, Heidi; Schröter, Kai

    2015-04-01

    Decisions on flood risk management and adaptation are increasingly based on risk analyses. Such analyses are associated with considerable uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention recently, they are hardly applied in flood damage assessments. Most of the damage models usually applied in standard practice have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. This presentation will show approaches for probabilistic, multi-variate flood damage modelling on the micro- and meso-scale and discuss their potential and limitations. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Schröter, K., Kreibich, H., Vogel, K., Riggelsen, C., Scherbaum, F., Merz, B. (2014): How useful are complex flood damage models? - Water Resources Research, 50, 4, p. 3378-3395.
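
    As a hedged sketch of the tree-based, multi-variate direction referenced above, the fragment below fits a random forest to synthetic flood-loss data and reads the spread across trees as a crude probabilistic statement. The predictors, their ranges, and the loss relation are invented placeholders, not those of the cited studies.

        # Minimal sketch: probabilistic, multi-variate damage estimate via a forest.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        rng = np.random.default_rng(6)
        n = 500
        X = np.column_stack([
            rng.uniform(0, 3, n),      # water depth [m] (hypothetical feature)
            rng.uniform(0, 72, n),     # inundation duration [h] (hypothetical)
            rng.integers(1, 4, n),     # building type code (hypothetical)
        ])
        rloss = np.clip(0.25 * X[:, 0] + 0.002 * X[:, 1]
                        + rng.normal(0, 0.05, n), 0, 1)  # synthetic relative loss

        model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, rloss)
        # Per-tree predictions give a simple spread estimate, i.e. a probabilistic
        # damage statement rather than a single deterministic stage-damage value.
        per_tree = np.stack([t.predict(X[:5]) for t in model.estimators_])
        print(per_tree.mean(axis=0), per_tree.std(axis=0))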

  9. [Types of samples: advantages and disadvantages]

    OpenAIRE

    Suprapto, Agus

    1994-01-01

    A sample is a part of a population that is used in a study for the purpose of making estimates about the nature of the total population; it is obtained with a sampling technique. Sampling techniques are more advantageous than a census because they reduce cost and time, and can gather deeper information and more accurate data. It is useful to distinguish two major types of sampling techniques. First, probability sampling, i.e. simple random sampling. Second, non-probability sampling, i.e. systematic sampling...

  10. Toward a Principled Sampling Theory for Quasi-Orders.

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets.

  11. Toward a Principled Sampling Theory for Quasi-Orders

    Science.gov (United States)

    Ünlü, Ali; Schrepp, Martin

    2016-01-01

    Quasi-orders, that is, reflexive and transitive binary relations, have numerous applications. In educational theories, the dependencies of mastery among the problems of a test can be modeled by quasi-orders. Methods such as item tree or Boolean analysis that mine for quasi-orders in empirical data are sensitive to the underlying quasi-order structure. These data mining techniques have to be compared based on extensive simulation studies, with unbiased samples of randomly generated quasi-orders at their basis. In this paper, we develop techniques that can provide the required quasi-order samples. We introduce a discrete doubly inductive procedure for incrementally constructing the set of all quasi-orders on a finite item set. A randomization of this deterministic procedure allows us to generate representative samples of random quasi-orders. With an outer level inductive algorithm, we consider the uniform random extensions of the trace quasi-orders to higher dimension. This is combined with an inner level inductive algorithm to correct the extensions that violate the transitivity property. The inner level correction step entails sampling biases. We propose three algorithms for bias correction and investigate them in simulation. It is evident that, on even up to 50 items, the new algorithms create close to representative quasi-order samples within acceptable computing time. Hence, the principled approach is a significant improvement to existing methods that are used to draw quasi-orders uniformly at random but cannot cope with reasonably large item sets. PMID:27965601

  12. Spatiotemporally Representative and Cost-Efficient Sampling Design for Validation Activities in Wanglang Experimental Site

    Directory of Open Access Journals (Sweden)

    Gaofei Yin

    2017-11-01

    Full Text Available Spatiotemporally representative Elementary Sampling Units (ESUs) are required for capturing the temporal variations in surface spatial heterogeneity through field measurements. Since inaccessibility often coexists with heterogeneity, a cost-efficient sampling design is mandatory. We proposed a sampling strategy to generate spatiotemporally representative and cost-efficient ESUs based on the conditioned Latin hypercube sampling scheme. The proposed strategy was constrained by multi-temporal Normalized Difference Vegetation Index (NDVI) imagery, and the ESUs were limited to a sampling-feasible region established on the basis of accessibility criteria. A novel criterion based on the Overlapping Area (OA) between the NDVI frequency distribution histogram of the sampled ESUs and that of the entire study area was used to assess sampling efficiency. A case study in Wanglang National Nature Reserve in China showed that the proposed strategy improves the spatiotemporal representativeness of sampling (mean annual OA = 74.7%) compared to the single-temporally constrained (OA = 68.7%) and random sampling (OA = 63.1%) strategies. The introduction of the feasible-region constraint significantly reduces the need for labour-intensive in-situ characterization, at the expense of about a 9% loss in the spatiotemporal representativeness of the sampling. Our study will support validation activities in the Wanglang experimental site by providing a benchmark for locating the nodes of automatic observation systems (e.g., LAINet), which need a spatially distributed and temporally fixed sampling design.
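
    A minimal sketch of the Overlapping Area criterion as described above: the overlap between the normalized NDVI frequency histograms of the sampled ESUs and of the whole study area (the NDVI scene and ESU draw below are synthetic):

        import numpy as np

        def overlapping_area(ndvi_sampled, ndvi_all, bins=50, value_range=(-1, 1)):
            # OA = sum of bin-wise minima of two normalized histograms; 1.0 = identical.
            h1, _ = np.histogram(ndvi_sampled, bins=bins, range=value_range)
            h2, _ = np.histogram(ndvi_all, bins=bins, range=value_range)
            return float(np.minimum(h1 / h1.sum(), h2 / h2.sum()).sum())

        rng = np.random.default_rng(0)
        scene = rng.normal(0.5, 0.15, 10000).clip(-1, 1)   # synthetic NDVI scene
        esus = rng.choice(scene, 60, replace=False)        # hypothetical ESU locations
        print(round(overlapping_area(esus, scene), 3))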

  13. SPATIAL VARIATION IN GROUNDWATER POLLUTION BY ...

    African Journals Online (AJOL)

    Osondu

    higher in Group A water samples, and reduced slightly in the Group B and then the Group C samples, ... Keywords: Spatial variation, Groundwater, Pollution, Abattoir, Effluents, Water quality. ... situation which may likely pose a threat to the.

  14. Towards Cost-efficient Sampling Methods

    OpenAIRE

    Peng, Luo; Yongli, Li; Chong, Wu

    2014-01-01

    Sampling methods have received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the observation that a small fraction of vertices with high node degree can carry most of the structural information of a network. The two proposed sampling methods are efficient in sampling the nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...

  15. Reliability of radiographic observations recorded on a proforma measured using inter- and intra-observer variation: a preliminary study.

    Science.gov (United States)

    Saunders, M B; Gulabivala, K; Holt, R; Kahan, R S

    2000-05-01

    The aim of this preliminary study was to test the reliability of radiographic evaluation of features of endodontic interest using a newly devised data collection system. Twelve endodontic MSc postgraduate students and one specialist endodontist examined sample radiographs derived from a random selection of 42 patients seen previously on an Endodontic New Patient Clinic (EDI). Each student examined a random selection of 8-9 roots on periapical radiographs of single- and multirooted teeth, with and without previous root canal therapy, and 3-4 dental panoramic tomograms (DPTs). A total of 100 roots were examined. A proforma was used to record observations on 67 radiographic features using predefined criteria. Intra-observer agreement was tested by asking the students to re-examine the radiographs. The principal investigator and the specialist endodontist examined the same radiographs and devised a Gold Standard using the same criteria. This was compared with the student assessments to determine inter-observer variation. The postgraduates then attended a revision session on the use of the form. Each student subsequently examined 8-9 different roots from the pool of radiographs. A further assessment of inter-observer variation was made by comparing these observations with the Gold Standard. Of the 67 radiographic features, only 25 had sufficient responses to allow statistical analysis. Kappa values for intra- and inter-observer variation were estimated. These varied depending on the particular radiographic feature being assessed. Fifteen out of 25 intra-observer recordings showed 'good' or 'very good' Kappa agreement, but only three out of 25 inter-observer observations achieved 'good' or 'very good' values. Inter-observer variation improved following the revision session, with 16 out of 25 observations achieving 'good' or 'very good' Kappa agreement. Modifications to the proforma, the criteria used, and training for radiographic assessment were considered necessary to
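
    The Kappa statistic used above can be computed, for two raters and categorical readings, as kappa = (p_o - p_e)/(1 - p_e), where p_o is observed and p_e chance agreement; a minimal sketch with invented labels:

        from collections import Counter

        def cohens_kappa(rater_a, rater_b):
            n = len(rater_a)
            p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
            ca, cb = Counter(rater_a), Counter(rater_b)
            p_e = sum(ca[k] * cb[k] for k in ca) / n ** 2             # chance agreement
            return (p_o - p_e) / (1 - p_e)

        student = ["lesion", "normal", "normal", "lesion", "normal", "lesion"]
        gold = ["lesion", "normal", "lesion", "lesion", "normal", "normal"]
        print(round(cohens_kappa(student, gold), 2))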

  16. Sampling in epidemiological research: issues, hazards and pitfalls

    Science.gov (United States)

    Tyrer, Stephen; Heyman, Bob

    2016-01-01

    Surveys of people's opinions are fraught with difficulties. It is easier to obtain information from those who respond to text messages or to emails than to attempt to obtain a representative sample. Samples of the population that are selected non-randomly in this way are termed convenience samples as they are easy to recruit. This introduces a sampling bias. Such non-probability samples have merit in many situations, but an epidemiological enquiry is of little value unless a random sample is obtained. If a sufficient number of those selected actually complete a survey, the results are likely to be representative of the population. This editorial describes probability and non-probability sampling methods and illustrates the difficulties and suggested solutions in performing accurate epidemiological research. PMID:27087985

  17. Temporal and spatial variations in fly ash quality

    Science.gov (United States)

    Hower, J.C.; Trimble, A.S.; Eble, C.F.

    2001-01-01

    Fly ash quality, both in the amount of petrographically distinguishable carbons and in chemistry, varies in both time and space. Temporal variations are a function of a number of variables. These can include variations in the coal blend's organic petrography, mineralogy, and chemistry; variations in the pulverization of the coal, both as a function of the coal's Hardgrove grindability index and as a function of the maintenance and settings of the pulverizers; and variations in the operating conditions of the boiler, including changes in the pollution control system. Spatial variation, as an instantaneous measure of fly ash characteristics, should not involve changes in the first two sets of variables listed above. Spatial variations are a function of the gas flow within the boiler and ducts, certain flow conditions leading to a tendency for segregation of the less-dense carbons in one portion of the gas stream. Caution must be applied in sampling fly ash. Samples from a single bin, or series of bins, may not be representative of the whole fly ash, providing a biased view of the nature of the material. Further, it is generally not possible to be certain about variation until the analysis of the ash is complete. © 2001 Elsevier Science B.V. All rights reserved.

  18. Monte Carlo method for random surfaces

    International Nuclear Information System (INIS)

    Berg, B.

    1985-01-01

    Previously two of the authors proposed a Monte Carlo method for sampling statistical ensembles of random walks and surfaces with a Boltzmann probabilistic weight. In the present paper we work out the details for several models of random surfaces, defined on d-dimensional hypercubic lattices. (orig.)

  19. Comparison of sampling methods for hard-to-reach francophone populations: yield and adequacy of advertisement and respondent-driven sampling.

    Science.gov (United States)

    Ngwakongnwi, Emmanuel; King-Shier, Kathryn M; Hemmelgarn, Brenda R; Musto, Richard; Quan, Hude

    2014-01-01

    Francophones who live outside the primarily French-speaking province of Quebec, Canada, risk being excluded from research by lack of a sampling frame. We examined the adequacy of random sampling, advertising, and respondent-driven sampling for recruitment of francophones for survey research. We recruited francophones residing in the city of Calgary, Alberta, through advertising and respondent-driven sampling. These 2 samples were then compared with a random subsample of Calgary francophones derived from the 2006 Canadian Community Health Survey (CCHS). We assessed the effectiveness of advertising and respondent-driven sampling in relation to the CCHS sample by comparing demographic characteristics and selected items from the CCHS (specifically self-reported general health status, perceived weight, and having a family doctor). We recruited 120 francophones through advertising and 145 through respondent-driven sampling; the random sample from the CCHS consisted of 259 records. The samples derived from advertising and respondent-driven sampling differed from the CCHS in terms of age (mean ages 41.0, 37.6, and 42.5 years, respectively), sex (proportion of males 26.1%, 40.6%, and 56.6%, respectively), education (college or higher 86.7%, 77.9%, and 59.1%, respectively), place of birth (immigrants accounting for 45.8%, 55.2%, and 3.7%, respectively), and not having a regular medical doctor (16.7%, 34.5%, and 16.6%, respectively). Differences were not tested statistically because of limitations on the analysis of CCHS data imposed by Statistics Canada. The samples generated exclusively through advertising and respondent-driven sampling were not representative of the gold standard sample from the CCHS. Use of such biased samples for research studies could generate misleading results.

  20. Neurofeedback Against Binge Eating: A Randomized Controlled Trial in a Female Subclinical Threshold Sample.

    Science.gov (United States)

    Schmidt, Jennifer; Martin, Alexandra

    2016-09-01

    Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up, or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.

  1. Creating ensembles of decision trees through sampling

    Science.gov (United States)

    Kamath, Chandrika; Cantu-Paz, Erick

    2005-08-30

    A system for decision tree ensembles that includes a module to read the data, a module to sort the data, a module to evaluate a potential split of the data according to some criterion using a random sample of the data, a module to split the data, and a module to combine multiple decision trees in ensembles. The decision tree method is based on statistical sampling techniques and includes the steps of reading the data; sorting the data; evaluating a potential split according to some criterion using a random sample of the data, splitting the data, and combining multiple decision trees in ensembles.
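
    A sketch of the central idea, evaluating a candidate split on a random subsample rather than on all records; the Gini criterion and the data layout are illustrative choices, not necessarily those of the system described:

        import random

        def gini(labels):
            if not labels:
                return 0.0
            n = len(labels)
            return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

        def split_impurity(rows, feature, threshold, sample_size, rng):
            # rows: list of (feature_vector, label); score the split on a subsample.
            sub = rng.sample(rows, min(sample_size, len(rows)))
            left = [y for x, y in sub if x[feature] <= threshold]
            right = [y for x, y in sub if x[feature] > threshold]
            n = len(sub)
            return len(left) / n * gini(left) + len(right) / n * gini(right)

        rng = random.Random(0)
        rows = [((rng.gauss(i % 2, 0.3),), i % 2) for i in range(1000)]
        print(round(split_impurity(rows, feature=0, threshold=0.5,
                                   sample_size=100, rng=rng), 3))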

  2. Adaptive sampling of AEM transients

    Science.gov (United States)

    Di Massa, Domenico; Florio, Giovanni; Viezzoli, Andrea

    2016-02-01

    This paper focuses on the sampling of the electromagnetic transient as acquired by airborne time-domain electromagnetic (TDEM) systems. Typically, the sampling of the electromagnetic transient is done using a fixed number of gates whose width grows logarithmically (log-gating). The log-gating has two main benefits: improving the signal-to-noise (S/N) ratio at late times, when the electromagnetic signal has amplitudes equal to or lower than the natural background noise, and ensuring a good resolution at the early times. However, as a result of fixed time gates, the conventional log-gating does not consider any geological variations in the surveyed area, nor the possibly varying characteristics of the measured signal. We show, using synthetic models, how a different, flexible sampling scheme can increase the resolution of resistivity models. We propose a new sampling method, which adapts the gating on the basis of the slope variations in the electromagnetic (EM) transient. The use of such an alternative sampling scheme aims to get more accurate inverse models by extracting the geoelectrical information from the measured data in an optimal way.
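
    Conventional log-gating as described above can be sketched by spacing gate edges logarithmically in time, so that gate width grows at late times; the time window and gate count below are hypothetical:

        import numpy as np

        t_min, t_max, n_gates = 1e-5, 1e-2, 30     # seconds, hypothetical window
        edges = np.logspace(np.log10(t_min), np.log10(t_max), n_gates + 1)
        widths = np.diff(edges)                    # gate width grows with time
        print(f"first gate {widths[0]:.2e} s, last gate {widths[-1]:.2e} s")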

  3. A Monte Carlo study of adsorption of random copolymers on random surfaces

    CERN Document Server

    Moghaddam, M S

    2003-01-01

    We study the adsorption problem of a random copolymer on a random surface in which a self-avoiding walk in three dimensions interacts with a plane defining a half-space to which the walk is confined. Each vertex of the walk is randomly labelled A with probability p_p or B with probability 1 - p_p, and only vertices labelled A are attracted to the surface plane. Each lattice site on the plane is also labelled either A with probability p_s or B with probability 1 - p_s, and only lattice sites labelled A interact with the walk. We study two variations of this model: in the first case the A-vertices of the walk interact only with the A-sites on the surface. In the second case the constraint of selective binding is removed; that is, any contact between the walk and the surface that involves an A-labelling, either from the surface or from the walk, is counted as a visit to the surface. The system is quenched in both cases, i.e. the labellings of the walk and of the surface are fixed as thermodynam...

  4. Determinants, reproducibility, and seasonal variation of ergosterol levels in house dust.

    Science.gov (United States)

    Leppänen, H K; Nevalainen, A; Vepsäläinen, A; Roponen, M; Täubel, M; Laine, O; Rantakokko, P; von Mutius, E; Pekkanen, J; Hyvärinen, A

    2014-06-01

    This study aimed to clarify the determinants that affect the concentrations of ergosterol and viable fungi in house dust and to examine the seasonal variation and reproducibility of ergosterol concentrations indoors. In studying the determinants, dust samples from living room floors and vacuum cleaner dust bags were collected from 107 farming and 105 non-farming homes. Ergosterol levels were determined with gas chromatography-mass spectrometry, and the dust bag dust was cultivated for enumeration of fungal genera. Lifestyle and environmental factors, for example, use of the fireplace, and visible mold observations in homes, explained 20–26% of the variation in fungal concentrations. For the reproducibility study, samples were collected from five urban homes in four different seasons. The reproducibility of ergosterol determinations within a sample was excellent (ICC = 89.8) for floor dust and moderate (ICC = 63.8) for dust bag dust, but poor when sampling the same home throughout a year (ICC = 31.3 and 12.6, respectively) due to large temporal variation in ergosterol concentrations. In conclusion, environmental characteristics only partially predicted the variation of fungal concentrations. Based on these studies, we recommend repeated sampling of dust over time if one seeks to adequately describe overall fungal levels and exposure in a home. This study shows that levels of ergosterol and viable fungi in house dust are related to visible mold observations. Only 20% of the variation in fungal levels can be explained with questionnaires, and therefore, environmental samples need to be taken in addition. Reproducibility of ergosterol determination was excellent for floor dust, and thus, ergosterol measurements from floor dust samples could be suitable for assessing the fungal load in building investigations. The temporal variation needs to be taken into account when describing the ergosterol concentration of urban homes.
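
    ICC values like those reported above can be obtained, in the simplest one-way random-effects form ICC(1,1) = (MSB - MSW) / (MSB + (k-1)·MSW), from a subjects-by-measurements table; a sketch on synthetic data with invented variance components:

        import numpy as np

        def icc_oneway(x):
            # x: n subjects (rows) by k repeated measurements (columns)
            n, k = x.shape
            msb = k * ((x.mean(axis=1) - x.mean()) ** 2).sum() / (n - 1)
            msw = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
            return (msb - msw) / (msb + (k - 1) * msw)

        rng = np.random.default_rng(1)
        home_level = rng.normal(10, 2, size=(20, 1))       # hypothetical true levels
        x = home_level + rng.normal(0, 0.5, size=(20, 4))  # four repeated samples
        print(round(icc_oneway(x), 2))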

  5. Interobserver Variation of the Renal Length Measurement on Ultrasonography

    International Nuclear Information System (INIS)

    Jeong, Yoong Ki; Chung, Hye Weon; Kim, Tae Sung; Ryoo, Jae Wook; Kim, Tae Kyoung; Kim, Seung Hyup

    1995-01-01

    We assessed interobserver variation in the measurement of renal length on ultrasonography. Ultrasonographic examinations were performed in 50 randomly selected patients. The maximal lengths of both kidneys were measured with calipers during scanning from frozen images by three observers in a blinded fashion. There was a relatively constant tendency of an observer to measure a renal length either longer or shorter than the other observers (Kendall coefficient > 0.05). Average interobserver variations were 0.51 cm (±0.42 cm) in the right kidney and 0.53 cm (±0.41 cm) in the left kidney, and were within 1 cm in 91% of right and 89% of left kidneys. An interobserver variation of about 1 cm should be considered in the measurement of renal length on ultrasonography.

  6. Diurnal TSH variations in hypothyroidism.

    Science.gov (United States)

    Weeke, J; Laurberg, P

    1976-07-01

    There is a circadian variation in serum TSH in euthyroid subjects. A similar diurnal variation has been demonstrated in patients with hypothyroidism. In the present study the 24-hour pattern of serum TSH was investigated in eight patients with hypothyroidism of varying severity and in five hypothyroid patients treated with thyroxine (T4). There was a circadian variation in serum TSH in patients with hypothyroidism of moderate degree, and in patients treated for severe hypothyroidism with thyroxine. The pattern was similar to that found in normal subjects, i.e., low TSH levels in the daytime and higher levels at night. In severely hypothyroid patients, no diurnal variation in serum TSH was observed. A practical consequence is that blood samples for TSH measurements in patients with moderately elevated TSH levels are best taken after 1100 h, when the low day levels are reached.

  7. A random matrix approach to VARMA processes

    International Nuclear Information System (INIS)

    Burda, Zdzislaw; Jarosz, Andrzej; Nowak, Maciej A; Snarska, Malgorzata

    2010-01-01

    We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q_1, q_2) processes. In particular, we consider a limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime, the underlying random matrices are asymptotically equivalent to free random variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1, 1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q_1 > 1 and q_2 > 1.
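
    The paper's FRV calculus is beyond a short sketch, but its baseline is easy to check numerically: for uncorrelated Gaussian data, the sample-covariance eigenvalues fill the Marchenko-Pastur interval [(1 - sqrt(N/T))^2, (1 + sqrt(N/T))^2] as N and T grow with N/T fixed; the dimensions below are arbitrary:

        import numpy as np

        N, T = 100, 400                            # N/T = 0.25 held fixed
        rng = np.random.default_rng(0)
        X = rng.standard_normal((N, T))            # i.i.d. noise, unit variance
        eig = np.linalg.eigvalsh(X @ X.T / T)      # sample covariance spectrum
        q = N / T
        lo, hi = (1 - q ** 0.5) ** 2, (1 + q ** 0.5) ** 2
        print(f"empirical [{eig.min():.2f}, {eig.max():.2f}] vs MP [{lo:.2f}, {hi:.2f}]")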

  8. Random errors in the magnetic field coefficients of superconducting quadrupole magnets

    International Nuclear Information System (INIS)

    Herrera, J.; Hogue, R.; Prodell, A.; Thompson, P.; Wanderer, P.; Willen, E.

    1987-01-01

    The random multipole errors of superconducting quadrupoles are studied. For analyzing the multipoles which arise due to random variations in the size and locations of the current blocks, a model is outlined which gives the fractional field coefficients from the current distributions. With this approach, based on the symmetries of the quadrupole magnet, estimates are obtained of the random multipole errors for the arc quadrupoles envisioned for the Relativistic Heavy Ion Collider and for a single-layer quadrupole proposed for the Superconducting Super Collider

  9. Short-term variations of radiocarbon during the last century

    International Nuclear Information System (INIS)

    Burchuladze, A.A.; Pagava, S.V.; Jurina, V.; Povinec, P.; Usacev, S.

    1982-01-01

    Radiocarbon variations related to the 11-year solar cycle during the last century are discussed. Previous investigations on short-term 14C variations in tree rings are compared with 14C measurements in Georgian wine samples. The amplitude of 14C variations as obtained by various authors ranges from 0.2 to about 1%. (author)

  10. Day-to-day and within-day variation in urinary iodine excretion

    DEFF Research Database (Denmark)

    Rasmussen, Lone Banke; Ovesen, L.; Christiansen, E.

    1999-01-01

    Objective: To examine the day-to-day and within-day variation in urinary iodine excretion and the day-to-day variation in iodine intake. Design: Collection of consecutive 24-h urine samples and casual urine samples over 24 h. Setting: The study population consisted of highly motivated subjects fr...

  11. Genomic variation landscape of the human gut microbiome

    DEFF Research Database (Denmark)

    Schloissnig, Siegfried; Arumugam, Manimozhiyan; Sunagawa, Shinichi

    2013-01-01

    Whereas large-scale efforts have rapidly advanced the understanding and practical impact of human genomic variation, the practical impact of variation is largely unexplored in the human microbiome. We therefore developed a framework for metagenomic variation analysis and applied it to 252 faecal ... polymorphism rates of 0.11 were more variable between gut microbial species than across human hosts. Subjects sampled at varying time intervals exhibited individuality and temporal stability of SNP variation patterns, despite considerable composition changes of their gut microbiota. This indicates...

  12. Statistical mechanics of learning: A variational approach for real data

    International Nuclear Information System (INIS)

    Malzahn, Doerthe; Opper, Manfred

    2002-01-01

    Using a variational technique, we generalize the statistical physics approach of learning from random examples to make it applicable to real data. We demonstrate the validity and relevance of our method by computing approximate estimators for generalization errors that are based on training data alone

  13. Random Walks on Directed Networks: Inference and Respondent-Driven Sampling

    Directory of Open Access Journals (Sweden)

    Malmros Jens

    2016-06-01

    Full Text Available Respondent-driven sampling (RDS) is often used to estimate population properties (e.g., sexual risk behavior) in hard-to-reach populations. In RDS, already sampled individuals recruit population members to the sample from their social contacts in an efficient snowball-like sampling procedure. By assuming a Markov model for the recruitment of individuals, asymptotically unbiased estimates of population characteristics can be obtained. Current RDS estimation methodology assumes that the social network is undirected, that is, all edges are reciprocal. However, empirical social networks in general also include a substantial number of nonreciprocal edges. In this article, we develop an estimation method for RDS in populations connected by social networks that include reciprocal and nonreciprocal edges. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing edges of sampled individuals. The proposed estimators are evaluated on artificial and empirical networks and are shown to generally perform better than existing estimators. This is the case in particular when the fraction of directed edges in the network is large.
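
    The article's estimators weight respondents by their number of outgoing edges; the flavor can be seen in the classical RDS-II (Volz-Heckathorn) form, where inclusion probability is taken proportional to degree, here applied to out-degrees with invented data:

        def rds_estimate(outcomes, degrees):
            # Weight each respondent by 1/degree (here: reported out-degree).
            weights = [1.0 / d for d in degrees]
            return sum(w * y for w, y in zip(weights, outcomes)) / sum(weights)

        risk_behavior = [1, 0, 1, 1, 0, 0, 1]      # hypothetical binary outcome
        out_degree = [12, 3, 25, 8, 4, 6, 30]      # reported social contacts
        print(round(rds_estimate(risk_behavior, out_degree), 3))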

  14. Sample size adjustments for varying cluster sizes in cluster randomized trials with binary outcomes analyzed with second-order PQL mixed logistic regression.

    Science.gov (United States)

    Candel, Math J J M; Van Breukelen, Gerard J P

    2010-06-30

    Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
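
    The second-order PQL machinery is not reproduced here; a commonly used first-pass approximation (not the paper's formula) inflates the equal-cluster-size design effect by the coefficient of variation (CV) of cluster size, DE = 1 + ((CV^2 + 1)·m - 1)·ICC, and scales the number of clusters by the ratio of design effects:

        def design_effect(mean_size, icc, cv=0.0):
            # Approximate design effect for clusters of varying size.
            return 1 + ((cv ** 2 + 1) * mean_size - 1) * icc

        mean_size, icc, cv = 20, 0.05, 0.6         # hypothetical trial parameters
        inflation = design_effect(mean_size, icc, cv) / design_effect(mean_size, icc)
        print(f"sample about {100 * (inflation - 1):.0f}% more clusters")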

  15. Characterization of tuyere-level core-drill coke samples from blast furnace operation

    Energy Technology Data Exchange (ETDEWEB)

    S. Dong; N. Paterson; S.G. Kazarian; D.R. Dugwell; R. Kandiyoti [Imperial College London, London (United Kingdom). Department of Chemical Engineering

    2007-12-15

    A suite of tuyere-level coke samples have been withdrawn from a working blast furnace during coal injection, using the core-drilling technique. The samples have been characterized by size exclusion chromatography (SEC), Fourier transform Raman spectroscopy (FT-RS), and X-ray powder diffraction (XRD) spectroscopy. The 1-methyl-2-pyrrolidinone (NMP) extracts of the cokes sampled from the 'bosh', the rear of the 'bird's nest', and the 'dead man' zones were found by SEC to contain heavy soot-like materials (ca. 10^7-10^8 apparent mass units). In contrast, NMP extracts of cokes taken from the raceway and the front of the 'bird's nest' only contained a small amount of material of relatively lower apparent molecular mass (up to ca. 10^5 u). Since the feed coke contained no materials extractable by the present method, the soot-like materials are thought to have formed during the reactions of volatile matter released from the injectant coal, probably via dehydrogenation and repolymerization of the tars. The Raman spectra of the NMP-extracted core-drilled coke samples showed variations reflecting their temperature histories. Area ratios of D-band to G-band decreased as the exposure temperature increased, while intensity ratios of D to G band and those of 2D to G bands increased with temperature. The graphitic (G), defect (D), and random (R) fractions of the carbon structure of the cokes were also derived from the Raman spectra. The R fractions decreased with increasing temperature, whereas G fractions increased, while the D fractions showed a more complex variation with temperature. These data appear to give clues regarding the graphitization mechanism of tuyere-level cokes in the blast furnace. 41 refs., 9 figs., 6 tabs.

  16. Number-conserving random phase approximation with analytically integrated matrix elements

    International Nuclear Information System (INIS)

    Kyotoku, M.; Schmid, K.W.; Gruemmer, F.; Faessler, A.

    1990-01-01

    In the present paper a number conserving random phase approximation is derived as a special case of the recently developed random phase approximation in general symmetry projected quasiparticle mean fields. All the occurring integrals induced by the number projection are performed analytically after writing the various overlap and energy matrices in the random phase approximation equation as polynomials in the gauge angle. In the limit of a large number of particles the well-known pairing vibration matrix elements are recovered. We also present a new analytically number projected variational equation for the number conserving pairing problem

  17. Chambolle's Projection Algorithm for Total Variation Denoising

    Directory of Open Access Journals (Sweden)

    Joan Duran

    2013-12-01

    Full Text Available Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f=u+n, and n is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on the simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail and its parameters are analyzed and varied to come up with a reliable implementation.
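
    A minimal sketch of the grayscale case: the dual variable p is updated by the projection iteration p <- (p + tau*grad(div p - f/lambda)) / (1 + tau*|grad(div p - f/lambda)|), with tau <= 1/8 for convergence, and the denoised image recovered as u = f - lambda*div p; lambda and the test image below are arbitrary choices:

        import numpy as np

        def grad(u):
            gx, gy = np.zeros_like(u), np.zeros_like(u)
            gx[:, :-1] = u[:, 1:] - u[:, :-1]          # forward differences
            gy[:-1, :] = u[1:, :] - u[:-1, :]
            return gx, gy

        def div(px, py):                               # discrete adjoint of -grad
            dx, dy = np.zeros_like(px), np.zeros_like(py)
            dx[:, 0], dx[:, 1:-1], dx[:, -1] = px[:, 0], px[:, 1:-1] - px[:, :-2], -px[:, -2]
            dy[0, :], dy[1:-1, :], dy[-1, :] = py[0, :], py[1:-1, :] - py[:-2, :], -py[-2, :]
            return dx + dy

        def chambolle_tv(f, lam=0.15, tau=0.125, n_iter=100):
            px, py = np.zeros_like(f), np.zeros_like(f)
            for _ in range(n_iter):
                gx, gy = grad(div(px, py) - f / lam)
                norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
                px, py = (px + tau * gx) / norm, (py + tau * gy) / norm
            return f - lam * div(px, py)               # denoised image

        rng = np.random.default_rng(0)
        clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
        noisy = clean + 0.1 * rng.standard_normal(clean.shape)
        print(float(np.abs(chambolle_tv(noisy) - clean).mean()))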

  18. Morphometric variation in Plio-Pleistocene hominid distal humeri.

    Science.gov (United States)

    Lague, M R; Jungers, W L

    1996-11-01

    The magnitude and meaning of morphological variation among Plio-Pleistocene hominid distal humeri have been recurrent points of disagreement among paleoanthropologists. Some researchers have found noteworthy differences among fossil humeri that they believe merit taxonomic separation, while others question the possibility of accurately sorting these fossils into different species and/or functional groups. Size and shape differences among fossil distal humeri are evaluated here to determine whether the magnitude and patterns of these differences can be observed within large-bodied, living hominoids. Specimens analyzed in this study have been assigned to various taxa (Australopithecus afarensis, A. africanus, A. anamensis, Paranthropus, and early Homo) and include AL 288-1m, AL 288-1s, AL 137-48a, AL 322-1, Gomboré IB 7594, TM 1517, KNM-ER 739, KNM-ER 1504, KNM-KP 271 (Kanapoi), and Stw 431. Five extant hominoid populations are sampled to provide a standard by which to consider differences found between the fossils, including two modern human groups (Native American and African American), one group of Pan troglodytes, and two subspecies of Gorilla gorilla (G.g. beringei, G.g. gorilla). All possible pairwise d values (average Euclidean distances) are calculated within each of the reference populations using an exact randomization procedure. This is done using both raw linear measurements as well as scale-free shape data created as ratios of each measurement to the geometric mean. Differences between each pair of fossil humeri are evaluated by comparing their d values to the distribution of d values found within each of the reference populations. Principal coordinate analysis and an unweighted pair group method with arithmetic averages (UPGMA) cluster analysis are utilized to further assess similarities and differences among the fossils. Finally, canonical variates analysis and discriminant analysis are employed using all hominoid samples in order to control for

  19. Consequences for established design practice from geographical variation of historical rainfall data

    DEFF Research Database (Denmark)

    Mikkelsen, P.S.; Arnbjerg-Nielsen, K.; Harremoës, P.

    1997-01-01

    variation and variation due to (correlated) sampling errors. Further analyses indicate that the observed variation can be explained only partially by correlation with regional climatological variables and that a significant residual variation remains, especially for large return periods. The new perceptions...

  20. An evaluation of soil sampling for 137Cs using various field-sampling volumes.

    Science.gov (United States)

    Nyhan, J W; White, G C; Schofield, T G; Trujillo, G

    1983-05-01

    The sediments from a liquid effluent receiving area at the Los Alamos National Laboratory and soils from an intensive study area in the fallout pathway of Trinity were sampled for 137Cs using 25-, 500-, 2500- and 12,500-cm3 field sampling volumes. A highly replicated sampling program was used to determine mean concentrations and inventories of 137Cs at each site, as well as estimates of spatial, aliquoting, and counting variance components of the radionuclide data. The sampling methods were also analyzed as a function of soil size fractions collected in each field sampling volume and of the total cost of the program for a given variation in the radionuclide survey results. Coefficients of variation (CV) of 137Cs inventory estimates ranged from 0.063 to 0.14 for Mortandad Canyon sediments, whereas CV values for Trinity soils were observed from 0.38 to 0.57. Spatial variance components of 137Cs concentration data were usually found to be larger than either the aliquoting or counting variance estimates and were inversely related to field sampling volume at the Trinity intensive site. Subsequent optimization studies of the sampling schemes demonstrated that each aliquot should be counted once, and that only 2-4 aliquots out of as many as 30 collected need be assayed for 137Cs. The optimization studies showed that as sample costs increased to 45 man-hours of labor per sample, the variance of the mean 137Cs concentration decreased dramatically, but decreased very little with additional labor.

  1. Application of bias factor method using random sampling technique for prediction accuracy improvement of critical eigenvalue of BWR

    International Nuclear Information System (INIS)

    Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi

    2017-01-01

    The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. The validity and availability of the present method, i.e. correction of calculation results and reduction of uncertainty, are confirmed, in addition to its features and performance. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, the variance-covariance of cross sections is considered. The calculation results indicate that the bias between predicted and measured results, and the uncertainty owing to cross sections, can be reduced. Extension to other uncertainties such as thermal hydraulics properties will be a future task. (author)

  2. Randomized, Controlled Study of Adderall XR in ADHD

    Directory of Open Access Journals (Sweden)

    J Gordon Millichap

    2002-08-01

    Full Text Available The efficacy and safety of Adderall XR in the treatment of attention deficit/hyperactivity disorder and diurnal variation in responses were assessed by a multicenter, randomized, double-blind, parallel group, placebo-controlled trial at 47 sites, and reported from the Massachusetts General Hospital, Boston, MA.

  3. Comparison of address-based sampling and random-digit dialing methods for recruiting young men as controls in a case-control study of testicular cancer susceptibility.

    Science.gov (United States)

    Clagett, Bartholt; Nathanson, Katherine L; Ciosek, Stephanie L; McDermoth, Monique; Vaughn, David J; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A

    2013-12-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18-55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be a comparably effective method of recruiting young males compared with landline RDD, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS.

  4. Missing citations due to exact reference matching: Analysis of a random sample from WoS. Are publications from peripheral countries disadvantaged?

    Energy Technology Data Exchange (ETDEWEB)

    Donner, P.

    2016-07-01

    Citation counts of scientific research contributions are one fundamental datum in scientometrics. Accuracy and completeness of citation links are therefore crucial data quality issues (Moed, 2005, Ch. 13). However, despite the known flaws of reference matching algorithms, usually no attempts are made to incorporate uncertainty about citation counts into indicators. This study is a step towards that goal. Particular attention is paid to the question whether publications from countries not using basic Latin script are differently affected by missed citations. The proprietary reference matching procedure of Web of Science (WoS) is based on (near) exact agreement of cited reference data (normalized during processing) with the target paper's bibliographic data. Consequently, the procedure has near-optimal precision but incomplete recall - it is known to miss some slightly inaccurate reference links (Olensky, 2015). However, there has been no attempt so far to estimate the rate of missed citations by a principled method for a random sample. For this study a simple random sample of WoS source papers was drawn and it was attempted to find all reference strings of WoS indexed documents that refer to them, in particular inexact matches. The objective is to give a statistical estimate of the proportion of missed citations and to describe the relationship of the number of found citations to the number of missed citations, i.e. the conditional error distribution. The empirical error distribution is statistically analyzed and modelled. (Author)

  5. Nitrogen Dynamics Variation in Overlying Water of Jinshan Lake, China

    Directory of Open Access Journals (Sweden)

    Xiaohong Zhou

    2015-01-01

    Full Text Available Jinshan Lake is a famous urban landscape lake with approximately 8.8 km2 of water area, located to the north of Zhenjiang, in Jiangsu Province, China. Eighteen sites were selected, and overlying water was sampled from 2013 to 2014 to study the seasonal and spatial variation of nitrogen in the overlying water of Jinshan Lake. Results showed that the physicochemical characteristics of temperature, pH, and DO showed high seasonal variation, whereas they had no significant spatial differences across the 18 sampling points (P>0.05) in the overlying water of Jinshan Lake. Nitrogen concentrations showed strong seasonal variation trends. The ranked order of TN was as follows: spring > summer > autumn > winter; the order of NH4+-N was as follows: spring > autumn > summer > winter, whereas NO3--N concentrations revealed an inverse seasonal pattern, with maxima occurring in winter and minima occurring in spring. Nitrogen concentrations showed dramatic spatial changes across the 18 sampling points of Jinshan Lake. Differences in physicochemical parameters, domestic waste pollution, and rainfall runoff sources may have led to the seasonal and spatial variations of nitrogen in the overlying water of Jinshan Lake, China.

  6. Sampling from the normal and exponential distributions

    International Nuclear Information System (INIS)

    Chaplin, K.R.; Wills, C.A.

    1982-01-01

    Methods for generating random numbers from the normal and exponential distributions are described. These involve dividing each function into subregions, and for each of these developing a method of sampling usually based on an acceptance rejection technique. When sampling from the normal or exponential distribution, each subregion provides the required random value with probability equal to the ratio of its area to the total area. Procedures written in FORTRAN for the CYBER 175/CDC 6600 system are provided to implement the two algorithms
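
    The report's subregion acceptance-rejection scheme is not reproduced here; for comparison, the textbook alternatives are inverse-transform sampling for the exponential and the Box-Muller transform for the normal:

        import math, random

        def exponential(lam, rng=random):
            # Inverse transform: -ln(1 - U)/lam ~ Exp(lam) for U ~ Uniform(0, 1).
            return -math.log(1.0 - rng.random()) / lam

        def normal_pair(rng=random):
            # Box-Muller: two independent N(0, 1) variates from two uniforms.
            u1, u2 = rng.random(), rng.random()
            r = math.sqrt(-2.0 * math.log(1.0 - u1))
            return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

        random.seed(7)
        print(exponential(0.5), normal_pair())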

  7. Impact of soil-structure interaction on the probabilistic frequency variation of concrete structures

    International Nuclear Information System (INIS)

    Hadjian, A.H.; Hamilton, C.W.

    1975-01-01

    Earthquake response of equipment in nuclear power plants is characterized by floor response spectra. Since these spectra peak at the natural frequencies of the structure, it is important, both from safety and cost standpoints, to determine the degree of the expected variability of the calculated structural frequencies. A previous work on the variability of the natural frequencies of structures due to variations of concrete properties is extended, and a rigorous approach is presented to evaluate frequency variations based on the probability distributions of both the structural and soil parameters, jointly determining the distributions of the natural frequencies. It is assumed that the soil-structure interaction coefficients are normally distributed. With the proper choice of coordinates, the simultaneous random variations of both the structural properties and the interaction coefficients can be incorporated in the eigenvalue problem. The key methodology problem is to obtain the probability distribution of eigenvalues of matrices with random variable elements. Since no analytic relation exists between the eigenvalues and the elements, a numerical procedure had to be designed. It was found that the desired accuracy can be best achieved by splitting the joint variation into two parts: the marginal distribution of soil variations and the conditional distribution of structural variations at specific soil fractiles. Then, after calculating the actual eigenvalues at judiciously selected paired values of soil and structure parameters, this information is recombined to obtain the desired cumulative distribution of natural frequencies.

  8. Methodology Series Module 5: Sampling Strategies

    OpenAIRE

    Setia, Maninder Singh

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the 'sampling method'. There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice and on populations that are accessible and available. Some of the non-probabilit...

  9. Describing shell shape variations and sexual dimorphism of Golden Apple Snail, Pomacea canaliculata (Lamarck, 1822), using geometric morphometric analysis

    Directory of Open Access Journals (Sweden)

    C.C. Cabuga

    2017-09-01

    Full Text Available Pomacea canaliculata, the Golden Apple Snail (GAS), is a rice pest in the Philippines and in Asia. Geographic spread has also contributed to its increasing populations, making it invasive in freshwater habitats and rice field areas. This study was conducted in order to describe shell shape variations and sexual dimorphism among populations of P. canaliculata. A total of 180 individuals were randomly collected from three lakes in Esperanza, Agusan del Sur (Lake Dakong Napo, Lake Oro, and Lake Cebulan), each lake contributing 60 samples (30 males and 30 females). To determine the variations and sexual dimorphism in shell shape, landmark coordinates were subjected to relative warp analysis, and the resulting data were analysed with Multivariate Analysis of Variance (MANOVA), Principal Component Analysis (PCA), and Canonical Variate Analysis (CVA). The results show statistically significant differences (P<0.05) between the male and female dorsal and ventral/apertural portions, and male and female spire height, body size, and shell opening shape also show significant variation. These phenotypic distinctions could be associated with geographic isolation, predation, and the nutrient environment of the gastropods. The study thus illustrates the utility of geometric morphometric methods in describing sexual dimorphism in the shell shape of P. canaliculata.

  10. Estimating spatial and temporal components of variation in count data using negative binomial mixed models

    Science.gov (United States)

    Irwin, Brian J.; Wagner, Tyler; Bence, James R.; Kepler, Megan V.; Liu, Weihai; Hayes, Daniel B.

    2013-01-01

    Partitioning total variability into its component temporal and spatial sources is a powerful way to better understand time series and elucidate trends. The data available for such analyses of fish and other populations are usually nonnegative integer counts of the number of organisms, often dominated by many low values with few observations of relatively high abundance. These characteristics are not well approximated by the Gaussian distribution. We present a detailed description of a negative binomial mixed-model framework that can be used to model count data and quantify temporal and spatial variability. We applied these models to data from four fishery-independent surveys of Walleyes Sander vitreus across the Great Lakes basin. Specifically, we fitted models to gill-net catches from Wisconsin waters of Lake Superior; Oneida Lake, New York; Saginaw Bay in Lake Huron, Michigan; and Ohio waters of Lake Erie. These long-term monitoring surveys varied in overall sampling intensity, the total catch of Walleyes, and the proportion of zero catches. Parameter estimation included the negative binomial scaling parameter, and we quantified the random effects as the variations among gill-net sampling sites, the variations among sampled years, and site × year interactions. This framework (i.e., the application of a mixed model appropriate for count data in a variance-partitioning context) represents a flexible approach that has implications for monitoring programs (e.g., trend detection) and for examining the potential of individual variance components to serve as response metrics to large-scale anthropogenic perturbations or ecological changes.
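
    Fitting such models requires specialized software, but the data-generating side is a short sketch: log-scale site, year, and site-by-year random effects with gamma-Poisson (negative binomial) observation noise; all variance components below are invented for illustration:

        import numpy as np

        rng = np.random.default_rng(42)
        n_sites, n_years, theta = 30, 15, 2.0          # theta: NB dispersion
        site = rng.normal(0.0, 0.6, n_sites)           # among-site SD
        year = rng.normal(0.0, 0.3, n_years)           # among-year SD
        sxy = rng.normal(0.0, 0.2, (n_sites, n_years)) # site-by-year interaction
        mu = np.exp(1.0 + site[:, None] + year[None, :] + sxy)
        # Gamma-Poisson mixture: counts ~ NB with mean mu and dispersion theta
        counts = rng.poisson(rng.gamma(theta, mu / theta))
        print(counts.shape, counts.mean().round(2), (counts == 0).mean().round(2))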

  11. Positive random variables with a discrete probability mass at the origin: Parameter estimation for left-censored samples with application to air quality monitoring data

    International Nuclear Information System (INIS)

    Gogolak, C.V.

    1986-11-01

    The concentration of a contaminant measured in a particular medium might be distributed as a positive random variable when it is present, but it may not always be present. If there is a level below which the concentration cannot be distinguished from zero by the analytical apparatus, a sample from such a population will be censored on the left. The presence of both zeros and positive values in the censored portion of such samples complicates the problem of estimating the parameters of the underlying positive random variable and the probability of a zero observation. Using the method of maximum likelihood, it is shown that the solution to this estimation problem reduces largely to that of estimating the parameters of the distribution truncated at the point of censorship. The maximum likelihood estimate of the proportion of zero values follows directly. The derivation of the maximum likelihood estimates for a lognormal population with zeros is given in detail, and the asymptotic properties of the estimates are examined. The estimation method was used to fit several different distributions to a set of severely censored 85Kr monitoring data from six locations at the Savannah River Plant chemical separations facilities

  12. Seasonal variation in soil seed bank size and species composition of selected habitat types in Maputaland, South Africa

    Directory of Open Access Journals (Sweden)

    M. J. S. Kellerman

    2007-08-01

    Full Text Available Seasonal variation in seed bank size and species composition of five selected habitat types within the Tembe Elephant Park. South Africa, was investigated. At three-month intervals, soil samples were randomly collected from five different habitat types: a, Licuati forest; b, Licuati thicket; c, a bare or sparsely vegetated zone surrounding the forest edge, referred to as the forest/grassland ecotone; d, grassland; and e, open woodland. Most species in the seed bank flora were either grasses, sedges, or forbs, with hardly any evidence of woody species. The Licuati forest and thicket soils produced the lowest seed densities in all seasons.  Licuati forest and grassland seed banks showed a two-fold seasonal variation in size, those of the Licuati thicket and woodland a three-fold variation in size, whereas the forest/grassland ecotone maintained a relatively large seed bank all year round. The woodland seed bank had the highest species richness, whereas the Licuati forest and thicket soils were poor in species. Generally, it was found that the greatest correspondence in species composition was between the Licuati forest and thicket, as well as the forest/grassland ecotone and grassland seed bank floras.

  13. Instrumental neutron activation analysis of soil sample

    International Nuclear Information System (INIS)

    Abdul Khalik Haji Wood.

    1983-01-01

    This paper describes the analysis of soil samples collected from 5 different locations around Sungai Lui, Kajang, Selangor, Malaysia. These samples were taken at 22-24 cm below the top of the ground and were analysed using the technique of Instrumental Neutron Activation Analysis (INAA). Soil samples taken above the 22-24 cm level were also analysed in order to determine whether there is any variation in elemental content at different sampling levels. The results indicate a wide variation in the contents of the samples. About 30 elements have been analysed. The major ones are Na, I, Cl, Mg, Al, K, Ti, Ca and Fe. Trace elements analysed were Ba, Sc, V, Cr, Mn, Ga, As, Zn, Br, Rb, Co, Hf, Zr, Th, U, Sb, Cs, Ce, Sm, Eu, Tb, Dy, Yb, Lu and La. (author)

  14. Sample size determinations for group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms.

    Science.gov (United States)

    Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H

    2017-02-01

    We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.

  15. Quantifying the influence of sediment source area sampling on detrital thermochronometer data

    Science.gov (United States)

    Whipp, D. M., Jr.; Ehlers, T. A.; Coutand, I.; Bookhagen, B.

    2014-12-01

    Detrital thermochronology offers a unique advantage over traditional bedrock thermochronology because of its sensitivity to sediment production and transportation to sample sites. In mountainous regions, modern fluvial sediment is often collected and dated to determine the past (10^5 to >10^7 year) exhumation history of the upstream drainage area. Though potentially powerful, the interpretation of detrital thermochronometer data derived from modern fluvial sediment is challenging because of spatial and temporal variations in sediment production and transport, and target mineral concentrations. Thermochronometer age prediction models provide a quantitative basis for data interpretation, but it can be difficult to separate variations in catchment bedrock ages from the effects of variable basin denudation and sediment transport. We present two examples of quantitative data interpretation using detrital thermochronometer data from the Himalaya, focusing on the influence of spatial and temporal variations in basin denudation on predicted age distributions. We combine age predictions from the 3D thermokinematic numerical model Pecube with simple models for sediment sampling in the upstream drainage basin area to assess the influence of variations in sediment production by different geomorphic processes or scaled by topographic metrics. We first consider a small catchment from the central Himalaya where bedrock landsliding appears to have affected the observed muscovite 40Ar/39Ar age distributions. Using a simple model of random landsliding with a power-law landslide frequency-area relationship we find that the sediment residence time in the catchment has a major influence on predicted age distributions. In the second case, we compare observed detrital apatite fission-track age distributions from 16 catchments in the Bhutan Himalaya to ages predicted using Pecube and scaled by various topographic metrics. Preliminary results suggest that predicted age distributions scaled

  16. A randomized trial of a DWI intervention program for first offenders: intervention outcomes and interactions with antisocial personality disorder among a primarily American-Indian sample.

    Science.gov (United States)

    Woodall, W Gill; Delaney, Harold D; Kunitz, Stephen J; Westerberg, Verner S; Zhao, Hongwei

    2007-06-01

    Randomized trial evidence on the effectiveness of incarceration and treatment of first-time driving while intoxicated (DWI) offenders who are primarily American Indian has yet to be reported in the literature on DWI prevention. Further, research has confirmed the association of antisocial personality disorder (ASPD) with problems with alcohol including DWI. A randomized clinical trial was conducted, in conjunction with 28 days of incarceration, of a treatment program incorporating motivational interviewing principles for first-time DWI offenders. The sample of 305 offenders including 52 diagnosed as ASPD by the Diagnostic Interview Schedule were assessed before assignment to conditions and at 6, 12, and 24 months after discharge. Self-reported frequency of drinking and driving as well as various measures of drinking over the preceding 90 days were available at all assessments for 244 participants. Further, DWI rearrest data for 274 participants were available for analysis. Participants randomized to receive the first offender incarceration and treatment program reported greater reductions in alcohol consumption from baseline levels when compared with participants who were only incarcerated. Antisocial personality disorder participants reported heavier and more frequent drinking but showed significantly greater declines in drinking from intake to posttreatment assessments. Further, the treatment resulted in larger effects relative to the control on ASPD than non-ASPD participants. Nonconfrontational treatment may significantly enhance outcomes for DWI offenders with ASPD when delivered in an incarcerated setting, and in the present study, such effects were found in a primarily American-Indian sample.

  17. Effect of optimum stratification on sampling with varying probabilities under proportional allocation

    Directory of Open Access Journals (Sweden)

    Syed Ejaz Husain Rizvi

    2007-10-01

    Full Text Available The problem of optimum stratification on an auxiliary variable, when the units from different strata are selected with probability proportional to the value of the auxiliary variable (PPSWR), was considered by Singh (1975) for the univariate case. In this paper we extend the same problem, for proportional allocation, to the case when two variates are under study. A cum. ∛R3(x) rule for obtaining approximately optimum strata boundaries is provided. It is shown, theoretically as well as empirically, that stratification has an inverse effect on the relative efficiency of PPSWR as compared with the unstratified PPSWR method when proportional allocation is envisaged. Further comparison showed that, as the number of strata increases, stratified simple random sampling becomes as efficient as PPSWR.
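
    The selection scheme at issue here, PPS with replacement, is easy to sketch together with the Hansen-Hurwitz estimator that such a design supports. Below is a minimal sketch with a hypothetical population in which the study variable y is roughly proportional to the auxiliary size measure x (the situation where PPSWR pays off); the paper's stratification comparison is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical population: study variable y roughly proportional to
    # the auxiliary size measure x.
    N = 500
    x = rng.gamma(shape=2.0, scale=10.0, size=N)          # auxiliary variable
    y = 3.0 * x + rng.normal(0, 5.0, size=N)              # study variable

    p = x / x.sum()                                       # selection probabilities
    n = 30
    sample = rng.choice(N, size=n, replace=True, p=p)     # PPS with replacement

    # Hansen-Hurwitz estimator of the population total and its variance estimate.
    t_hat = np.mean(y[sample] / p[sample])
    v_hat = np.var(y[sample] / p[sample], ddof=1) / n

    print(f"true total {y.sum():.0f}, PPSWR estimate {t_hat:.0f} ± {np.sqrt(v_hat):.0f}")
    ```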

  18. Estimating the encounter rate variance in distance sampling

    Science.gov (United States)

    Fewster, R.M.; Buckland, S.T.; Burnham, K.P.; Borchers, D.L.; Jupp, P.E.; Laake, J.L.; Thomas, L.

    2009-01-01

    The dominant source of variance in line transect sampling is usually the encounter rate variance. Systematic survey designs are often used to reduce the true variability among different realizations of the design, but estimating the variance is difficult and estimators typically approximate the variance by treating the design as a simple random sample of lines. We explore the properties of different encounter rate variance estimators under random and systematic designs. We show that a design-based variance estimator improves upon the model-based estimator of Buckland et al. (2001, Introduction to Distance Sampling. Oxford: Oxford University Press, p. 79) when transects are positioned at random. However, if populations exhibit strong spatial trends, both estimators can have substantial positive bias under systematic designs. We show that poststratification is effective in reducing this bias. © 2008, The International Biometric Society.
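
    Treating the transect lines as a simple random sample, as described above, leads to variance estimators of the authors' "R2" type. A sketch of that estimator under this reading, with hypothetical line lengths and detection counts:

    ```python
    import numpy as np

    def encounter_rate_variance(n_k, l_k):
        """R2-type estimator of var(n/L) that treats the K transect lines as
        a simple random sample (Fewster et al. 2009, estimator 'R2')."""
        n_k, l_k = np.asarray(n_k, float), np.asarray(l_k, float)
        K, L, n = len(n_k), l_k.sum(), n_k.sum()
        er = n / L
        return K / (L**2 * (K - 1)) * np.sum(l_k**2 * (n_k / l_k - er)**2), er

    # Hypothetical survey: 10 lines with lengths l_k (km) and detections n_k.
    l_k = [4.1, 5.0, 3.6, 4.8, 5.2, 4.4, 3.9, 5.1, 4.7, 4.2]
    n_k = [12, 18, 7, 15, 21, 11, 9, 17, 14, 10]
    var_er, er = encounter_rate_variance(n_k, l_k)
    print(f"encounter rate {er:.2f} /km, SE {np.sqrt(var_er):.3f}")
    ```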

  19. Some connections between importance sampling and enhanced sampling methods in molecular dynamics.

    Science.gov (United States)

    Lie, H C; Quer, J

    2017-11-21

    In molecular dynamics, enhanced sampling methods enable the collection of better statistics of rare events from a reference or target distribution. We show that a large class of these methods is based on the idea of importance sampling from mathematical statistics. We illustrate this connection by comparing the Hartmann-Schütte method for rare event simulation (J. Stat. Mech. Theor. Exp. 2012, P11004) and the Valsson-Parrinello method of variationally enhanced sampling [Phys. Rev. Lett. 113, 090601 (2014)]. We use this connection in order to discuss how recent results from the Monte Carlo methods literature can guide the development of enhanced sampling methods.
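
    The importance-sampling idea that the paper identifies as the common core can be illustrated with a standard rare-event toy problem (not taken from the paper): estimating a small Gaussian tail probability by drawing from a shifted proposal and reweighting each draw by the ratio of target to proposal densities.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Rare event: P(X > 4) for X ~ N(0,1). Naive Monte Carlo almost never
    # sees the event; an exponentially tilted proposal N(4,1) does.
    a, n = 4.0, 100_000

    naive = rng.standard_normal(n)
    p_naive = np.mean(naive > a)

    proposal = rng.normal(a, 1.0, n)
    # Importance weights: target density / proposal density for N(0,1) vs N(a,1).
    log_w = -0.5 * proposal**2 + 0.5 * (proposal - a)**2
    p_is = np.mean((proposal > a) * np.exp(log_w))

    print(f"naive: {p_naive:.2e}   importance sampling: {p_is:.2e}")
    print("exact value is about 3.17e-05")
    ```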

  20. Data, model, conclusion, doing it again

    NARCIS (Netherlands)

    Molenaar, W.

    1998-01-01

    This paper explores the robustness of conclusions from a statistical model against variations in model choice (rather than variations in random sampling and random assignment to treatments, which are the usual variations covered by inferential statistics). After the problem formulation in section 1,

  1. Random-effects linear modeling and sample size tables for two special crossover designs of average bioequivalence studies: the four-period, two-sequence, two-formulation and six-period, three-sequence, three-formulation designs.

    Science.gov (United States)

    Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael

    2013-12-01

    Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes to them an important mathematical role in the theoretical formulations of personalized medicine, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.

  2. Designing a monitoring program to estimate estuarine survival of anadromous salmon smolts: simulating the effect of sample design on inference

    Science.gov (United States)

    Romer, Jeremy D.; Gitelman, Alix I.; Clements, Shaun; Schreck, Carl B.

    2015-01-01

    A number of researchers have attempted to estimate salmonid smolt survival during outmigration through an estuary. However, it is currently unclear how the design of such studies influences the accuracy and precision of survival estimates. In this simulation study we consider four patterns of smolt survival probability in the estuary, and test the performance of several different sampling strategies for estimating estuarine survival assuming perfect detection. The four survival probability patterns each incorporate a systematic component (constant, linearly increasing, increasing and then decreasing, and two pulses) and a random component to reflect daily fluctuations in survival probability. Generally, spreading sampling effort (tagging) across the season resulted in more accurate estimates of survival. All sampling designs in this simulation tended to under-estimate the variation in the survival estimates because seasonal and daily variation in survival probability are not incorporated in the estimation procedure. This under-estimation results in poorer performance of estimates from larger samples. Thus, tagging more fish may not result in better estimates of survival if important components of variation are not accounted for. The results of our simulation incorporate survival probabilities and run distribution data from previous studies to help illustrate the tradeoffs among sampling strategies in terms of the number of tags needed and distribution of tagging effort. This information will assist researchers in developing improved monitoring programs and encourage discussion regarding issues that should be addressed prior to implementation of any telemetry-based monitoring plan. We believe implementation of an effective estuary survival monitoring program will strengthen the robustness of life cycle models used in recovery plans by providing missing data on where and how much mortality occurs in the riverine and estuarine portions of smolt migration. These data
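
    A toy version of the comparison described above, assuming perfect detection as the study does. The seasonal survival curve (an "increasing then decreasing" systematic component plus daily noise), tag numbers, and schedules below are invented for illustration; the sketch shows the bias that clustered tagging incurs when survival varies through the season.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    days = 120
    # One survival pattern from the study: increasing then decreasing, plus
    # daily random fluctuations (systematic + random components).
    systematic = 0.5 + 0.3 * np.sin(np.pi * np.arange(days) / days)
    daily_p = np.clip(systematic + rng.normal(0, 0.05, days), 0, 1)

    def estimate_survival(tag_days, fish_per_day):
        """Tag fish on given days, observe Bernoulli survival, pool the
        estimate (perfect detection assumed, as in the simulation study)."""
        survived = total = 0
        for d in tag_days:
            survived += rng.binomial(fish_per_day, daily_p[d])
            total += fish_per_day
        return survived / total

    spread = [estimate_survival(np.arange(0, days, 12), 10) for _ in range(2000)]
    clustered = [estimate_survival(np.arange(50, 60), 10) for _ in range(2000)]

    print(f"season-mean survival {daily_p.mean():.3f}")
    print(f"spread tagging:    mean {np.mean(spread):.3f}, sd {np.std(spread):.3f}")
    print(f"clustered tagging: mean {np.mean(clustered):.3f}, sd {np.std(clustered):.3f}")
    ```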

  3. Sampling pig farms at the abattoir in a cross-sectional study - Evaluation of a sampling method.

    Science.gov (United States)

    Birkegård, Anna Camilla; Halasa, Tariq; Toft, Nils

    2017-09-15

    A cross-sectional study design is relatively inexpensive, fast and easy to conduct when compared to other study designs. Careful planning is essential to obtaining a representative sample of the population, and the recommended approach is to use simple random sampling from an exhaustive list of units in the target population. This approach is rarely feasible in practice, and other sampling procedures must often be adopted. For example, when slaughter pigs are the target population, sampling the pigs on the slaughter line may be an alternative to on-site sampling at a list of farms. However, it is difficult to sample a large number of farms from an exact predefined list, due to the logistics and workflow of an abattoir. Therefore, it is necessary to have a systematic sampling procedure and to evaluate the obtained sample with respect to the study objective. We propose a method for 1) planning, 2) conducting, and 3) evaluating the representativeness and reproducibility of a cross-sectional study when simple random sampling is not possible. We used an example of a cross-sectional study with the aim of quantifying the association of antimicrobial resistance and antimicrobial consumption in Danish slaughter pigs. It was not possible to visit farms within the designated timeframe. Therefore, it was decided to use convenience sampling at the abattoir. Our approach was carried out in three steps: 1) planning: using data from meat inspection to plan at which abattoirs and how many farms to sample; 2) conducting: sampling was carried out at five abattoirs; 3) evaluation: representativeness was evaluated by comparing sampled and non-sampled farms, and the reproducibility of the study was assessed through simulated sampling based on meat inspection data from the period when the actual data collection was carried out. In the cross-sectional study, samples were taken from 681 Danish pig farms during five weeks from February to March 2015. The evaluation showed that the sampling

  4. The concentration of heavy metals: zinc, cadmium, lead, copper, mercury, iron and calcium in head hair of a randomly selected sample of Kenyan people

    International Nuclear Information System (INIS)

    Wandiga, S.O.; Jumba, I.O.

    1982-01-01

    An intercomparative analysis of the concentration of heavy metals: zinc, cadmium, lead, copper, mercury, iron and calcium in head hair of a randomly selected sample of Kenyan people using the techniques of atomic absorption spectrophotometry (AAS) and differential pulse anodic stripping voltammetry (DPAS) has been undertaken. The percent relative standard deviation for each sample analysed using either of the techniques shows good sensitivity and correlation between the techniques. The DPAS was found to be slightly more sensitive than the AAS instrument used. The recalculated body burden ratios of Cd to Zn and Pb to Fe reveal no unusual health impairment symptoms and suggest a relatively clean environment in Kenya.(author)

  5. GRD: An SPSS extension command for generating random data

    Directory of Open Access Journals (Sweden)

    Bradley Harding

    2014-09-01

    Full Text Available To master statistics and data analysis tools, it is necessary to understand a number of concepts, many of which are quite abstract. For example, sampling from a theoretical distribution can help individuals explore and understand randomness. Sampling can also be used to build exercises aimed at helping students master statistics. Here, we present GRD (Generator of Random Data), an extension command for SPSS (version 17 and above). With GRD, it is possible to get random data from a given distribution. In its simplest use, GRD will return a set of simulated data from a normal distribution. With subcommands to GRD, it is possible to get data from multiple groups, over multiple repeated measures, and with desired effect sizes. Group sizes can be equal or unequal. With further subcommands, it is possible to sample from any theoretical population (not simply the normal distribution), introduce non-homogeneous variances, fix or randomize subject effects, etc. Finally, GRD's generated data are in a format ready to be analyzed.

  6. Hominid mandibular corpus shape variation and its utility for recognizing species diversity within fossil Homo.

    Science.gov (United States)

    Lague, Michael R; Collard, Nicole J; Richmond, Brian G; Wood, Bernard A

    2008-12-01

    Mandibular corpora are well represented in the hominin fossil record, yet few studies have rigorously assessed the utility of mandibular corpus morphology for species recognition, particularly with respect to the linear dimensions that are most commonly available. In this study, we explored the extent to which commonly preserved mandibular corpus morphology can be used to: (i) discriminate among extant hominid taxa and (ii) support species designations among fossil specimens assigned to the genus Homo. In the first part of the study, discriminant analysis was used to test for significant differences in mandibular corpus shape at different taxonomic levels (genus, species and subspecies) among extant hominid taxa (i.e. Homo, Pan, Gorilla, Pongo). In the second part of the study, we examined shape variation among fossil mandibles assigned to Homo (including H. habilis sensu stricto, H. rudolfensis, early African H. erectus/H. ergaster, late African H. erectus, Asian H. erectus, H. heidelbergensis, H. neanderthalensis and H. sapiens). A novel randomization procedure designed for small samples (and using group 'distinctness values') was used to determine whether shape variation among the fossils is consistent with conventional taxonomy (or alternatively, whether a priori taxonomic groupings are completely random with respect to mandibular morphology). The randomization of 'distinctness values' was also used on the extant samples to assess the ability of the test to recognize known taxa. The discriminant analysis results demonstrated that, even for a relatively modest set of traditional mandibular corpus measurements, we can detect significant differences among extant hominids at the genus and species levels, and, in some cases, also at the subspecies level. Although the randomization of 'distinctness values' test is more conservative than discriminant analysis (based on comparisons with extant specimens), we were able to detect at least four distinct groups among the
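
    The paper's "distinctness values" are specific to its shape data, but the general recipe, comparing an observed group-separation statistic with its distribution under random relabelling of specimens, can be sketched with a simple centroid-distance statistic standing in for distinctness and entirely synthetic measurements:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def separation(X, labels):
        """Between-group separation: mean distance between group centroids
        (a simple stand-in for the paper's group 'distinctness values')."""
        groups = np.unique(labels)
        cents = np.array([X[labels == g].mean(axis=0) for g in groups])
        d = [np.linalg.norm(a - b) for i, a in enumerate(cents) for b in cents[i+1:]]
        return np.mean(d)

    def randomization_test(X, labels, n_perm=5000):
        obs = separation(X, labels)
        perm = np.array([separation(X, rng.permutation(labels)) for _ in range(n_perm)])
        return obs, np.mean(perm >= obs)

    # Hypothetical small-sample shape data: 3 'species', 8 specimens each,
    # 4 corpus shape ratios per specimen.
    X = np.vstack([rng.normal(loc=m, scale=1.0, size=(8, 4)) for m in (0.0, 0.8, 1.6)])
    labels = np.repeat([0, 1, 2], 8)

    obs, p = randomization_test(X, labels)
    print(f"observed separation {obs:.2f}, permutation p-value {p:.4f}")
    ```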

  8. Variation in commercial smoking mixtures containing third-generation synthetic cannabinoids.

    Science.gov (United States)

    Frinculescu, Anca; Lyall, Catherine L; Ramsey, John; Miserez, Bram

    2017-02-01

    Variation in ingredients (qualitative variation) and in quantity of active compounds (quantitative variation) in herbal smoking mixtures containing synthetic cannabinoids has been shown for older products. This can be dangerous to the user, as accurate and reproducible dosing is impossible. In this study, 69 packages containing third-generation cannabinoids of seven brands on the UK market in 2014 were analyzed both qualitatively and quantitatively for variation. When comparing the labels to actual active ingredients identified in the sample, only one brand was shown to be correctly labelled. The other six brands contained less, more, or ingredients other than those listed on the label. Only two brands were inconsistent, containing different active ingredients in different samples. Quantitative variation was assessed both within one package and between several packages. Within-package variation was within a 10% range for five of the seven brands, but two brands showed larger variation, up to 25% (Relative Standard Deviation). Variation between packages was significantly higher, with variation up to 38% and maximum concentration up to 2.7 times higher than the minimum concentration. Both qualitative and quantitative variation are common in smoking mixtures and endanger the user, as it is impossible to estimate the dose or to know the compound consumed when smoking commercial mixtures. Copyright © 2016 John Wiley & Sons, Ltd.

  9. Equilibrium Molecular Thermodynamics from Kirkwood Sampling

    OpenAIRE

    Somani, Sandeep; Okamoto, Yuko; Ballard, Andrew J.; Wales, David J.

    2015-01-01

    We present two methods for barrierless equilibrium sampling of molecular systems based on the recently proposed Kirkwood method (J. Chem. Phys. 2009, 130, 134102). Kirkwood sampling employs low-order correlations among internal coordinates of a molecule for random (or non-Markovian) sampling of the high dimensional conformational space. This is a geometrical sampling method independent of the potential energy surface. The first method is a variant of biased Monte Carlo, wher...

  10. Prospective, randomized, blinded evaluation of donor semen quality provided by seven commercial sperm banks.

    Science.gov (United States)

    Carrell, Douglas T; Cartmill, Deborah; Jones, Kirtly P; Hatasaka, Harry H; Peterson, C Matthew

    2002-07-01

    To evaluate variability in donor semen quality between seven commercial donor sperm banks, within sperm banks, and between intracervical insemination and intrauterine insemination. Prospective, randomized, blind evaluation of commercially available donor semen samples. An academic andrology laboratory. Seventy-five cryopreserved donor semen samples were evaluated. Samples were coded, then blindly evaluated for semen quality. Standard semen quality parameters, including concentration, motility parameters, World Health Organization criteria morphology, and strict criteria morphology. Significant differences were observed between donor semen banks for most semen quality parameters analyzed in intracervical insemination samples. In general, the greatest variability observed between banks was in percentage progressive sperm motility (range, 8.8 +/- 5.8 to 42.4 +/- 5.5) and normal sperm morphology (strict criteria; range, 10.1 +/- 3.3 to 26.6 +/- 4.7). Coefficients of variation within sperm banks were generally high. These data demonstrate the variability of donor semen quality provided by commercial sperm banks, both between banks and within a given bank. No relationship was observed between the size or type of sperm bank and the degree of variability. The data demonstrate the lack of uniformity in the criteria used to screen potential semen donors and emphasize the need for more stringent screening criteria and strict quality control in processing samples.

  11. Sample preparation techniques for the determination of natural 15N/14N variations in amino acids by gas chromatography-combustion-isotope ratio mass spectrometry (GC-C-IRMS).

    Science.gov (United States)

    Hofmann, D; Gehre, M; Jung, K

    2003-09-01

    In order to identify natural nitrogen isotope variations of biologically important amino acids, four derivatization reactions (t-butylmethylsilylation, esterification with subsequent trifluoroacetylation, acetylation and pivaloylation) were tested with standard mixtures of 17 proteinogenic amino acids and plant (moss) samples using GC-C-IRMS. The possible fractionation of the nitrogen isotopes, caused for instance by the formation of multiple reaction products, was investigated. For biological samples, the esterification of the amino acids with subsequent trifluoroacetylation is recommended for nitrogen isotope ratio analysis. A sample preparation technique is described for the isotope ratio mass spectrometric analysis of amino acids from the non-protein (NPN) fraction of terrestrial moss. 14N/15N ratios from moss (Scleropodium spec.) samples from different anthropogenically polluted areas were studied with respect to ecotoxicological bioindication.

  12. Random walks and diffusion on networks

    Science.gov (United States)

    Masuda, Naoki; Porter, Mason A.; Lambiotte, Renaud

    2017-11-01

    Random walks are ubiquitous in the sciences, and they are interesting from both theoretical and practical perspectives. They are one of the most fundamental types of stochastic processes; can be used to model numerous phenomena, including diffusion, interactions, and opinions among humans and animals; and can be used to extract information about important entities or dense groups of entities in a network. Random walks have been studied for many decades on both regular lattices and (especially in the last couple of decades) on networks with a variety of structures. In the present article, we survey the theory and applications of random walks on networks, restricting ourselves to simple cases of single and non-adaptive random walkers. We distinguish three main types of random walks: discrete-time random walks, node-centric continuous-time random walks, and edge-centric continuous-time random walks. We first briefly survey random walks on a line, and then we consider random walks on various types of networks. We extensively discuss applications of random walks, including ranking of nodes (e.g., PageRank), community detection, respondent-driven sampling, and opinion models such as voter models.
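
    One of the applications listed above, PageRank, is simply the stationary distribution of a discrete-time random walk with teleportation. A minimal power-iteration sketch on a toy directed network (the damping factor 0.85 is the conventional choice):

    ```python
    import numpy as np

    # Adjacency matrix of a small directed network with no dangling nodes.
    A = np.array([[0, 1, 1, 0],
                  [0, 0, 1, 0],
                  [1, 0, 0, 1],
                  [0, 0, 1, 0]], dtype=float)

    P = A / A.sum(axis=1, keepdims=True)   # row-stochastic transition matrix

    def pagerank(P, d=0.85, tol=1e-12):
        """Stationary distribution of the teleporting random walk:
        with prob. d follow an out-edge, otherwise jump uniformly."""
        n = P.shape[0]
        r = np.full(n, 1.0 / n)
        while True:
            r_new = (1 - d) / n + d * r @ P
            if np.abs(r_new - r).sum() < tol:
                return r_new
            r = r_new

    print("PageRank:", np.round(pagerank(P), 4))
    ```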

  13. A Bayesian Method for Weighted Sampling

    OpenAIRE

    Lo, Albert Y.

    1993-01-01

    Bayesian statistical inference for sampling from weighted distribution models is studied. Small-sample Bayesian bootstrap clone (BBC) approximations to the posterior distribution are discussed. A second-order property for the BBC in unweighted i.i.d. sampling is given. A consequence is that BBC approximations to a posterior distribution of the mean and to the sampling distribution of the sample average, can be made asymptotically accurate by a proper choice of the random variables that genera...

  14. Estimating random transverse velocities in the fast solar wind from EISCAT Interplanetary Scintillation measurements

    Directory of Open Access Journals (Sweden)

    A. Canals

    2002-09-01

    Full Text Available Interplanetary scintillation measurements can yield estimates of a large number of solar wind parameters, including bulk flow speed, variation in bulk velocity along the observing path through the solar wind and random variation in transverse velocity. This last parameter is of particular interest, as it can indicate the flux of low-frequency Alfvén waves, and the dissipation of these waves has been proposed as an acceleration mechanism for the fast solar wind. Analysis of IPS data is, however, a significantly unresolved problem and a variety of a priori assumptions must be made in interpreting the data. Furthermore, the results may be affected by the physical structure of the radio source and by variations in the solar wind along the scintillation ray path. We have used observations of simple point-like radio sources made with EISCAT between 1994 and 1998 to obtain estimates of random transverse velocity in the fast solar wind. The results obtained with various a priori assumptions made in the analysis are compared, and we hope thereby to be able to provide some indication of the reliability of our estimates of random transverse velocity and the variation of this parameter with distance from the Sun. Key words: Interplanetary physics (MHD waves and turbulence; solar wind plasma; instruments and techniques).

  16. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    The Puchwein method and the CADEX (computer-aided design of experiments) algorithm were used and compared. Both the Puchwein and CADEX methods provide a calibration set equally distributed in space, and both methods require a minimum of prior knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with the lowest RMSEP, followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set ... effectively enhance the cost-effectiveness of NIR spectral analysis by reducing the number of analyzed samples in the calibration set by more than 80%, which substantially reduces the effort of laboratory analyses with no significant loss in prediction accuracy.
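
    Of the two selection methods compared above, CADEX (the Kennard-Stone algorithm) is straightforward to sketch: start from the two most distant samples, then repeatedly add the sample farthest from everything already selected, which spreads the calibration set evenly over the spectral space. The "spectra" below are random placeholders, and Puchwein's method is not reproduced.

    ```python
    import numpy as np

    def kennard_stone(X, n_select):
        """CADEX / Kennard-Stone: grow a calibration set by repeatedly adding
        the sample whose minimal Euclidean distance to the already selected
        samples is largest."""
        X = np.asarray(X, float)
        d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
        i, j = np.unravel_index(np.argmax(d), d.shape)  # two most distant samples
        selected = [i, j]
        while len(selected) < n_select:
            remaining = [k for k in range(len(X)) if k not in selected]
            min_d = d[np.ix_(remaining, selected)].min(axis=1)
            selected.append(remaining[int(np.argmax(min_d))])
        return selected

    # Hypothetical 118-sample x 10-band 'spectra'; keep ~20% for calibration.
    rng = np.random.default_rng(5)
    spectra = rng.normal(size=(118, 10))
    cal_idx = kennard_stone(spectra, 24)
    print(f"selected {len(cal_idx)} calibration samples, first five: {cal_idx[:5]}")
    ```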

  17. Sampling Strategies for Evaluating the Rate of Adventitious Transgene Presence in Non-Genetically Modified Crop Fields.

    Science.gov (United States)

    Makowski, David; Bancal, Rémi; Bensadoun, Arnaud; Monod, Hervé; Messéan, Antoine

    2017-09-01

    According to E.U. regulations, the maximum allowable rate of adventitious transgene presence in non-genetically modified (GM) crops is 0.9%. We compared four sampling methods for the detection of transgenic material in agricultural non-GM maize fields: random sampling, stratified sampling, random sampling + ratio reweighting, random sampling + regression reweighting. Random sampling involves simply sampling maize grains from different locations selected at random from the field concerned. The stratified and reweighting sampling methods make use of an auxiliary variable corresponding to the output of a gene-flow model (a zero-inflated Poisson model) simulating cross-pollination as a function of wind speed, wind direction, and distance to the closest GM maize field. With the stratified sampling method, an auxiliary variable is used to define several strata with contrasting transgene presence rates, and grains are then sampled at random from each stratum. With the two methods involving reweighting, grains are first sampled at random from various locations within the field, and the observations are then reweighted according to the auxiliary variable. Data collected from three maize fields were used to compare the four sampling methods, and the results were used to determine the extent to which transgene presence rate estimation was improved by the use of stratified and reweighting sampling methods. We found that transgene rate estimates were more accurate and that substantially smaller samples could be used with sampling strategies based on an auxiliary variable derived from a gene-flow model. © 2017 Society for Risk Analysis.
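
    A minimal sketch of the "random sampling + ratio reweighting" strategy described above: grains are sampled at random, then the estimate is adjusted by an auxiliary variable playing the role of the gene-flow model output. The field layout, the auxiliary relationship, and all rates below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    # Hypothetical field of 2000 grain-sampling locations. The auxiliary
    # variable x stands in for a gene-flow model output (cross-pollination
    # intensity); the true transgene presence rate y tracks x plus noise.
    M = 2000
    x = rng.gamma(0.5, 0.004, size=M)           # model-predicted rate per location
    y = rng.poisson(100 * x) / 100.0            # observed rate per 100 grains

    n = 50
    s = rng.choice(M, size=n, replace=False)    # simple random sample of locations

    mean_srs = y[s].mean()                      # plain random-sampling estimate
    mean_ratio = y[s].mean() / x[s].mean() * x.mean()  # ratio-reweighted estimate

    print(f"true field rate      {y.mean():.4f}")
    print(f"random sampling      {mean_srs:.4f}")
    print(f"ratio reweighting    {mean_ratio:.4f}")
    ```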

  18. Specific Gravity Variation in a Lower Mississippi Valley Cottonwood Population

    Science.gov (United States)

    R. E. Farmer; J. R. Wilcox

    1966-01-01

    Specific gravity varied from 0.32 to 0.46, averaging 0.38. Most of the variation was associated with individual trees; samples within locations accounted for a smaller, but statistically significant, portion of the variation. Variation between locations was not significant. It was concluded that individual high-density trees should be sought throughout the...

  19. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.

  20. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

    Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media are still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  1. Variation of topical application to skin under good clinical practice (GCP)

    DEFF Research Database (Denmark)

    Vind-Kezunovic, Dina; Serup, Jørgen Vedelskov

    2016-01-01

    INTRODUCTION: Application of topical products by individuals is inherently variable and accurate dosing can be difficult to control. Variation of the dose used under optimal conditions in drug trials is unknown. METHODS: This trial was part of a double-blind, randomized, placebo-controlled good...

  2. Sliding-Mode Control to Compensate PVT Variations in Dual Core Systems

    NARCIS (Netherlands)

    Pourshaghaghi, H.R.; Fatemi, S.H.; Pineda de Gyvez, J.

    2012-01-01

    In this paper, we present a novel robust sliding-mode controller for stabilizing supply voltage and clock frequency of dual core processors determined by dynamic voltage and frequency scaling (DVFS) methods in the presence of systematic and random variations. We show that maximum rejection for

  3. Random sequential adsorption of cubes

    Science.gov (United States)

    Cieśla, Michał; Kubala, Piotr

    2018-01-01

    Random packings built of cubes are studied numerically using a random sequential adsorption algorithm. To compare the obtained results with previous reports, three different models of cube orientation sampling were used. Also, three different cube-cube intersection algorithms were tested to find the most efficient one. The study focuses on the mean saturated packing fraction as well as kinetics of packing growth. Microstructural properties of packings were analyzed using density autocorrelation function.
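
    A much-simplified RSA sketch using axis-aligned squares in two dimensions rather than the paper's cubes and orientation-sampling models; stopping after a long run of rejected insertion attempts is a crude stand-in for reaching saturation.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Random sequential adsorption of axis-aligned unit squares in an L x L
    # box: propose random positions, reject any overlap, stop after many
    # consecutive failures.
    L, side = 20.0, 1.0
    placed = []
    failures, max_failures = 0, 50_000

    while failures < max_failures:
        cand = rng.random(2) * (L - side)
        # Axis-aligned squares overlap iff they overlap in both coordinates.
        if any(abs(cand[0] - p[0]) < side and abs(cand[1] - p[1]) < side
               for p in placed):
            failures += 1
        else:
            placed.append(cand)
            failures = 0

    fraction = len(placed) * side**2 / L**2
    print(f"{len(placed)} squares placed, packing fraction ~ {fraction:.3f}")
    ```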

  4. Seasonal variation in the Dutch bovine raw milk composition

    NARCIS (Netherlands)

    Heck, J.M.L.; Valenberg, van H.J.F.; Dijkstra, J.; Hooijdonk, van A.C.M.

    2009-01-01

    In this study, we determined the detailed composition of and seasonal variation in Dutch dairy milk. Raw milk samples representative of the complete Dutch milk supply were collected weekly from February 2005 until February 2006. Large seasonal variation exists in the concentrations of the main

  5. A sampling approach for predicting the eating quality of apples using visible–near infrared spectroscopy

    DEFF Research Database (Denmark)

    Vega, Mabel V Martínez; Sharifzadeh, Sara; Wulfsohn, Dvoralai

    2013-01-01

    BACKGROUND: Visible–near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) on the soluble solids content (SSC) and acidity prediction, in the wavelength range 400–1100 nm. RESULTS: A total of 196 middle–early season and 219 late season apples (Malus ... training and test sets (‘smooth fractionator’, by date of measurement after harvest and random). Using the ‘smooth fractionator’ sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of ‘Aroma’ apples, with a coefficient of variation CVSSC = 13

  6. Statistics and sampling in transuranic studies

    International Nuclear Information System (INIS)

    Eberhardt, L.L.; Gilbert, R.O.

    1980-01-01

    The existing data on transuranics in the environment exhibit a remarkably high variability from sample to sample (coefficients of variation of 100% or greater). This chapter stresses the necessity of adequate sample size and suggests various ways to increase sampling efficiency. Objectives in sampling are regarded as being of great importance in making decisions as to sampling methodology. Four different classes of sampling methods are described: (1) descriptive sampling, (2) sampling for spatial pattern, (3) analytical sampling, and (4) sampling for modeling. A number of research needs are identified in the various sampling categories along with several problems that appear to be common to two or more such areas

  7. Studies on the Colour Variation in Larvae of Ephestia kuhniella (ZELLER)(Lepidoptera, Phycitidae) : 1. On the Inheritance of colour variation

    OpenAIRE

    Osamu, IMURA; Entomology Laboratory, National Food Research Institute

    1980-01-01

    The crossing experiment and the selection experiment were carried out at 25℃ and 67% r.h. to define the genetic mechanisms of the colour variation in larvae of Ephestia kuhniella. The populations sampled from the Wild-type stock showed a wide range of continuous colour variation from white to deep pink in the 5th instar larvae. The results of the crossing experiment between the white larval strain and the red larval strain selected from the Wild-type stock suggested that the larval colour variation was a qua...

  8. Decomposing Firm-level Sales Variation

    DEFF Research Database (Denmark)

    Munch, Jakob Roland; Nguyen, Daniel Xuyen

    We measure the contribution of firm-specific effects to overall sales variation within a destination and find it remarkably low, and that for the median product it drives 31% of the sales variation. When we remove first-time exports from our sample, the median value increases to 40%, implying that firm-destination-specific effects are most important the first year. Our empirical decomposition is structurally motivated by a heterogeneity model of exporting involving destination-specific, firm-specific, and firm-destination-specific latent effects with incidental truncation. We use a highly detailed dataset with exports by products and destinations for all Danish manufacturing firms. We find the contribution of firm-specific heterogeneity to within-destination sales variation varies greatly across HS6 products ... We conclude that while firm-specific productivity can account for some ...

  9. RATIO ESTIMATORS FOR THE CO-EFFICIENT OF VARIATION IN A FINITE POPULATION

    Directory of Open Access Journals (Sweden)

    Archana V

    2011-04-01

    Full Text Available The coefficient of variation (C.V.) is a relative measure of dispersion and is free from the unit of measurement. Hence it is widely used by scientists in the disciplines of agriculture, biology, economics and environmental science. Although a lot of work has been reported in the past on the estimation of the population C.V. in infinite population models, those estimators are not directly applicable to finite populations. In this paper we propose six new estimators of the population C.V. in a finite population using ratio and product type estimators. The bias and mean square error of these estimators are derived for the simple random sampling design. The performance of the estimators is compared using a real life dataset. The ratio estimator using the information on the population C.V. of the auxiliary variable emerges as the best estimator.
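
    The paper's six estimators are not reproduced here, but the flavor of a ratio-type C.V. estimator under simple random sampling can be sketched: rescale the sample C.V. of the study variable by the known population C.V. of a correlated auxiliary variable. The population, the correlation, and the sample size below are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)

    # Hypothetical finite population with a correlated auxiliary variable x
    # whose population C.V. is assumed known (as in the paper's setting).
    N = 2000
    x = rng.gamma(4.0, 5.0, size=N)
    y = 2.0 * x + rng.normal(0, 8.0, size=N)

    def cv(v):
        return v.std(ddof=1) / v.mean()

    CV_x = cv(x)                                  # known auxiliary population C.V.
    CV_y = cv(y)                                  # target of estimation

    n, reps = 40, 5000
    naive, ratio = [], []
    for _ in range(reps):
        s = rng.choice(N, size=n, replace=False)  # simple random sample (SRSWOR)
        naive.append(cv(y[s]))
        ratio.append(cv(y[s]) / cv(x[s]) * CV_x)  # ratio-type adjustment

    def mse(est):
        return np.mean((np.array(est) - CV_y) ** 2)

    print(f"population C.V. of y: {CV_y:.3f}")
    print(f"naive estimator  mean {np.mean(naive):.3f}  MSE {mse(naive):.5f}")
    print(f"ratio estimator  mean {np.mean(ratio):.3f}  MSE {mse(ratio):.5f}")
    ```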

  10. Random noise attenuation of non-uniformly sampled 3D seismic data along two spatial coordinates using non-equispaced curvelet transform

    Science.gov (United States)

    Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi

    2018-04-01

    The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions, then the 2D non-equispaced fast Fourier transform (NFFT) is introduced in the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients for each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.

  11. Molecular diversity among Turkish oaks ( QUERCUS ) using random ...

    African Journals Online (AJOL)

    Turkey is one of the most important region of the world according to oak species number and variation. In this study, species belonging to evergreen oaks in Turkey were investigated to solve taxonomic problems and to design the limit of taxa by using random amplified polymorphic DNA (RAPD) data. Here, three species of ...

  12. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS]; Diks, Cees G H [UNIV OF AMSTERDAM]

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov Chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
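
    A minimal sketch of the differential-evolution proposal at the core of schemes like DREAM. This is plain DE-MC (the scheme of ter Braak that DREAM extends): each chain proposes a jump along the difference of two other randomly chosen chains, so the proposal scale and orientation adapt to the current population. DREAM's randomized subspace sampling and adaptive tuning are omitted, and the bimodal target is an invented test case.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def log_target(x):
        # Bimodal 2-D target: mixture of two Gaussians, a case where a
        # single fixed proposal mixes poorly.
        a = -0.5 * np.sum((x - 3.0) ** 2)
        b = -0.5 * np.sum((x + 3.0) ** 2)
        return np.logaddexp(a, b)

    n_chains, dim, n_iter = 10, 2, 5000
    X = rng.normal(0, 5, size=(n_chains, dim))   # population of chains
    gamma = 2.38 / np.sqrt(2 * dim)              # standard DE-MC jump scale
    samples = []

    for t in range(n_iter):
        for i in range(n_chains):
            a, b = rng.choice([j for j in range(n_chains) if j != i], 2,
                              replace=False)
            prop = X[i] + gamma * (X[a] - X[b]) + rng.normal(0, 1e-6, dim)
            if np.log(rng.random()) < log_target(prop) - log_target(X[i]):
                X[i] = prop
        samples.append(X.copy())

    chain = np.concatenate(samples[n_iter // 2:])  # discard burn-in
    print(f"fraction of samples near the +3 mode: {np.mean(chain[:, 0] > 0):.2f}")
    ```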

  13. Reconstructing random media

    International Nuclear Information System (INIS)

    Yeong, C.L.; Torquato, S.

    1998-01-01

    We formulate a procedure to reconstruct the structure of general random heterogeneous media from limited morphological information by extending the methodology of Rintoul and Torquato [J. Colloid Interface Sci. 186, 467 (1997)] developed for dispersions. The procedure has the advantages that it is simple to implement and generally applicable to multidimensional, multiphase, and anisotropic structures. Furthermore, an extremely useful feature is that it can incorporate any type and number of correlation functions in order to provide as much morphological information as is necessary for accurate reconstruction. We consider a variety of one- and two-dimensional reconstructions, including periodic and random arrays of rods, various distribution of disks, Debye random media, and a Fontainebleau sandstone sample. We also use our algorithm to construct heterogeneous media from specified hypothetical correlation functions, including an exponentially damped, oscillating function as well as physically unrealizable ones. copyright 1998 The American Physical Society
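
    A zero-temperature, one-dimensional toy of the reconstruction idea: pixels of a binary medium are swapped whenever the swap brings its two-point correlation function closer to a target, in the spirit of the Yeong-Torquato procedure (the full method uses an annealing temperature schedule and can match several correlation functions at once, in higher dimensions).

    ```python
    import numpy as np

    rng = np.random.default_rng(8)

    def s2(phase, n_r=20):
        """Two-point probability S2(r): chance that two points a distance r
        apart both lie in phase 1 (periodic 1-D medium)."""
        return np.array([np.mean(phase * np.roll(phase, r)) for r in range(n_r)])

    n = 400
    target = (rng.random(n) < 0.4).astype(int)   # 'unknown' reference medium
    S2_target = s2(target)

    # Start from a random medium with the same volume fraction, then swap a
    # 1-cell and a 0-cell, keeping the swap only if it lowers the S2 mismatch.
    medium = rng.permutation(target)
    energy = np.sum((s2(medium) - S2_target) ** 2)

    for step in range(20_000):
        i = rng.choice(np.flatnonzero(medium == 1))
        j = rng.choice(np.flatnonzero(medium == 0))
        medium[i], medium[j] = 0, 1
        e_new = np.sum((s2(medium) - S2_target) ** 2)
        if e_new <= energy:
            energy = e_new
        else:
            medium[i], medium[j] = 1, 0          # undo the swap

    print(f"final S2 mismatch: {energy:.2e}")
    ```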

  14. A variational Bayesian multiple particle filtering scheme for large-dimensional systems

    KAUST Repository

    Ait-El-Fquih, Boujemaa

    2016-06-14

    This paper considers the Bayesian filtering problem in high-dimensional nonlinear state-space systems. In such systems, classical particle filters (PFs) are impractical due to the prohibitive number of required particles to obtain reasonable performances. One approach that has been introduced to overcome this problem is the concept of multiple PFs (MPFs), where the state-space is split into low-dimensional subspaces and then a separate PF is applied to each subspace. Remarkable performances of MPF-like filters motivated our investigation here into a new strategy that combines the variational Bayesian approach to split the state-space with random sampling techniques, to derive a new computationally efficient MPF. The propagation of each particle in the prediction step of the resulting filter requires generating only a single particle in contrast with standard MPFs, for which a set of (children) particles is required. We present simulation results to evaluate the behavior of the proposed filter and compare its performances against standard PF and a MPF.

  16. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has sometimes been ignored in metabolomics, partially due to the lack of a convenient means of performing it. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
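
    Two of the simpler normalization strategies from this literature can be sketched directly: total-sum normalization and a median-fold-change (probabilistic-quotient-style) normalization that estimates a per-sample dilution factor. The intensity table below is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)

    # Hypothetical intensity table: 5 samples x 8 metabolites, where samples
    # differ up to ~3-fold in total amount loaded, masking true changes.
    true_profile = rng.gamma(2.0, 100.0, size=(1, 8))
    loading = np.array([[1.0], [1.4], [0.6], [2.1], [0.8]])
    data = true_profile * loading * rng.normal(1, 0.05, size=(5, 8))

    # Total-sum normalization: divide each sample by its summed intensity.
    total_sum = data / data.sum(axis=1, keepdims=True)

    # Median-fold-change normalization: divide each sample by the median of
    # its metabolite-wise quotients against a reference (median) spectrum.
    reference = np.median(data, axis=0)
    fold = np.median(data / reference, axis=1, keepdims=True)
    mfc = data / fold

    print("per-sample totals before:", np.round(data.sum(axis=1)))
    print("after total-sum norm:    ", np.round(total_sum.sum(axis=1), 3))
    print("dilution factors found:  ", np.round(fold.ravel(), 2))
    ```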

  17. Variation in Local-Scale Edge Effects: Mechanisms and landscape Context

    Science.gov (United States)

    Therese M. Donovan; Peter W. Jones; Elizabeth M. Annand; Frank R. Thompson III

    1997-01-01

    Ecological processes near habitat edges often differ from processes away from edges. Yet, the generality of "edge effects" has been hotly debated because results vary tremendously. To understand the factors responsible for this variation, we described nest predation and cowbird distribution patterns in forest edge and forest core habitats on 36 randomly...

  18. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  19. Molecular Darwinism: the contingency of spontaneous genetic variation.

    Science.gov (United States)

    Arber, Werner

    2011-01-01

    The availability of spontaneously occurring genetic variants is an important driving force of biological evolution. Largely thanks to experimental investigations by microbial geneticists, we know today that several different molecular mechanisms contribute to the overall genetic variation. These mechanisms can be assigned to three natural strategies to generate genetic variants: 1) local sequence changes, 2) intragenomic reshuffling of DNA segments, and 3) acquisition of a segment of foreign DNA. In these processes, specific gene products are involved in cooperation with different nongenetic elements. Some genetic variations occur fully at random along the DNA filaments, others rather with a statistical reproducibility, although at many possible sites. We have to be aware that evolution in natural ecosystems is of higher complexity than under most laboratory conditions, not least in view of symbiotic associations and the occurrence of horizontal gene transfer. The encountered contingency of genetic variation can possibly best ensure a long-term persistence of life under steadily changing living conditions.

  20. Heuristic Relative Entropy Principles with Complex Measures: Large-Degree Asymptotics of a Family of Multi-variate Normal Random Polynomials

    Science.gov (United States)

    Kiessling, Michael Karl-Heinz

    2017-10-01

    Let z\\in C, let σ ^2>0 be a variance, and for N\\in N define the integrals E_N^{}(z;σ ) := {1/σ } \\int _R\\ (x^2+z^2) e^{-{1/2σ^2 x^2}}{√{2π }}/dx \\quad if N=1, {1/σ } \\int _{R^N} \\prod \\prod \\limits _{1≤ k1. These are expected values of the polynomials P_N^{}(z)=\\prod _{1≤ n≤ N}(X_n^2+z^2) whose 2 N zeros ± i X_k^{}_{k=1,\\ldots ,N} are generated by N identically distributed multi-variate mean-zero normal random variables {X_k}N_{k=1} with co-variance {Cov}_N^{}(X_k,X_l)=(1+σ ^2-1/N)δ _{k,l}+σ ^2-1/N(1-δ _{k,l}). The E_N^{}(z;σ ) are polynomials in z^2, explicitly computable for arbitrary N, yet a list of the first three E_N^{}(z;σ ) shows that the expressions become unwieldy already for moderate N—unless σ = 1, in which case E_N^{}(z;1) = (1+z^2)^N for all z\\in C and N\\in N. (Incidentally, commonly available computer algebra evaluates the integrals E_N^{}(z;σ ) only for N up to a dozen, due to memory constraints). Asymptotic evaluations are needed for the large- N regime. For general complex z these have traditionally been limited to analytic expansion techniques; several rigorous results are proved for complex z near 0. Yet if z\\in R one can also compute this "infinite-degree" limit with the help of the familiar relative entropy principle for probability measures; a rigorous proof of this fact is supplied. Computer algebra-generated evidence is presented in support of a conjecture that a generalization of the relative entropy principle to signed or complex measures governs the N→ ∞ asymptotics of the regime iz\\in R. Potential generalizations, in particular to point vortex ensembles and the prescribed Gauss curvature problem, and to random matrix ensembles, are emphasized.