Sampling Polya-Gamma random variates: alternate and approximate techniques
Windle, Jesse; Polson, Nicholas G.; Scott, James G.
2014-01-01
Efficiently sampling from the Pólya-Gamma distribution, ${PG}(b,z)$, is an essential element of Pólya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the ${PG}(1,z)$ distribution. We build two new samplers that offer improved performance when sampling from the ${PG}(b,z)$ distribution with $b$ not equal to one.
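For integer $b$, one workable (if inefficient) approach follows directly from the infinite-sum representation of the distribution. The sketch below truncates that sum; all names and parameters are illustrative, and this is not one of the samplers proposed in the paper, which are far more efficient:

```python
import numpy as np

def pg_sample_truncated(b, z, size=1, K=200, seed=None):
    """Approximate PG(b, z) draws from the truncated infinite-sum representation
    PG(b, z) = (1 / (2 pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2 / (4 pi^2)),
    with g_k ~ Gamma(b, 1) independent.  Truncating at K terms slightly
    underestimates the mean."""
    rng = np.random.default_rng(seed)
    k = np.arange(1, K + 1)
    denom = (k - 0.5) ** 2 + z ** 2 / (4.0 * np.pi ** 2)  # shape (K,)
    g = rng.gamma(shape=b, scale=1.0, size=(size, K))     # Gamma(b, 1) draws
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

# Sanity check against the known mean E[PG(b, z)] = b / (2 z) * tanh(z / 2)
x = pg_sample_truncated(b=2.0, z=1.0, size=20000, seed=0)
```

The truncation bias shrinks as K grows; the exact alternating-series sampler of Polson et al. for ${PG}(1,z)$ avoids truncation entirely.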
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conformed to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown from a theoretical perspective to give variable results that inevitably arise from the random distribution of parasite eggs in a well-mixed faecal sample. The Poisson processes that lead to this variability are described, and illustrative examples are given of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and betray ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided.
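The variability the authors describe is easy to reproduce: counts from a well-mixed suspension are Poisson, so the variance of replicate counts matches their mean. A minimal stdlib-only simulation (the multiplication factor of 50 is a hypothetical value, not taken from the paper):

```python
import random

random.seed(1)

def poisson(lam):
    """Stdlib-only Poisson draw (Knuth's multiplication algorithm)."""
    L, k, p = 2.718281828459045 ** (-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Hypothetical setup: a true density of 4 eggs per counted McMaster volume,
# reported as eggs per gram via an assumed multiplication factor of 50.
true_mean, factor = 4.0, 50
counts = [poisson(true_mean) for _ in range(10000)]
epgs = [c * factor for c in counts]

mean_count = sum(counts) / len(counts)
var_count = sum((c - mean_count) ** 2 for c in counts) / len(counts)
# For a Poisson process the variance of replicate counts equals their mean,
# so reported EPGs scatter widely even with perfect mixing and counting.
```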
Some regional variations in dietary patterns in a random sample of British adults.
Whichelow, M J; Erzinclioglu, S W; Cox, B D
1991-05-01
Comparison was made of the reported frequency of consumption or choice of 30 food items by 8860 adults in the 11 standard regions of Great Britain, with the use of log-linear analysis to allow for the age, sex, social class and smoking habit variations between the regions. The South-East was taken as the base region against which the others were compared. The numbers of food items for which there were significant differences from the South-East were: Scotland 23, North 25, North-West and Yorkshire/Humberside 20, Wales 19, West Midlands 15, East Midlands 10, East Anglia 8, South-West 7 and Greater London 9. Overall the findings confirm a North/South trend in relation to eating habits, even when demographic and smoking-habit variations are taken into account, with the frequent consumption of many fruit and vegetable products being much less common, and of several high-fat foods (chips, processed meats and fried food) more common, in Scotland, Wales and the northern part of England. In most regions there was a significantly lower frequency of consumption of fresh fruit, fruit juice, 'brown' bread, pasta/rice, poultry, skimmed/semi-skimmed milk, light desserts and nuts, and a higher consumption of red meat, fish and fried food than in the South-East.
Independent random sampling methods
Martino, Luca; Míguez, Joaquín
2018-01-01
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
Systematic versus random sampling in stereological studies.
West, Mark J
2012-12-01
The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach is to pick a card at random from within a fixed number of cards at the top of the deck and then take further cards at equal intervals through the rest of the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
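The card analogy translates directly into code. A sketch of the two schemes (names and the 52-card deck are illustrative):

```python
import random

def independent_sample(n_items, k, rng):
    """'Pick any card at all': each item chosen without reference to the others."""
    return sorted(rng.sample(range(n_items), k))

def systematic_sample(n_items, k, rng):
    """Random start within the first interval, then every (n_items // k)-th item."""
    step = n_items // k
    start = rng.randrange(step)
    return [start + i * step for i in range(k)]

rng = random.Random(42)
ind = independent_sample(52, 4, rng)   # independent random sample of a 52-card deck
sys_ = systematic_sample(52, 4, rng)   # systematic sample: equal 13-card intervals
```

Both samples are random (the systematic one through its random start), but the systematic sample spreads evenly through the deck, which is what makes it more efficient on non-randomly organized structures.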
A Bayesian Justification for Random Sampling in Sample Survey
Directory of Open Access Journals (Sweden)
Glen Meeden
2012-07-01
In the usual Bayesian approach to survey sampling the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, how to formally integrate simple random sampling into the Bayesian paradigm is not clear. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.
Systematic random sampling of the comet assay.
McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan
2009-07-01
The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away leaving only the DNA trapped in an agarose cavity which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current methods deployed in such an acquisition are expected to be both objective and random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. In order to achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used either manually or in automated form. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled variation experiment showed that the SRS technique attained a lower variability than that of the traditional approach. The analysis of a single user with repetition experiment showed greater individual variances while not being detrimental to overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
k-Means: Random Sampling Procedure
Indian Academy of Sciences (India)
k-Means: Random Sampling Procedure. The optimal 1-mean is approximated by the centroid of a random sample (Inaba et al.): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
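The quoted result can be checked empirically: the centroid of a small random sample is a near-optimal 1-means center for the whole point set. A sketch with a fixed sample size standing in for O(1/ε):

```python
import random

random.seed(0)
P = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5000)]  # point set P

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def cost(points, c):
    """1-means cost: sum of squared distances to a single center c."""
    return sum((x - c[0]) ** 2 + (y - c[1]) ** 2 for x, y in points)

c_full = centroid(P)       # the optimal 1-means center of P is its centroid
S = random.sample(P, 50)   # small random sample (size plays the role of O(1/eps))
c_samp = centroid(S)       # centroid of the sample

# cost ratio >= 1 by optimality of c_full; close to 1 with high probability
ratio = cost(P, c_samp) / cost(P, c_full)
```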
Variational data assimilation using targetted random walks
Cotter, S. L.
2011-02-15
The variational approach to data assimilation is a widely used methodology for both online prediction and reanalysis. In either of these scenarios, it can be important to assess uncertainties in the assimilated state. Ideally, it is desirable to have complete information concerning the Bayesian posterior distribution for the unknown state given data. We show that complete computational probing of this posterior distribution is now within reach in the offline situation. We introduce a Markov chain Monte Carlo (MCMC) method which enables us to directly sample from the Bayesian posterior distribution on the unknown functions of interest given observations. Since these methods are currently too computationally expensive to use in an online filtering scenario, we frame this in the context of offline reanalysis. Using a simple random-walk-type MCMC method, we are able to characterize the posterior distribution using only evaluations of the forward model of the problem, and of the model and data mismatch. No adjoint model is required for the method we use; however, more sophisticated MCMC methods are available which exploit derivative information. For simplicity of exposition, we consider the problem of assimilating data, either Eulerian or Lagrangian, into a low Reynolds number flow in a two-dimensional periodic geometry. We show that in many cases it is possible to recover the initial condition and model error (which we describe as unknown forcing to the model) from data, and that with increasing amounts of informative data, the uncertainty in our estimates decreases.
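A toy version of such a random-walk MCMC method, on a one-dimensional Gaussian posterior rather than a function space, shows the essential mechanism: only evaluations of the (log-)posterior are needed, with no adjoint or derivative information. All choices below (target, step size, chain length) are illustrative:

```python
import math
import random

random.seed(7)

def log_post(u):
    """Toy log-posterior: N(2, 0.25), standing in for the posterior over the
    unknown state.  Only point evaluations are needed, no adjoint model."""
    return -((u - 2.0) ** 2) / (2.0 * 0.25)

u, step, chain = 0.0, 0.5, []
for _ in range(20000):
    v = u + step * random.gauss(0, 1)                     # random walk proposal
    if math.log(random.random()) < log_post(v) - log_post(u):
        u = v                                             # Metropolis accept
    chain.append(u)

post_mean = sum(chain[5000:]) / len(chain[5000:])         # discard burn-in
```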
Sampling problems for randomly broken sticks
Energy Technology Data Exchange (ETDEWEB)
Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)
2003-04-11
Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending sizes, let S_{m:n} be the size of the mth smallest fragment. Assume that some observer is sampling such populations as follows: drop at random k points (the sample size) onto this stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) what is the sample size if the sampling is carried out until the first visit of the smallest fragment (size S_{1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments being discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights (that is, the sequence of their weights in their order of appearance) is needed and studied.
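Problem (1) is straightforward to explore by simulation: draw a random partition, then sample fragments with probability proportional to their sizes until the smallest is first hit. A stdlib-only sketch (the uniform-weight model is one convenient choice, not necessarily the paper's):

```python
import random

def broken_stick(n, rng):
    """Random partition of [0, 1]: n i.i.d. uniform weights, normalised."""
    w = [rng.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

def draws_until_smallest(weights, rng):
    """Drop points uniformly on the stick (i.e. visit fragment i with
    probability weights[i]) until the smallest fragment is first visited."""
    smallest = min(range(len(weights)), key=lambda i: weights[i])
    draws = 0
    while True:
        draws += 1
        if rng.choices(range(len(weights)), weights=weights)[0] == smallest:
            return draws

rng = random.Random(5)
w = broken_stick(10, rng)
k = draws_until_smallest(w, rng)   # one realisation of the problem-(1) sample size
```

Repeating the experiment many times would give the empirical distribution of this sample size; problem (2) is the same simulation run until every fragment index has appeared.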
Padilla, Alberto
2009-01-01
Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random-systematic sample, an unbiased estimator o...
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose a … algorithm which generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
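As a minimal illustration of what a constrained sampling pattern is, the sketch below draws sampling instants on a time grid subject to a minimum-spacing constraint, one typical ADC-style restriction. The rejection approach and all parameters are illustrative, not the generators evaluated in the paper:

```python
import random

def constrained_pattern(n_grid, k, t_min, rng):
    """Draw k sampling instants on an n_grid-slot time grid such that any two
    instants are at least t_min slots apart.  Crude rejection sampling: draw
    unconstrained patterns until one satisfies the spacing constraint."""
    while True:
        pat = sorted(rng.sample(range(n_grid), k))
        if all(b - a >= t_min for a, b in zip(pat, pat[1:])):
            return pat

rng = random.Random(11)
pattern = constrained_pattern(n_grid=1000, k=20, t_min=10, rng=rng)
gaps = [b - a for a, b in zip(pattern, pattern[1:])]
```

Evaluating such a generator statistically, as the paper does, would mean examining the distribution of the resulting gaps and sampling instants over many generated patterns.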
Non-compact random generalized games and random quasi-variational inequalities
Yuan, Xian-Zhi
1994-01-01
In this paper, existence theorems of random maximal elements, random equilibria for the random one-person game and random generalized game with a countable number of players are given as applications of random fixed point theorems. By employing existence theorems of random generalized games, we deduce the existence of solutions for non-compact random quasi-variational inequalities. These in turn are used to establish several existence theorems of noncompact generalized random ...
Acceptance sampling using judgmental and randomly selected samples
Energy Technology Data Exchange (ETDEWEB)
Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl
2010-09-01
We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
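A much-simplified single-group version of this inference can be sketched with a Beta prior on the unacceptable fraction: observing n clean random samples yields a Beta posterior, from which the probability that (say) 95% of the population is acceptable follows. This is only the kind of "simpler Bayesian formulation" the paper subsumes, not its two-group judgmental model:

```python
import random

random.seed(0)

def prob_population_acceptable(n, p_star=0.95, a=1.0, b=1.0, draws=50000):
    """P(defect fraction theta < 1 - p_star | n random samples, all acceptable).
    Prior theta ~ Beta(a, b); likelihood (1 - theta)^n; posterior Beta(a, b + n).
    Estimated by Monte Carlo using stdlib Gamma draws (Beta = Gamma ratio)."""
    hits = 0
    for _ in range(draws):
        x = random.gammavariate(a, 1.0)
        y = random.gammavariate(b + n, 1.0)
        theta = x / (x + y)            # one Beta(a, b + n) posterior draw
        hits += theta < 1.0 - p_star
    return hits / draws

p10 = prob_population_acceptable(10)    # weak assurance after 10 clean samples
p100 = prob_population_acceptable(100)  # much stronger after 100
```

With a uniform prior the answer is available in closed form, 1 - (1 - p_star)-free: P = 1 - p_star**(n + 1); the Monte Carlo route is shown because it generalizes to the structured priors the paper considers.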
Variational Infinite Hidden Conditional Random Fields
Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja; Ghahramani, Zoubin
2015-01-01
Hidden conditional random fields (HCRFs) are discriminative latent variable models which have been shown to successfully learn the hidden structure of a given classification problem. An infinite hidden conditional random field is a hidden conditional random field with a countably infinite number of
A random sampling procedure for anisotropic distributions
International Nuclear Information System (INIS)
Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.
1975-01-01
A procedure is described for sampling the scattering angle of neutrons as per specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb and these elements are of interest in dosimetry and shielding. (author)
BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal
International Nuclear Information System (INIS)
Sagar, B.
1989-01-01
1 - Description of program or function: Random samples for different distribution types are generated. Distribution types as required for performance assessment modeling of geologic nuclear waste disposal are provided. These are: - Uniform, - Log-uniform (base 10 or natural), - Normal, - Lognormal (base 10 or natural), - Exponential, - Bernoulli, - User defined continuous distribution. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions, whose usual definition has an infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
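The program's method of solution, a linear congruential generator for uniforms plus transformations to the other distributions, can be sketched as follows. The Park-Miller parameters and the exponential target are illustrative choices, not necessarily those of BWIP-RANDOM-SAMPLING:

```python
import math

def lcg(seed, a=16807, m=2 ** 31 - 1):
    """Park-Miller linear congruential generator yielding U(0, 1) variates."""
    x = seed
    while True:
        x = (a * x) % m
        yield x / m

def exp_inverse_cdf(u, rate):
    """Transform a uniform variate to Exponential(rate) by inverting the CDF."""
    return -math.log(1.0 - u) / rate

gen = lcg(seed=12345)
samples = [exp_inverse_cdf(next(gen), rate=2.0) for _ in range(50000)]
mean = sum(samples) / len(samples)   # should approach 1 / rate = 0.5
```

The same pattern (uniform source plus a per-distribution transform) covers the normal, lognormal and Bernoulli cases listed above; truncation amounts to restricting the uniform input to the corresponding CDF interval.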
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
Variational random phase approximation for the anharmonic oscillator
International Nuclear Information System (INIS)
Dukelsky, J.; Schuck, P.
1990-04-01
The recently derived Variational Random Phase Approximation is examined using the anharmonic oscillator model. Special attention is paid to the ground state RPA wave function and the convergence of the proposed truncation scheme to obtain the diagonal density matrix. Comparison with the standard Coupled Cluster method is made
Detection of somaclonal variation by random amplified polymorphic ...
African Journals Online (AJOL)
Detection of somaclonal variation by random amplified polymorphic DNA analysis during micropropagation of Phalaenopsis bellina (Rchb.f.) Christenson. ... Among the primers used, P 16 produced the highest number of bands (29), while primer OPU 10 produced the lowest number (15). The range of similarity coefficient ...
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, C. M.; Buchhave, P.; K. George, W.
… algorithms: sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude that consists of well-defined frequency content, which makes bias easy to spot. The idea...
Variational Approach to Enhanced Sampling and Free Energy Calculations
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
Biro, Peter A
2013-02-01
Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and this may affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies, choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influences of the seed node choice and path overlap, respectively. The three random walk samplings are applied in the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and the weighted USAir networks, respectively. Then, the major properties of sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to the sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some salient characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
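A minimal version of the no-retracing (NR) strategy is easy to state: at each step, exclude the node just left from the candidate neighbours. The sketch below runs it on a small synthetic graph standing in for the ER/BA/WS networks; all parameters are illustrative:

```python
import random

def nr_random_walk(adj, seed_node, steps, rng):
    """No-retracing (NR) random walk: never step straight back along the edge
    just traversed, except when stuck at a dead end."""
    visited, prev, cur = {seed_node}, None, seed_node
    for _ in range(steps):
        choices = [v for v in adj[cur] if v != prev]
        if not choices:          # degree-1 node: retracing is the only option
            choices = adj[cur]
        prev, cur = cur, rng.choice(choices)
        visited.add(cur)
    return visited

# Small circulant graph standing in for the ER/BA/WS test networks
n = 30
adj = {i: [(i - 1) % n, (i + 1) % n, (i + 7) % n, (i - 7) % n] for i in range(n)}
subnet = nr_random_walk(adj, seed_node=0, steps=200, rng=random.Random(9))
```

Comparing the degree distribution and clustering coefficient of `subnet` against the full graph, over many runs, is exactly the kind of evaluation the paper performs.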
Sampling large random knots in a confined space
International Nuclear Information System (INIS)
Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M
2007-01-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way in sampling large (prime) knots, which can be useful in various applications.
SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS
Sampath Sundaram; Ammani Sivaraman
2010-01-01
In this paper an attempt is made to extend linear systematic sampling using multiple random starts, due to Gautschi (1957), to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic samplin...
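The basic idea of multiple random starts can be sketched in a few lines: each random start yields one systematic subsample, and the spread across subsample means supports variance estimation, which a single-start systematic sample cannot provide. All parameters below are illustrative:

```python
import random

def multi_start_systematic(frame, k_starts, interval, rng):
    """Linear systematic sampling with k_starts independent random starts
    (Gautschi-style): each start yields one systematic subsample, and the
    spread across subsample means supports variance estimation."""
    starts = rng.sample(range(interval), k_starts)
    return [[frame[i] for i in range(s, len(frame), interval)] for s in starts]

rng = random.Random(21)
frame = list(range(1, 101))                  # population of 100 units
subs = multi_start_systematic(frame, k_starts=4, interval=20, rng=rng)
means = [sum(s) / len(s) for s in subs]      # one mean per random start
estimate = sum(means) / len(means)           # combined estimate of the pop. mean
```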
Directory of Open Access Journals (Sweden)
Morgan P. Kain
2015-09-01
In ecology and evolution, generalized linear mixed models (GLMMs) are increasingly used to test for differences in variation by treatment at multiple hierarchical levels. Yet, the specific sampling schemes that optimize the power of an experiment to detect differences in random effects by treatment/group remain unknown. In this paper we develop a blueprint for conducting power analyses for GLMMs focusing on detecting differences in variance by treatment. We present parameterization and power analyses for random-intercepts and random-slopes GLMMs because of their generality as focal parameters for most applications and because of their immediate applicability to emerging questions in the field of behavioral ecology. We focus on the extreme case of hierarchically structured binomial data, though the framework presented here generalizes easily to any error distribution model. First, we determine the optimal ratio of individuals to repeated measures within individuals that maximizes power to detect differences by treatment in among-individual variation in intercept, among-individual variation in slope, and within-individual variation in intercept. Second, we explore how power to detect differences in target variance parameters is affected by total variation. Our results indicate heterogeneity in power across ratios of individuals to repeated measures, with an optimal ratio determined by both the target variance parameter and total sample size. Additionally, power to detect each variance parameter was low overall (in most cases >1,000 total observations per treatment were needed to achieve 80% power) and decreased with increasing variance in non-target random effects. With growing interest in variance as the parameter of inquiry, these power analyses provide a crucial component for designing experiments focused on detecting differences in variance. We hope to inspire novel experimental designs in ecology and evolution investigating the causes and
Random matrix approach to the dynamics of stock inventory variations
International Nuclear Information System (INIS)
Zhou Weixing; Mu Guohua; Kertész, János
2012-01-01
It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors that contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of cross-correlation coefficient Cij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ1 and λ2) of the correlation matrix cannot be explained by random matrix theory and the projections of investors' inventory variations on the first eigenvector u(λ1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients CVR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets.
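The benchmark against random matrix theory used here can be illustrated directly: for purely random data, all eigenvalues of the correlation matrix fall below the Marchenko-Pastur upper edge, so eigenvalues above it (such as λ1 and λ2 in the paper) signal genuine structure. A sketch with synthetic, uncorrelated "inventory variations" (all sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

N, T = 50, 1000                         # 50 "investors", 1000 observations each
returns = rng.standard_normal((T, N))   # uncorrelated synthetic variations
C = np.corrcoef(returns, rowvar=False)  # N x N correlation matrix
eigvals = np.linalg.eigvalsh(C)

# Marchenko-Pastur upper edge for a purely random correlation matrix:
# eigenvalues above lambda_plus cannot be explained by noise alone.
q = N / T
lambda_plus = (1 + np.sqrt(q)) ** 2
```

Running the same computation on real inventory-variation data and finding eigenvalues well above `lambda_plus` is what justifies interpreting the leading eigenvectors as genuine trading-strategy groups.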
A Variational Approach to Enhanced Sampling and Free Energy Calculations
Parrinello, Michele
2015-03-01
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling which is based on the addition of an external bias which helps overcoming the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
Seismic random noise attenuation using shearlet and total generalized variation
International Nuclear Information System (INIS)
Kong, Dehui; Peng, Zhenming
2015-01-01
Seismic denoising from a corrupted observation is an important part of seismic data processing, as it improves the signal-to-noise ratio (SNR) and resolution. In this paper, we present an effective denoising method to attenuate seismic random noise. The method takes advantage of shearlet and total generalized variation (TGV) regularization. The different regularity levels of TGV improve the quality of the final result by suppressing Gibbs artifacts caused by the shearlet. The problem is formulated as mixed constraints in a convex optimization, and a Bregman algorithm is proposed to solve the model. Extensive experiments based on one synthetic dataset and two post-stack field datasets are used to compare performance. The results demonstrate that the proposed method is more effective and preserves structure better. (paper)
Random sampling of evolution time space and Fourier transform processing
International Nuclear Information System (INIS)
Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor
2006-01-01
Application of the Fourier Transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C,15N-labeled ubiquitin sample. The obtained results revealed the general applicability of the proposed method and a significant improvement of resolution in comparison with conventional spectra recorded in the same time.
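The core idea of the entry above is that a Fourier sum can be evaluated directly at frequency pairs from randomly sampled evolution times, with no regular grid. A minimal sketch of that nonuniform 2D Fourier transform (the signal model, frequencies, and sample counts here are illustrative, not taken from the paper):

```python
import cmath
import random

def nudft2(samples, f1, f2):
    """Direct 2D Fourier sum over randomly sampled evolution times (t1, t2):
    S(f1, f2) = (1/N) * sum_k s_k * exp(-2*pi*i*(f1*t1_k + f2*t2_k))."""
    total = sum(s * cmath.exp(-2j * cmath.pi * (f1 * t1 + f2 * t2))
                for t1, t2, s in samples)
    return total / len(samples)

rng = random.Random(5)
nu1, nu2 = 40.0, 25.0          # hypothetical resonance frequencies (Hz)
samples = []
for _ in range(3000):          # random (t1, t2) points instead of a regular grid
    t1, t2 = rng.random() * 0.05, rng.random() * 0.05
    samples.append((t1, t2, cmath.exp(2j * cmath.pi * (nu1 * t1 + nu2 * t2))))

peak = abs(nudft2(samples, nu1, nu2))     # large response at the true pair
off = abs(nudft2(samples, 120.0, 90.0))   # small, noise-like response elsewhere
```

The random phases at mismatched frequencies average out, which is why random time-domain sampling still yields a usable spectrum.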
Computing variational bounds for flow through random aggregates of Spheres
International Nuclear Information System (INIS)
Berryman, J.G.
1983-01-01
Known formulas for variational bounds on Darcy's constant for slow flow through porous media depend on two-point and three-point spatial correlation functions. Certain bounds due to Prager and Doi, depending only on two-point correlation functions, have been calculated for the first time for random aggregates of spheres with packing fractions η up to η = 0.64. Three radial distribution functions for hard spheres were tested for η up to 0.49: (1) the uniform distribution or "well-stirred approximation," (2) the Percus-Yevick approximation, and (3) the semi-empirical distribution of Verlet and Weis. The empirical radial distribution functions of Bennett and Finney were used for packing fractions near the random-close-packing limit (η_RCP ≈ 0.64). An accurate multidimensional Monte Carlo integration method (VEGAS) developed by Lepage was used to compute the required two-point correlation functions. The results show that Doi's bounds are preferred for η > 0.10, while Prager's bounds are preferred for η < 0.10. The "upper bounds" computed using the well-stirred approximation actually become negative (which is physically impossible) as η increases, indicating the very limited value of this approximation. The other two choices of radial distribution function give reasonable results for η up to 0.49. However, these bounds do not decrease with η as fast as expected for large η. It is concluded that variational bounds dependent on three-point correlation functions are required to obtain more accurate bounds on Darcy's constant for large η.
Adaptive importance sampling of random walks on continuous state spaces
International Nuclear Information System (INIS)
Baggerly, K.; Cox, D.; Picard, R.
1998-01-01
The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material.
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile/characteristics summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-squared tests and unpaired t tests were performed to determine whether any of the differences [descriptively greater than 7% or 7 yr] were also statistically significant. The strengths of the agreements between the provincial distributions were quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged to be unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yielded acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
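The two procedures compared in this entry are easy to make concrete. A minimal sketch, using a hypothetical membership frame and Python's `random` module in place of the actual database, that draws both kinds of sample and compares their estimates of a population mean:

```python
import random

def simple_random_sample(frame, n, seed=0):
    """SRS: draw n members without replacement, every subset equally likely."""
    return random.Random(seed).sample(frame, n)

def systematic_sample(frame, n, seed=0):
    """SS: take every k-th member after a random start, with k = len(frame)//n."""
    k = len(frame) // n
    start = random.Random(seed).randrange(k)
    return frame[start::k][:n]

# Hypothetical frame: (member id, years in practice), listed without order bias.
rng = random.Random(42)
frame = [(i, rng.randrange(40)) for i in range(5000)]

srs = simple_random_sample(frame, 250)
ss = systematic_sample(frame, 250)

def mean_years(sample):
    return sum(y for _, y in sample) / len(sample)

pop_mean = mean_years(frame)   # both estimates should land near this value
```

With no periodic structure in the list order, both estimators land close to the population mean, mirroring the paper's finding that SS is an acceptable stand-in for SRS on alphabetically ordered databases.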
Hierarchical Protein Free Energy Landscapes from Variationally Enhanced Sampling.
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-12-13
In recent work, we demonstrated that it is possible to obtain approximate representations of high-dimensional free energy surfaces with variationally enhanced sampling (Shaffer, P.; Valsson, O.; Parrinello, M. Proc. Natl. Acad. Sci. 2016, 113, 17). The high-dimensional spaces considered in that work were the set of backbone dihedral angles of a small peptide, Chignolin, and the high-dimensional free energy surface was approximated as the sum of many two-dimensional terms plus an additional term which represents an initial estimate. In this paper, we build on that work and demonstrate that we can calculate high-dimensional free energy surfaces of very high accuracy by incorporating additional terms. The additional terms apply to a set of collective variables which are more coarse than the base set of collective variables. In this way, it is possible to build hierarchical free energy surfaces, which are composed of terms that act on different length scales. We test the accuracy of these free energy landscapes for the proteins Chignolin and Trp-cage by constructing simple coarse-grained models and comparing results from the coarse-grained model to results from atomistic simulations. The approach described in this paper is ideally suited for problems in which the free energy surface has important features on different length scales or in which there is some natural hierarchy.
RandomSpot: A web-based tool for systematic random sampling of virtual slides.
Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E
2015-01-01
This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consist only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling, or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGMs' expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
International Nuclear Information System (INIS)
Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.
1993-01-01
Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither the magnitude nor the direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimation are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
Variation in rank abundance replicate samples and impact of clustering
Neuteboom, J.H.; Struik, P.C.
2005-01-01
Calculating a single-sample rank abundance curve by using the negative-binomial distribution provides a way to investigate the variability within rank abundance replicate samples and yields a measure of the degree of heterogeneity of the sampled community. The calculation of the single-sample rank...
Aspects of Students' Reasoning about Variation in Empirical Sampling Distributions
Noll, Jennifer; Shaughnessy, J. Michael
2012-01-01
Sampling tasks and sampling distributions provide a fertile realm for investigating students' conceptions of variability. A project-designed teaching episode on samples and sampling distributions was team-taught in 6 research classrooms (2 middle school and 4 high school) by the investigators and regular classroom mathematics teachers. Data…
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Directory of Open Access Journals (Sweden)
Xiaomeng Zhang
2006-01-01
As a popular simulation of photon propagation in turbid media, the main drawback of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple scattering steps into a single-step process through random table querying, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast alternative to the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
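The table-querying idea in this entry can be illustrated in miniature: precompute a lookup table of the inverse CDF of the exponential free-path distribution, then draw each step length by indexing the table with a uniform variate instead of evaluating a logarithm per step. This is only a sketch of the table-lookup principle (the attenuation coefficient, table size, and single-step reduction used by the actual TBRS algorithm are assumptions here):

```python
import math
import random

def build_table(mu_t, size=4096):
    """Tabulate the inverse CDF of the exponential free-path distribution,
    s = -ln(1 - u) / mu_t, on a uniform midpoint grid of u in [0, 1)."""
    return [-math.log(1.0 - (i + 0.5) / size) / mu_t for i in range(size)]

def table_sample(table, rng):
    """One uniform draw plus one table query replaces the per-step log call."""
    return table[int(rng.random() * len(table))]

rng = random.Random(1)
table = build_table(mu_t=10.0)   # hypothetical mean free path 1/mu_t = 0.1
steps = [table_sample(table, rng) for _ in range(100000)]
mean_step = sum(steps) / len(steps)   # should be close to 0.1
```

The table is built once, so each photon step costs an index lookup rather than a transcendental-function evaluation; this is the flavor of speed-up the abstract describes.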
Particulate organic nitrates: Sampling and night/day variation
DEFF Research Database (Denmark)
Nielsen, T.; Platz, J.; Granby, K.
1998-01-01
Atmospheric day and night concentrations of particulate organic nitrates (PON) and several other air pollutants were measured in the summer of 1995 over an open-land area in Denmark. The sampling of PON was evaluated by comparing 24 h samples with two sets of 12 h samples. These results indicate... that the observed low contribution of PON to NO, is real and not the result of an extensive loss during the sampling. Empirical relationships between the vapour pressure and chemical formula of organic compounds were established in order to evaluate the gas/particle distribution of organic nitrates. A positive...
Variations among animals when estimating the undegradable fraction of fiber in forage samples
Directory of Open Access Journals (Sweden)
Cláudia Batista Sampaio
2014-10-01
The objective of this study was to assess the variability among animals regarding the critical time (ct) required to estimate the undegradable fraction of fiber using an in situ incubation procedure. Five rumen-fistulated Nellore steers were used to estimate the degradation profile of fiber. Animals were fed a standard diet with an 80:20 forage:concentrate ratio. Sugarcane, signal grass hay, corn silage and fresh elephant grass samples were assessed. Samples were put in F57 Ankom® bags and incubated in the rumens of the animals for 0, 6, 12, 18, 24, 48, 72, 96, 120, 144, 168, 192, 216, 240 and 312 hours. The degradation profiles were interpreted using a mixed non-linear model in which a random effect was associated with the degradation rate. For sugarcane, signal grass hay and corn silage, there were no significant variations among animals regarding the fractional degradation rate of neutral and acid detergent fiber; consequently, the ct required to estimate the undegradable fiber fraction did not vary among animals for those forages. However, significant variability among animals was found for the fresh elephant grass. The results suggest that the variability among animals regarding the degradation rate of fibrous components can be significant.
Directory of Open Access Journals (Sweden)
Martin M Gossner
There is a great demand for standardising biodiversity assessments in order to allow optimal comparison across research groups. For invertebrates, pitfall or flight-interception traps are commonly used, but the sampling solution differs widely between studies, which could influence the communities collected and affect sample processing (morphological or genetic). We assessed arthropod communities with flight-interception traps using three commonly used sampling solutions across two forest types and two vertical strata. We first considered the effect of sampling solution and its interaction with forest type, vertical stratum, and position of the sampling jar at the trap on sample condition and community composition. We found that samples collected in copper sulphate were more mouldy and fragmented relative to other solutions, which might impair morphological identification, but condition depended on forest type, trap type and the position of the jar. Community composition, based on order-level identification, did not differ across sampling solutions and only varied with forest type and vertical stratum. Species richness and species-level community composition, however, differed greatly among sampling solutions. Renner solution was highly attractive for beetles and repellent for true bugs. Secondly, we tested whether sampling solution affects subsequent molecular analyses and found that DNA barcoding success was species-specific. Samples from copper sulphate produced the fewest successful DNA sequences for genetic identification, and since DNA yield or quality was not particularly reduced in these samples, additional interactions between the solution and DNA must also be occurring. Our results show that the choice of sampling solution should be an important consideration in biodiversity studies. Due to the potential bias towards or against certain species by ethanol-containing sampling solutions, we suggest ethylene glycol as a suitable sampling solution when...
LOD score exclusion analyses for candidate QTLs using random population samples.
Deng, Hong-Wen
2003-11-01
While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach for exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed and if an LOD score is < or = -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of Vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling
Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.
2013-01-01
Gini index, Bonferroni index, and Absolute Lorenz index are some popular indices of inequality showing different features of inequality measurement. In general simple random sampling procedure is commonly used to estimate the inequality indices and their related inference. The key condition that the samples must be drawn via simple random sampling procedure though makes calculations much simpler but this assumption is often violated in practice as the data does not always yield simple random ...
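The inequality indices named in this entry have simple plug-in estimators under simple random sampling. A hedged sketch of the Gini index alone (the sorted-sample formula is the standard mean-absolute-difference form; the toy values are illustrative, not from the paper):

```python
def gini(sample):
    """Plug-in Gini estimate from a simple random sample:
    G = sum_i (2i - n + 1) * x_(i) / (n^2 * mean), with x_(i) sorted ascending
    and i running from 0 to n-1."""
    xs = sorted(sample)
    n = len(xs)
    mean = sum(xs) / n
    return sum((2 * i - n + 1) * x for i, x in enumerate(xs)) / (n * n * mean)

# Perfect equality gives 0; one unit holding everything approaches 1.
g_equal = gini([10.0] * 100)             # every unit identical
g_unequal = gini([0.0] * 99 + [100.0])   # all mass in one unit: (n-1)/n
```

Ranked-set or systematic designs, as the entry notes, change how the sample is drawn but can reuse the same plug-in formula on the resulting observations.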
Importance sampling of heavy-tailed iterated random functions
B. Chen (Bohan); C.H. Rhee (Chang-Han); A.P. Zwart (Bert)
2016-01-01
We consider a stochastic recurrence equation of the form $Z_{n+1} = A_{n+1} Z_n + B_{n+1}$, where $\mathbb{E}[\log A_1] < 0$, $\mathbb{E}[\log^+ B_1] < \infty$ and $\{(A_n, B_n)\}_{n \in \mathbb{N}}$ is an i.i.d. sequence of positive random vectors. The stationary distribution of this Markov
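The recurrence in this entry is easy to simulate directly; naive Monte Carlo of its stationary tail is exactly the rare-event problem that importance sampling is designed to improve on. A minimal sketch (the log-normal choice for A and exponential choice for B are illustrative assumptions satisfying E[log A] < 0, not the paper's setup):

```python
import math
import random

def simulate_Z(n_steps, rng):
    """One path of Z_{k+1} = A_{k+1} Z_k + B_{k+1}. With E[log A] < 0 the
    chain converges to a stationary, heavy-tailed distribution."""
    z = 0.0
    for _ in range(n_steps):
        a = math.exp(rng.gauss(-0.5, 0.8))  # log-normal A, E[log A] = -0.5 < 0
        b = rng.expovariate(1.0)            # positive B with finite E[log+ B]
        z = a * z + b
    return z

rng = random.Random(7)
zs = [simulate_Z(200, rng) for _ in range(2000)]

# Naive (non-importance-sampled) estimate of a tail probability; its poor
# relative accuracy for large thresholds motivates importance sampling.
p_tail = sum(z > 20 for z in zs) / len(zs)
```

Because the stationary law has a power-law tail (Kesten-type behavior), the naive estimator needs enormous sample sizes for far-tail probabilities, which is the gap the paper's importance sampling scheme addresses.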
Random Variation in Student Performance by Class Size: Implications of NCLB in Rural Pennsylvania
Goetz, Stephan J.
2005-01-01
Schools that fail to make "adequate yearly progress" under NCLB face sanctions and may lose students to other schools. In smaller schools, random yearly variation in innate student ability and behavior can cause changes in scores that are beyond the influence of teachers. This study examines changes in reading and math scores across…
DEFF Research Database (Denmark)
Bøgh Andersen, Ida; Brasen, Claus L.; Christensen, Henry
2015-01-01
BACKGROUND: According to current recommendations, blood samples should be taken in the morning after 15 minutes' resting time. Some components exhibit diurnal variation and, in response to pressures to expand opening hours and reduce waiting time, the aims of this study were to investigate the impact of resting time prior to blood sampling and diurnal variation on biochemical components, including albumin, thyrotropin (TSH), total calcium and sodium in plasma. METHODS: All patients referred to an outpatient clinic for blood sampling were included in the period Nov 2011 until June 2014 (opening...). RESULTS: ...9×10-7) and sodium (p = 8.7×10-16). Only TSH and albumin were clinically significantly influenced by diurnal variation. Resting time had no clinically significant effect. CONCLUSIONS: We found no need for resting 15 minutes prior to blood sampling. However, diurnal variation was found to have a significant...
Sawada, Takuya; Takata, Hidehiro; Nii, Koji; Nagata, Makoto
2013-04-01
Static random access memory (SRAM) cores are susceptible to power supply voltage variation. False operation is investigated among SRAM cells under sinusoidal voltage variation on power lines introduced by direct RF power injection. A standard 16-kbyte SRAM core in a 90 nm, 1.5 V technology is diagnosed with built-in self-test and on-die noise monitor techniques. The bit error rate is shown to be highly sensitive to the frequency of the injected voltage variation, while it is not greatly influenced by differences in frequency and phase relative to SRAM clocking. It is also observed that the distribution of false bits is substantially random across the cell array.
Directory of Open Access Journals (Sweden)
CODRUŢA DURA
2010-01-01
The sample represents a particular segment of the statistical population chosen to represent it as a whole. The representativeness of the sample determines the accuracy of estimations made on the basis of calculating the research indicators and the inferential statistics. The method of random sampling is part of the probabilistic methods which can be used within marketing research, and it is characterized by the fact that it imposes the requirement that each unit belonging to the statistical population should have an equal chance of being selected for the sampling process. When simple random sampling is meant to be rigorously put into practice, it is recommended to use the technique of random number tables in order to configure the sample which will provide the information that the marketer needs. The paper also details the practical procedure implemented in order to create a sample for a marketing research by generating random numbers using the facilities offered by Microsoft Excel.
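The random-number-table procedure described above amounts to drawing units without replacement so that each has an equal selection probability. A minimal sketch with Python's seeded generator standing in for Excel's RAND-based workflow (the frame of 1000 numbered units is hypothetical):

```python
import random

def srs_without_replacement(population_ids, n, seed=2024):
    """Simple random sample: each unit has equal selection probability,
    mimicking the random-number-table procedure with a seeded generator."""
    rng = random.Random(seed)
    return sorted(rng.sample(population_ids, n))

frame = list(range(1, 1001))   # hypothetical sampling frame of 1000 units
sample = srs_without_replacement(frame, 50)
```

Seeding the generator makes the draw reproducible, which plays the same documentation role as recording the starting row of a printed random number table.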
Directory of Open Access Journals (Sweden)
Tianjiao Chu
Our goal was to test the hypothesis that inter-individual genomic copy number variation in control samples is a confounding factor in the non-invasive prenatal detection of fetal microdeletions via the sequence-based analysis of maternal plasma DNA. The Database of Genomic Variants (DGV) was used to determine the "Genomic Variants Frequency" (GVF) for each 50 kb region in the human genome. Whole genome sequencing of fifteen karyotypically normal maternal plasma and six CVS DNA control samples was performed. The coefficient of variation of relative read counts (cv.RTC) for these samples was determined for each 50 kb region. Maternal plasma from two pregnancies affected with a chromosome 5p microdeletion was also sequenced and analyzed using the GCREM algorithm. We found a strong correlation between high variance in read counts and GVF amongst controls. Consequently, we were unable to confirm the presence of the microdeletion via sequencing of maternal plasma samples obtained from two sequential affected pregnancies. Caution should be exercised when performing NIPT for microdeletions. It is vital to develop our understanding of the factors that impact the sensitivity and specificity of these approaches. In particular, benign copy number variation amongst controls is a major confounder, and its effects should be corrected bioinformatically.
Correlated random sampling for multivariate normal and log-normal distributions
International Nuclear Information System (INIS)
Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.
2012-01-01
A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
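The correlated-sampling idea summarized in this entry can be sketched in two dimensions: draw independent standard normals, apply a Cholesky factor of the target covariance to impose correlation, and exponentiate any component that should be log-normal. This is a generic sketch of the technique, not the paper's implementation (the variances and correlation below are illustrative):

```python
import math
import random

def chol2(var1, var2, rho):
    """Cholesky factor L of the 2x2 covariance [[var1, c], [c, var2]],
    with c = rho * sqrt(var1 * var2), so L applied to iid normals has
    that covariance."""
    s1, s2 = math.sqrt(var1), math.sqrt(var2)
    return [[s1, 0.0],
            [rho * s2, s2 * math.sqrt(1.0 - rho * rho)]]

def draw(mu, L, rng, lognormal):
    """One correlated draw; components flagged in `lognormal` are
    exponentiated, i.e. sampled as correlated log-normals."""
    z = (rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0))
    x = [mu[i] + L[i][0] * z[0] + L[i][1] * z[1] for i in range(2)]
    return [math.exp(v) if ln else v for v, ln in zip(x, lognormal)]

rng = random.Random(3)
L = chol2(1.0, 0.25, rho=0.8)
pairs = [draw((0.0, 0.0), L, rng, (False, True)) for _ in range(50000)]

xs = [p[0] for p in pairs]            # normal component
ys = [math.log(p[1]) for p in pairs]  # log of the log-normal, normal again
```

Taking logs of the log-normal component recovers the underlying normal scale, on which the sample correlation should reproduce the requested rho.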
Detection of the Thickness Variation of a Stainless Steel sample using Pulsed Eddy Current
International Nuclear Information System (INIS)
Cheong, Y. M.; Angani, C. S.; Park, D. G.; Jhong, H. K.; Kim, G. D.; Kim, C. G.
2008-01-01
A Pulsed Eddy Current (PEC) system has been developed for the detection of thickness variation in stainless steel. The sample was machined into a step configuration, with the thickness varying from 1 mm to 5 mm in steps. A LabView program was developed to display the variation in the amplitude of the detected pulse while scanning the PEC probe over the flat side of the sample. The pickup sensor measures the effective magnetic field on the sample, which is the sum of the incident field and the field reflected by the specimen due to the eddy currents induced in the sample. A Hall sensor is used for detection; using a Hall sensor instead of a coil as the field detector improves the detectability and spatial resolution. This technology can be used to detect local wall thinning in the pipelines of nuclear power plants.
Muldowney, Patrick
2012-01-01
A Modern Theory of Random Variation is a new and radical re-formulation of the mathematical underpinnings of subjects as diverse as investment, communication engineering, and quantum mechanics. Setting aside the classical theory of probability measure spaces, the book utilizes a mathematically rigorous version of the theory of random variation that bases itself exclusively on finitely additive probability distribution functions. In place of twentieth century Lebesgue integration and measure theory, the author uses the simpler concept of Riemann sums, and the non-absolute Riemann-type integration of Henstock. Readers are supplied with an accessible approach to standard elements of probability theory such as the central limit theorem and Brownian motion as well as remarkable, new results on Feynman diagrams and stochastic integrals. Throughout the book, detailed numerical demonstrations accompany the discussions of abstract mathematical theory, from the simplest elements of the subject to the most complex. I...
Weekday variation in triglyceride concentrations in 1.8 million blood samples
DEFF Research Database (Denmark)
Jaskolowski, Jörn; Ritz, Christian; Sjödin, Anders Mikael
2017-01-01
BACKGROUND: Triglyceride (TG) concentration is used as a marker of cardio-metabolic risk. However, diurnal and possibly weekday variation exists in TG concentrations. OBJECTIVE: To investigate weekday variation in TG concentrations among 1.8 million blood samples drawn between 2008 and 2015 from...... variations in TG concentrations were recorded for out-patients between the ages of 9 and 26 years, with up to 20% higher values on Mondays compared to Fridays (all P...). Triglyceride concentrations were highest after the weekend and gradually declined during the week. We suggest that unhealthy......
Borak, T B
1986-04-01
Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground uranium miners to Rn daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sample frequency of 50 measurements per year is sufficient to satisfy the criterion that the annual exposure be determined, in working level months, to within ±50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
An alternative procedure for estimating the population mean in simple random sampling
Directory of Open Access Journals (Sweden)
Housila P. Singh
2012-03-01
This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than several known estimators, including the Gupta and Shabbir (2008) estimator.
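The gain from auxiliary information can be sketched with the classical ratio estimator of the population mean (a minimal illustration with simulated data; the population, sample size, and the simple classical ratio form are assumptions for the sketch, not the authors' proposed class):

```python
import random

def ratio_estimate(y_sample, x_sample, x_pop_mean):
    # Classical ratio estimator of the population mean of y, using an
    # auxiliary variable x whose population mean is known.
    y_bar = sum(y_sample) / len(y_sample)
    x_bar = sum(x_sample) / len(x_sample)
    return y_bar * (x_pop_mean / x_bar)

random.seed(1)
N, n = 1000, 50                     # hypothetical population and SRS size
x = [random.uniform(10, 50) for _ in range(N)]
y = [2.0 * xi + random.gauss(0, 3) for xi in x]   # y roughly proportional to x
x_pop_mean = sum(x) / N
true_mean = sum(y) / N

idx = random.sample(range(N), n)    # simple random sample without replacement
est = ratio_estimate([y[i] for i in idx], [x[i] for i in idx], x_pop_mean)
print(round(est, 2), round(true_mean, 2))
```

The ratio estimator is most efficient when y is roughly proportional to x through the origin, which is the setting in which ratio-type estimators beat the plain sample mean.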
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
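The comparison above can be mimicked with a toy simulation (all figures here are illustrative assumptions, not the study's data):

```python
import random

random.seed(0)
# Hypothetical claims population: 50,000 claims, roughly 1% carry an error.
error_amounts = [random.uniform(100, 900) if random.random() < 0.01 else 0.0
                 for _ in range(50_000)]
total_error = sum(error_amounts)        # what a 100%-of-claims audit finds

def sample_audit(n):
    # Error dollars found by directly inspecting a random n-claim sample
    # (recovery-style auditing, without extrapolation).
    return sum(random.sample(error_amounts, n))

found = [sample_audit(300) for _ in range(100)]   # 100 simulated audits
mean_found = sum(found) / len(found)
print(round(total_error), round(mean_found))
```

A 300-claim sample inspects only 0.6% of the population, so on average it directly uncovers only about that fraction of the error dollars, which is the intuition behind the zero-defect, 100%-of-claims recommendation.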
Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster
DEFF Research Database (Denmark)
Schou, Mads Fristrup
2013-01-01
When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column...... and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness...... and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila....
Yabusaki, Katsumi; Faits, Tyler; McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena
2014-01-01
As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Here we present a novel image analysis software, based on the hue saturation value color space, to be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both a red green blue color space and a hue saturation value color space. We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation.
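The idea of classifying pixels in hue-saturation-value space rather than RGB can be sketched with Python's standard `colorsys` module (the thresholds and the toy "image" are illustrative assumptions, not the authors' software):

```python
import colorsys

def stain_fraction(pixels, hue_lo, hue_hi):
    # Fraction of pixels whose hue falls in [hue_lo, hue_hi), ignoring
    # near-white and near-dark pixels via saturation/value thresholds.
    hits = 0
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if hue_lo <= h < hue_hi and s > 0.2 and v > 0.2:
            hits += 1
    return hits / len(pixels)

# Toy "section": 30 reddish stained pixels on a pale background of 70.
img = [(200, 40, 60)] * 30 + [(240, 235, 230)] * 70
print(stain_fraction(img, 0.9, 1.0))   # -> 0.3
```

The advantage hinted at above is that hue stays nearly constant while saturation and value shift with section thickness and stain intensity, so a hue threshold is more robust than fixed RGB cutoffs.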
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
Shaffer, Patrick; Valsson, Omar; Parrinello, Michele
2016-02-02
The capabilities of molecular simulations have been greatly extended by a number of widely used enhanced sampling methods that facilitate escaping from metastable states and crossing large barriers. Despite these developments, many problems remain out of reach for these methods, which has led to a vigorous effort in this area. One of the most important unsolved problems is sampling high-dimensional free-energy landscapes and systems that are not easily described by a small number of collective variables. In this work we demonstrate a new way to compute free-energy landscapes of high dimensionality based on the previously introduced variationally enhanced sampling, and we apply it to the miniprotein chignolin.
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
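The flavor of the calculation can be sketched with the familiar design-effect approximation (note: this uses the textbook 1 + (m-1)·ICC form, not the paper's noncentrality-parameter definition; the cluster sizes and ICC below are made up):

```python
def eff_sample_size(cluster_sizes, icc):
    # Effective sample size under the design effect 1 + (m-1)*ICC,
    # summed cluster by cluster.
    return sum(m / (1 + (m - 1) * icc) for m in cluster_sizes)

def relative_efficiency(cluster_sizes, icc):
    # Unequal vs. equal cluster sizes at the same total N and cluster count.
    k = len(cluster_sizes)
    m_bar = sum(cluster_sizes) / k
    return eff_sample_size(cluster_sizes, icc) / eff_sample_size([m_bar] * k, icc)

unequal = [10, 10, 50, 50, 200, 280]   # illustrative trial: 6 clusters, N = 600
print(round(relative_efficiency(unequal, 0.05), 3))   # -> 0.791
```

A value below 1 means the unequal-size design delivers less information than an equal-size design with the same total N, which is why either the number of clusters or the mean cluster size must be inflated.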
Magnitude of 14C/12C variations based on archaeological samples
International Nuclear Information System (INIS)
Kusumgar, S.; Agrawal, D.P.
1977-01-01
The magnitude of 14C/12C variations in the periods A.D. 500 to 200 B.C. and 370 B.C. to 2900 B.C. is discussed. The 14C dates of well-dated archaeological samples from India and Egypt do not show any significant divergence from the historical ages. On the other hand, corrections based on dendrochronological samples show marked deviations for the same time period. A plea is therefore made to study old tree samples from Anatolia and Irish bogs, and archaeological samples from west Asia, to arrive at a more realistic calibration curve. (author)
Variation in the diversity and richness of parasitoid wasps based on sampling effort.
Saunders, Thomas E; Ward, Darren F
2018-01-01
Parasitoid wasps are a mega-diverse, ecologically dominant, but poorly studied component of global biodiversity. In order to maximise the efficiency and reduce the cost of their collection, the application of optimal sampling techniques is necessary. Two sites in Auckland, New Zealand were sampled intensively to determine the relationship between sampling effort and observed species richness of parasitoid wasps from the family Ichneumonidae. Twenty traps were deployed at each site at three different times over the austral summer period, resulting in a total sampling effort of 840 Malaise-trap-days. Rarefaction techniques and non-parametric estimators were used to predict species richness and to evaluate the variation and completeness of sampling. Despite an intensive Malaise-trapping regime over the summer period, no asymptote of species richness was reached. At best, sampling captured two-thirds of parasitoid wasp species present. The estimated total number of species present depended on the month of sampling and the statistical estimator used. Consequently, the use of fewer traps would have caught only a small proportion of all species (one trap 7-21%; two traps 13-32%), and many traps contributed little to the overall number of individuals caught. However, variation in the catch of individual Malaise traps was not explained by seasonal turnover of species, vegetation or environmental conditions surrounding the trap, or distance of traps to one another. Overall the results demonstrate that even with an intense sampling effort the community is incompletely sampled. The use of only a few traps and/or for very short periods severely limits the estimates of richness because (i) fewer individuals are caught leading to a greater number of singletons; and (ii) the considerable variation of individual traps means some traps will contribute few or no individuals. Understanding how sampling effort affects the richness and diversity of parasitoid wasps is a useful
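One commonly used non-parametric estimator of the kind mentioned is Chao1, which projects total richness from the counts of singletons and doubletons (the abundance vector below is invented for illustration; the paper does not state which estimator produced which figure):

```python
from collections import Counter

def chao1(counts):
    # Chao1 nonparametric richness estimator: S_obs + f1^2 / (2*f2),
    # with the usual bias-corrected form when no doubletons are observed.
    counts = [c for c in counts if c > 0]
    s_obs = len(counts)
    f = Counter(counts)
    f1, f2 = f[1], f[2]
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2
    return s_obs + f1 * f1 / (2 * f2)

# Illustrative Malaise-trap tally: many singletons signal undersampling.
abundances = [30, 12, 9, 5, 2, 2, 1, 1, 1, 1, 1]
print(chao1(abundances))   # 11 observed, f1 = 5, f2 = 2 -> 17.25
```

The large role of singletons in the formula is exactly why short deployments or few traps, which inflate the singleton count, give such unstable richness estimates.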
Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions
Energy Technology Data Exchange (ETDEWEB)
Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)
2015-01-15
Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, the method is efficient only for a limited number of correlated parameters.
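A sketch of the transformation-of-correlation-coefficients idea, assuming the standard moment-matching relations between a lognormal and its underlying normal (the function and variable names are ours, not the authors'):

```python
import numpy as np

def lognormal_sample(mean, rel_unc, corr, n, seed=0):
    # Sample correlated, inherently positive parameters as multivariate
    # lognormal: match the first two moments, transform the target
    # (lognormal-space) correlations to normal-space ones, sample a
    # multivariate normal, and exponentiate.
    mean = np.asarray(mean, float)
    cv = np.asarray(rel_unc, float)          # relative uncertainties
    corr = np.asarray(corr, float)
    sig = np.sqrt(np.log1p(cv ** 2))         # underlying normal sigmas
    mu = np.log(mean) - 0.5 * sig ** 2       # underlying normal means
    r = np.log1p(corr * np.outer(cv, cv)) / np.outer(sig, sig)
    np.fill_diagonal(r, 1.0)
    cov = r * np.outer(sig, sig)
    rng = np.random.default_rng(seed)
    return np.exp(rng.multivariate_normal(mu, cov, size=n))

x = lognormal_sample([1.0, 2.0], [0.3, 0.5], [[1, 0.8], [0.8, 1]], 200_000)
print(np.round(x.mean(axis=0), 1))           # ≈ [1., 2.]
```

By construction the samples are strictly positive and reproduce the requested means and correlation, which is the consistency property emphasized in the abstract.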
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
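A randomization test of the kind proposed can be sketched as follows (the data values are invented; the two-sided difference-in-means statistic is one common choice):

```python
import random

def randomization_test(a, b, n_perm=10_000, seed=42):
    # Two-sample randomization (permutation) test on the difference in
    # means. Unlike a classical P-value, it does not assume the groups are
    # random samples from a population, only random assignment to groups.
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        diff = sum(pa) / len(pa) - sum(pb) / len(pb)
        if abs(diff) >= abs(observed):
            hits += 1
    return hits / n_perm

# Illustrative convenience-sample data (made up for the sketch).
treated = [7.1, 6.8, 7.4, 7.9, 6.5, 7.2]
control = [6.1, 6.4, 5.9, 6.6, 6.0, 6.3]
p = randomization_test(treated, control)
print(p < 0.05)
```

Because the reference distribution is generated by re-randomizing the observed labels, the resulting P-value is valid for convenience samples, which is exactly the situation the review identifies as common.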
Bespoke Bias for Obtaining Free Energy Differences within Variationally Enhanced Sampling.
McCarty, James; Valsson, Omar; Parrinello, Michele
2016-05-10
Obtaining efficient sampling of multiple metastable states through molecular dynamics and hence determining free energy differences is central for understanding many important phenomena. Here we present a new biasing strategy, which employs the recent variationally enhanced sampling approach (Valsson and Parrinello Phys. Rev. Lett. 2014, 113, 090601). The bias is constructed from an intuitive model of the local free energy surface describing fluctuations around metastable minima and depends on only a few parameters which are determined variationally such that efficient sampling between states is obtained. The bias constructed in this manner largely reduces the need of finding a set of collective variables that completely spans the conformational space of interest, as they only need to be a locally valid descriptor of the system about its local minimum. We introduce the method and demonstrate its power on two representative examples.
Variations in reporting of outcomes in randomized trials on diet and physical activity in pregnancy
DEFF Research Database (Denmark)
Rogozińska, Ewelina; Marlin, Nadine; Yang, Fen
2017-01-01
AIM: Trials on diet and physical activity in pregnancy report on various outcomes. We aimed to assess the variations in outcomes reported and their quality in trials on lifestyle interventions in pregnancy. METHODS: We searched major databases without language restrictions for randomized controlled...... trials on diet and physical activity-based interventions in pregnancy up to March 2015. Two independent reviewers undertook study selection and data extraction. We estimated the percentage of papers reporting 'critically important' and 'important' outcomes. We defined the quality of reporting...... as a proportion using a six-item questionnaire. Regression analysis was used to identify factors affecting this quality. RESULTS: Sixty-six randomized controlled trials were published in 78 papers (66 main, 12 secondary). Gestational diabetes (57.6%, 38/66), preterm birth (48.5%, 32/66) and cesarean section (60...
Directory of Open Access Journals (Sweden)
Rawid Banchuin
2013-01-01
Novel probabilistic models of the random variations in nanoscale MOSFETs' high-frequency performance, defined in terms of gate capacitance and transition frequency, are proposed. Because the transition-frequency variation is also considered, the proposed models are complete, unlike previous ones that take only the gate capacitance variation into account. The proposed models are both analytic and physical-level oriented, as they are precise mathematical expressions in terms of physical parameters. Since an up-to-date model of variation in MOSFET characteristics induced by physical-level fluctuation has been used, the part of the proposed models for gate capacitance is more accurate and physical-level oriented than its predecessor. The proposed models have been verified against the 65 nm CMOS technology using Monte-Carlo SPICE simulations of benchmark circuits and Kolmogorov-Smirnov tests, and found to be highly accurate, fitting the Monte-Carlo-based analysis results with 99% confidence. Hence, these novel models are versatile for the statistical/variability-aware analysis and design of nanoscale MOSFET-based analog/mixed-signal circuits and systems.
Generating Random Samples of a Given Size Using Social Security Numbers.
Erickson, Richard C.; Brauchle, Paul E.
1984-01-01
The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
M. O. Partala; S. Ya. Zhuk
2007-01-01
On the basis of a mixed Markov process in discrete time, optimal and quasi-optimal algorithms are designed for adaptive filtering of speech signals in the presence of correlated noise with random variation of probabilistic characteristics.
Occupational position and its relation to mental distress in a random sample of Danish residents
DEFF Research Database (Denmark)
Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D
2010-01-01
PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random...... sample of the Danish population. Participants filled out a survey that included the 92 item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self...
L'Engle, Kelly; Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne
2018-01-01
Introduction: Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods: The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results: The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions: The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
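The reported rates can be reproduced in spirit with simplified AAPOR-style formulas (a sketch only: the full AAPOR definitions distinguish several variants, e.g. RR1-RR6, and estimate eligibility among unknowns; the call counts below are invented):

```python
def survey_rates(complete, partial, refusal, noncontact, other, unknown, e=1.0):
    # e is the assumed fraction of unknown-eligibility numbers that are
    # actually eligible (AAPOR's "e"); e = 1 is the conservative choice.
    eligible = complete + partial + refusal + noncontact + other + e * unknown
    response = complete / eligible                     # roughly AAPOR RR1
    cooperation = complete / (complete + partial + refusal)
    refusal_rate = refusal / eligible
    contact = (complete + partial + refusal) / eligible
    return response, cooperation, refusal_rate, contact

rates = survey_rates(complete=50, partial=10, refusal=5,
                     noncontact=20, other=5, unknown=10)
print([round(r, 2) for r in rates])   # -> [0.5, 0.77, 0.05, 0.65]
```

Note how the large pool of unassigned or non-working numbers enters only through the denominator, which is why automated random-digit dialing depresses the response rate even when cooperation among contacted people is high.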
Contributions from the data samples in NOC technique on the extracting of the Sq variation
Wu, Yingyan; Xu, Wenyao
2015-04-01
The solar quiet daily variation, Sq, a rather regular variation, is usually observed at mid-to-low latitudes on magnetically quiet or less-disturbed days. It results mainly from dynamo currents in the ionospheric E region, which are driven by atmospheric tidal winds and other processes and flow as two current whorls in each of the northern and southern hemispheres[1]. Sq exhibits a conspicuous day-to-day (DTD) variability in daily range (or strength), shape (or phase), and current focus. This variability is mainly attributed to changes in ionospheric conductivity and tidal winds, which vary with solar radiation and ionospheric conditions. Furthermore, it presents seasonal and solar-cycle variations[2-4]. Generally, Sq is expressed as the average value over the five international magnetically quiet days. Using data from global magnetic stations, the equivalent current system of the daily variation can be constructed to reveal characteristics of the currents[5]. In addition, using the differences of the H component at two stations on the north and south sides of the Sq current focus, Sq can be extracted more reliably[6]. Recently, the method of Natural Orthogonal Components (NOC) has been used to decompose the magnetic daily variation into a sum of eigenmodes, with the first NOC eigenmode identified as the solar quiet daily variation and the second as the disturbance daily variation[7-9]. The NOC technique can help reveal simpler patterns within a complex set of variables, without designed basis functions such as those of the FFT technique. However, the physical interpretation of the NOC eigenmodes depends greatly on the number of data samples and on their regular quality. Using the NOC method, we focus our present study on the analysis of the hourly means of the H component at the BMT observatory in China from 2001 to 2008. The contributions of the number and regular-quality of the data samples to which eigenmode corresponds to Sq are analyzed, by
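Computationally, the NOC decomposition described here is an eigenmode expansion obtainable from a singular value decomposition of the days-by-hours data matrix; the synthetic example below (the idealized shape, amplitudes, and noise level are all our assumptions) shows the first mode recovering an Sq-like pattern:

```python
import numpy as np

def noc_modes(X, k=2):
    # Natural Orthogonal Components of a days-by-hours matrix of hourly
    # means: singular vectors serve as empirically determined eigenmodes,
    # with no pre-designed basis functions (unlike an FFT decomposition).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return [(s[i], Vt[i]) for i in range(k)]   # (amplitude, daily shape)

rng = np.random.default_rng(3)
hours = np.arange(24)
sq_shape = np.sin(2 * np.pi * (hours - 6) / 24)   # idealized quiet-day curve
amps = rng.uniform(0.5, 1.5, 60)                  # day-to-day amplitude variability
days = np.array([a * sq_shape + rng.normal(0, 0.1, 24) for a in amps])
(s1, mode1), (s2, _mode2) = noc_modes(days)
corr = abs(np.corrcoef(mode1, sq_shape)[0, 1])    # the sign of a mode is arbitrary
print(s1 > 3 * s2, corr > 0.95)
```

With more days of clean, regular data the leading singular value separates further from the rest, which is the sample-size and data-quality dependence the abstract highlights.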
International Nuclear Information System (INIS)
Maziero, Jonas
2015-01-01
The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state to get RQS from random DPDs and random unitary matrices. In the sequence, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices, in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices. Subsequently, an excessively fast concentration of measure in the quantum state space that appears in this parametrization is noted. (author)
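Of the methods surveyed, the Ginibre technique is the most compact to sketch: a random density matrix is formed from a complex Gaussian matrix G as rho = G G† / Tr(G G†) (a minimal illustration; the dimension and seed below are arbitrary choices):

```python
import numpy as np

def ginibre_density_matrix(d, seed=None):
    # Random density matrix via the Ginibre technique: G has i.i.d.
    # complex Gaussian entries, and G @ G† is Hermitian and positive
    # semidefinite by construction; normalizing the trace yields a state.
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = G @ G.conj().T
    return rho / np.trace(rho).real

rho = ginibre_density_matrix(4, seed=7)
eigs = np.linalg.eigvalsh(rho)
print(np.isclose(np.trace(rho).real, 1.0), bool(np.all(eigs >= -1e-10)))
```

Positivity and unit trace come for free from the G G† form, which is why this overparametrized route is so convenient numerically compared with directly parametrizing the state space.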
Ana L. Albarrán-Lara; Jessica W. Wright; Paul F. Gugger; Annette Delfino-Mix; Juan Manuel Peñaloza-Ramírez; Victoria L. Sork
2015-01-01
California oaks exhibit tremendous phenotypic variation throughout their range. This variation reflects phenotypic plasticity in tree response to local environmental conditions as well as genetic differences underlying those phenotypes. In this study, we analyze phenotypic variation in leaf traits for valley oak adults sampled along three elevational transects and in...
International Nuclear Information System (INIS)
Allagi, Mabruk O.; Lewins, Jeffery D.
1999-01-01
In a further study of virtually processed Monte Carlo estimates in neutron transport, a shielding problem has been studied. The use of virtual sampling to estimate the importance function at a certain point in the phase space depends on the presence of neutrons from the real source at that point. But in deep penetration problems, not many neutrons will reach regions far away from the source. In order to overcome this problem, two suggestions are considered: (1) virtual sampling is used as far as the real neutrons can reach, then fictitious sampling is introduced for the remaining regions, distributed across all of them; or (2) only one fictitious source is placed where the real neutrons almost terminate, and virtual sampling is then used in the same way as for the real source. Variational processing is again found to improve the Monte Carlo estimates, being best when using one fictitious source in the far regions with virtual sampling (option 2). When fictitious sources are used to estimate the importances in regions far away from the source, some optimization has to be performed on the proportion of fictitious to real sources, weighed against accuracy and computational cost. It has been found in this study that the optimum number of cells to be treated by fictitious sampling is problem dependent, but as a rule of thumb, fictitious sampling should be employed in regions where the number of neutrons from the real source falls below a specified limit for good statistics.
Directory of Open Access Journals (Sweden)
Daria Sanna
2011-01-01
We report a sampling strategy based on Mendelian Breeding Units (MBUs), each representing an interbreeding group of individuals sharing a common gene pool. The identification of MBUs is crucial for case-control experimental design in association studies. The aim of this work was to evaluate the possible existence of bias in terms of genetic variability and haplogroup frequencies in the MBU sample, due to severe sample selection. In order to reach this goal, the MBU sampling strategy was compared to a standard selection of individuals according to their surname and place of birth. We analysed mitochondrial DNA variation (first hypervariable segment and coding region) in unrelated healthy subjects from two different areas of Sardinia: the area around the town of Cabras and the western Campidano area. No statistically significant differences were observed when the two sampling methods were compared, indicating that the stringent sample selection needed to establish a MBU does not alter original genetic variability and haplogroup distribution. Therefore, the MBU sampling strategy can be considered a useful tool in association studies of complex traits.
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
Freeman, Lindsay M; Pang, Lin; Fainman, Yeshaiahu
2018-05-09
The analysis of DNA has led to revolutionary advancements in the fields of medical diagnostics, genomics, prenatal screening, and forensic science, with the global DNA testing market expected to reach revenues of USD 10.04 billion per year by 2020. However, the current methods for DNA analysis remain dependent on the necessity for fluorophores or conjugated proteins, leading to high costs associated with consumable materials and manual labor. Here, we demonstrate a potential label-free DNA composition detection method using surface-enhanced Raman spectroscopy (SERS) in which we identify the composition of cytosine and adenine within single strands of DNA. This approach depends on the fact that there is one phosphate backbone per nucleotide, which we use as a reference to compensate for systematic measurement variations. We utilize plasmonic nanomaterials with random Raman sampling to perform label-free detection of the nucleotide composition within DNA strands, generating a calibration curve from standard samples of DNA and demonstrating the capability of resolving the nucleotide composition. The work represents an innovative way for detection of the DNA composition within DNA strands without the necessity of attached labels, offering a highly sensitive and reproducible method that factors in random sampling to minimize error.
X-ray speckle contrast variation at sample-specific absorption edges
International Nuclear Information System (INIS)
Retsch, C. C.; Wang, Y.; Frigo, S. P.; Stephenson, G. B.; McNulty, I.
2000-01-01
The authors measured static x-ray speckle contrast variation with the incident photon energy across sample-specific absorption edges. They propose that the variation depends strongly on the spectral response function of the monochromator. Speckle techniques have been introduced to the x-ray regime during recent years. Most of these experiments, however, were done at photon energies above 5 keV. They are working on this technique in the 1 to 4 keV range, an energy range that includes many important x-ray absorption edges, e.g., in Al, Si, P, S, the rare-earths, and others. To their knowledge, the effect of absorption edges on speckle contrast has not yet been studied. In this paper, they present their initial measurements and understanding of the observed phenomena.
Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling
Directory of Open Access Journals (Sweden)
Bo Yu
2015-01-01
This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly over successive occasions. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
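The randomized response idea can be illustrated with Warner's original device, the simplest form of the technique; a minimal sketch, not the paper's successive-sampling estimator, and all numbers below are simulated:

```python
import random

def warner_estimate(responses, p):
    """Warner's randomized-response estimator of a sensitive proportion pi.

    Each respondent answers the sensitive question with probability p and
    its negation with probability 1 - p, so P(yes) = p*pi + (1-p)*(1-pi),
    giving pi = (lambda_hat - (1 - p)) / (2p - 1), where lambda_hat is the
    observed "yes" rate.
    """
    lam = sum(responses) / len(responses)
    return (lam - (1 - p)) / (2 * p - 1)

# Simulated survey: true sensitive proportion 0.30, device probability 0.7
rng = random.Random(42)
true_pi, p = 0.30, 0.7
responses = []
for _ in range(100_000):
    has_trait = rng.random() < true_pi
    asked_direct = rng.random() < p
    responses.append(has_trait if asked_direct else not has_trait)
print(round(warner_estimate(responses, p), 2))
```

Because the interviewer never knows which question was answered, individual privacy is protected while the aggregate proportion remains estimable.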
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
2010-07-01
... 40 Protection of Environment. Sample selection by random number... § 761.79(b)(3), § 761.308 Sample selection by random number generation on any two-dimensional square... area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
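The regulation's selection scheme reduces to drawing one random number per axis for each sample point on a two-dimensional square grid; a minimal sketch with hypothetical grid dimensions, not the CFR procedure verbatim:

```python
import random

def select_grid_points(n_points, cells_x, cells_y, seed=None):
    """For each sample point, draw two random numbers: one for each axis
    of a two-dimensional square grid (cell counts here are hypothetical)."""
    rng = random.Random(seed)
    return [(rng.randrange(cells_x), rng.randrange(cells_y))
            for _ in range(n_points)]

points = select_grid_points(3, cells_x=10, cells_y=10, seed=1)
print(points)  # three (column, row) grid cells
```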
Lorenzo, C; Carretero, J M; Arsuaga, J L; Gracia, A; Martínez, I
1998-05-01
A sexual dimorphism more marked than in living humans has been claimed for European Middle Pleistocene humans, Neandertals and prehistoric modern humans. In this paper, body size and cranial capacity variation are studied in the Sima de los Huesos Middle Pleistocene sample. This is the largest sample of non-modern humans found to date from one single site, and with all skeletal elements represented. Since the techniques available to estimate the degree of sexual dimorphism in small palaeontological samples are all unsatisfactory, we have used the bootstrapping method to assess the magnitude of the variation in the Sima de los Huesos sample compared to modern human intrapopulational variation. We analyze size variation without attempting to sex the specimens a priori. Anatomical regions investigated are scapular glenoid fossa; acetabulum; humeral proximal and distal epiphyses; ulnar proximal epiphysis; radial neck; proximal femur; humeral, femoral, ulnar and tibial shaft; lumbosacral joint; patella; calcaneum; and talar trochlea. In the Sima de los Huesos sample, only the humeral midshaft perimeter shows an unusually high variation (only when it is expressed by the maximum ratio, not by the coefficient of variation). In spite of that, the cranial capacity range at Sima de los Huesos almost spans the rest of the European and African Middle Pleistocene range. The maximum ratio is in the central part of the distribution of modern human samples. Thus, the hypothesis of a greater sexual dimorphism in Middle Pleistocene populations than in modern populations is not supported by either cranial or postcranial evidence from Sima de los Huesos.
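The bootstrapping logic (resampling a modern reference population to ask whether a fossil sample's variation index is unusually large) can be sketched as follows; the reference data, sample size, and observed max ratio below are all simulated, not the Sima de los Huesos measurements:

```python
import random

def max_ratio(values):
    """Maximum ratio: largest value over smallest, a simple size-variation index."""
    return max(values) / min(values)

def bootstrap_max_ratio(reference, n_draw, n_boot=10000, seed=0):
    """Distribution of the max ratio in samples of size n_draw resampled
    (with replacement) from a modern reference population."""
    rng = random.Random(seed)
    return [max_ratio([rng.choice(reference) for _ in range(n_draw)])
            for _ in range(n_boot)]

# Hypothetical 'modern human' reference measurements (arbitrary units)
rng = random.Random(1)
reference = [rng.gauss(100, 10) for _ in range(200)]
boot = bootstrap_max_ratio(reference, n_draw=8)
observed = 1.25  # hypothetical fossil-sample max ratio
p_value = sum(r >= observed for r in boot) / len(boot)
print(round(p_value, 3))
```

A large p-value means the fossil sample's variation is unremarkable relative to a single modern population, which is the form of the comparison the abstract describes.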
Onsongo, Getiria; Baughn, Linda B; Bower, Matthew; Henzler, Christine; Schomaker, Matthew; Silverstein, Kevin A T; Thyagarajan, Bharat
2016-11-01
Simultaneous detection of small copy number variations (CNVs) (<0.5 kb) and single-nucleotide variants in clinically significant genes is of great interest for clinical laboratories. The analytical variability in next-generation sequencing (NGS) and artifacts in coverage data because of issues with mappability along with lack of robust bioinformatics tools for CNV detection have limited the utility of targeted NGS data to identify CNVs. We describe the development and implementation of a bioinformatics algorithm, copy number variation-random forest (CNV-RF), that incorporates a machine learning component to identify CNVs from targeted NGS data. Using CNV-RF, we identified 12 of 13 deletions in samples with known CNVs, two cases with duplications, and identified novel deletions in 22 additional cases. Furthermore, no CNVs were identified among 60 genes in 14 cases with normal copy number and no CNVs were identified in another 104 patients with clinical suspicion of CNVs. All positive deletions and duplications were confirmed using a quantitative PCR method. CNV-RF also detected heterozygous deletions and duplications with a specificity of 50% across 4813 genes. The ability of CNV-RF to detect clinically relevant CNVs with a high degree of sensitivity along with confirmation using a low-cost quantitative PCR method provides a framework for providing comprehensive NGS-based CNV/single-nucleotide variant detection in a clinical molecular diagnostics laboratory. Copyright © 2016 American Society for Investigative Pathology and the Association for Molecular Pathology. Published by Elsevier Inc. All rights reserved.
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
The Dirichlet-Multinomial model for multivariate randomized response data and small samples
Avetisyan, Marianna; Fox, Gerardus J.A.
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan
Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu
2012-01-01
To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…
A simple sample size formula for analysis of covariance in cluster randomized trials.
Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.
2012-01-01
For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An
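The role of a baseline measurement can be sketched with a standard sample size calculation: the individually randomized n is inflated by the usual design effect 1 + (m - 1)*ICC and deflated by (1 - r^2) for covariate adjustment. This is a generic formulation, not the paper's exact formula, and the inputs below are hypothetical:

```python
import math
from statistics import NormalDist

def cluster_ancova_n(delta, sd, icc, m, r, alpha=0.05, power=0.8):
    """Approximate sample size per arm for a cluster randomized trial with
    clusters of size m, intracluster correlation icc, and an ANCOVA-style
    adjustment for a baseline covariate correlated r with the outcome.
    A hedged sketch, not the paper's derivation."""
    z = NormalDist().inv_cdf
    n_simple = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sd / delta) ** 2
    return n_simple * (1 + (m - 1) * icc) * (1 - r ** 2)

# Detect a 0.5 SD difference; ICC 0.05, clusters of 20, baseline r = 0.6
n = cluster_ancova_n(delta=0.5, sd=1.0, icc=0.05, m=20, r=0.6)
print(math.ceil(n))  # individuals per arm, to be rounded up to whole clusters
```

With icc = 0, m = 1, and r = 0 the formula collapses to the familiar two-sample calculation (about 63 per arm for these settings), which is a useful sanity check.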
Random selection of items. Selection of n1 samples among N items composing a stratum
International Nuclear Information System (INIS)
Jaech, J.L.; Lemaire, R.J.
1987-02-01
STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
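The core selection step (choosing n1 of N items in a stratum via random numbers) can be sketched as follows; this is a generic illustration, not the STR-224 procedure itself:

```python
import random

def select_items(n1, N, seed=None):
    """Randomly select n1 of N items (numbered 1..N) without replacement,
    as in drawing a verification sample from a stratum."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, N + 1), n1))

chosen = select_items(5, 100, seed=7)
print(chosen)  # five distinct item numbers between 1 and 100
```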
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
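The randomization-based F-test such an activity builds on can be sketched directly: compute the observed F, then reshuffle group labels many times to form its sampling distribution. The data below are invented for illustration:

```python
import random

def f_statistic(groups):
    """One-way ANOVA F statistic for a list of groups of observations."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

def randomization_f_test(groups, n_perm=5000, seed=0):
    """Randomization p-value: reshuffle group labels, recompute F each time."""
    rng = random.Random(seed)
    observed = f_statistic(groups)
    pooled = [x for g in groups for x in g]
    sizes = [len(g) for g in groups]
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        i, perm = 0, []
        for s in sizes:
            perm.append(pooled[i:i + s])
            i += s
        count += f_statistic(perm) >= observed
    return observed, count / n_perm

groups = [[4.1, 5.0, 4.6, 4.9], [5.8, 6.1, 5.5, 6.0], [4.3, 4.7, 4.4, 5.1]]
f_obs, p = randomization_f_test(groups)
print(round(f_obs, 2), p)
```

The permuted F values are themselves an empirical sampling distribution, which is exactly the pedagogical link between randomization tests and ANOVA that the paper exploits.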
An efficient method of randomly sampling the coherent angular scatter distribution
International Nuclear Information System (INIS)
Williamson, J.F.; Morin, R.L.
1983-01-01
Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)
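A generic way to draw from an angular distribution is rejection sampling; the sketch below uses a Thomson-like stand-in density, not the coherent form-factor distribution the paper tabulates:

```python
import math
import random

def sample_angle(pdf, pdf_max, rng):
    """Rejection sampling of a scattering angle theta in [0, pi] from an
    unnormalized angular density pdf bounded above by pdf_max."""
    while True:
        theta = rng.uniform(0.0, math.pi)
        if rng.uniform(0.0, pdf_max) <= pdf(theta):
            return theta

# Illustrative stand-in density: (1 + cos^2 theta) with the sin(theta)
# solid-angle weighting (not the coherent-scatter form factor)
pdf = lambda t: (1 + math.cos(t) ** 2) * math.sin(t)
rng = random.Random(0)
angles = [sample_angle(pdf, pdf_max=2.0, rng=rng) for _ in range(5000)]
print(round(sum(angles) / len(angles), 2))
```

Techniques like the one the paper describes improve on plain rejection by tailoring the proposal to the coherent-scatter shape, which raises acceptance rates at each collision site.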
Genetic variation of seedling traits in a random mating population of sunflower
International Nuclear Information System (INIS)
Habib, S.
2004-01-01
Forty S1 families obtained from a random mating population of sunflower were evaluated in the laboratory for various seedling traits. The objectives of this study were to investigate the extent and nature of genetic variability and to determine the estimates of genotypic and phenotypic correlations among ten seedling traits prevailing in a random mating population of sunflower. The results indicated that significant differences existed among the 40 S1 families for all the traits evaluated. Genotypic and phenotypic coefficients of variation were comparatively high for emergence rate index, root/shoot ratio, dry root weight, fresh root weight and fresh shoot weight. The estimates of broad-sense heritability were high and significant for all the traits. The study of genotypic and phenotypic correlations among these traits revealed that generally, the seedlings which took more time to emerge were vigorous for most of the traits except fresh shoot length. However, rapidly emerging seedlings had higher emergence percentage. The root traits appeared to be better indicators of seedling vigour compared to other traits as these traits exhibited strong and positive genotypic and phenotypic correlations among them. (author)
International Nuclear Information System (INIS)
Rezapour, Arash; Rezapour, Pegah
2015-01-01
We investigate the effect of random dopant fluctuation on threshold voltage and drain current variation in a two-gate nanoscale transistor. We used a quantum-corrected technology computer aided design simulation (10,000 randomizations). With this simulation, we could study the effects of varying the dimensions (length and width), and the thicknesses of oxide and dopant factors of a transistor, on the threshold voltage and drain current in the subthreshold region (off) and the overthreshold region (on). It was found that in the subthreshold region the variability of the drain current and threshold voltage is relatively fixed, while in the overthreshold region the variability of the threshold voltage and drain current decreases remarkably, despite the slight reduction of gate voltage diffusion (compared with that of the subthreshold region). These results have been interpreted by using previously reported models for threshold current variability, load displacement, and simple analytical calculations. Scaling analysis shows that the variability of the characteristics of this semiconductor increases as the effects of the short channel increase. Therefore, with a slight increase of length and a reduction of width, oxide thickness, and dopant factor, we could correct the effect of the short channel. (paper)
Tamellini, Lorenzo
2016-01-05
In this talk we discuss possible strategies to minimize the impact of the curse of dimensionality effect when building sparse-grid approximations of a multivariate function u = u(y1, ..., yN). More precisely, we present a knapsack approach, in which we estimate the cost and the error reduction contribution of each possible component of the sparse grid, and then we choose the components with the highest error reduction/cost ratio. The estimates of the error reduction are obtained by either a mixed a-priori/a-posteriori approach, in which we first derive a theoretical bound and then tune it with some inexpensive auxiliary computations (resulting in the so-called quasi-optimal sparse grids), or by a fully a-posteriori approach (obtaining the so-called adaptive sparse grids). This framework is very general and can be used to build quasi-optimal/adaptive sparse grids on bounded and unbounded domains (e.g. u depending on uniform and normal random distributions for yn), using both nested and non-nested families of univariate collocation points. We present some theoretical convergence results as well as numerical results showing the efficiency of the proposed approach for the approximation of the solution of elliptic PDEs with random diffusion coefficients. In this context, to treat the case of rough permeability fields in which a sparse grid approach may not be suitable, we propose to use the sparse grids as a control variate in a Monte Carlo simulation.
Adaptive force produced by stress-induced regulation of random variation intensity.
Shimansky, Yury P
2010-08-01
The Darwinian theory of life evolution is capable of explaining the majority of related phenomena. At the same time, the mechanisms of optimizing traits beneficial to a population as a whole but not directly to an individual remain largely unclear. There are also significant problems with explaining the phenomenon of punctuated equilibrium. From another perspective, multiple mechanisms for the regulation of the rate of genetic mutations according to the environmental stress have been discovered, but their precise functional role is not well understood yet. Here a novel mathematical paradigm called a Kinetic-Force Principle (KFP), which can serve as a general basis for biologically plausible optimization methods, is introduced and its rigorous derivation is provided. Based on this principle, it is shown that, if the rate of random changes in a biological system is proportional, even only roughly, to the amount of environmental stress, a virtual force is created, acting in the direction of stress relief. It is demonstrated that KFP can provide important insights into solving the above problems. Evidence is presented in support of a hypothesis that nature employs KFP for accelerating adaptation in biological systems. A detailed comparison between KFP and the principle of variation and natural selection is presented and their complementarity is revealed. It is concluded that KFP is not a competing alternative, but a powerful addition to the principle of variation and natural selection. It is also shown that KFP can be used in multiple ways for adaptation of individual biological organisms.
Monitoring oil persistence on beaches : SCAT versus stratified random sampling designs
International Nuclear Information System (INIS)
Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.
2003-01-01
In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not offer estimates of contaminated beach area or stranded oil volumes. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually-evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters impact statistical outcome. Power analysis based on the study results indicates that optimum power is derived when unnecessary stratification is avoided. It was emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling.
The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.
Rodgers, J L
1999-10-01
A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
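The taxonomy's two distinctions (with versus without replacement; whole sample versus subset) can be made concrete in a few lines; the data are arbitrary:

```python
import random

def bootstrap(sample, rng):
    """With replacement, full sample size: the bootstrap."""
    return [rng.choice(sample) for _ in sample]

def jackknife(sample, leave_out):
    """Without replacement, n - 1 of n (systematic leave-one-out): the jackknife."""
    return sample[:leave_out] + sample[leave_out + 1:]

def randomization(sample, rng):
    """Without replacement, full sample size: a permutation (relabeling) draw."""
    return rng.sample(sample, len(sample))

data = [2.0, 3.5, 1.8, 4.2, 3.1]
rng = random.Random(0)
mean = lambda xs: sum(xs) / len(xs)
boot_means = [mean(bootstrap(data, rng)) for _ in range(1000)]
jack_means = [mean(jackknife(data, i)) for i in range(len(data))]
print(round(mean(boot_means), 2), round(mean(jack_means), 2))
```

Each resampling scheme yields an empirical sampling distribution of the statistic; which scheme is appropriate depends on the inferential goal, which is the point of the taxonomy.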
Spatial Variation of Soil Lead in an Urban Community Garden: Implications for Risk-Based Sampling.
Bugdalski, Lauren; Lemke, Lawrence D; McElmurry, Shawn P
2014-01-01
Soil lead pollution is a recalcitrant problem in urban areas resulting from a combination of historical residential, industrial, and transportation practices. The emergence of urban gardening movements in postindustrial cities necessitates accurate assessment of soil lead levels to ensure safe gardening. In this study, we examined small-scale spatial variability of soil lead within a 15 × 30 m urban garden plot established on two adjacent residential lots located in Detroit, Michigan, USA. Eighty samples collected using a variably spaced sampling grid were analyzed for total, fine fraction (less than 250 μm), and bioaccessible soil lead. Measured concentrations varied at sampling scales of 1-10 m and a hot spot exceeding 400 ppm total soil lead was identified in the northwest portion of the site. An interpolated map of total lead was treated as an exhaustive data set, and random sampling was simulated to generate Monte Carlo distributions and evaluate alternative sampling strategies intended to estimate the average soil lead concentration or detect hot spots. Increasing the number of individual samples decreases the probability of overlooking the hot spot (type II error). However, the practice of compositing and averaging samples decreased the probability of overestimating the mean concentration (type I error) at the expense of increasing the chance for type II error. The results reported here suggest a need to reconsider U.S. Environmental Protection Agency sampling objectives and consequent guidelines for reclaimed city lots where soil lead distributions are expected to be nonuniform. © 2013 Society for Risk Analysis.
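The trade-off between discrete sampling and compositing can be reproduced with a small Monte Carlo sketch; the field composition, hot-spot concentration, and sample sizes below are hypothetical, not the Detroit measurements:

```python
import random

def simulate_sampling(field, n, composite, trials=2000, threshold=400, seed=0):
    """Monte Carlo detection probability on a lead 'field' (list of cell
    concentrations, ppm). Discrete sampling flags the site if any of n
    samples exceeds the threshold; compositing averages the n samples first."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        draws = [rng.choice(field) for _ in range(n)]
        if composite:
            hits += (sum(draws) / n) > threshold
        else:
            hits += any(d > threshold for d in draws)
    return hits / trials

# Hypothetical garden: 5% of cells form a 900 ppm hot spot, background 100 ppm
field = [900] * 5 + [100] * 95
for n in (4, 8):
    p_discrete = simulate_sampling(field, n, composite=False)
    p_composite = simulate_sampling(field, n, composite=True)
    print(n, p_discrete, p_composite)
```

Averaging composited samples dilutes the hot spot below the threshold, so the composite strategy misses it far more often (higher type II error), mirroring the trade-off the study reports.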
Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-03-01
Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper investigates the utility of applying lot quality-assurance sampling to identify geographic heterogeneity in the proportion of new cases with multidrug resistance. We simulated the performance of lot quality-assurance sampling by applying these classification-based approaches to data collected in the most recent TB drug-resistance surveys in Ukraine, Vietnam, and Tanzania. We explored three classification systems (two-way static, three-way static, and three-way truncated sequential sampling) at two sets of thresholds: low MDR TB = 2%, high MDR TB = 10%, and low MDR TB = 5%, high MDR TB = 20%. The lot quality-assurance sampling systems identified local variability in the prevalence of multidrug resistance in both high-resistance (Ukraine) and low-resistance settings (Vietnam). In Tanzania, prevalence was uniformly low, and the lot quality-assurance sampling approach did not reveal variability. The three-way classification systems provide additional information, but sample sizes may not be obtainable in some settings. New rapid drug-sensitivity testing methods may allow truncated sequential sampling designs and early stopping within static designs, producing even greater efficiency gains. Lot quality-assurance sampling study designs may offer an efficient approach for collecting critical information on local variability in the burden of multidrug-resistant TB. Before this methodology is adopted, programs must determine appropriate classification thresholds, the most useful classification system, and appropriate weighting if unbiased national estimates are also desired.
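A two-way static LQAS design can be constructed from binomial tails: find the smallest (n, d) meeting both misclassification constraints. A generic construction using the paper's 2%/10% thresholds; the error rates are assumed here, and the surveys' actual designs may differ:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def lqas_design(p_low, p_high, alpha=0.05, beta=0.05, max_n=200):
    """Smallest two-way static LQAS design (n, d): sample n new cases and
    classify the area as high-burden when more than d are resistant.
    Requires P(classify high | prevalence p_low) <= alpha and
    P(classify low | prevalence p_high) <= beta."""
    for n in range(1, max_n + 1):
        for d in range(n + 1):
            if (1 - binom_cdf(d, n, p_low) <= alpha
                    and binom_cdf(d, n, p_high) <= beta):
                return n, d
    return None

print(lqas_design(0.02, 0.10))  # (n, d) for the 2% / 10% thresholds
```

The appeal of LQAS is that classification (low versus high burden) needs far fewer specimens per area than precise local prevalence estimation would.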
Hu, Guozhong; Yang, Nan; Xu, Guang; Xu, Jialin
2018-03-01
The gas drainage rate of low-permeability coal seams is generally less than satisfactory. This contributes to gas disasters in coal mines, largely restricts the extraction of coalbed methane (CBM), and increases the emission of greenhouse gases in the mining area. Consequently, enhancing the gas drainage rate is an urgent challenge. To solve this problem, a new approach of using microwave irradiation (MWR) as a non-contact physical field excitation method to enhance gas drainage has been attempted. In order to evaluate the feasibility of this method, the methane adsorption, diffusion and penetrability of coal subjected to MWR were experimentally investigated. The variations in adsorbed methane amount, methane diffusion speed and adsorption loop for the coal samples before and after MWR were obtained. The findings show that MWR can change the adsorption property and reduce the methane adsorption capacity of coal. Moreover, the methane diffusion characteristic curves for both the irradiated coal samples and the original coal samples present the same trend. The irradiated coal samples have better methane diffusion ability than the original ones. As the adsorbed methane decreases, the methane diffusion speed increases or remains the same for the samples subjected to MWR. Furthermore, compared to the original coal samples, the area of the adsorption loop for irradiated samples increases, especially for the micro-pore and medium-pore stages. This leads to an increase of open pores in the coal, thus improving the gas penetrability of coal. This study provides support for positive MWR effects on changing the methane adsorption and improving the methane diffusion and gas penetrability properties of coal samples.
Seasonal Variation, Chemical Composition and Antioxidant Activity of Brazilian Propolis Samples
Directory of Open Access Journals (Sweden)
Érica Weinstein Teixeira
2010-01-01
Total phenolic contents, antioxidant activity and chemical composition of propolis samples from three localities of Minas Gerais state (southeast Brazil) were determined. Total phenolic contents were determined by the Folin–Ciocalteu method, antioxidant activity was evaluated by DPPH, using BHT as reference, and chemical composition was analyzed by GC/MS. Propolis from Itapecerica and Paula Cândido municipalities were found to have high phenolic contents and pronounced antioxidant activity. From these extracts, 40 substances were identified, among them simple phenylpropanoids, prenylated phenylpropanoids, sesqui- and diterpenoids. Quantitatively, the main constituent of both samples was allyl-3-prenylcinnamic acid. A sample from Virginópolis municipality had no detectable phenolic substances and contained mainly triterpenoids, the main constituents being α- and β-amyrins. Methanolic extracts from Itapecerica and Paula Cândido exhibited pronounced scavenging activity towards DPPH, indistinguishable from BHT activity. However, extracts from the Virginópolis sample exhibited no antioxidant activity. Total phenolic substances, GC/MS analyses and antioxidant activity of samples from Itapecerica collected monthly over a period of 1 year revealed considerable variation. No correlation was observed between antioxidant activity and either total phenolic contents or contents of artepillin C and other phenolic substances, as assayed by GC/MS analysis.
Ellis, Randall P; Fiebig, Denzil G; Johar, Meliyanni; Jones, Glenn; Savage, Elizabeth
2013-09-01
Explaining individual, regional, and provider variation in health care spending is of enormous value to policymakers but is often hampered by the lack of individual level detail in universal public health systems because budgeted spending is often not attributable to specific individuals. Even rarer is self-reported survey information that helps explain this variation in large samples. In this paper, we link a cross-sectional survey of 267 188 Australians age 45 and over to a panel dataset of annual healthcare costs calculated from several years of hospital, medical and pharmaceutical records. We use this data to distinguish between cost variations due to health shocks and those that are intrinsic (fixed) to an individual over three years. We find that high fixed expenditures are positively associated with age, especially older males, poor health, obesity, smoking, cancer, stroke and heart conditions. Being foreign born, speaking a foreign language at home and low income are more strongly associated with higher time-varying expenditures, suggesting greater exposure to adverse health shocks. Copyright © 2013 John Wiley & Sons, Ltd.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examine the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and examine the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (inter-quartile range) of the percentage difference between the reported and calculated sample size was 0.0% (IQR -4.6%;3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers provided a targeted sample size in trial registries, and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Variation in marital quality in a national sample of divorced women.
James, Spencer L
2015-06-01
Previous work has compared marital quality between stably married and divorced individuals. Less work has examined the possibility of variation among divorcés in trajectories of marital quality as divorce approaches. This study addressed that gap by examining, first, whether distinct trajectories of marital quality can be discerned among women whose marriages ended in divorce and, second, the profile of women who experienced each trajectory. Latent class growth analyses with longitudinal data from a nationally representative sample were used to "look backward" from the time of divorce. Although demographic and socioeconomic variables from this national sample did not predict the trajectories well, nearly 66% of divorced women reported relatively high levels of both happiness and communication and either low or moderate levels of conflict. Future research including personality or interactional patterns may lead to theoretical insights about patterns of marital quality in the years leading to divorce. (c) 2015 APA, all rights reserved.
Lee, Sang-Hee
2005-07-01
This study uses data resampling to test the null hypothesis that the degree of variation in cranial capacity in the Dmanisi hominid sample is within the range of variation of a single species. The statistical significance of the variation in the Dmanisi sample is examined using simulated distributions based on comparative samples of modern humans, chimpanzees, and gorillas. Results show that the maximum difference observed in the Dmanisi sample is unlikely to be found in distributions of female-female pairs from comparative single-species samples. Given that two sexes are represented, the difference in the Dmanisi sample is not enough to reject the null hypothesis of a single species. The results of this study suggest no compelling reason to invoke multiple taxa to explain variation in the cranial capacity of the Dmanisi hominids. (c) 2004 Wiley-Liss, Inc.
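The resampling logic — build a null distribution of pairwise differences from a single-species comparative sample and ask where the observed difference falls — can be sketched as follows (the sample values and the observed difference are synthetic, not the study's data):

```python
import random

rng = random.Random(42)
# synthetic single-species comparative sample of cranial capacities (cc)
species = [rng.gauss(1350, 110) for _ in range(60)]
observed_diff = 150.0  # hypothetical difference between two specimens

# null distribution: absolute difference between two randomly drawn individuals
diffs = [abs(a - b) for a, b in
         (rng.sample(species, 2) for _ in range(10000))]

# one-sided p-value: how often a single species yields a difference this large
p_value = sum(d >= observed_diff for d in diffs) / len(diffs)
print(0.0 <= p_value <= 1.0)
```

A small p-value would argue against a single species; restricting the resampled pairs to presumed females mirrors the study's female-female comparisons.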
Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge
International Nuclear Information System (INIS)
Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W
2013-01-01
This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)
Measurement of the natural variation of 13C/12C isotope ratio in organic samples
International Nuclear Information System (INIS)
Ducatti, C.
1977-01-01
The isotopic ratio analysis for 13C/12C by mass spectrometry using a 'working standard' allows the study of 13C natural variation in organic material, with a total analytical error of less than 0.2%. Equations were derived in order to determine 13C/12C and 18O/16O ratios relative to the 'working standard' CENA-std and to the international standard PDB. Isotope ratio values obtained with samples prepared in two different combustion apparatus were compared; the values obtained by preparing samples through acid decomposition of carbonaceous materials were also compared with the values obtained in different international laboratories. Using the proposed methodology, several leaves collected at different heights from different vegetal species, found inside and outside of the Ducke Forest Reserve, located in the Amazon region, were analysed. It is found that the 13C natural variation depends upon metabolic processes and environmental factors, both of which may be qualified as partial influences on the CO2 cycle in the forest. (author)
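Delta notation, used throughout this abstract, expresses a sample's isotope ratio as a per-mil deviation from a standard; a minimal sketch (the sample ratio below is made up, and the PDB ratio is the commonly cited value):

```python
def delta_permil(ratio_sample, ratio_standard):
    """Delta notation: per-mil deviation of an isotope ratio from a standard."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

R_PDB = 0.0112372  # commonly cited 13C/12C ratio of the PDB standard
print(round(delta_permil(0.0111000, R_PDB), 2))  # ≈ -12.21 per mil
```

Values measured against a working standard such as CENA-std can be re-expressed against PDB by chaining the corresponding ratio factors, which is what the derived equations in the abstract accomplish.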
Characterization of electron microscopes with binary pseudo-random multilayer test samples
Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.
2011-09-01
Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
LOD score exclusion analyses for candidate genes using random population samples.
Deng, H W; Li, J; Recker, R R
2001-05-01
Although extensive analyses have been conducted to test for the importance of candidate genes with random population samples, no formal analyses have been conducted to test against it. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed, and if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative, and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and parallels the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of the vitamin D receptor and estrogen receptor genes underlying the differential risk of osteoporotic fractures.
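The exclusion rule is a likelihood-ratio computation on the log10 scale; a minimal sketch with hypothetical log-likelihoods (the -2 threshold is from the abstract, the numbers are not):

```python
from math import log

def lod_score(loglik_model, loglik_null):
    """LOD = log10 of the likelihood ratio, from natural-log likelihoods."""
    return (loglik_model - loglik_null) / log(10)

# exclusion rule: LOD <= -2 excludes the specified effect at this locus
ll_model, ll_null = -1210.4, -1204.9   # hypothetical log-likelihoods
score = lod_score(ll_model, ll_null)
print(round(score, 2), score <= -2.0)  # → -2.39 True
```

A score of -2 corresponds to the data being 100 times more likely under the no-effect model, the conventional exclusion criterion carried over from linkage mapping.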
Statistical issues in reporting quality data: small samples and casemix variation.
Zaslavsky, A M
2001-12-01
To present two key statistical issues that arise in the analysis and reporting of quality data. Casemix variation is relevant to quality reporting when the units being measured have differing distributions of patient characteristics that also affect the quality outcome. When this is the case, adjustment using stratification or regression may be appropriate. Such adjustments may be controversial when the patient characteristic does not have an obvious relationship to the outcome. Stratified reporting poses problems for sample size and reporting format, but may be useful when casemix effects vary across units. Although there are no absolute standards of reliability, high reliabilities (inter-unit F ≥ 10 or reliability ≥ 0.9) are desirable for distinguishing above- and below-average units. When small or unequal sample sizes complicate reporting, precision may be improved using indirect estimation techniques that incorporate auxiliary information, and 'shrinkage' estimation can help to summarize the strength of evidence about units with small samples. With broader understanding of casemix adjustment and methods for analyzing small samples, quality data can be analysed and reported more accurately.
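The reliability and shrinkage ideas in the abstract can be sketched in a few lines; the variance components and unit sizes below are hypothetical:

```python
def reliability(tau2_between, sigma2_within, n):
    """Reliability of a unit's mean: the share of its variance that is
    true between-unit signal rather than sampling noise."""
    return tau2_between / (tau2_between + sigma2_within / n)

def shrunk_mean(unit_mean, grand_mean, tau2_between, sigma2_within, n):
    """Shrinkage estimation: pull small-sample unit means toward the
    grand mean in proportion to their unreliability."""
    w = reliability(tau2_between, sigma2_within, n)
    return w * unit_mean + (1 - w) * grand_mean

# a unit with only 10 patients is pulled strongly toward the overall mean
print(round(shrunk_mean(60.0, 80.0, tau2_between=25.0,
                        sigma2_within=400.0, n=10), 1))  # → 72.3
```

As n grows, the weight w approaches 1 and the unit's own mean dominates, which is why small or unequal sample sizes are precisely where shrinkage helps.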
International Nuclear Information System (INIS)
Bertschinger, E.
1987-01-01
Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
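One standard way to realize such constrained samples is to draw an unconstrained Gaussian realization and add the covariance-weighted correction that enforces the constraint; a sketch on a small 1-D lattice (the covariance model and constraint are illustrative — the abstract's algorithm targets cosmological initial conditions on 3-D lattices):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
x = np.arange(n)
# assumed squared-exponential covariance on a 1-D lattice
C = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 5.0) ** 2)

# step 1: unconstrained Gaussian random field sample
f = rng.multivariate_normal(np.zeros(n), C + 1e-8 * np.eye(n))

# step 2: impose a single linear constraint f[i] = value, e.g. a peak
# of fixed height at a chosen site, via the conditional-mean correction
i, value = 32, 3.0
f_c = f + C[:, i] * (value - f[i]) / C[i, i]

print(abs(f_c[i] - value) < 1e-9)  # constraint holds
```

The corrected field f_c is a draw from the Gaussian distribution conditioned on the constraint, so the statistics around the constrained site (e.g. a protovoid or supercluster seed) remain consistent with the assumed covariance.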
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, with large eigenvalues biased upward and small eigenvalues biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
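The overdispersion being described is easy to reproduce: draw white data whose true covariance is the identity, and the sample eigenvalues still spread out (a sketch; the Tracy-Widom scaling of the largest eigenvalue's fluctuations is not computed here):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n = 50, 200                       # 50 traits, 200 observations
X = rng.standard_normal((n, p))      # true covariance is the identity
S = X.T @ X / n                      # sample covariance matrix
eig = np.linalg.eigvalsh(S)

# all true eigenvalues equal 1, yet sample eigenvalues spread over roughly
# the Marchenko-Pastur interval [(1-sqrt(p/n))^2, (1+sqrt(p/n))^2]
g = np.sqrt(p / n)
print(eig.min() < 1.0 < eig.max())                               # overdispersed
print((1 - g) ** 2 * 0.7 < eig.min() and eig.max() < (1 + g) ** 2 * 1.3)
```

Mistaking the largest of these purely noise-driven eigenvalues for signal is exactly the error the TW critical value guards against.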
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates the systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
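The sampling scheme itself — one uniform random start, then equidistant sites — is only a few lines (a 1-D sketch; stage-stepping software applies the same ruler along each axis of a 2-D grid):

```python
import random

def systematic_sites(extent, n_sites, rng=random.Random(0)):
    """Systematic random sampling in 1-D: a single random start drawn
    uniformly within one step, then equidistant sites across the extent."""
    step = extent / n_sites
    start = rng.uniform(0, step)
    return [start + k * step for k in range(n_sites)]

sites = systematic_sites(1000.0, 10)
gaps = [b - a for a, b in zip(sites, sites[1:])]
print(len(sites), all(abs(g - 100.0) < 1e-9 for g in gaps))  # → 10 True
```

The single random start is what makes the estimator unbiased, while the fixed spacing is what gives systematic designs their efficiency.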
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 x 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (v) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
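The precision gap the survey simulations document can be reproduced in miniature: organisms confined to habitat patches, with 100 transects allocated either at random or on a one-start aligned grid (the patch layout and densities below are invented for illustration):

```python
import random

rng = random.Random(0)
# organisms confined to habitat patches on [0, 1) (hypothetical layout)
patches = [(0.10, 0.15), (0.40, 0.50), (0.80, 0.82)]

def count(x):
    """Transect count at position x: dense inside patches, zero outside."""
    return 50.0 if any(a <= x < b for a, b in patches) else 0.0

def mean_density(transects):
    return sum(count(x) for x in transects) / len(transects)

n, reps = 100, 2000
rand_est = [mean_density([rng.random() for _ in range(n)])
            for _ in range(reps)]
sys_est = []
for _ in range(reps):
    start = rng.random() / n          # one-start aligned systematic design
    sys_est.append(mean_density([start + k / n for k in range(n)]))

def var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

print(var(sys_est) < var(rand_est))   # systematic is more precise here
```

Note that naive variance estimators applied to a single systematic sample would not reveal this advantage, which is the abstract's second point: estimators that account for inter-transect correlation are needed.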
Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets
International Nuclear Information System (INIS)
Stanek, Jan; Kozminski, Wiktor
2010-01-01
Spectra obtained by application of multidimensional Fourier transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated in simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra with a high dynamic range of peak intensities while preserving the benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D 15N- and 13C-edited NOESY-HSQC spectra of human ubiquitin.
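A minimal on-grid sketch of this kind of iteration — threshold the Fourier transform to keep strong peaks, restore consistency with the randomly sampled points, repeat — is below; it is not the authors' algorithm (which adds statistical peak recognition), and the signal and threshold schedule are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256
t = np.arange(n)
truth = np.cos(2 * np.pi * 20 * t / n) + 0.3 * np.cos(2 * np.pi * 77 * t / n)

measured = rng.choice(n, size=64, replace=False)   # random on-grid sampling
x = np.zeros(n)
x[measured] = truth[measured]

for it in range(60):
    X = np.fft.fft(x)
    thr = np.abs(X).max() * max(0.5 * 0.9 ** it, 0.05)  # relaxing threshold
    X[np.abs(X) < thr] = 0.0           # keep only strong spectral peaks
    x = np.fft.ifft(X).real
    x[measured] = truth[measured]      # enforce agreement with the data

top = int(np.argmax(np.abs(np.fft.fft(x))[: n // 2]))
print(top)  # dominant recovered frequency bin
```

Suppressing the sampling artifacts iteratively lets weak peaks emerge once strong ones are accounted for, which is how a high dynamic range of peak intensities is preserved.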
Davis, P B; Yee, R L; Millar, J
1994-08-01
Medical practice variation is extensive and well documented, particularly for surgical interventions, and raises important questions for health policy. To date, however, little work has been carried out on interpractitioner variation in prescribing activity in the primary care setting. An analytical model of medical variation is derived from the literature and relevant indicators are identified from a study of New Zealand general practice. The data are based on nearly 9,500 completed patient encounter records drawn from over a hundred practitioners in the Waikato region of the North Island, New Zealand. The data set represents a 1% sample of all weekday general practice office encounters in the Hamilton Health District recorded over a 12-month period. Overall levels of prescribing, and the distribution of drug mentions across diagnostic groupings, are broadly comparable to results drawn from international benchmark data. A multivariate analysis is carried out on seven measures of activity in the areas of prescribing volume, script detail, and therapeutic choice. The analysis indicates that patient, practitioner and practice attributes exert little systematic influence on the prescribing task. The principal influences are diagnosis, followed by practitioner identity. The pattern of findings suggests also that the prescribing task cannot be viewed as an undifferentiated activity. It is more usefully considered as a process of decision-making in which 'core' judgements--such as the decision to prescribe and the choice of drug--are highly predictable and strongly influenced by diagnosis, while 'peripheral' features of the task--such as choosing a combination drug or prescribing generically--are less determinate and more subject to the exercise of clinical discretion.
Marquez-Garcia, Josimar; Cruz-Félix, Angel S.; Santiago-Alvarado, Agustin; González-García, Jorge
2017-09-01
Nowadays the elastomer known as polydimethylsiloxane (PDMS, Sylgard 184) has, owing to its physical properties, low cost and easy handling, become a frequently used material for the elaboration of optical components such as variable focal length liquid lenses, optical waveguides, and solid elastic lenses. In recent years, we have been working on the characterization of this material for applications in visual sciences. In this work, we describe the elaboration of PDMS-made samples and present physical and optical properties of the samples obtained by varying synthesis parameters such as the base:curing agent ratio and both curing time and temperature. For the mechanical properties, tensile and compression tests were carried out in a universal testing machine to obtain the respective stress-strain curves; for the optical properties, UV-vis spectroscopy was applied to the samples to obtain transmittance and absorbance curves. The variation of the index of refraction was measured with an Abbe refractometer. Results from the characterization will determine the proper synthesis parameters for the elaboration of tunable refractive surfaces with potential applications in robotics.
Randomized branch sampling to estimate fruit production in Pecan trees cv. 'Barton'
Directory of Open Access Journals (Sweden)
Filemom Manoel Mokochinski
Full Text Available ABSTRACT: Sampling techniques to quantify the production of fruits are still very scarce and create a gap in crop development research. This study was conducted on a rural property in the county of Cachoeira do Sul, RS, to estimate the efficiency of randomized branch sampling (RBS) in quantifying the production of pecan fruit at three different ages (5, 7, and 10 years). Two selection techniques were tested: probability proportional to diameter (PPD) and uniform probability (UP), performed on nine trees, three of each age, randomly chosen. The RBS underestimated fruit production for all ages, and its main drawback was the high sampling error (125.17% for PPD and 111.04% for UP). The UP was regarded as more efficient than the PPD, though both techniques estimated similar production and similar experimental errors. In conclusion, we report that branch sampling was inaccurate for this case study, and new studies are required to produce estimates with smaller sampling error.
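Randomized branch sampling walks one random path from the trunk to a terminal branch and divides the terminal count by the path's selection probability; a toy sketch with an invented tree, showing both selection rules from the abstract (PPD-style, via diameters, and UP):

```python
import random

def rbs_estimate(tree, rng, pps=True):
    """One randomized-branch-sampling path. Fruit is assumed to sit on
    terminal branches; the terminal count divided by the product of the
    branch selection probabilities is unbiased for the tree total."""
    node, q = tree, 1.0
    while node["children"]:
        kids = node["children"]
        w = [k["d"] for k in kids] if pps else [1.0] * len(kids)
        node = rng.choices(kids, weights=w)[0]
        q *= w[kids.index(node)] / sum(w)
    return node["fruit"] / q

# hypothetical pecan tree: branch diameters d and terminal fruit counts
tree = {"d": 30, "children": [
    {"d": 10, "fruit": 40, "children": []},
    {"d": 20, "children": [
        {"d": 8, "fruit": 30, "children": []},
        {"d": 12, "fruit": 70, "children": []}]}]}

rng = random.Random(0)
est = [rbs_estimate(tree, rng) for _ in range(20000)]
print(round(sum(est) / len(est), 1))  # true total is 140 fruits
```

The estimator is unbiased under both rules, but a single path can land far from the total, which is consistent with the large sampling errors the study reports.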
Control Charts for Processes with an Inherent Between-Sample Variation
Directory of Open Access Journals (Sweden)
Eva Jarošová
2018-06-01
Full Text Available A number of processes to which statistical control is applied are subject to various effects that cause random changes in the mean value. The removal of these fluctuations is either technologically impossible or economically disadvantageous under current conditions. The frequent occurrence of signals in the Shewhart chart due to these fluctuations is then undesirable and therefore the conventional control limits need to be extended. Several approaches to the design of the control charts with extended limits are presented in the paper and applied on the data from a real production process. The methods assume samples of size greater than 1. The performance of the charts is examined using the operating characteristic and average run length. The study reveals that in many cases, reducing the risk of false alarms is insufficient.
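Extending the limits amounts to adding the inherent between-sample variance component to the usual within-sample term in the subgroup-mean standard error; a sketch (the variance-components formula is standard, the numbers are invented):

```python
from math import sqrt

def xbar_limits(grand_mean, sigma_within, sigma_between, n, k=3.0):
    """Extended Shewhart x-bar limits: the variance of a subgroup mean
    gains an inherent between-sample component on top of sigma_w^2 / n."""
    se = sqrt(sigma_between ** 2 + sigma_within ** 2 / n)
    return grand_mean - k * se, grand_mean + k * se

lcl, ucl = xbar_limits(100.0, sigma_within=4.0, sigma_between=1.5, n=5)
print(round(lcl, 2), round(ucl, 2))
```

With sigma_between set to zero, these reduce to the conventional limits; omitting the extra term is what produces the frequent false alarms the paper describes.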
Design of Energy Aware Adder Circuits Considering Random Intra-Die Process Variations
Directory of Open Access Journals (Sweden)
Marco Lanuzza
2011-04-01
Full Text Available Energy consumption is one of the main barriers to current high-performance designs. Moreover, the increased variability experienced in advanced process technologies implies further timing yield concerns and therefore intensifies this obstacle. Thus, proper techniques to achieve robust designs are a critical requirement for integrated circuit success. In this paper, the influence of intra-die random process variations is analyzed considering the particular case of the design of energy aware adder circuits. Five well known adder circuits were designed exploiting an industrial 45 nm static complementary metal-oxide semiconductor (CMOS standard cell library. The designed adders were comparatively evaluated under different energy constraints. As a main result, the performed analysis demonstrates that, for a given energy budget, simpler circuits (which are conventionally identified as low-energy slow architectures operating at higher power supply voltages can achieve a timing yield significantly better than more complex faster adders when used in low-power design with supply voltages lower than nominal.
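The timing-yield question in the abstract can be illustrated with a toy Monte Carlo over random, independent per-gate delay variation (all numbers here are invented; real analyses use extracted gate models and spatial correlation):

```python
import random

def timing_yield(nominal_delay, sigma_frac, clock, n_gates=32, trials=20000,
                 rng=random.Random(3)):
    """Fraction of Monte Carlo 'dies' whose critical path (a sum of
    per-gate delays with independent random variation) meets the clock."""
    ok = 0
    for _ in range(trials):
        path = sum(rng.gauss(nominal_delay, sigma_frac * nominal_delay)
                   for _ in range(n_gates))
        ok += path <= clock
    return ok / trials

# a slower, simpler adder with timing margin vs a faster one with little
print(timing_yield(10.0, 0.08, clock=330.0))
print(timing_yield(9.5, 0.08, clock=305.0))
```

The slower design's margin absorbs the random variation, mirroring the paper's finding that simpler architectures at higher supply voltages can win on timing yield under an energy budget.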
Coarse graining from variationally enhanced sampling applied to the Ginzburg–Landau model
Invernizzi, Michele; Valsson, Omar; Parrinello, Michele
2017-01-01
A powerful way to deal with a complex system is to build a coarse-grained model capable of catching its main physical features, while being computationally affordable. Inevitably, such coarse-grained models introduce a set of phenomenological parameters, which are often not easily deducible from the underlying atomistic system. We present a unique approach to the calculation of these parameters, based on the recently introduced variationally enhanced sampling method. It allows us to obtain the parameters from atomistic simulations, providing thus a direct connection between the microscopic and the mesoscopic scale. The coarse-grained model we consider is that of Ginzburg–Landau, valid around a second-order critical point. In particular, we use it to describe a Lennard–Jones fluid in the region close to the liquid–vapor critical point. The procedure is general and can be adapted to other coarse-grained models. PMID:28292890
Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K
2018-01-01
In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in the stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without a burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of the RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS), which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of the RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) allow one to avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our
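The tour/regeneration idea can be sketched on a toy graph: each return to an anchor node closes an independent tour, and a ratio of degree-weighted sums over complete tours estimates the network average without any burn-in (the graph, node ages, and tour count are invented; this is the generic tour estimator, not the paper's RT-estimator):

```python
import random

# small undirected graph as adjacency lists (hypothetical social network)
adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
age = {0: 20, 1: 30, 2: 40, 3: 50, 4: 60}

def tour_estimate(start, n_tours, rng):
    """Run complete random-walk tours that regenerate at `start`; the
    1/degree weights undo the walk's degree bias, so the ratio estimates
    the plain average of f over all nodes."""
    num = den = 0.0
    v, tours = start, 0
    while tours < n_tours:
        v = rng.choice(adj[v])
        num += age[v] / len(adj[v])
        den += 1.0 / len(adj[v])
        if v == start:
            tours += 1                # a regeneration closes a tour
    return num / den

est = tour_estimate(0, 4000, random.Random(0))
print(round(est, 1))                  # true average age is 40
```

Because tours that begin and end at the anchor are independent and identically distributed, no samples need to be discarded while waiting for stationarity.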
Thomas B. Lynch; Jeffrey H. Gove
2013-01-01
Critical height sampling (CHS) estimates cubic volume per unit area by multiplying the sum of critical heights measured on trees tallied in a horizontal point sample (HPS) by the HPS basal area factor. One of the barriers to practical application of CHS is the fact that trees near the field location of the point-sampling sample point have critical heights that occur...
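The estimator described in the first sentence is a one-liner; a sketch with invented critical heights and basal area factor:

```python
def chs_volume_per_ha(critical_heights_m, baf_m2_per_ha):
    """Critical height sampling: cubic volume per hectare equals the
    horizontal point sample's basal area factor times the sum of the
    critical heights of the tallied trees."""
    return baf_m2_per_ha * sum(critical_heights_m)

# three tallied trees (critical heights in m) and a BAF of 4 m^2/ha (made up)
print(round(chs_volume_per_ha([12.4, 9.8, 15.1], 4.0), 1))  # → 149.2
```

The practical barrier the abstract raises is not this arithmetic but measuring critical heights on trees close to the sample point, where the sighting angle becomes very steep.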
Improving ambulatory saliva-sampling compliance in pregnant women: a randomized controlled study.
Directory of Open Access Journals (Sweden)
Julian Moeller
Full Text Available OBJECTIVE: Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate, in pregnant women, strategies to improve ambulatory saliva-sampling compliance and the association between sampling noncompliance and saliva cortisol estimates. METHODS: We instructed 64 pregnant women to collect eight scheduled saliva samples on each of two consecutive days. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System, and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. RESULTS: Self-reported compliance with the saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%; F(1,60) = 17.64, p < 0.001), but the reminder intervention was not (reminders: 68%, without reminders: 72%; F(1,60) = 0.78, p = 0.379). Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p < 0.001. Altered cortisol levels were observed in less objectively compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. CONCLUSIONS: The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest
Variation of the 18O/16O ratio in water samples from branches
International Nuclear Information System (INIS)
Foerstel, H.; Huetzen, H.
1979-06-01
Studies of the water turnover of plants may use the labelling of water by the natural variation of its 18O/16O ratio. The basic value for such a study is the isotope ratio in soil water, which is also represented by the 18O/16O ratio in water samples from stem and branches. During water transport from the soil water reservoir to the leaves of trees, no fractionation of the oxygen isotopes occurs. The oxygen isotope ratio within a single twig varies by about ± ‰ (the variation is given as the standard deviation of the delta values), and within the stem of a large tree by about ±2‰. The results for water from stems of different trees at the site of the Nuclear Research Center Juelich scatter by about ±1‰. The delta values from a larger area (Rur valley - Eifel hills - Mosel valley), collected in October 1978 at the end of the vegetation period, showed a standard deviation between ±2.2‰ (Rur valley) and ±3.6‰ (Eifel hills). The 18O/16O delta values of a beech wood from the Juelich site are in the range of -7.3 to -10.1‰ (mean local precipitation 1974-1977: -7.4‰). At the hill site near Cologne (Bergisches Land, late September 1978) we observed an oxygen isotope ratio of -9.1‰ (groundwater in the neighbourhood between -7.6 and -8.7‰). In October 1978, over an area from the Netherlands to the Mosel valley, we found delta values of branch water between -13.9 (lower Ruhr valley) and -13.1 (Eifel hills to Mosel valley), in comparison to groundwater samples from the same region: -7.55 and -8.39. There was no significant difference between delta values from various species or locations within this area. Groundwater samples should normally represent the 18O/16O ratio of local precipitation. The low delta values of branch water could be due to the rapid uptake, in autumn, of precipitation water of low 18O content into the water transport system of plants. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Jung Yu, Dae [School of Space Research, Kyung Hee University, Yongin 446-701 (Korea, Republic of); Kim, Kihong [Department of Energy Systems Research, Ajou University, Suwon 443-749 (Korea, Republic of)
2013-12-15
We study the effects of a random spatial variation of the plasma density on the mode conversion of electromagnetic waves into electrostatic oscillations in cold, unmagnetized, and stratified plasmas. Using the invariant imbedding method, we calculate precisely the electromagnetic field distribution and the mode conversion coefficient, which is defined to be the fraction of the incident wave power converted into electrostatic oscillations, for the configuration where a numerically generated random density variation is added to the background linear density profile. We repeat similar calculations for a large number of random configurations and take an average of the results. We obtain a peculiar nonmonotonic dependence of the mode conversion coefficient on the strength of randomness. As the disorder increases from zero, the maximum value of the mode conversion coefficient decreases initially, then increases to a maximum, and finally decreases towards zero. The range of the incident angle in which mode conversion occurs increases monotonically as the disorder increases. We present numerical results suggesting that the decrease of mode conversion mainly results from the increased reflection due to the Anderson localization effect originating from disorder, whereas the increase of mode conversion of the intermediate disorder regime comes from the appearance of many resonance points and the enhanced tunneling between the resonance points and the cutoff point. We also find a very large local enhancement of the magnetic field intensity for particular random configurations. In order to obtain high mode conversion efficiency, it is desirable to restrict the randomness close to the resonance region.
Acute stress symptoms during the second Lebanon war in a random sample of Israeli citizens.
Cohen, Miri; Yahav, Rivka
2008-02-01
The aims of this study were to assess prevalence of acute stress disorder (ASD) and acute stress symptoms (ASS) in Israel during the second Lebanon war. A telephone survey was conducted in July 2006 of a random sample of 235 residents of northern Israel, who were subjected to missile attacks, and of central Israel, who were not subjected to missile attacks. Results indicate that ASS scores were higher in the northern respondents; 6.8% of the northern sample and 3.9% of the central sample met ASD criteria. Appearance of each symptom ranged from 15.4% for dissociative to 88.4% for reexperiencing, with significant differences between northern and central respondents only for reexperiencing and arousal. A low ASD rate and a moderate difference between areas subjected and not subjected to attack were found.
International Nuclear Information System (INIS)
Hightower, J.H. III
1994-01-01
Objectives of this field experiment were: (1) to determine whether there was a statistically significant difference between the radon concentrations of samples collected by EPA's standard method, using a syringe, and by an alternative, slow-flow method; (2) to determine whether there was a statistically significant difference between the measured radon concentrations of samples mailed vs. samples not mailed; and (3) to determine whether there was a temporal variation of water radon concentration over a 7-month period. The field experiment was conducted at 9 sites (5 private wells and 4 public wells) at various locations in North Carolina. Results showed that a syringe is not necessary for sample collection, that there was generally no significant radon loss due to mailing samples, and that there was statistically significant evidence of temporal variation in water radon concentrations.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round-trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
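The additive-random-sampling idea can be sketched in a few lines (an illustrative toy, not the authors' φ-OTDR implementation; all parameters are hypothetical): because the sampling instants are randomized, a tone above the mean-rate Nyquist limit no longer aliases coherently and can still be located.

```python
import math, random

def additive_random_times(n, t_mean, jitter, seed=0):
    """Additive random sampling: each interval is t_mean plus a uniform jitter."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += t_mean + rng.uniform(-jitter, jitter)
        times.append(t)
    return times

def tone_power(times, values, f):
    """Correlate the irregular samples against sin/cos at frequency f."""
    c = sum(v * math.cos(2 * math.pi * f * t) for t, v in zip(times, values))
    s = sum(v * math.sin(2 * math.pi * f * t) for t, v in zip(times, values))
    return (c * c + s * s) / len(times)

times = additive_random_times(2000, t_mean=1.0, jitter=0.4)  # mean rate 1 Hz
f_true = 1.3          # above the 0.5 Hz Nyquist limit of the mean sampling rate
values = [math.sin(2 * math.pi * f_true * t) for t in times]
powers = {k / 10: tone_power(times, values, k / 10) for k in range(1, 30)}
best = max(powers, key=powers.get)  # the strongest response sits at the true tone
```

With uniform sampling at the same mean rate, the 1.3 Hz tone would alias into the 0-0.5 Hz band; the random intervals decorrelate the aliases so the true frequency dominates the scanned grid.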
Directory of Open Access Journals (Sweden)
Alireza Goli
2015-09-01
Distribution and optimum allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible; it is therefore important to find the optimum location and distribution of emergency relief resources. When a natural disaster occurs, it is not possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this study, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is tested on randomly generated instances of different sizes. The preliminary results indicate that the proposed method is capable of reaching desirable solutions in a reasonable amount of time.
DEFF Research Database (Denmark)
Veraart, Almut
This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures, such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic 'variance' of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies, where we study the impact of the jump activity, the jump size of the jumps in the price, and the presence of additional independent or dependent jumps in the volatility on the finite sample performance of the various estimators. We find that the finite sample performance of realised variance, and in particular of the log-transformed realised variance, is generally good, whereas…
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
Energy Technology Data Exchange (ETDEWEB)
Wang, Zhehui [Los Alamos; Iaroshenko, O. [Los Alamos; Li, S. [Los Alamos; Liu, T. [Fermilab; Parab, N. [Argonne (main); Chen, W. W. [Purdue U.; Chu, P. [Los Alamos; Kenyon, G. [Los Alamos; Lipton, R. [Fermilab; Sun, K.-X. [Nevada U., Las Vegas
2017-09-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, a signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a) the number of pixels with true X-ray hits is much smaller than the total number of pixels; (b) the X-ray information is redundant; or (c) some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion of signal-to-noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational imaging techniques is expected to facilitate the development and application of high-speed X-ray camera technology.
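A toy sketch of the sparse-readout idea (hypothetical; not the ROPS circuit or its reconstruction pipeline): randomly address a subset of pixels and, for case (a) above, estimate the hit count from the subset.

```python
import random

def random_pixel_sample(frame, k, seed=1):
    """Read out a random subset of k pixel addresses (sparse on-chip sampling)."""
    rng = random.Random(seed)
    h, w = len(frame), len(frame[0])
    addresses = rng.sample([(r, c) for r in range(h) for c in range(w)], k)
    return {(r, c): frame[r][c] for r, c in addresses}

# Toy 100x100 frame in which only 50 pixels carry true X-ray hits (sparse case a).
rng = random.Random(0)
frame = [[0] * 100 for _ in range(100)]
for r, c in rng.sample([(r, c) for r in range(100) for c in range(100)], 50):
    frame[r][c] = 1

readout = random_pixel_sample(frame, k=2000)                 # read 20% of the array
estimated_hits = sum(readout.values()) * (100 * 100) / 2000  # scale up to full frame
```

Reading 20% of the pixels gives an unbiased estimate of the total hit count; real compressed-sensing recovery of the image itself would add a sparsity-aware reconstruction step on top of this sampling.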
Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël
2016-11-17
Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.
Graham, John H; Robb, Daniel T; Poe, Amy R
2012-01-01
Distributed robustness is thought to influence the buffering of random phenotypic variation through the scale-free topology of gene regulatory, metabolic, and protein-protein interaction networks. If this hypothesis is true, then the phenotypic response to the perturbation of particular nodes in such a network should be proportional to the number of links those nodes make with neighboring nodes. This suggests a probability distribution approximating an inverse power-law of random phenotypic variation. Zero phenotypic variation, however, is impossible, because random molecular and cellular processes are essential to normal development. Consequently, a more realistic distribution should have a y-intercept close to zero in the lower tail, a mode greater than zero, and a long (fat) upper tail. The double Pareto-lognormal (DPLN) distribution is an ideal candidate distribution. It consists of a mixture of a lognormal body and upper and lower power-law tails. If our assumptions are true, the DPLN distribution should provide a better fit to random phenotypic variation in a large series of single-gene knockout lines than other skewed or symmetrical distributions. We fit a large published data set of single-gene knockout lines in Saccharomyces cerevisiae to seven different probability distributions: DPLN, right Pareto-lognormal (RPLN), left Pareto-lognormal (LPLN), normal, lognormal, exponential, and Pareto. The best model was judged by the Akaike Information Criterion (AIC). Phenotypic variation among gene knockouts in S. cerevisiae fits a double Pareto-lognormal (DPLN) distribution better than any of the alternative distributions, including the right Pareto-lognormal and lognormal distributions. A DPLN distribution is consistent with the hypothesis that developmental stability is mediated, in part, by distributed robustness, the resilience of gene regulatory, metabolic, and protein-protein interaction networks. Alternatively, multiplicative cell growth, and the mixing of
International Nuclear Information System (INIS)
Amendola, A.; Astolfi, M.; Lisanti, B.
1983-01-01
The report describes how to use the following codes: MUP (Monte Carlo Uncertainty Propagation) for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification, and study of selected ranges of the variable space; CEC-DES (Central Composite Design) for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design) for experimental designs based on the Latin Hypercube Sampling technique. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis, and system identification problems.
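The Latin Hypercube Sampling technique on which STRADE is based can be illustrated in a few lines (a generic sketch, not the STRADE code itself): every dimension is cut into n equal strata, and each stratum is sampled exactly once, at a random position within it.

```python
import random

def latin_hypercube(n, dims, seed=0):
    """Latin Hypercube Sampling on the unit cube: for every dimension, each
    of the n strata is hit exactly once, at a random position inside it."""
    rng = random.Random(seed)
    columns = []
    for _ in range(dims):
        strata = list(range(n))
        rng.shuffle(strata)                       # random pairing across dimensions
        columns.append([(s + rng.random()) / n for s in strata])
    return [list(point) for point in zip(*columns)]  # n points, `dims` coords each

sample = latin_hypercube(10, 3)  # 10 design points in 3 dimensions
```

Unlike simple random sampling, every tenth of every axis is guaranteed to be covered, which is what makes the design efficient for uncertainty propagation with few runs.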
Event-triggered synchronization for reaction-diffusion complex networks via random sampling
Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng
2018-04-01
In this paper, the synchronization problem of reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data is sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of controller updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability of a class of reaction-diffusion complex dynamical systems with time delay. By using the matrix inequality technique and the Lyapunov method, synchronization conditions for the RDCNs are derived, which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy naturally excludes Zeno behavior. Finally, a numerical example is given to verify the obtained results.
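The event-triggering idea can be illustrated independently of the network model (a schematic sketch, not the paper's controller): a sampled state is transmitted to the controller only when it deviates from the last transmitted value by more than a trigger threshold, which is what reduces the communication load.

```python
def event_triggered_updates(samples, threshold):
    """Event-triggered transmission: forward a sample to the controller only
    when it differs from the last transmitted value by more than `threshold`."""
    sent = [samples[0]]                 # the first sample is always transmitted
    for x in samples[1:]:
        if abs(x - sent[-1]) > threshold:
            sent.append(x)              # trigger fires: update the controller
    return sent

# Five random samples of a scalar state; only three cross the trigger.
transmitted = event_triggered_updates([0, 0.1, 0.3, 0.35, 1.0], 0.25)
# → [0, 0.3, 1.0]
```

Avoiding Zeno behavior, i.e. guaranteeing a positive minimum time between triggers, is the nontrivial part the paper proves; this sketch only shows the triggering rule itself.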
Energy Technology Data Exchange (ETDEWEB)
Tucker, J.D.; Christensen, M.L.; Strout, C.L.; McGee, K.A.; Carrano, A.V.
1987-01-01
The variation in lymphocyte sister chromatid exchange (SCE) frequency was investigated in healthy nonsmokers who were not taking any medication. Two separate studies were undertaken. In the first, blood was drawn from four women twice a week for 8 weeks. These donors recorded the onset and termination of menstruation and times of illness. In the second study, blood was obtained from two women and two men for 5 consecutive days on two separate occasions initiated 14 days apart. Analysis of the mean SCE frequencies in each study indicated that significant temporal variation occurred in each donor, and that more variation occurred in the longer study. Some of the variation was found to be associated with the menstrual cycle. In the daily study, most of the variation appeared to be random, but occasional day-to-day changes occurred that were greater than those expected by chance. To determine how well a single SCE sample estimated the pooled mean for each donor in each study, the authors calculated the number of samples that encompassed that donor's pooled mean within one or more standard errors. For both studies, about 75% of the samples encompassed the pooled mean within 2 standard errors. An analysis of high-frequency cells (HFCs) was also undertaken. The results of each study indicate that the proportion of HFCs (compared using Fisher's exact test) is significantly more constant than the means (compared using the t-test). These results, coupled with our previous work, suggest that HFC analysis may be the method of choice when analyzing data from human population studies.
International Nuclear Information System (INIS)
Jeong, Hae-Yong; Park, Moon-Ghu
2015-01-01
In most existing evaluation methodologies, which follow a conservative approach, the most conservative initial conditions are searched for each transient scenario through exhaustive assessment of wide operating windows or of the limiting conditions for operation (LCO) allowed by the operating guidelines. In this procedure, a user effect could be involved, and considerable time and human resources are consumed. In the present study, we investigated a more effective statistical method for the selection of the most conservative initial condition by means of random sampling of the operating parameters affecting the initial conditions. A method for the determination of initial conditions based on random sampling of plant design parameters is proposed. This method is expected to be applied to the selection of the most conservative initial plant conditions in safety analysis using a conservative evaluation methodology. In the method, it is suggested that the initial conditions of reactor coolant flow rate, pressurizer level, pressurizer pressure, and SG level be adjusted by controlling the pump rated flow and the setpoints of the PLCS, PPCS, and FWCS, respectively. The proposed technique is expected to help eliminate the human factors introduced in the conventional safety analysis procedure and to reduce the human resources invested in the safety evaluation of nuclear power plants.
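A schematic of such a random-sampling search (illustrative only; the parameter names, ranges, and the response metric below are hypothetical, not from the methodology): draw operating parameters uniformly within their allowed windows and keep the draw that maximizes a conservative response metric.

```python
import random

def most_conservative_initial_condition(ranges, response, n=1000, seed=0):
    """Random sampling of operating parameters within their allowed ranges
    (LCO windows); the draw maximizing a conservative `response` metric is
    taken as the limiting initial condition."""
    rng = random.Random(seed)
    draws = [{p: rng.uniform(lo, hi) for p, (lo, hi) in ranges.items()}
             for _ in range(n)]
    return max(draws, key=response)

# Hypothetical parameters (normalized flow, pressure in MPa) and a toy
# response in which higher pressure and lower flow are more conservative.
ranges = {"flow_rate": (0.95, 1.05), "pzr_pressure": (15.2, 15.8)}
worst = most_conservative_initial_condition(
    ranges, lambda d: d["pzr_pressure"] - d["flow_rate"])
```

In practice the response would come from a transient simulation rather than a closed-form expression, but the selection logic is the same.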
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental, and earth sciences. Perhaps the most common strategy for tackling such problems is to set up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. The method is also compared to a numerical integration solution for a two-source situation where source variability is also included. A general observation from this examination is that the variability of the source profiles affects not only the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
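For the simplest instance of this setup, two sources and one marker, an RS sketch might look as follows (illustrative; the marker values are hypothetical, e.g. δ¹³C of two combustion sources): instead of solving the mixing equation once with fixed end-members, draw the end-member values from their distributions and solve repeatedly.

```python
import random, statistics

def source_fractions(mix, src1, src2, n=10000, seed=0):
    """Random-sampling source apportionment for a two-source, one-marker
    system: draw the end-member marker values (mean, sd) from Gaussians and
    solve f*m1 + (1-f)*m2 = mix each time, keeping physical solutions."""
    rng = random.Random(seed)
    fractions = []
    for _ in range(n):
        m1 = rng.gauss(*src1)
        m2 = rng.gauss(*src2)
        if m1 == m2:
            continue
        f = (mix - m2) / (m1 - m2)
        if 0.0 <= f <= 1.0:          # discard unphysical draws
            fractions.append(f)
    return fractions

# Hypothetical marker values for the mixture and the two source end-members.
f = source_fractions(mix=-26.0, src1=(-24.0, 0.5), src2=(-29.0, 0.5))
f_median = statistics.median(f)
```

The spread of the returned fractions is the uncertainty the algebraic single-solve hides, and, as the abstract notes, truncating to the physical range can also shift the mean/median away from the deterministic solution.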
Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling
Directory of Open Access Journals (Sweden)
Hyun-Joo Oh
2017-01-01
This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) method with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times in total, and each time it was applied to the training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences repeatedly, ten times. Ten landslide susceptibility maps were obtained from the integration of the causative factors with their assigned FR scores. The landslide susceptibility maps were validated using each validation dataset. The FR method achieved susceptibility accuracies from 89.48% to 93.21%, i.e., above 89% in every run. Moreover, the ten-fold iterative FR modeling may contribute to a better understanding of the regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis, and the approach can also be applied extensively to other areas.
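The FR score itself is simple to compute (a generic sketch on a toy raster, not the paper's thirteen-factor analysis): for each class of a causative factor, it is the share of landslide cells falling in that class divided by the share of all cells in that class, so FR > 1 marks classes over-represented among landslides.

```python
def frequency_ratio(factor_classes, landslide_mask):
    """Frequency ratio per class of one causative factor:
    (share of landslide cells in the class) / (share of all cells in the class)."""
    total = len(factor_classes)
    slides = sum(landslide_mask)
    fr = {}
    for cls in set(factor_classes):
        in_cls = [i for i, c in enumerate(factor_classes) if c == cls]
        cls_slides = sum(landslide_mask[i] for i in in_cls)
        fr[cls] = (cls_slides / slides) / (len(in_cls) / total)
    return fr

# Toy raster flattened to 1-D: a slope class per cell, plus landslide occurrences.
classes   = ['steep'] * 20 + ['gentle'] * 80
landslide = [1] * 10 + [0] * 10 + [1] * 5 + [0] * 75
fr = frequency_ratio(classes, landslide)   # steep cells are over-represented
```

A susceptibility map is then the cell-wise sum of the FR scores of every factor; the iterative random sampling in the paper repeats this with ten different training/validation splits.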
A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling
Directory of Open Access Journals (Sweden)
Ying Yan
2017-01-01
Due to the complexity of systems and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the idea of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty, giving it good compatibility, and it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained objectively throughout the processing procedure. The construction of the cumulative probability function and the random sampling process require no human intervention or judgment and can easily be implemented in computer programs, giving the method a clear advantage in evaluation practice for large index systems.
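A minimal sketch of the final Monte Carlo weighting step (illustrative; the interval-evidence fusion that precedes it is omitted, and the judgment values are hypothetical): each expert's judgment on an index is an interval, and weights are obtained by repeatedly drawing from every interval and normalizing.

```python
import random, statistics

def monte_carlo_weights(interval_judgments, n=5000, seed=0):
    """Index weights via Monte Carlo random sampling: every expert judgment
    is an interval (lo, hi); draw a value from each interval, average per
    index, normalize the draws to sum to one, and return the mean weights."""
    rng = random.Random(seed)
    totals = {name: 0.0 for name in interval_judgments}
    for _ in range(n):
        scores = {name: statistics.fmean(rng.uniform(lo, hi) for lo, hi in ivs)
                  for name, ivs in interval_judgments.items()}
        total = sum(scores.values())
        for name, score in scores.items():
            totals[name] += score / total
    return {name: t / n for name, t in totals.items()}

# Hypothetical interval judgments from two experts on two indices.
judgments = {"safety": [(0.7, 0.9), (0.6, 0.8)],
             "cost":   [(0.2, 0.4), (0.3, 0.5)]}
weights = monte_carlo_weights(judgments)
```

Because the intervals are sampled rather than collapsed to midpoints, the width of each expert's interval (their epistemic uncertainty) propagates into the resulting weights.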
International Nuclear Information System (INIS)
Eperon, Isabelle; Vassilakos, Pierre; Navarria, Isabelle; Menoud, Pierre-Alain; Gauthier, Aude; Pache, Jean-Claude; Boulvain, Michel; Untiet, Sarah; Petignat, Patrick
2013-01-01
To evaluate if human papillomavirus (HPV) self-sampling (Self-HPV) using a dry vaginal swab is a valid alternative for HPV testing. Women attending colposcopy clinic were recruited to collect two consecutive Self-HPV samples: a Self-HPV using a dry swab (S-DRY) and a Self-HPV using a standard wet transport medium (S-WET). These samples were analyzed for HPV using real time PCR (Roche Cobas). Participants were randomized to determine the order of the tests. Questionnaires assessing preferences and acceptability for both tests were conducted. Subsequently, women were invited for colposcopic examination; a physician collected a cervical sample (physician-sampling) with a broom-type device and placed it into a liquid-based cytology medium. Specimens were then processed for the production of cytology slides and a Hybrid Capture HPV DNA test (Qiagen) was performed from the residual liquid. Biopsies were performed if indicated. Unweighted kappa statistics (κ) and McNemar tests were used to measure the agreement among the sampling methods. A total of 120 women were randomized. Overall HPV prevalence was 68.7% (95% Confidence Interval (CI) 59.3–77.2) by S-WET, 54.4% (95% CI 44.8–63.9) by S-DRY and 53.8% (95% CI 43.8–63.7) by HC. Among paired samples (S-WET and S-DRY), the overall agreement was good (85.7%; 95% CI 77.8–91.6) and the κ was substantial (0.70; 95% CI 0.57-0.70). The proportion of positive type-specific HPV agreement was also good (77.3%; 95% CI 68.2-84.9). No differences in sensitivity for cervical intraepithelial neoplasia grade one (CIN1) or worse between the two Self-HPV tests were observed. Women reported the two Self-HPV tests as highly acceptable. Self-HPV using dry swab transfer does not appear to compromise specimen integrity. Further study in a large screening population is needed. ClinicalTrials.gov: http://clinicaltrials.gov/show/NCT01316120
Kristensen, Anne F; Kristensen, Søren R; Falkmer, Ursula; Münster, Anna-Marie B; Pedersen, Shona
2018-05-01
The Calibrated Automated Thrombography (CAT) is an in vitro thrombin generation (TG) assay that holds promise as a valuable tool within clinical diagnostics. However, the technique has a considerable analytical variation, and we therefore systematically investigated the analytical and between-subject variation of CAT. Moreover, we assessed the application of an internal standard for normalization to diminish variation. 20 healthy volunteers donated one blood sample which was subsequently centrifuged, aliquoted, and stored at -80 °C prior to analysis. The analytical variation was determined over eight runs, where plasma from the same seven volunteers was processed in triplicate; for the between-subject variation, TG analysis was performed on plasma from all 20 volunteers. The trigger reagents used for the TG assays included both PPP reagent containing 5 pM tissue factor (TF) and PPPlow with 1 pM TF. Plasma drawn from a single donor was applied to all plates as an internal standard for each TG analysis, which was subsequently used for normalization. The total analytical variation for TG analysis performed with the PPPlow reagent is 3-14%, and 9-13% for the PPP reagent. This variation can be modestly reduced by using an internal standard, but mainly for the ETP (endogenous thrombin potential). The between-subject variation is higher when using PPPlow than PPP, and this variation is considerably higher than the analytical variation. TG thus has a rather high inherent analytical variation, which is nevertheless considerably lower than the between-subject variation when using PPPlow as the reagent.
International Nuclear Information System (INIS)
Ahn, Peter H.; Ahn, Andrew I.; Lee, C. Joe; Shen Jin; Miller, Ekeni; Lukaj, Alex; Milan, Elissa; Yaparpalvi, Ravindra; Kalnicki, Shalom; Garg, Madhur K.
2009-01-01
Purpose: With 54 degrees of freedom from the skull to mandible to C7, ensuring adequate immobilization for head-and-neck radiotherapy (RT) is complex. We quantify variations in skull, mandible, and cervical spine movement between RT sessions. Methods and Materials: Twenty-three sequential head-and-neck RT patients underwent serial computed tomography. Patients underwent planned rescanning at 11, 22, and 33 fractions for a total of 93 scans. Coordinates of multiple bony elements of the skull, mandible, and cervical spine were used to calculate rotational and translational changes of bony anatomy compared with the original planning scan. Results: Mean translational and rotational variations on rescanning were negligible, but showed a wide range. Changes in scoliosis and lordosis of the cervical spine between fractions showed similar variability. There was no correlation between positional variation and fraction number and no strong correlation with weight loss or skin separation. Semi-independent rotational and translation movement of the skull in relation to the lower cervical spine was shown. Positioning variability measured by means of vector displacement was largest in the mandible and lower cervical spine. Conclusions: Although only small overall variations in position between head-and-neck RT sessions exist on average, there is significant random variation in patient positioning of the skull, mandible, and cervical spine elements. Such variation is accentuated in the mandible and lower cervical spine. These random semirigid variations in positioning of the skull and spine point to a need for improved immobilization and/or confirmation of patient positioning in RT of the head and neck.
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys such as BOSS usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data, which necessarily introduces fiducial signals of fluctuations into the random samples, weakening the BAO signals, if the cosmic variance cannot be ignored. We propose a smooth function of redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals has been improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the current precision of cosmological parameter measurements, such improvements will be valuable for future measurements of galaxy clustering.
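The proposed strategy, drawing random-catalogue redshifts from a smoothed fit to n(z) rather than reusing the data redshifts directly, can be sketched as follows (a toy moving-average smoother standing in for the paper's smooth function):

```python
import bisect, random, statistics

def smooth_nz_randoms(data_z, n_random, bins=20, seed=0):
    """Populate a random catalogue from a smoothed n(z): histogram the data
    redshifts, smooth the counts with a 3-bin moving average, then draw
    random redshifts via inverse-CDF sampling of the smoothed histogram."""
    zmin, zmax = min(data_z), max(data_z)
    width = (zmax - zmin) / bins
    counts = [0] * bins
    for z in data_z:
        counts[min(int((z - zmin) / width), bins - 1)] += 1
    smooth = [statistics.fmean(counts[max(0, i - 1):i + 2]) for i in range(bins)]
    cdf, running = [], 0.0
    for c in smooth:
        running += c
        cdf.append(running)
    rng = random.Random(seed)
    out = []
    for _ in range(n_random):
        i = bisect.bisect_left(cdf, rng.uniform(0.0, cdf[-1]))
        out.append(zmin + (i + rng.random()) * width)   # uniform within the bin
    return out

# Toy "survey": 5000 data redshifts clipped to [0.2, 0.8], 20000 randoms.
rng = random.Random(42)
data_z = [min(max(rng.gauss(0.5, 0.1), 0.2), 0.8) for _ in range(5000)]
randoms = smooth_nz_randoms(data_z, 20000)
```

The smoothing is the point: shuffling the data redshifts into the randoms would copy the data's radial fluctuations (including cosmic variance) into the random catalogue and partially cancel the clustering signal being measured.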
Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A
2001-01-01
Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
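The T-score arithmetic behind this sensitivity is simple: the same BMD measurement lands at a different distance from the mean depending on which reference range supplies the mean and SD. A small sketch with hypothetical reference values (all numbers below are invented for illustration, not taken from the study):

```python
def t_score(bmd, ref_mean, ref_sd):
    """Position of an individual's BMD relative to a reference range, in SD units."""
    return (bmd - ref_mean) / ref_sd

# Hypothetical spine BMD reference ranges (g/cm^2) from three sampling schemes.
refs = {"population": (1.00, 0.10), "volunteer": (1.05, 0.10), "healthy": (1.02, 0.10)}
bmd = 0.80
scores = {name: t_score(bmd, mean, sd) for name, (mean, sd) in refs.items()}
# With these illustrative numbers, the same measurement is classified as
# osteoporotic (T <= -2.5) against the volunteer-derived range but not
# against the population-derived one.
```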
Variation in orgasm occurrence by sexual orientation in a sample of U.S. singles.
Garcia, Justin R; Lloyd, Elisabeth A; Wallen, Kim; Fisher, Helen E
2014-11-01
Despite recent advances in understanding orgasm variation, little is known about ways in which sexual orientation is associated with men's and women's orgasm occurrence. To assess orgasm occurrence during sexual activity across sexual orientation categories. Data were collected by Internet questionnaire from 6,151 men and women (ages 21-65+ years) as part of a nationally representative sample of single individuals in the United States. Analyses were restricted to a subsample of 2,850 singles (1,497 men, 1,353 women) who had experienced sexual activity in the past 12 months. Participants reported their sex/gender, self-identified sexual orientation (heterosexual, gay/lesbian, bisexual), and what percentage of the time they experience orgasm when having sex with a familiar partner. Mean occurrence rate for experiencing orgasm during sexual activity with a familiar partner was 62.9% among single women and 85.1% among single men, which was significantly different (F(1,2848) = 370.6, P < 0.001). For men, mean occurrence rate of orgasm did not vary significantly by sexual orientation: heterosexual men 85.5%, gay men 84.7%, bisexual men 77.6% (F(2,1494) = 2.67, P = 0.07, η² = 0.004). For women, however, mean occurrence rate of orgasm varied significantly by sexual orientation: heterosexual women 61.6%, lesbian women 74.7%, bisexual women 58.0% (F(2,1350) = 10.95, P < 0.001). These findings suggest that women, regardless of sexual orientation, have less predictable, more varied orgasm experiences than do men and that for women, but not men, the likelihood of orgasm varies with sexual orientation. These findings demonstrate the need for further investigations into the comparative sexual experiences and sexual health outcomes of sexual minorities. © 2014 International Society for Sexual Medicine.
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication of the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the estimated transition probabilities and steady states can differ widely from the true values if one uses the standard deterministic approach for noisy measurements. This supports our argument that, in the analysis of FACS data, the observed state should be treated as a random variable. The second problem we address concerns the consequences of estimating the probability of a cell being in a particular state from measurements of a small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
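For the noiseless core of this setting, the ML estimator of a Markov transition matrix is just the matrix of observed transition counts with each row normalized; the paper's point is that plugging noisy FACS counts into this deterministic recipe can go badly wrong. A stdlib sketch of the count-based estimator on a simulated two-state chain (the chain and its probabilities are invented for illustration):

```python
import random
random.seed(7)

P = [[0.9, 0.1], [0.2, 0.8]]  # true transition matrix of a two-state chain

# Simulate a long state sequence from P.
states = [0]
for _ in range(200000):
    s = states[-1]
    states.append(0 if random.random() < P[s][0] else 1)

# ML estimate: count observed transitions and normalize each row.
counts = [[0, 0], [0, 0]]
for a, b in zip(states, states[1:]):
    counts[a][b] += 1
P_hat = [[c / sum(row) for c in row] for row in counts]
```

With exact state observations the estimate converges to P; the paper studies what happens when the counts themselves are noisy.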
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and material-informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
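RANSAC's core loop (fit on a minimal random subset, score by consensus, refit on the best inlier set) is compact enough to sketch without any imaging or QSAR machinery. A stdlib toy on 1-D regression data with injected gross outliers; the data, threshold, and iteration count are arbitrary, and a real QSAR application would fit a multivariate model over molecular descriptors:

```python
import random
random.seed(0)

# Toy "activity vs. descriptor" data: y = 2x + 1 with mild noise plus gross outliers.
xs = [i / 10 for i in range(50)]
ys = [2 * x + 1 + random.gauss(0, 0.05) for x in xs]
for i in (5, 20, 40):
    ys[i] += 10.0                      # injected outliers

data = list(zip(xs, ys))

def fit_line(points):
    """Ordinary least-squares line a + b*x through the given points."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return (sy - b * sx) / n, b

best_inliers = []
for _ in range(100):                   # RANSAC iterations
    a, b = fit_line(random.sample(data, 2))      # fit on a minimal subset
    inliers = [(x, y) for x, y in data if abs(y - (a + b * x)) < 0.3]
    if len(inliers) > len(best_inliers):
        best_inliers = inliers
a, b = fit_line(best_inliers)          # final refit on the consensus set
```

The refit recovers the clean relationship; the discarded points are exactly the outlier-removal behaviour the abstract describes.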
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Directory of Open Access Journals (Sweden)
Wang Yanqing
2014-04-01
During environment testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerances method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying characteristic of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. First, the estimation indexes are obtained, including the estimated interval, uncertainty, value, error and reliability. GBM is then applied to data from a single flight test of a certain aircraft. Finally, in order to evaluate the estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in testing analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is shown to be 100% at the given confidence level.
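The bootstrap component of GBM is the standard percentile bootstrap: resample the small sample with replacement, recompute the statistic, and read the interval off the empirical distribution. A stdlib sketch with invented amplitude values; the grey-model extrapolation that gives GBM its frequency-varying behaviour is not reproduced here:

```python
import random
import statistics
random.seed(1)

# A small sample of vibration amplitudes (hypothetical units).
data = [9.8, 10.2, 10.1, 9.7, 10.4]

# Percentile bootstrap: resample with replacement, collect the statistic,
# and take empirical quantiles as the estimated interval.
boot_means = sorted(statistics.mean(random.choices(data, k=len(data)))
                    for _ in range(5000))
lo, hi = boot_means[int(0.025 * 5000)], boot_means[int(0.975 * 5000)]
```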
International Nuclear Information System (INIS)
Fouz, R.; Vilar, M.J.; Yus, E.; Sanjuán, M.L.; Diéguez, F.J.
2016-01-01
The objective of this study was to investigate the variability in cow's milk somatic cell counts (SCC) depending on the type of milk meter used by dairy farms for official milk recording. The study was performed in 2011 and 2012 in the major cattle area of Spain. In total, 137,846 lactations of Holstein-Friesian cows were analysed at 1,912 farms. A generalised least squares regression model was used for data analysis. The model showed that the milk meter had a substantial effect on the SCC for individual milk samples obtained for official milk recording. The results suggested an overestimation of the SCC in milk samples from farms that had electronic devices in comparison with farms that used portable devices and underestimation when volumetric meters are used. A weak positive correlation was observed between the SCC and the percentage of fat in individual milk samples. The results underline the importance of considering this variable when using SCC data from milk recording in the dairy herd improvement program or in quality milk programs. (Author)
Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P
2016-12-01
Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Silk, M.G.
1986-04-01
The AECL random coil seal is to be used as a Nuclear Safeguards seal to deter and detect tampering with nuclear material in store. To be effective the ultrasonic signature from the seal must remain constant and be different from that of other seals. Angular variations in the ultrasonic response from certain seals have, however, been observed and the programme of study reported here has been carried out in order to clarify the source of this variation. It is shown that the variation observed may most probably be attributed to the ultrasonic probes used in the investigation and, in particular, to deviation of the probe beam from circularity. However it is probable that the angle of the beam with respect to the probe case (squint) is also a contributory factor. In addition, to reduce the degree of angular variation it is important to exclude air bubbles and to ensure that the coil is placed as centrally in the beam as possible. It is anticipated that the exclusion of air bubbles will be easier in the field than in the laboratory studies. The need to place the seal reasonably centrally with respect to the beam places some minor limits on the coil design and also makes it essential that the probe fits closely into its holder in the seal as any slackness may give rise to signature variations. (author)
Song, Zhuoyi; Zhou, Yu; Juusola, Mikko
2016-01-01
Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
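The dependence of multi-photon hits on the number of sampling units can be checked directly: scatter photons uniformly over the microvilli and count how many land on an already-hit unit within one integration window. A stdlib sketch (window size, photon counts, and microvillus numbers are illustrative; the full RandPAM module also models refractory recovery, which is omitted here):

```python
import random
random.seed(3)

def multi_hit_fraction(n_microvilli, n_photons, trials=200):
    """Fraction of photons landing on a microvillus already hit within the
    same window; these are the hits that sum sublinearly
    (the quantum-gain-nonlinearity of the abstract)."""
    wasted = 0
    for _ in range(trials):
        hit = set()
        for _ in range(n_photons):
            m = random.randrange(n_microvilli)
            if m in hit:
                wasted += 1
            hit.add(m)
    return wasted / (trials * n_photons)

few = multi_hit_fraction(300, 100)     # few sampling units: strong nonlinearity
many = multi_hit_fraction(30000, 100)  # many sampling units: marginal effect
```

With tens of thousands of microvilli the multi-hit fraction stays well below 1%, matching the abstract's claim that the effect is marginal in fly photoreceptors.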
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
Directory of Open Access Journals (Sweden)
Khosro Mehdi Khanlou
2011-01-01
Cost reduction in plant breeding and conservation programs depends largely on correctly defining the minimal sample size required for the trustworthy assessment of intra- and inter-cultivar genetic variation. White clover, an important pasture legume, was chosen for studying this aspect. In clonal plants such as this, an appropriate sampling scheme eliminates the redundant analysis of identical genotypes. The aim was to define an optimal sampling strategy, i.e., the minimum sample size and appropriate sampling scheme for white clover cultivars, by using AFLP data (283 loci) from three popular types. A grid-based sampling scheme, with an interplant distance of at least 40 cm, was sufficient to avoid any excess in replicates. Simulations revealed that the number of samples substantially influenced genetic diversity parameters. When using fewer than 15 plants per cultivar, the expected heterozygosity (He) and Shannon diversity index (I) were greatly underestimated, whereas with 20, more than 95% of total intra-cultivar genetic variation was covered. Based on AMOVA, a 20-plant sample was apparently sufficient to accurately quantify individual genetic structuring. The recommended sampling strategy facilitates the efficient characterization of diversity in white clover, for both conservation and exploitation.
Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.
2018-04-01
We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through imposition of the total variation regularization, subsurface structures presenting with sharp discontinuities are preserved better than when using a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
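The iteratively reweighted least-squares idea is easiest to see in one dimension: the non-smooth TV penalty sum |Dx| is replaced at each iteration by a weighted quadratic with weights 1/sqrt((Dx)^2 + eps), which has a closed-form solve. A small numpy sketch on a 1-D denoising problem; the 3-D gravity operator, randomized GSVD, and alternating-direction weighting of the paper are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x_true = np.where(np.arange(n) < 50, 1.0, 3.0)   # sharp discontinuity
y = x_true + rng.normal(0, 0.2, n)               # noisy observations

D = np.diff(np.eye(n), axis=0)                   # first-difference operator
lam, eps = 1.0, 1e-6

# IRLS: each pass solves  min ||x - y||^2 + lam * sum_j w_j (Dx)_j^2
# with w re-derived from the current |Dx|, approximating the TV penalty.
x = y.copy()
for _ in range(30):
    w = 1.0 / np.sqrt((D @ x) ** 2 + eps)
    x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
```

The recovered signal is flat within each segment while the jump survives, which is the edge-preserving behaviour the abstract contrasts with minimum-structure inversion.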
International Nuclear Information System (INIS)
Li, Y.; Chappell, A.; Nyamdavaa, B.; Yu, H.; Davaasuren, D.; Zoljargal, K.
2015-01-01
The ¹³⁷Cs technique for estimating net time-integrated soil redistribution is valuable for understanding the factors controlling soil redistribution by all processes. The literature on this technique is dominated by studies of individual fields and describes its typically time-consuming nature. We contend that the community making these studies has inappropriately assumed that many ¹³⁷Cs measurements are required and hence that estimates of net soil redistribution can only be made at the field scale. Here, we support future studies of ¹³⁷Cs-derived net soil redistribution in applying their often limited resources across scales of variation (field, catchment, region, etc.) without compromising the quality of the estimates at any scale. We describe a hybrid, design-based and model-based, stratified random sampling design with composites to estimate the sampling variance, and a cost model for fieldwork and laboratory measurements. Geostatistical mapping of net (1954–2012) soil redistribution as a case study on the Chinese Loess Plateau is compared with estimates for several other sampling designs popular in the literature. We demonstrate the cost-effectiveness of the hybrid design for spatial estimation of net soil redistribution. To demonstrate the limitations of current sampling approaches in cutting across scales of variation, we extrapolate our estimate of net soil redistribution across the region and show that, for the same resources, estimates from many fields could have been provided, which would elucidate the cause of differences within and between regional estimates. We recommend that future studies evaluate carefully the sampling design to consider the opportunity to investigate ¹³⁷Cs-derived net soil redistribution across scales of variation.
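The design-based core of such a scheme, stratified random sampling with an estimate of the sampling variance, fits in a few lines: stratum weights multiply the stratum means, and squared weights scale each stratum's variance of the mean. All numbers below are invented for illustration, and the composites, cost model, and geostatistics of the paper are omitted:

```python
import random
import statistics
random.seed(5)

# Hypothetical 137Cs inventories (Bq m^-2) for units in three landscape strata.
strata = {
    "upslope":   [random.gauss(2200, 150) for _ in range(400)],
    "midslope":  [random.gauss(2500, 200) for _ in range(400)],
    "footslope": [random.gauss(3100, 250) for _ in range(200)],
}
N = sum(len(units) for units in strata.values())

est, var = 0.0, 0.0
for units in strata.values():
    sample = random.sample(units, 10)          # simple random sample per stratum
    w = len(units) / N                         # stratum weight (areal share)
    est += w * statistics.mean(sample)
    var += w ** 2 * statistics.variance(sample) / len(sample)
se = var ** 0.5                                # standard error of the estimate
```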
Taylor, Wendy; Stacey, Kaye
2014-01-01
This article presents "The Two Children Problem," published by Martin Gardner, who wrote a famous and widely-read math puzzle column in the magazine "Scientific American," and a problem presented by puzzler Gary Foshee. This paper explains the paradox of Problems 2 and 3 and many other variations of the theme. Then the authors…
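Gardner's paradox and Foshee's Tuesday variant both come down to which families the conditioning clause keeps, and exhaustive enumeration settles them exactly. A stdlib sketch over equally likely (sex, weekday) pairs:

```python
from fractions import Fraction
from itertools import product

# All equally likely (sex, weekday) outcomes for one child, then for two.
kids = list(product("BG", range(7)))
families = [(a, b) for a in kids for b in kids]

def p_two_boys(condition):
    kept = [f for f in families if condition(f)]
    both = [f for f in kept if f[0][0] == "B" and f[1][0] == "B"]
    return Fraction(len(both), len(kept))

# Gardner: "at least one is a boy" gives the famous 1/3.
p_classic = p_two_boys(lambda f: f[0][0] == "B" or f[1][0] == "B")
# Foshee: "at least one is a boy born on a Tuesday" shifts the answer to 13/27.
p_tuesday = p_two_boys(lambda f: ("B", 2) in f)   # weekday 2 stands for Tuesday
```

The extra weekday detail prunes the conditioning set asymmetrically, which is why the answer moves from 1/3 toward 1/2.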
Hedt, Bethany Lynn; van Leth, Frank; Zignol, Matteo; Cobelens, Frank; van Gemert, Wayne; Nhung, Nguyen Viet; Lyepshina, Svitlana; Egwaga, Saidi; Cohen, Ted
2012-01-01
Background: Current methodology for multidrug-resistant tuberculosis (MDR TB) surveys endorsed by the World Health Organization provides estimates of MDR TB prevalence among new cases at the national level. On the aggregate, local variation in the burden of MDR TB may be masked. This paper
Walvoort, D.J.J.; Brus, D.J.; Gruijter, de J.J.
2010-01-01
Both for mapping and for estimating spatial means of an environmental variable, the accuracy of the result will usually be increased by dispersing the sample locations so that they cover the study area as uniformly as possible. We developed a new R package for designing spatial coverage samples for
DEFF Research Database (Denmark)
Ingvartsen, Klaus Lønne; Andersen, Refsgaard; Foldager, John
1992-01-01
The objective of this paper is to describe the random variation in voluntary dry matter intake (VDMI) and to discuss the application of the results for monitoring purposes. Furthermore, the objective is to review and quantify the influence of day length or photoperiod on VDMI. VDMI was recorded … was increased by 0.32% per hour increase in day length. This is in agreement with the increase found in the reviewed literature when photoperiod was manipulated artificially. Practical applications of the results for monitoring purposes are exemplified and discussed…
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of a robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naïve data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One to 2 concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naïve data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
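The sampling-based part of such a comparison can be sketched quickly: compute trapezoidal AUCs from one resampled subject per time point, keeping the plasma-tissue pairing, and bootstrap the AUC ratio. All concentrations below are invented, and the paper's 2-phase refinement of the random sampling is omitted:

```python
import random
random.seed(11)

t = [0.5, 1, 2, 4, 8]  # sampling times (h); three subjects per time point
plasma = {0.5: [12, 11, 13], 1: [10, 9, 11], 2: [7, 8, 6], 4: [4, 5, 4], 8: [2, 1, 2]}
tissue = {0.5: [6, 5, 7],    1: [6, 5, 6],   2: [5, 4, 4], 4: [3, 3, 2], 8: [1, 1, 1]}

def auc(conc):
    """Linear trapezoidal AUC over the sampling times."""
    return sum((t[i + 1] - t[i]) * (conc[i] + conc[i + 1]) / 2
               for i in range(len(t) - 1))

# Bootstrap: draw one subject per time point, keeping plasma/tissue paired.
ratios = []
for _ in range(2000):
    idx = [random.randrange(3) for _ in t]
    cp = [plasma[x][i] for x, i in zip(t, idx)]
    ct = [tissue[x][i] for x, i in zip(t, idx)]
    ratios.append(auc(ct) / auc(cp))
ratios.sort()
point = sum(ratios) / len(ratios)              # tissue-to-plasma ratio estimate
ci = (ratios[50], ratios[1949])                # ~95% percentile interval
```

Unlike naïve data averaging, the resampling attaches an interval to the ratio, which is the property the abstract emphasizes.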
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. A hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Therefore, statistical sampling plays an important role in the efficiency of the modeling procedure. Latin hypercube sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure from LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional simulations of LULHS were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort. Fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
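The stratification step of LHS is itself only a few lines: split each variable's range into n equal strata, draw one point per stratum, and shuffle the per-variable orderings independently. A stdlib sketch for the unit hypercube; mapping the output through marginal inverse CDFs, and the LU-decomposition correlation step of LULHS, are left out:

```python
import random
random.seed(2)

def latin_hypercube(n, dims):
    """n points in [0, 1)^dims: each variable's range is cut into n equal
    strata, every stratum is sampled exactly once, and the per-variable
    orderings are shuffled independently of each other."""
    columns = []
    for _ in range(dims):
        column = [(k + random.random()) / n for k in range(n)]  # one draw per stratum
        random.shuffle(column)
        columns.append(column)
    return list(zip(*columns))

points = latin_hypercube(10, 2)
```

Every marginal is perfectly stratified, which is why LHS needs fewer realizations than plain Monte Carlo for the same accuracy.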
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
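For orientation, the familiar non-SMART version of such a calculator inflates the individual-level sample size for a two-arm comparison of means by the design effect 1 + (m - 1)ρ and converts it to whole clusters; the paper's calculators extend this logic to the embedded dynamic treatment regimens. A sketch of the plain cluster-RCT formula (not the SMART-specific one derived in the paper), with illustrative inputs:

```python
from math import ceil

def clusters_per_arm(delta, sigma, m, rho, z_alpha=1.96, z_beta=0.84):
    """Clusters per arm for a two-arm comparison of a continuous mean:
    the individual-level n is inflated by the design effect 1 + (m-1)*rho
    and converted to clusters of size m (80% power, two-sided alpha = 0.05)."""
    n_individual = 2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2
    design_effect = 1 + (m - 1) * rho
    return ceil(n_individual * design_effect / m)

k = clusters_per_arm(delta=0.4, sigma=1.0, m=20, rho=0.05)
```

Even a modest intraclass correlation of 0.05 roughly doubles the required clusters relative to rho = 0, which is why cluster-level designs need dedicated calculators.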
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of the global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed-point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed-point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented in the Verilog hardware description language, and the functionality of the design was validated through several experiments. The proposed architecture was synthesized using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology, as well as on a Virtex-6 field-programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with a software implementation.
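The decomposition claim is easy to verify numerically for the projective factor: peeling the bottom row off a normalized homography H leaves a matrix whose last row is (0, 0, 1), i.e. an affine map, so H = H_A · H_P with H_P carrying only the two projective coefficients. A numpy check on an arbitrary example matrix (a two-factor split for illustration; the paper's full chain also peels two translations out of the affine part):

```python
import numpy as np

# An arbitrary projective (homography) matrix, normalized so H[2, 2] = 1.
H = np.array([[1.2,   0.1,   5.0],
              [-0.2,  0.9,   3.0],
              [0.001, 0.002, 1.0]])

g, h = H[2, 0], H[2, 1]
H_P = np.array([[1.0, 0.0, 0.0],       # pure projective factor: only g and h
                [0.0, 1.0, 0.0],
                [g,   h,   1.0]])
H_A = H @ np.linalg.inv(H_P)           # what remains is exactly affine
```

Splitting the coefficients this way is what lets each submodel get its own, much narrower, fixed-point precision.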
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov Chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
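The characteristic move (perturbing one chain along the difference of two others, so the proposal's scale and orientation adapt to the current population) fits in a few lines. A stdlib sketch of a DE-MC style sampler on a standard normal target, with a fixed jump factor and jitter; the randomized subspace sampling and self-tuning of full DREAM are omitted:

```python
import math
import random
random.seed(4)

def log_target(x):
    return -0.5 * x * x        # unnormalized log-density of N(0, 1)

n_chains, gamma = 8, 1.2
chains = [random.uniform(-3.0, 3.0) for _ in range(n_chains)]
samples = []
for sweep in range(4000):
    for i in range(n_chains):
        # Differential-evolution proposal: jump along the difference of
        # two other randomly chosen chains, plus a small jitter.
        a, b = random.sample([j for j in range(n_chains) if j != i], 2)
        prop = chains[i] + gamma * (chains[a] - chains[b]) + random.gauss(0, 1e-4)
        if math.log(random.random()) < log_target(prop) - log_target(chains[i]):
            chains[i] = prop
    if sweep >= 1000:          # discard burn-in sweeps
        samples.extend(chains)

post_mean = sum(samples) / len(samples)
post_sd = (sum((s - post_mean) ** 2 for s in samples) / len(samples)) ** 0.5
```

Because the jump vectors are differences of current states, the proposal automatically stretches along the directions the population already occupies, which is the self-adaptive behaviour the abstract describes.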
Discriminative motif discovery via simulated evolution and random under-sampling.
Directory of Open Access Journals (Sweden)
Tao Song
Full Text Available Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, one of the main factors affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the data preprocessing stage, and at the Hidden Markov Model (HMM) training stage, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
Discriminative motif discovery via simulated evolution and random under-sampling.
Song, Tao; Gu, Hong
2014-01-01
Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, one of the main factors affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the data preprocessing stage, and at the Hidden Markov Model (HMM) training stage, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
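Random under-sampling of the majority class, used above to balance the positive and negative training sets before HMM training, can be sketched as follows (illustrative code, not the authors' implementation):

```python
import random

def under_sample(positives, negatives, seed=42):
    """Balance two classes by randomly down-sampling the larger one."""
    rnd = random.Random(seed)
    if len(negatives) > len(positives):
        negatives = rnd.sample(negatives, len(positives))
    elif len(positives) > len(negatives):
        positives = rnd.sample(positives, len(negatives))
    return positives, negatives

pos = [f"pos_seq_{i}" for i in range(50)]
neg = [f"neg_seq_{i}" for i in range(500)]   # 10:1 class imbalance
pos_b, neg_b = under_sample(pos, neg)
```

The cost of under-sampling is discarding majority-class data; the abstract's simulated-evolution step addresses the complementary multi-class imbalance at the preprocessing stage.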
Schmidt, Jennifer; Martin, Alexandra
2016-09-01
Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
Directory of Open Access Journals (Sweden)
Tim A. Moore
2016-01-01
Full Text Available DOI: 10.17014/ijog.3.1.29-51. Stratified sampling of coal seams for petrographic analysis using block samples is a viable alternative to standard methods of channel sampling and particulate pellet mounts. Although petrographic analysis of particulate pellets is widely employed, it is time consuming and does not allow variation within sampling units to be assessed - an important measure in any study, whether for paleoenvironmental reconstruction or for obtaining estimates of industrial attributes. Also, samples taken as intact blocks provide additional information, such as texture and botanical affinity, that cannot be gained using particulate pellets. Stratified sampling can be employed on both 'fine'- and 'coarse'-grained coal units. Fine-grained coals are defined as those coal intervals that do not contain vitrain bands greater than approximately 1 mm in thickness (as measured perpendicular to bedding). In fine-grained coal seams, a reasonably sized block sample (with a polished surface area of ~3 cm²) can be taken that encapsulates the macroscopic variability. However, for coarse-grained coals (vitrain bands >1 mm), a different system has to be employed in order to accurately account for the larger particles. Macroscopic point counting of vitrain bands can accurately account for those particles >1 mm within a coal interval. This point counting method can be conducted with something as simple as string on a coal face, with marked intervals greater than the largest particle expected to be encountered (although new technologies are being developed to capture this type of information digitally). Comparative analyses of particulate pellets and blocks on the same interval show less than 6% variation between the two sample types when blocks are recalculated to include macroscopic counts of vitrain. Therefore, even in coarse-grained coals, stratified sampling can be used effectively and representatively.
Directory of Open Access Journals (Sweden)
Elodie Caboux
Full Text Available The European Prospective Investigation into Cancer and nutrition (EPIC is a long-term, multi-centric prospective study in Europe investigating the relationships between cancer and nutrition. This study has served as a basis for a number of Genome-Wide Association Studies (GWAS and other types of genetic analyses. Over a period of 5 years, 52,256 EPIC DNA samples have been extracted using an automated DNA extraction platform. Here we have evaluated the pre-analytical factors affecting DNA yield, including anthropometric, epidemiological and technical factors such as center of subject recruitment, age, gender, body-mass index, disease case or control status, tobacco consumption, number of aliquots of buffy coat used for DNA extraction, extraction machine or procedure, DNA quantification method, degree of haemolysis and variations in the timing of sample processing. We show that the largest significant variations in DNA yield were observed with degree of haemolysis and with center of subject recruitment. Age, gender, body-mass index, cancer case or control status and tobacco consumption also significantly impacted DNA yield. Feedback from laboratories which have analyzed DNA with different SNP genotyping technologies demonstrate that the vast majority of samples (approximately 88% performed adequately in different types of assays. To our knowledge this study is the largest to date to evaluate the sources of pre-analytical variations in DNA extracted from peripheral leucocytes. The results provide a strong evidence-based rationale for standardized recommendations on blood collection and processing protocols for large-scale genetic studies.
International Nuclear Information System (INIS)
Macias B, L.R.; Garcia C, R.M.; De Ita de la Torre, A.; Chavez R, A.
2000-01-01
In this work, diffraction and fluorescence techniques were used to determine the presence of elements in the known compound ZrSiO4 under different pressure conditions. In preparing the samples, pressures from 1600 to 350 kN/m² were applied, and apparent variations in the concentrations of the Zr and Si elements were detected. (Author)
Engineering practice variation through provider agreement: a cluster-randomized feasibility trial.
McCarren, Madeline; Twedt, Elaine L; Mansuri, Faizmohamed M; Nelson, Philip R; Peek, Brian T
2014-01-01
Minimal-risk randomized trials that can be embedded in practice could facilitate learning health-care systems. A cluster-randomized design was proposed to compare treatment strategies by assigning clusters (eg, providers) to "favor" a particular drug, with providers retaining autonomy for specific patients. Patient informed consent might be waived, broadening inclusion. However, it is not known if providers will adhere to the assignment or whether institutional review boards will waive consent. We evaluated the feasibility of this trial design. Agreeable providers were randomized to "favor" either hydrochlorothiazide or chlorthalidone when starting patients on thiazide-type therapy for hypertension. The assignment applied when the provider had already decided to start a thiazide, and providers could deviate from the strategy as needed. Prescriptions were aggregated to produce a provider strategy-adherence rate. All four institutional review boards waived documentation of patient consent. Providers (n=18) followed their assigned strategy for most of their new thiazide prescriptions (n=138 patients). In the "favor hydrochlorothiazide" group, there was 99% adherence to that strategy. In the "favor chlorthalidone" group, chlorthalidone comprised 77% of new thiazide starts, up from 1% in the pre-study period. When the assigned strategy was followed, dosing in the recommended range was 48% for hydrochlorothiazide (25-50 mg/day) and 100% for chlorthalidone (12.5-25.0 mg/day). Providers were motivated to participate by a desire to contribute to a comparative effectiveness study. A study promotional mug, provider information letter, and interactions with the site investigator were identified as most helpful in reminding providers of their study drug strategy. Providers prescribed according to an assigned drug-choice strategy most of the time for the purpose of a comparative effectiveness study. This simple design could facilitate research participation and behavior change.
Influence of common preanalytical variations on the metabolic profile of serum samples in biobanks
International Nuclear Information System (INIS)
Fliniaux, Ophélie; Gaillard, Gwenaelle; Lion, Antoine; Cailleu, Dominique; Mesnard, François; Betsou, Fotini
2011-01-01
A blood pre-centrifugation delay of 24 h at room temperature influenced the proton NMR spectroscopic profiles of human serum. A blood pre-centrifugation delay of 24 h at 4°C did not influence the spectroscopic profile as compared with 4 h delays at either room temperature or 4°C. Five or ten serum freeze–thaw cycles also influenced the proton NMR spectroscopic profiles. Certain common in vitro preanalytical variations occurring in biobanks may impact the metabolic profile of human serum.
Influence of common preanalytical variations on the metabolic profile of serum samples in biobanks
Energy Technology Data Exchange (ETDEWEB)
Fliniaux, Ophélie [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France)]; Gaillard, Gwenaelle [Biobanque de Picardie (France)]; Lion, Antoine [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France)]; Cailleu, Dominique [Bâtiment Serres-Transfert, rue de Mai/rue Dallery, Plateforme Analytique (France)]; Mesnard, François, E-mail: francois.mesnard@u-picardie.fr [University of Picardie Jules Verne, Laboratoire de Phytotechnologie EA 3900-BioPI (France)]; Betsou, Fotini [Integrated Biobank of Luxembourg (Luxembourg)]
2011-12-15
A blood pre-centrifugation delay of 24 h at room temperature influenced the proton NMR spectroscopic profiles of human serum. A blood pre-centrifugation delay of 24 h at 4 °C did not influence the spectroscopic profile as compared with 4 h delays at either room temperature or 4 °C. Five or ten serum freeze-thaw cycles also influenced the proton NMR spectroscopic profiles. Certain common in vitro preanalytical variations occurring in biobanks may impact the metabolic profile of human serum.
Li, Tiandong
2012-01-01
In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
Clarke, Laura; Fairley, Susan; Zheng-Bradley, Xiangqun; Streeter, Ian; Perry, Emily; Lowy, Ernesto; Tassé, Anne-Marie; Flicek, Paul
2017-01-04
The International Genome Sample Resource (IGSR; http://www.internationalgenome.org) expands in data type and population diversity the resources from the 1000 Genomes Project. IGSR represents the largest open collection of human variation data and provides easy access to these resources. IGSR was established in 2015 to maintain and extend the 1000 Genomes Project data, which has been widely used as a reference set of human variation and by researchers developing analysis methods. IGSR has mapped all of the 1000 Genomes sequence to the newest human reference (GRCh38), and will release updated variant calls to ensure maximal usefulness of the existing data. IGSR is collecting new structural variation data on the 1000 Genomes samples from long read sequencing and other technologies, and will collect relevant functional data into a single comprehensive resource. IGSR is extending coverage with new populations sequenced by collaborating groups. Here, we present the new data and analysis that IGSR has made available. We have also introduced a new data portal that increases discoverability of our data, previously only browsable through our FTP site, by focusing on particular samples, populations or data sets of interest. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long term follow up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling methods, sample size had impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing prognostic value of a series of cut off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides a better prognostic value in patients with invasive breast cancer.
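Systematic random sampling, a random start followed by selection at a fixed interval, removes the observer's freedom to favour conspicuous nuclei, which is what biased the 'at convenience' estimates above. A minimal sketch (illustrative, not the study's measurement software):

```python
import random

def systematic_sample(population, n, seed=0):
    """Select n items with a random start and a fixed step (systematic random sampling)."""
    step = len(population) / n
    start = random.Random(seed).uniform(0, step)   # random offset within the first interval
    return [population[int(start + k * step)] for k in range(n)]

areas = list(range(1000))            # stand-in for an ordered list of measured nuclear areas
sample = systematic_sample(areas, 50)
```

Because every item has the same inclusion probability, small and large nuclei enter the sample at their true frequencies, which is why the SDNA estimates in the abstract were no longer systematically deflated.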
Probabilistic, multi-variate flood damage modelling using random forests and Bayesian networks
Kreibich, Heidi; Schröter, Kai
2015-04-01
Decisions on flood risk management and adaptation are increasingly based on risk analyses. Such analyses are associated with considerable uncertainty, even more if changes in risk due to global change are expected. Although uncertainty analysis and probabilistic approaches have received increased attention recently, they are hardly applied in flood damage assessments. Most of the damage models usually applied in standard practice have in common that complex damaging processes are described by simple, deterministic approaches like stage-damage functions. This presentation will show approaches for probabilistic, multi-variate flood damage modelling on the micro- and meso-scale and discuss their potential and limitations. Reference: Merz, B.; Kreibich, H.; Lall, U. (2013): Multi-variate flood damage assessment: a tree-based data-mining approach. NHESS, 13(1), 53-64. Schröter, K., Kreibich, H., Vogel, K., Riggelsen, C., Scherbaum, F., Merz, B. (2014): How useful are complex flood damage models? - Water Resources Research, 50, 4, p. 3378-3395.
Shen, Lujun; Yang, Lei; Zhang, Jing; Zhang, Meng
2018-01-01
To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data was obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants' writing manuscripts. Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.05). Students' writing manuscripts were mainly encoded on five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between positive emotion, insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days' manuscripts and the last 10 days' ones. Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study.
Directory of Open Access Journals (Sweden)
Lujun Shen
Full Text Available To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data was obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants' writing manuscripts. Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.05). Students' writing manuscripts were mainly encoded on five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between positive emotion, insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days' manuscripts and the last 10 days' ones. Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study.
Zhang, Jing; Zhang, Meng
2018-01-01
Purpose To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. Methods The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data was obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants’ writing manuscripts. Results Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.05). Students’ writing manuscripts were mainly encoded on five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between positive emotion, insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days’ manuscripts and the last 10 days’ ones. Conclusions Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study. PMID:29401473
International Nuclear Information System (INIS)
Plevnik, Lucijan; Žerovnik, Gašper
2016-01-01
Highlights: • Methods for random sampling of correlated parameters. • Link to open-source code for sampling of resonance parameters in ENDF-6 format. • Validation of the code on realistic and artificial data. • Validation of covariances in three major contemporary nuclear data libraries. - Abstract: Methods for random sampling of correlated parameters are presented. The methods are implemented for sampling of resonance parameters in ENDF-6 format and a link to the open-source code ENDSAM is given. The code has been validated on realistic data. Additionally, consistency of covariances of resonance parameters of three major contemporary nuclear data libraries (JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0u2) has been checked.
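A standard mechanism for random sampling of correlated parameters, and plausibly the kind of approach a code like ENDSAM builds on, is to draw independent standard normal deviates and colour them with the Cholesky factor of the covariance matrix. A generic sketch (not the ENDSAM code itself):

```python
import numpy as np

def sample_correlated(mean, cov, n, seed=0):
    """Draw n samples of correlated normal parameters via Cholesky factorization."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(cov)           # cov = L @ L.T
    z = rng.standard_normal((n, len(mean)))
    return mean + z @ L.T                 # each row: mean + L @ z_i

mean = np.array([1.0, 2.0])               # e.g. two resonance parameters
cov = np.array([[1.0, 0.8],
                [0.8, 2.0]])              # strongly correlated pair
draws = sample_correlated(mean, cov, 200000)
```

Checking that the empirical covariance of the draws reproduces the input covariance is exactly the kind of consistency validation the abstract describes for the library covariance data.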
Depression and Racial/Ethnic Variations within a Diverse Nontraditional College Sample
Hudson, Richard; Towey, James; Shinar, Ori
2008-01-01
The study's objective was to ascertain whether rates of depression were significantly higher for Dominican, Puerto Rican, South and Central American and Jamaican/Haitian students than for African American and White students. The sample consisted of 987 predominantly nontraditional college students. The depression rate for Dominican students was…
Scheerlinck, E; Dhaenens, M; Van Soom, A; Peelman, L; De Sutter, P; Van Steendam, K; Deforce, D
2015-12-01
Sample preparation is the crucial starting point to obtain high-quality mass spectrometry data and can be divided into two main steps in a bottom-up proteomics approach: cell/tissue lysis with or without detergents and a(n) (in-solution) digest comprising denaturation, reduction, alkylation, and digesting of the proteins. Here, some important considerations, among others, are that the reagents used for sample preparation can inhibit the digestion enzyme (e.g., 0.1% sodium dodecyl sulfate [SDS] and 0.5 M guanidine HCl), give rise to ion suppression (e.g., polyethylene glycol [PEG]), be incompatible with liquid chromatography-tandem mass spectrometry (LC-MS/MS) (e.g., SDS), and can induce additional modifications (e.g., urea). Taken together, all of these irreproducible effects are gradually becoming a problem when label-free quantitation of the samples is envisioned such as during the increasingly popular high-definition mass spectrometry (HDMS(E)) and sequential window acquisition of all theoretical fragment ion spectra (SWATH) data-independent acquisition strategies. Here, we describe the detailed validation of a reproducible method with sufficient protein yield for sample preparation without any known LC-MS/MS interfering substances by using 1% sodium deoxycholate (SDC) during both cell lysis and in-solution digest. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Iruka, Iheoma U.; Dotterer, Aryn M.; Pungello, Elizabeth P.
2014-01-01
Research Findings: Grounded in the investment model and informed by the integrative theory of the study of minority children, this study used the Early Childhood Longitudinal Study-Birth Cohort data set, a nationally representative sample of young children, to investigate whether the association between socioeconomic status (family income and…
Diwan, Vishal; Stålsby Lundborg, Cecilia; Tamhankar, Ashok J
2013-01-01
The presence of antibiotics in the environment and their subsequent impact on resistance development has raised concerns globally. Hospitals are a major source of antibiotics released into the environment. To reduce these residues, research to improve knowledge of the dynamics of antibiotic release from hospitals is essential. Therefore, we undertook a study to estimate seasonal and temporal variation in antibiotic release from two hospitals in India over a period of two years. For this, 6 sampling sessions of 24 hours each were conducted in the three prominent seasons of India, at all wastewater outlets of the two hospitals, using continuous and grab sampling methods. An in-house wastewater sampler was designed for continuous sampling. Eight antibiotics from four major antibiotic groups were selected for the study. To understand the temporal pattern of antibiotic release, each of the 24-hour sessions was divided into three sub-sampling sessions of 8 hours each. Solid phase extraction followed by liquid chromatography/tandem mass spectrometry (LC-MS/MS) was used to determine the antibiotic residues. Six of the eight antibiotics studied were detected in the wastewater samples. Both continuous and grab sampling methods indicated that the highest quantities of fluoroquinolones were released in winter followed by the rainy season and the summer. No temporal pattern in antibiotic release was detected. In general, in a common timeframe, continuous sampling showed lower concentrations of antibiotics in wastewater than grab sampling. It is suggested that continuous sampling should be the method of choice, as grab sampling gives erroneous results, being indicative only of the quantities of antibiotics present in wastewater at the time of sampling. Based on our studies, calculations indicate that from hospitals in India, an estimated 89, 1 and 25 ng/L/day of fluoroquinolones, metronidazole and sulfamethoxazole respectively, might be getting released into the environment.
Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick
2015-01-01
Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed f...
International Nuclear Information System (INIS)
Makepeace, C.E.; Horvath, F.J.; Stocker, H.
1981-11-01
The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information about the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of collected data, one can determine the individual exposures and the mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and collective exposures.
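The three-stratum design described above, random selection within every combination of location, occupation and sampling time, can be sketched as follows; the strata names and miner IDs are hypothetical:

```python
import random
from itertools import product

def stratified_sample(strata, per_stratum, seed=0):
    """Draw per_stratum random units from every stratum cell, without bias."""
    rnd = random.Random(seed)
    plan = {}
    for cell, units in strata.items():
        plan[cell] = rnd.sample(units, per_stratum)   # equal selection chance within cell
    return plan

locations = ["stope_A", "stope_B"]
occupations = ["driller", "hauler"]
times = ["day", "night"]
# hypothetical miner IDs available in each (location, occupation, time) cell
strata = {c: [f"miner_{c[0]}_{c[1]}_{c[2]}_{i}" for i in range(10)]
          for c in product(locations, occupations, times)}
plan = stratified_sample(strata, 2)
```

Sampling within each cell (rather than from the pooled workforce) guarantees every location, occupation and time is represented, which is the property the plan relies on when attributing collective exposure to groups.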
Crystallite size variation of TiO2 samples depending on heat treatment time
International Nuclear Information System (INIS)
Galante, A.G.M.; Paula, F.R. de; Montanhera, M.A.; Pereira, E.A.; Spada, E.R.
2016-01-01
Titanium dioxide (TiO2) is an oxide semiconductor that may be found in mixed phase or in distinct phases: brookite, anatase and rutile. In this work, the influence of residence time at a given temperature on the physical properties of TiO2 powder was studied. After powder synthesis, the samples were divided and heat treated at 650 °C with a ramp of up to 3 °C/min and residence times ranging from 0 to 20 hours, and subsequently characterized by X-ray diffraction. Analysis of the obtained diffraction patterns showed that, from a 5-hour residence time onwards, two distinct phases coexist: anatase and rutile. The average crystallite size of each sample was also calculated. The results showed an increase in average crystallite size with increasing residence time of the heat treatment. (author)
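Average crystallite size is commonly estimated from XRD peak broadening with the Scherrer equation, D = Kλ/(β cos θ); the abstract does not name the method used, so treating it as Scherrer analysis is an assumption. A minimal sketch with illustrative peak values for anatase:

```python
import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Average crystallite size (nm) from XRD line broadening via the Scherrer
    equation D = K * lambda / (beta * cos(theta)), with beta the FWHM in radians.
    Defaults assume Cu K-alpha radiation and shape factor K = 0.9."""
    theta = math.radians(two_theta_deg / 2)
    beta = math.radians(fwhm_deg)
    return K * wavelength_nm / (beta * math.cos(theta))

# e.g. the anatase (101) reflection near 2theta = 25.3 deg with 0.4 deg FWHM
# (illustrative numbers, not values from this study)
size_nm = scherrer_size(25.3, 0.4)
```

Narrower peaks give larger D, which matches the abstract's trend: longer residence times grow the crystallites and sharpen the diffraction peaks.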
Attitudes to Gun Control in an American Twin Sample: Sex Differences in the Causes of Variation.
Eaves, Lindon J; Silberg, Judy L
2017-10-01
The genetic and social causes of individual differences in attitudes to gun control are estimated in a sample of senior male and female twin pairs in the United States. Genetic and environmental parameters were estimated by weighted least squares applied to polychoric correlations for monozygotic (MZ) and dizygotic (DZ) twins of both sexes. The analysis suggests twin similarity for attitudes to gun control in men is entirely genetic while that in women is purely social. Although the volunteer sample is small, the analysis illustrates how the well-tested concepts and methods of genetic epidemiology may be a fertile resource for deepening our scientific understanding of biological and social pathways that affect individual risk to gun violence.
Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia
Directory of Open Access Journals (Sweden)
Fady S. Moiety
2014-06-01
Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.
Seasonal variation in physical activity, sedentary behaviour and sleep in a sample of UK adults.
O'Connell, Sophie E; Griffiths, Paula L; Clemes, Stacy A
2014-01-01
Physical activity (PA), sedentary behaviour (SB), sleep and diet have all been associated with increased risk for chronic disease. Seasonality is often overlooked as a determinant of these behaviours in adults. Currently, no study has simultaneously monitored these behaviours in UK adults to assess seasonal variation. The present study investigated whether PA, SB, sleep and diet differed over season in UK adults. Forty-six adults (72% female; age = 41.7 ± 14.4 years, BMI = 24.9 ± 4.4 kg/m(2)) completed four 7-day monitoring periods; one during each season of the year. The ActiGraph GT1M was used to monitor PA and SB. Daily sleep diaries monitored time spent in bed (TIB) and total sleep time (TST). The European Prospective Investigation of Cancer (EPIC) food frequency questionnaire (FFQ) assessed diet. Repeated measures ANOVAs were used to identify seasonal differences in behaviours. Light-intensity PA was significantly higher in summer and spring (p < 0.05), whereas no significant seasonal variation was observed for diet (p > 0.05). Findings support the concept that health promotion campaigns need to encourage year-round participation in light intensity PA, whilst limiting SB, particularly during the winter months.
Neigel, J E; Avise, J C
1993-12-01
In rapidly evolving molecules, such as animal mitochondrial DNA, mutations that delineate specific lineages may not be dispersed at sufficient rates to attain an equilibrium between genetic drift and gene flow. Here we predict conditions that lead to nonequilibrium geographic distributions of mtDNA lineages, test the robustness of these predictions and examine mtDNA data sets for consistency with our model. Under a simple isolation by distance model, the variance of an mtDNA lineage's geographic distribution is expected to be proportional to its age. Simulation results indicated that this relationship is fairly robust. Analysis of mtDNA data from natural populations revealed three qualitative distributional patterns: (1) significant departure of lineage structure from equilibrium geographic distributions, a pattern exhibited in three rodent species with limited dispersal; (2) nonsignificant departure from equilibrium expectations, exhibited by two avian and two marine fish species with potentials for relatively long-distance dispersal; and (3) a progression from nonequilibrium distributions for younger lineages to equilibrium distributions for older lineages, a condition displayed by one surveyed avian species. These results demonstrate the advantages of considering mutation and genealogy in the interpretation of mtDNA geographic variation.
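The prediction that a lineage's positional variance grows in proportion to its age can be illustrated with a toy simulation; this is a caricature of isolation by distance (independent 1-D random walks, hypothetical parameters), not the model or data analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def lineage_position_variance(age, n_lineages=20000, step_sd=1.0):
    """Variance of final positions of lineages dispersing as 1-D random walks.

    Each lineage takes `age` independent Gaussian steps of SD `step_sd`;
    under this isolation-by-distance caricature the positional variance
    grows linearly with lineage age.
    """
    steps = rng.normal(0.0, step_sd, size=(n_lineages, age))
    return steps.sum(axis=1).var()

v100 = lineage_position_variance(100)   # expected ~100 * step_sd**2
v400 = lineage_position_variance(400)   # expected ~4x larger
ratio = v400 / v100
```

Quadrupling the age roughly quadruples the variance, which is the linear relationship the abstract invokes.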
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^{-4} probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
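The under-estimation phenomenon that motivates the report can be seen in a small Monte Carlo sketch: the naive "mean ± 1.96 s" interval computed from a handful of samples covers, on average, well under 95% of the population. This only illustrates the problem; the report's actual bounding methods are more elaborate and are not reproduced here.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def mean_coverage_central95(n, trials=4000):
    """Average fraction of a standard normal population actually enclosed
    by the naive interval mean +/- 1.96*s estimated from n samples."""
    cov = []
    for _ in range(trials):
        x = rng.normal(size=n)
        m, s = x.mean(), x.std(ddof=1)
        cov.append(phi(m + 1.96 * s) - phi(m - 1.96 * s))
    return float(np.mean(cov))

cov_sparse = mean_coverage_central95(5)    # clearly below the nominal 95%
cov_rich = mean_coverage_central95(100)    # close to the nominal 95%
```

With n = 5 the average coverage drops to roughly 90%, which is why conservative (tolerance-interval-like) corrections are needed in the sparse-data regime.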
Birgül, Askın; Tasdemir, Yücel
2011-03-01
Ambient air and bulk deposition samples were collected between June 2008 and June 2009. Eighty-three polychlorinated biphenyl (PCB) congeners were targeted in the samples. The average gas and particle PCB concentrations were found to be 393 ± 278 and 70 ± 102 pg/m(3), respectively, and 85% of the atmospheric PCBs were in the gas phase. Bulk deposition samples were collected by using a sampler made of stainless steel. The average PCB bulk deposition flux value was determined to be 6,020 ± 4,350 pg/m(2) day. The seasonal bulk deposition fluxes were not statistically different from each other, but the summer flux had higher values. Flux values differed depending on the precipitation levels. The average flux value in the rainy periods was 7,480 ± 4,080 pg/m(2) day while the average flux value in dry periods was 5,550 ± 4,420 pg/m(2) day. The obtained deposition values were lower than the reported values given for the urban and industrialized areas, yet close to the ones for the rural sites. The reported deposition values were also influenced by the type of the instruments used. The average dry deposition and total deposition velocity values calculated based on deposition and concentration values were found to be 0.23 ± 0.21 and 0.13 ± 0.13 cm/s, respectively.
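The deposition velocity in the last sentence is simply flux divided by concentration, with a unit conversion from m/day to cm/s. The sketch below shows only that arithmetic with round numbers of the same order as the abstract's averages; the study's own velocities come from period-specific paired flux and concentration values, so the numbers will not match exactly.

```python
def deposition_velocity_cm_per_s(flux_pg_m2_day, conc_pg_m3):
    """Deposition velocity v_d = F / C.

    pg m^-2 day^-1 divided by pg m^-3 gives m/day; convert to cm/s
    (x100 cm per m, /86400 s per day).
    """
    v_m_per_day = flux_pg_m2_day / conc_pg_m3
    return v_m_per_day * 100.0 / 86400.0

# Illustrative numbers only: average flux and particle-phase concentration
# of the order reported in the abstract.
v = deposition_velocity_cm_per_s(6020.0, 70.0)  # ~0.1 cm/s
```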
Edgington, Eugene
2007-01-01
Contents: Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani
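A minimal two-sample randomization test of the kind the book treats can be sketched as follows: the p-value is the fraction of random relabellings of the pooled data whose mean difference is at least as extreme as the observed one, so no random-sampling assumption is needed. The data below are made up for illustration.

```python
import random

def randomization_test(a, b, n_perm=10000, seed=42):
    """Two-sample randomization (permutation) test for a difference in means.

    The p-value is the fraction of random relabellings whose absolute
    mean difference is at least as large as the observed one.
    """
    rng = random.Random(seed)
    pooled = list(a) + list(b)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one rule avoids a zero p-value

p_diff = randomization_test([12.1, 13.4, 11.8, 14.0, 12.9],
                            [9.2, 8.7, 10.1, 9.8, 9.5])   # clearly different
p_same = randomization_test([10.0, 11.0, 9.5, 10.5, 10.2],
                            [10.1, 10.9, 9.6, 10.4, 10.3])  # nearly identical
```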
Duy, Pham K; Chang, Kyeol; Sriphong, Lawan; Chung, Hoeil
2015-03-17
An axially perpendicular offset (APO) scheme that is able to directly acquire reproducible Raman spectra of samples contained in an oval container under variation of container orientation has been demonstrated. This scheme utilized an axially perpendicular geometry between the laser illumination and the Raman photon detection, namely, irradiation through a sidewall of the container and gathering of the Raman photon just beneath the container. In the case of either backscattering or transmission measurements, Raman sampling volumes for an internal sample vary when the orientation of an oval container changes; therefore, the Raman intensities of acquired spectra are inconsistent. The generated Raman photons traverse the same bottom of the container in the APO scheme; the Raman sampling volumes can be relatively more consistent under the same situation. For evaluation, the backscattering, transmission, and APO schemes were simultaneously employed to measure alcohol gel samples contained in an oval polypropylene container at five different orientations and then the accuracies of the determination of the alcohol concentrations were compared. The APO scheme provided the most reproducible spectra, yielding the best accuracy when the axial offset distance was 10 mm. Monte Carlo simulations were performed to study the characteristics of photon propagation in the APO scheme and to explain the origin of the optimal offset distance that was observed. In addition, the utility of the APO scheme was further demonstrated by analyzing samples in a circular glass container.
Rogozińska, Ewelina; Marlin, Nadine; Yang, Fen; Dodd, Jodie M; Guelfi, Kym; Teede, Helena; Surita, Fernanda; Jensen, Dorte M; Geiker, Nina R W; Astrup, Arne; Yeo, SeonAe; Kinnunen, Tarja I; Stafne, Signe N; Cecatti, Jose G; Bogaerts, Annick; Hauner, Hans; Mol, Ben W; Scudeller, Tânia T; Vinter, Christina A; Renault, Kristina M; Devlieger, Roland; Thangaratinam, Shakila; Khan, Khalid S
2017-07-01
Trials on diet and physical activity in pregnancy report on various outcomes. We aimed to assess the variations in outcomes reported and their quality in trials on lifestyle interventions in pregnancy. We searched major databases without language restrictions for randomized controlled trials on diet and physical activity-based interventions in pregnancy up to March 2015. Two independent reviewers undertook study selection and data extraction. We estimated the percentage of papers reporting 'critically important' and 'important' outcomes. We defined the quality of reporting as a proportion using a six-item questionnaire. Regression analysis was used to identify factors affecting this quality. Sixty-six randomized controlled trials were published in 78 papers (66 main, 12 secondary). Gestational diabetes (57.6%, 38/66), preterm birth (48.5%, 32/66) and cesarean section (60.6%, 40/66) were the commonly reported 'critically important' outcomes. Gestational weight gain (84.5%, 56/66) and birth weight (87.9%, 58/66) were reported in most papers, although not considered critically important. The median quality of reporting was 0.60 (interquartile range 0.25, 0.83) for a maximum score of one. Study and journal characteristics did not affect quality. Many studies on lifestyle interventions in pregnancy do not report critically important outcomes, highlighting the need for core outcome set development. © 2017 Japan Society of Obstetrics and Gynecology.
Directory of Open Access Journals (Sweden)
KUSUMADEWI SRI YULITA
2011-07-01
Full Text Available Yulita KS (2011) Genetic variations of Lansium domesticum Corr. accessions from Java, Bengkulu and Ceram based on Random Amplified Polymorphic DNA fingerprints. Biodiversitas 12: 125-130. Duku (Lansium domesticum Corr.) is one of the popular tropical fruits in SE Asia. The species has three varieties, known as duku, langsat and kokosan; duku is the most popular one for being the sweetest fruit. Indonesia has several local varieties of duku, such as duku Condet, duku Sumber and duku Palembang. This present study aimed to assess the genetic diversity of 47 accessions of duku from Java, Sumatra, and Ceram based on RAPD fingerprints. Ten RAPD primers were initially screened and five were selected for the analysis. These five primers (OPA 7, 13, 18, OPB 7, and OPN 12) generated 53 scorable bands with an average of 10.6 polymorphic fragments per primer. The percentage of polymorphism ranged from 16.89% (OPA 7 and OPN 12) to 24.54% (OPB 7), with an average of 20.16% polymorphism. A band of OPB 7 at 450 bp was exclusively possessed by accession 20 (Java), one of OPA 18 at 500 bp by accession 6 (Java), and one at 550 bp by 3 clones from Bengkulu, while OPN 12 at 300 bp and OPA 13 at 450 bp were shared among the accessions. Clustering analysis was performed based on RAPD profiles using the UPGMA method. The range of genetic similarity values among accessions was 0.02-0.65, suggesting high variation of the gene pool among accessions.
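UPGMA clustering of RAPD band profiles can be sketched with SciPy's average-linkage clustering on a band presence/absence matrix; Jaccard distance is a common choice for such 0/1 fingerprint data. The matrix below is a hypothetical toy example, not the study's 47-accession data set.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Toy presence/absence matrix: rows = accessions, columns = RAPD bands.
# Purely illustrative scores, not the study's data.
bands = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 1],
    [1, 0, 1, 0, 1, 0],
])

dist = pdist(bands, metric="jaccard")    # fraction of discordant scored bands
tree = linkage(dist, method="average")   # UPGMA = average linkage
groups = fcluster(tree, t=2, criterion="maxclust")
```

Accessions sharing most bands (rows 1-2 and rows 3-4) end up in the same cluster, mirroring how a dendrogram of similarity values is built from the fingerprints.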
International Nuclear Information System (INIS)
Matsuda, Hideharu; Minato, Susumu
2002-01-01
The accuracy of statistical quantities such as the mean value and the contour map obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of 5 different model distribution maps made with the mean slope, -1.3, of power spectra calculated from actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, with means in the range of 10-100 nGy/h and areas of 10^{-3}-10^{7} km^{2}. The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent number), and the standard deviation had an accuracy of less than 1/4-1/3 of the means. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent number), but that of the contour map was 0.502-0.770. (K.H.)
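The few-percent accuracy of a mean estimated from some tens of random samples can be sketched on a synthetic field; the lognormal values below are purely illustrative and are not the paper's power-spectrum-based model maps.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic "dose-rate map": lognormal values in arbitrary units.
field = rng.lognormal(mean=4.0, sigma=0.3, size=100_000)
true_mean = field.mean()

def mean_abs_rel_error(n, trials=2000):
    """Average |sample mean - true mean| / true mean over repeated
    random samplings of n values from the field."""
    errs = [abs(rng.choice(field, size=n).mean() - true_mean) / true_mean
            for _ in range(trials)]
    return float(np.mean(errs))

err_60 = mean_abs_rel_error(60)     # a few percent for ~60 samplings
err_240 = mean_abs_rel_error(240)   # roughly halves when n quadruples
```

The error shrinks like 1/sqrt(n), which is why 60-80 samplings already pin the mean down to within several percent.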
Bhaskar, Anand; Wang, Y X Rachel; Song, Yun S
2015-02-01
With the recent increase in study sample sizes in human genetics, there has been growing interest in inferring historical population demography from genomic variation data. Here, we present an efficient inference method that can scale up to very large samples, with tens or hundreds of thousands of individuals. Specifically, by utilizing analytic results on the expected frequency spectrum under the coalescent and by leveraging the technique of automatic differentiation, which allows us to compute gradients exactly, we develop a very efficient algorithm to infer piecewise-exponential models of the historical effective population size from the distribution of sample allele frequencies. Our method is orders of magnitude faster than previous demographic inference methods based on the frequency spectrum. In addition to inferring demography, our method can also accurately estimate locus-specific mutation rates. We perform extensive validation of our method on simulated data and show that it can accurately infer multiple recent epochs of rapid exponential growth, a signal that is difficult to pick up with small sample sizes. Lastly, we use our method to analyze data from recent sequencing studies, including a large-sample exome-sequencing data set of tens of thousands of individuals assayed at a few hundred genic regions. © 2015 Bhaskar et al.; Published by Cold Spring Harbor Laboratory Press.
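The abstract credits automatic differentiation (AD) with exact gradients. The authors' implementation is not reproduced here; the self-contained dual-number sketch below, applied to a toy one-parameter log-likelihood (hypothetical, not their expected-frequency-spectrum model), shows the key point: forward-mode AD returns the derivative to machine precision, whereas finite differences are only approximate.

```python
from math import exp, log

class Dual:
    """Forward-mode automatic differentiation with dual numbers a + b*eps."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__
    def __neg__(self):
        return Dual(-self.val, -self.dot)

def dexp(x):  # chain rule for exp
    return Dual(exp(x.val), exp(x.val) * x.dot)

def dlog(x):  # chain rule for log
    return Dual(log(x.val), x.dot / x.val)

# Toy log-likelihood in one rate-like parameter r (hypothetical model):
# l(r) = 3*log(exp(r)) - exp(r) = 3r - e^r, so l'(r) = 3 - e^r.
def loglik(r):
    lam = dexp(r)
    return 3 * dlog(lam) + (-lam)

r0 = 0.7
grad_ad = loglik(Dual(r0, 1.0)).dot   # exact derivative via AD
h = 1e-5
grad_fd = (loglik(Dual(r0 + h)).val - loglik(Dual(r0 - h)).val) / (2 * h)
exact = 3 - exp(r0)
```

Exact gradients remove the step-size tuning and truncation error of finite differences, which matters when optimizing likelihoods over many demographic parameters.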
Lamontagne, Anthony D; Smith, Peter M; Louie, Amber M; Quinlan, Michael; Shoveller, Jean; Ostry, Aleck S
2009-04-01
We tested the hypothesis that the risk of experiencing unwanted sexual advances at work (UWSA) is greater for precariously employed workers in comparison to those in permanent or continuing employment. A cross-sectional population-based telephone survey was conducted in Victoria (66% response rate, N=1,101). Employment arrangements were analysed using eight differentiated categories, as well as a four-category collapsed measure to address small cell sizes. Self-report of unwanted sexual advances at work was modelled using multiple logistic regression in relation to employment arrangement, controlling for gender, age, and occupational skill level. Forty-seven respondents reported UWSA in our sample (4.3%), mainly among women (37 of 47). Risk of UWSA was higher for younger respondents, but did not vary significantly by occupational skill level or education. In comparison to Permanent Full-Time, three employment arrangements were strongly associated with UWSA after adjustment for age, gender, and occupational skill level: Casual Full-Time OR = 7.2 (95% Confidence Interval 1.7-30.2); Fixed-Term Contract OR = 11.4 (95% CI 3.4-38.8); and Own-Account Self-Employed OR = 3.8 (95% CI 1.2-11.7). In analyses of females only, the magnitude of these associations was further increased. Respondents employed in precarious arrangements were more likely to report being exposed to UWSA, even after adjustment for age and gender. Greater protections from UWSA are likely needed for precariously employed workers.
Energy Technology Data Exchange (ETDEWEB)
Balulla, Shama, E-mail: shamamohammed77@outlook.com; Padmanabhan, E., E-mail: eswaran-padmanabhan@petronas.com.my [Department of Geoscience, Faculty of Geosciences and Petroleum Engineering, Universiti Teknologi PETRONAS, Tronoh (Malaysia); Over, Jeffrey, E-mail: over@geneseo.edu [Department of Geological Sciences, Geneseo, NY (United States)
2015-07-22
This study demonstrates the significant lithologic variations that occur between two shale samples from the Chittenango member of the Marcellus shale formation of western New York State, in terms of mineralogical composition, type of lamination, pyrite occurrence and fossil content, using detailed thin-section description and field-emission scanning electron microscopy (FESEM) with energy-dispersive X-ray spectroscopy (EDX). The samples are classified as laminated clayshale and fossiliferous carbonaceous shale. The most important detrital constituents of these shales are the clay minerals illite and chlorite, quartz, organic matter, carbonate minerals, and pyrite. The laminated clayshale has a lower amount of quartz and carbonate minerals than the fossiliferous carbonaceous shale, while it has a higher amount of clay minerals (chlorite and illite) and organic matter. FESEM analysis confirms the presence of chlorite and illite. The fossil content in the laminated clayshale is much lower than in the fossiliferous carbonaceous shale. These observations provide greater insight into variations in the depositional and environmental factors that influenced deposition. Combined with sufficient additional data, these results can be helpful for designing horizontal wells and placing hydraulic fractures in shale gas exploration and production.
Directory of Open Access Journals (Sweden)
Elsa Tavernier
Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal level), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
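The mechanism the study simulates can be sketched with the standard normal-approximation sample-size formula for a two-group mean comparison: plan n for 80% power assuming one standard deviation, then compute the power actually achieved if the true SD differs. The study itself used outcome-specific formulas and empirically derived error distributions; the numbers below are only an illustration.

```python
from math import ceil, erf, sqrt

def phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

Z_ALPHA = 1.959964   # two-sided alpha = 0.05
Z_BETA = 0.841621    # nominal power = 80%

def required_n_per_group(delta, sd_assumed):
    """Normal-approximation sample size for a two-group mean comparison."""
    return ceil(2.0 * (sd_assumed * (Z_ALPHA + Z_BETA) / delta) ** 2)

def real_power(n_per_group, delta, sd_true):
    """Power actually achieved when the true SD differs from the assumed one."""
    return phi(delta / (sd_true * sqrt(2.0 / n_per_group)) - Z_ALPHA)

n = required_n_per_group(delta=0.5, sd_assumed=1.0)   # planned for 80% power
p_ok = real_power(n, delta=0.5, sd_true=1.0)          # ~0.80 as designed
p_bad = real_power(n, delta=0.5, sd_true=1.2)         # ~0.65: underpowered
```

A 20% under-estimate of the SD at the design stage already drops the real power from about 80% to about 65%.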
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treatment
Reynolds, Maureen D; Tarter, Ralph E; Kirisci, Levent
2004-09-06
Men qualifying for substance use disorder (SUD) consequent to consumption of an illicit drug were compared according to recruitment method. It was hypothesized that volunteers would be more self-disclosing and exhibit more severe disturbances compared to randomly recruited subjects. Personal, demographic, family, social, substance use, psychiatric, and SUD characteristics of volunteers (N = 146) were compared to randomly recruited (N = 102) subjects. Volunteers had lower socioeconomic status, were more likely to be African American, and had lower IQ than randomly recruited subjects. Volunteers also evidenced greater social and family maladjustment and more frequently had received treatment for substance abuse. In addition, lower social desirability response bias was observed in the volunteers. SUD was not more severe in the volunteers; however, they reported a higher lifetime rate of opiate, diet, depressant, and analgesic drug use. Volunteers and randomly recruited subjects qualifying for SUD consequent to illicit drug use are similar in SUD severity but differ in terms of severity of psychosocial disturbance and history of drug involvement. The factors discriminating volunteers and randomly recruited subjects are well known to impact on outcome, hence they need to be considered in research design, especially when selecting a sampling strategy in treatment research.
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies have dealt with this phenomenon's implications for computation. Here we present a novel theory that explains on a detailed mathematical level the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate and fire (EIF) model neuron, such that each spike is considered a sample, whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when balancing a noisy membrane potential around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate and fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when voltage baseline is increased. For the EIF in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP-states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
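The EIF model referred to above can be simulated in a few lines; the Euler-Maruyama sketch below uses illustrative dimensionless parameters (not the paper's values) and collects the ISI samples on which the theory operates.

```python
from math import exp, sqrt
import numpy as np

rng = np.random.default_rng(3)

def simulate_eif_isis(mu=1.2, sigma=0.2, t_max=200.0, dt=1e-3):
    """Euler-Maruyama simulation of an exponential integrate-and-fire (EIF)
    neuron driven by a noisy input current, returning interspike intervals.

    All parameters are illustrative and dimensionless (membrane time
    constant = 1), not the values used in the paper.
    """
    v_t, delta_t = 1.0, 0.1        # soft threshold and spike sharpness
    v_reset, v_cut = 0.0, 1.5      # reset potential and numerical cutoff
    v, t_last, isis = v_reset, 0.0, []
    n_steps = int(t_max / dt)
    noise = sigma * sqrt(dt) * rng.normal(size=n_steps)
    for k in range(n_steps):
        # the exponential "sodium" term kicks in near the soft threshold
        drift = mu - v + delta_t * exp(min((v - v_t) / delta_t, 20.0))
        v += drift * dt + noise[k]
        if v >= v_cut:             # spike: record the ISI and reset
            t_now = (k + 1) * dt
            isis.append(t_now - t_last)
            t_last, v = t_now, v_reset
    return np.array(isis)

isis = simulate_eif_isis()
```

Each recorded ISI is one "sample" in the paper's sense; its distribution is what the theory relates to the input current.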
International Nuclear Information System (INIS)
Martens, B.R.
1989-01-01
In the context of random sampling tests, parameters of the waste barrels are checked, and the criteria on which these tests are based are given. It is also shown how faulty data on the properties of the waste, or faulty waste barrels, should be treated. To decide the extent of testing, the properties of the waste relevant to final storage are determined based on the conditioning process used. (DG) [de
Random or systematic sampling to detect a localised microbial contamination within a batch of food
Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.
2011-01-01
Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, like for instance peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed
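The trade-off between random and systematic sampling for a localized (contiguous) contamination can be sketched with a Monte Carlo comparison; batch size, cluster size and sample number below are hypothetical, not the paper's scenarios.

```python
import numpy as np

rng = np.random.default_rng(11)

def detection_probs(batch=1000, cluster=50, n_samples=10, reps=10000):
    """Monte Carlo probability of detecting one contiguous contaminated
    cluster by simple random vs. evenly spaced (systematic) sampling."""
    spacing = batch // n_samples
    hits_rand = hits_sys = 0
    for _ in range(reps):
        start = rng.integers(0, batch - cluster)       # cluster location
        end = start + cluster
        idx_rand = rng.choice(batch, size=n_samples, replace=False)
        if np.any((idx_rand >= start) & (idx_rand < end)):
            hits_rand += 1
        offset = rng.integers(0, spacing)              # random-start systematic
        idx_sys = np.arange(offset, batch, spacing)
        if np.any((idx_sys >= start) & (idx_sys < end)):
            hits_sys += 1
    return hits_rand / reps, hits_sys / reps

p_rand, p_sys = detection_probs()
```

For a single contiguous cluster, evenly spaced samples cannot all miss a cluster wider than the spacing, so systematic sampling detects the contamination more often than simple random sampling of the same size.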
Conditional estimation of exponential random graph models from snowball sampling designs
Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng
2013-01-01
A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members
Kane, Michael
2002-01-01
Reviews the criticisms of sampling assumptions in generalizability theory (and in reliability theory) and examines the feasibility of using representative sampling, stratification, homogeneity assumptions, and replications to address these criticisms. Suggests some general outlines for the conduct of generalizability theory studies. (SLD)
Lewis, P K; Babiker, S A
1983-01-01
Electrical stimulation decreased the shear force and increased the cooking loss in seven paired lamb Longissimus dorsi (LD) muscles. This treatment did not have any effect on the within-sample variation. Cooking in 55°, 65° and 75°C water baths for 90 min caused a linear increase in the cooking loss and shear force. There was no stimulation-cooking temperature interaction observed. Cooking temperature also had no effect on the within-sample variation. A possible explanation as to why electrical stimulation did not affect the within-sample variation is given. Copyright © 1983. Published by Elsevier Ltd.
Hustedt, Jason T; Vu, Jennifer A; Bargreen, Kaitlin N; Hallam, Rena A; Han, Myae
2017-09-01
The federal Early Head Start program provides a relevant context to examine families' experiences with stress since participants qualify on the basis of poverty and risk. Building on previous research that has shown variations in demographic and economic risks even among qualifying families, we examined possible variations in families' perceptions of stress. Family, parent, and child data were collected to measure stressors and risk across a variety of domains in families' everyday lives, primarily from self-report measures, but also including assay results from child cortisol samples. A cluster analysis was employed to examine potential differences among groups of Early Head Start families. Results showed that there were three distinct subgroups of families, with some families perceiving that they experienced very high levels of stress while others perceived much lower levels of stress despite also experiencing poverty and heightened risk. These findings have important implications in that they provide an initial step toward distinguishing differences in low-income families' experiences with stress, thereby informing interventions focused on promoting responsive caregiving as a possible mechanism to buffer the effects of family and social stressors on young children. © 2017 Michigan Association for Infant Mental Health.
Chen, Carla Chia-Ming; Schwender, Holger; Keith, Jonathan; Nunkesser, Robin; Mengersen, Kerrie; Macrossan, Paula
2011-01-01
Due to advancements in computational ability, enhanced technology and a reduction in the price of genotyping, more data are being generated for understanding genetic associations with diseases and disorders. However, with the availability of large data sets come the inherent challenges of new methods of statistical analysis and modeling. Considering that a complex phenotype may be the effect of a combination of multiple loci, various statistical methods have been developed for identifying genetic epistasis effects. Among these methods, logic regression (LR) is an intriguing approach incorporating tree-like structures. Various methods have built on the original LR to improve different aspects of the model. In this study, we review four variations of LR, namely Logic Feature Selection, Monte Carlo Logic Regression, Genetic Programming for Association Studies, and Modified Logic Regression-Gene Expression Programming, and investigate the performance of each method using simulated and real genotype data. We contrast these with another tree-like approach, namely Random Forests, and a Bayesian logistic regression with stochastic search variable selection.
Energy Technology Data Exchange (ETDEWEB)
Padula, D.; Madigan, T.; Kiermeier, A.; Daughtry, B.; Pointon, A. [South Australian Research and Development Inst. (Australia)
2004-09-15
To date there has been no published information available on the levels of dioxin (PCDD/F) and PCBs in Australian aquaculture-produced Southern Bluefin Tuna (Thunnus maccoyii). Southern Bluefin Tuna are commercially farmed off the coast of Port Lincoln in the state of South Australia, Australia. This paper reports the levels of dioxin (PCDD/F) and PCBs in muscle tissue samples from 11 randomly sampled aquaculture-produced Southern Bluefin Tuna collected in 2003. Little published data exists on the levels of dioxin (PCDD/F) and PCBs in Australian aquaculture-produced seafood. Wild tuna are first caught in the Great Australian Bight in South Australian waters, and are then brought back to Port Lincoln where they are ranched in sea-cages before being harvested and exported to Japan. The aim of the study was to identify pathways whereby contaminants such as dioxin (PCDD/F) and PCBs may enter the aquaculture production system. This involved undertaking a through-chain analysis of the levels of dioxin (PCDD/F) and PCBs in wild caught tuna, seafloor sediment samples from the marine environment, levels in feeds and final harvested exported product. Detailed study was also undertaken on the variation of dioxin (PCDD/F) and PCBs across individual tuna carcases. This paper addresses the levels found in final harvested product. Details on levels found in other studies will be published elsewhere shortly.
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain for many species up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, of all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time into the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (~3.5 × 10^5 lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
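The core idea, depositing randomly drawn frequencies so the integrated line opacity is preserved no matter how few samples are spent on a weak line, can be sketched as follows. For simplicity the sketch samples a Lorentzian profile (a Voigt profile would be sampled analogously) and clips out-of-grid samples into the edge bins; the paper's adaptive choice of sample number per line is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)

def sample_line_opacity(strength, center, gamma, grid, n_samples):
    """Deposit one spectral line onto a wavenumber grid by random sampling.

    Frequencies are drawn from the line profile and every sample deposits
    strength/n_samples into its bin, so the integrated line opacity is
    preserved exactly even for very few samples.
    """
    nu = center + gamma * rng.standard_cauchy(n_samples)
    opac = np.zeros(len(grid) - 1)
    bins = np.clip(np.digitize(nu, grid) - 1, 0, len(opac) - 1)
    np.add.at(opac, bins, strength / n_samples)
    return opac

grid = np.linspace(-10.0, 10.0, 201)  # 200 bins of width 0.1 (arbitrary units)
strong = sample_line_opacity(1.0, 0.0, 0.5, grid, n_samples=2000)  # many samples
weak = sample_line_opacity(1e-3, 0.0, 0.5, grid, n_samples=4)      # few samples
```

The strong line gets a well-resolved shape, while the weak line contributes its full integrated opacity to the continuum for the cost of only four samples.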
Random Walks on Directed Networks: Inference and Respondent-Driven Sampling
Directory of Open Access Journals (Sweden)
Malmros Jens
2016-06-01
Respondent-driven sampling (RDS) is often used to estimate population properties (e.g., sexual risk behavior) in hard-to-reach populations. In RDS, already sampled individuals recruit population members to the sample from their social contacts in an efficient snowball-like sampling procedure. By assuming a Markov model for the recruitment of individuals, asymptotically unbiased estimates of population characteristics can be obtained. Current RDS estimation methodology assumes that the social network is undirected, that is, that all edges are reciprocal. However, empirical social networks in general also include a substantial number of nonreciprocal edges. In this article, we develop an estimation method for RDS in populations connected by social networks that include reciprocal and nonreciprocal edges. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing edges of sampled individuals. The proposed estimators are evaluated on artificial and empirical networks and are shown to generally perform better than existing estimators. This is the case in particular when the fraction of directed edges in the network is large.
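A minimal sketch of the underlying weighting idea: on an undirected network a random walk visits each individual with probability proportional to degree, so a Hajek-type estimator reweights sampled values by inverse degree. The function below is a simplified illustration that plugs in the number of outgoing edges as the weight basis; it is not the paper's derived estimator for directed networks, only the inverse-probability-weighting pattern it builds on.

```python
def rds_estimate(values, out_degrees):
    """Hajek-style estimator of a population mean from an RDS sample.

    Each sampled individual i is assumed selected with probability
    proportional to out_degrees[i] (a simplifying assumption), so each
    observation is reweighted by the inverse of that degree.
    """
    if len(values) != len(out_degrees):
        raise ValueError("values and out_degrees must have the same length")
    w = [1.0 / d for d in out_degrees]          # inverse-probability weights
    return sum(wi * v for wi, v in zip(w, values)) / sum(w)
```

With sampled indicator values [1, 0, 1, 0] and out-degrees [1, 2, 1, 2], the under-sampled low-degree individuals are up-weighted, moving the estimate from the naive mean 0.5 to 2/3.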
Bloom, Howard S.; Raudenbush, Stephen W.; Weiss, Michael J.; Porter, Kristin
2017-01-01
The present article considers a fundamental question in evaluation research: "By how much do program effects vary across sites?" The article first presents a theoretical model of cross-site impact variation and a related estimation model with a random treatment coefficient and fixed site-specific intercepts. This approach eliminates…
Directory of Open Access Journals (Sweden)
Thelma Suely Okay
2009-03-01
INTRODUCTION: Performance variation among PCR systems in detecting Toxoplasma gondii has been extensively reported and associated with target genes, primer composition, amplification parameters, treatment during pregnancy, host genetic susceptibility and genotypes of different parasites according to geographical characteristics. PATIENTS: A total of 467 amniotic fluid samples from T. gondii IgM- and IgG-positive Brazilian pregnant women being treated for 1 to 6 weeks at the time of amniocentesis (gestational ages of 14 to 25 weeks). METHODS: One nested-B1-PCR and three one-round amplification systems targeted to rDNA, AF146527 and the B1 gene were employed. RESULTS: Of the 467 samples, 189 (40.47%) were positive for one-round amplifications: 120 (63.49%) for the B1 gene, 24 (12.69%) for AF146527, 45 (23.80%) for both AF146527 and the B1 gene, and none for rDNA. Fifty previously negative one-round PCR samples were chosen by computer-assisted randomization analysis and re-tested (nested-B1-PCR), during which nine additional cases were detected (9/50, or 18%). DISCUSSION: The B1 gene PCR was far more sensitive than the AF146527 PCR, and the rDNA PCR was the least effective even though the rDNA had the most repetitive sequence. Considering that the four amplification systems were equally affected by treatment, that the amplification conditions were optimized for the target genes and that most of the primers have already been reported, it is plausible that the striking differences found among PCR performances could be associated with genetic diversity in patients and/or with different Toxoplasma gondii genotypes occurring in Brazil. CONCLUSION: The use of PCR for the diagnosis of fetal Toxoplasma infections in Brazil should be targeted to the B1 gene when only one gene can be amplified, preferably by nested amplification with primers B22/B23.
Characterization of electron microscopes with binary pseudo-random multilayer test samples
International Nuclear Information System (INIS)
Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.
2010-01-01
We discuss the results of SEM and TEM measurements with the BPRML test samples fabricated from a BPRML (WSi2/Si with fundamental layer thickness of 3 nm) with a Dual Beam FIB (focused ion beam)/SEM technique. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
Vidovszky, Márton; Kohl, Claudia; Boldogh, Sándor; Görföl, Tamás; Wibbelt, Gudrun; Kurth, Andreas; Harrach, Balázs
2015-12-01
From over 1250 extant species of the order Chiroptera, 25 and 28 are known to occur in Germany and Hungary, respectively. Close to 350 samples originating from 28 bat species (17 from Germany, 27 from Hungary) were screened for the presence of adenoviruses (AdVs) using a nested PCR that targets the DNA polymerase gene of AdVs. An additional PCR was designed and applied to amplify a fragment from the gene encoding the IVa2 protein of mastadenoviruses. All German samples originated from organs of bats found moribund or dead. The Hungarian samples were excrements collected from colonies of known bat species, throat or rectal swab samples, taken from live individuals that had been captured for faunistic surveys and migration studies, as well as internal organs of dead specimens. Overall, 51 samples (14.73%) were found positive. We detected 28 seemingly novel and six previously described bat AdVs by sequencing the PCR products. The positivity rate was the highest among the guano samples of bat colonies. In phylogeny reconstructions, the AdVs detected in bats clustered roughly, but not perfectly, according to the hosts' families (Vespertilionidae, Rhinolophidae, Hipposideridae, Phyllostomidae and Pteropodidae). In a few cases, identical sequences were derived from animals of closely related species. On the other hand, some bat species proved to harbour more than one type of AdV. The high prevalence of infection and the large number of chiropteran species worldwide make us hypothesise that hundreds of different yet unknown AdV types might circulate in bats.
Haggard, Megan C; Kang, Linda L; Rowatt, Wade C; Shen, Megan Johnson
2015-01-01
The connection between religiousness and volunteering for the community can be explained through two distinct features of religion. First, religious organizations are social groups that encourage members to help others through planned opportunities. Second, helping others is regarded as an important value for members in religious organizations to uphold. We examined the relationship between religiousness and self-reported community volunteering in two independent national random surveys of American adults (i.e., the 2005 and 2007 waves of the Baylor Religion Survey). In both waves, frequency of religious service attendance was associated with an increase in likelihood that individuals would volunteer, whether through their religious organization or not, whereas frequency of reading sacred texts outside of religious services was associated with an increase in likelihood of volunteering only for or through their religious organization. The role of religion in community volunteering is discussed in light of these findings.
International Nuclear Information System (INIS)
Lutz, W.K.; Gaylor, D.W.; Conolly, R.B.; Lutz, R.W.
2005-01-01
Nonlinear and threshold-like shapes of dose-response curves are often observed in tests for carcinogenicity. Here, we present three examples where an apparent threshold is spurious and can be misleading for low dose extrapolation and human cancer risk assessment. Case 1: For experiments that are not replicated, such as rodent bioassays for carcinogenicity, random variation can lead to misinterpretation of the result. This situation was simulated by 20 random binomial samplings of 50 animals per group, assuming a true linear dose response from 5% to 25% tumor incidence at arbitrary dose levels 0, 0.5, 1, 2, and 4. Linearity was suggested by only 8 of the 20 simulations. Four simulations did not reveal the carcinogenicity at all. Three exhibited thresholds, two showed a nonmonotonic behavior with a decrease at low dose, followed by a significant increase at high dose ('hormesis'). Case 2: Logarithmic representation of the dose axis transforms a straight line into a sublinear (up-bent) curve, which can be misinterpreted to indicate a threshold. This is most pronounced if the dose scale includes a wide low dose range. Linear regression of net tumor incidences and intersection with the dose axis results in an apparent threshold, even with an underlying true linear dose-incidence relationship. Case 3: Nonlinear shapes of dose-cancer incidence curves are rarely seen with epidemiological data in humans. The discrepancy with rodent data may in part be explained by a wider span of individual susceptibilities for tumor induction in humans due to more diverse genetic background and modulation by co-carcinogenic lifestyle factors. Linear extrapolation of a human cancer risk could therefore be appropriate even if animal bioassays show nonlinearity.
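Case 1 is straightforward to reproduce. The sketch below draws the 20 binomial samplings of 50 animals per group described above; the exact incidences assigned to each dose group (5/10/15/20/25%) are an illustrative assumption consistent with the stated 5%-to-25% linear response.

```python
import random

def simulate_bioassay(n_sims=20, n_animals=50, seed=42):
    """Binomial sampling from a true linear dose-response, as in Case 1.

    The dose levels and the 5%-25% incidence range follow the text;
    assigning incidences 5/10/15/20/25% to the five dose groups is an
    illustrative assumption.
    """
    rng = random.Random(seed)
    doses = [0, 0.5, 1, 2, 4]
    true_p = [0.05, 0.10, 0.15, 0.20, 0.25]
    results = []
    for _ in range(n_sims):
        # observed tumour count per dose group: n_animals Bernoulli trials each
        results.append([sum(rng.random() < p for _ in range(n_animals))
                        for p in true_p])
    return doses, true_p, results
```

Inspecting individual rows of `results` shows exactly the pathologies described: some runs look flat at low doses (an apparent threshold), others dip before rising, even though the generating process is strictly linear.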
Microenvironmental variation in preassay rearing conditions can ...
Indian Academy of Sciences (India)
alternatively in the presence of some random environmental noise affecting the ... variation leading to a systematic increase or decrease in the fecundity of all pairs of flies that ... can potentially arise due to nonrandom sampling across the.
Re-estimating sample size in cluster randomized trials with active recruitment within clusters
van Schie, Sander; Moerbeek, Mirjam
2014-01-01
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
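For rare-event proportions of this kind (a labelled-synapse fraction near 0.2% over roughly 1000 disector sites), the usual normal-approximation confidence interval behaves poorly; a Wilson score interval is a standard alternative. The sketch below is a generic illustration of that interval, not the authors' own analysis.

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n.

    Better behaved than the normal approximation for very small
    proportions, e.g. a handful of labelled synapses among ~1000
    disector sampling sites.
    """
    p = k / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1.0 - p) / n + z * z / (4.0 * n * n))
    return centre - half, centre + half
```

With 2 labelled synapses at 1000 sites the point estimate is 0.2%, but the interval still spans roughly 0.05% to 0.7%, which is why sample sizes on the order of 1000 sites are needed for rarely occurring synapse types.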
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
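For reference, the classical (neutral) Ewens sampling formula that these generalized formulae extend gives the probability of an allelic partition a = (a_1, ..., a_n) of a sample of size n under scaled mutation rate theta. A direct implementation:

```python
import math

def esf_probability(a, theta):
    """Ewens sampling formula: probability of the allelic partition a,
    where a[j-1] counts allele types seen exactly j times and the sample
    size is n = sum(j * a[j-1]).
    """
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = 1.0            # theta^(n) = theta (theta+1) ... (theta+n-1)
    for i in range(n):
        rising *= theta + i
    prob = math.factorial(n) / rising
    for j, aj in enumerate(a, start=1):
        prob *= theta ** aj / (j ** aj * math.factorial(aj))
    return prob
```

Summing over the five allelic partitions of a sample of size 4 returns 1, a quick sanity check of the normalization.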
Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.
Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K
2016-01-01
The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions the estimates depend on the study population. It is important to be aware of this issue and define a certain population under surveillance in order to obtain consistent results in an application of serological measures for public health purposes.
Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses
Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.
2011-01-01
Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-06-28
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels are randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light conditions. Experimental results show that much important information that is not perceived in color images reconstructed with conventional CFAs is perceived in the images reconstructed with the proposed method.
2014-01-01
Background Current guidelines recommend measuring plasma lipids in fasting patients. Recent studies, however, suggest that variation in plasma lipid concentrations secondary to fasting time may be minimal. Objective of the present study was to investigate the impact of fasting time on plasma lipid concentrations (total cholesterol, HDL and LDL cholesterol, triglycerides). A second objective was to determine the effect of non-alcoholic fatty liver disease exerted on the above-mentioned lipid levels. Method Subjects participating in a population-based cross-sectional study (2,445 subjects; 51.7% females) were questioned at time of phlebotomy regarding duration of pre-phlebotomy fasting. Total cholesterol, LDL and HDL cholesterol, and triglycerides were determined and correlated with length of fasting. An upper abdominal ultrasonographic examination was performed and body-mass index (BMI) and waist-to-hip ratio (WHR) were calculated. Subjects were divided into three groups based on their reported fasting periods of 1–4 h, 4–8 h and > 8 h. After application of the exclusion criteria, a total of 1,195 subjects (52.4% females) were included in the study collective. The Kruskal-Wallis test was used for continuous variables and the chi-square test for categorical variables. The effects of age, BMI, WHR, alcohol consumption, fasting time and hepatic steatosis on the respective lipid variables were analyzed using multivariate logistic regression. Results At multivariate analysis, fasting time was associated with elevated triglycerides (p = 0.0047 for 1–4 h and p = 0.0147 for 4–8 h among females; p fasting period. LDL cholesterol and triglycerides exhibit highly significant variability; the greatest impact is seen with the triglycerides. Fasting time represents an independent factor for reduced LDL cholesterol and elevated triglyceride concentrations. There is a close association between elevated lipids and hepatic steatosis. PMID:24447492
Directory of Open Access Journals (Sweden)
Gunter eSpöck
2015-05-01
Recently, Spock and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spock and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
International Nuclear Information System (INIS)
Matsui, E.; Salati, E.; Ribeiro, M.N.G.; Tancredi, A.C.F.N.S.; Reis, C.M. dos
1984-01-01
The movement of rain water in the soil from 0 to 120 cm depth is studied using weekly variations of delta 18 O. A study of the delta D variability in water vapour and rain water samples during precipitation was also done, with the samples collected at 3-minute intervals from the beginning to the end of precipitation. (M.A.C.) [pt
Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas
Basilevsky, Alexander T.; Head, James W., III
1995-01-01
The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10^6 sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation
Active learning for clinical text classification: is it better than random sampling?
Figueroa, Rosa L; Ngo, Long H; Goryachev, Sergey; Wiechmann, Eduardo P
2012-01-01
Objective This study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks. Design Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. Measurements Classification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. Results The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm. Conclusion For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty. PMID:22707743
DEFF Research Database (Denmark)
Veraart, Almut
2011-01-01
This paper studies the impact of jumps on volatility estimation and inference based on various realised variation measures such as realised variance, realised multipower variation and truncated realised multipower variation. We review the asymptotic theory of those realised variation measures and present a new estimator for the asymptotic "variance" of the centered realised variance in the presence of jumps. Next, we compare the finite sample performance of the various estimators by means of detailed Monte Carlo studies. Here we study the impact of the jump activity, of the jump size of the jumps in the price and of the presence of additional independent or dependent jumps in the volatility. We find that the finite sample performance of realised variance and, in particular, of log-transformed realised variance is generally good, whereas the jump-robust statistics tend to struggle in the presence ...
Fensham, J R; Bubner, E; D'Antignana, T; Landos, M; Caraguel, C G B
2018-05-01
The Australian farmed yellowtail kingfish (Seriola lalandi, YTK) industry monitors skin fluke (Benedenia seriolae) and gill fluke (Zeuxapta seriolae) burden by pooling the fluke count of 10 hooked YTK. The random and systematic error of this sampling strategy was evaluated to assess potential impact on treatment decisions. Fluke abundance (fluke count per fish) in a study cage (estimated 30,502 fish) was assessed five times using the current sampling protocol, and its repeatability was estimated using the repeatability coefficient (CR) and the coefficient of variation (CV). Individual body weight, fork length, fluke abundance, prevalence, intensity (fluke count per infested fish) and density (fluke count per kg of fish) were compared between 100 hooked and 100 seined YTK (assumed representative of the entire population) to estimate potential selection bias. Depending on the fluke species and age category, CR (expected difference in parasite count between 2 sampling iterations) ranged from 0.78 to 114 flukes per fish. Capturing YTK by hooking increased the selection of fish of a weight and length in the lowest 5th percentile of the cage (RR = 5.75, 95% CI: 2.06-16.03, P-value = 0.0001). These lower-end YTK had on average an extra 31 juvenile and 6 adult Z. seriolae per kg of fish and an extra 3 juvenile and 0.4 adult B. seriolae per kg of fish, compared to the rest of the cage population (P-value sampling towards the smallest and most heavily infested fish in the population, resulting in poor repeatability (more variability amongst sampled fish) and an overestimation of parasite burden in the population. In this particular commercial situation these findings supported the health management program, where the finding of an underestimation of parasite burden could provide a production impact on the study population. In instances where fish populations and parasite burdens are more homogenous, sampling error may be less severe. Sampling error when capturing fish
Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick
2015-01-01
Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, FTA card was five times more expensive than the swab (~5 US dollars (USD)/per card vs. ~1 USD/per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
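The agreement figures above are chance-corrected with Cohen's kappa, which is computed from the paired classification table of the two collection methods. The sketch below is a generic implementation of the statistic; the example counts used to exercise it are hypothetical, not the study's data.

```python
def cohens_kappa(table):
    """Cohen's kappa for a square inter-method agreement table;
    table[i][j] counts samples classed i by one method and j by the other
    (e.g. HPV-positive/negative by s-FTA vs. s-DRY)."""
    n = sum(sum(row) for row in table)
    k = len(table)
    po = sum(table[i][i] for i in range(k)) / n                  # observed agreement
    row = [sum(table[i]) for i in range(k)]
    col = [sum(table[i][j] for i in range(k)) for j in range(k)]
    pe = sum(row[i] * col[i] for i in range(k)) / (n * n)        # agreement by chance
    return (po - pe) / (1.0 - pe)
```

A kappa of 0.34 (as between s-FTA and s-DRY) indicates only fair agreement beyond chance, whereas 0.56 (self-HPV vs. Dr-WET) is moderate.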
Zhou, Fuqun; Zhang, Aining
2016-10-25
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites and two features of Random Forests: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables, we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
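The variable-selection idea described above can be sketched with scikit-learn's random forest: rank variables by impurity-based importance, keep a subset, and compare classification accuracy against the full stack. The data below are a synthetic stand-in for the MODIS band stack, not the study's data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for a multi-date stack of composite bands (40 variables)
X = rng.normal(size=(600, 40))
y = (X[:, 3] + X[:, 17] - X[:, 25] > 0).astype(int)  # only 3 bands informative
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
acc_full = rf.score(X_te, y_te)

# Keep the top half of the variables, ranked by impurity-based importance
top = np.argsort(rf.feature_importances_)[::-1][:20]
rf_sub = RandomForestClassifier(n_estimators=200, random_state=0)
acc_sub = rf_sub.fit(X_tr[:, top], y_tr).score(X_te[:, top], y_te)
print(acc_full, acc_sub)
```

With only a few truly informative variables, the halved stack typically scores close to the full one, mirroring the study's finding.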
International Nuclear Information System (INIS)
Galloway, R.B.
1991-01-01
Gamma ray spectrometry is a well established method of determining the activity of radioactive components in environmental samples. It is usual to maintain precisely the same counting geometry in measurements on samples under investigation as in the calibration measurements on standard materials of known activity, thus avoiding perceived uncertainties and complications in correcting for changes in counting geometry. However this may not always be convenient if, as on some occasions, only a small quantity of sample material is available for analysis. A procedure which avoids re-calibration for each sample size is described and is shown to be simple to use without significantly reducing the accuracy of measurement of the activity of typical environmental samples. The correction procedure relates to the use of cylindrical samples at a constant distance from the detector, the samples all having the same diameter but various thicknesses being permissible. (author)
Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey
Directory of Open Access Journals (Sweden)
Steven R. Corman
2013-12-01
Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.
Fowkes, F G; Lowe, G D; Rumley, A; Lennie, S E; Smith, F B; Donnan, P T
1993-05-01
Blood viscosity is elevated in hypertensive subjects, but the association of viscosity with arterial blood pressure in the general population, and the influence of social, lifestyle and disease characteristics on this association, are not established. In the Edinburgh Artery Study, 1592 men and women aged 55-74 years selected randomly from the general population attended a university clinic. A fasting blood sample was taken for the measurement of blood viscosity and its major determinants (haematocrit, plasma viscosity and fibrinogen). Systolic pressure was related univariately to blood viscosity, plasma viscosity and body mass index, as was diastolic pressure; the univariate relationship between blood viscosity and systolic pressure was confined to males. Blood viscosity was associated equally with systolic and diastolic pressures in males, and remained independently related on multivariate analysis adjusting for age, sex, body mass index, social class, smoking, alcohol intake, exercise, angina, HDL and non-HDL cholesterol, diabetes mellitus, plasma viscosity, fibrinogen, and haematocrit.
International Nuclear Information System (INIS)
Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi
2017-01-01
The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. Validity and availability of the present method, i.e. correction of calculation results and reduction of uncertainty, are confirmed in addition to features and performance of the present method. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, variance-covariance of cross sections is considered. The calculation results indicate that bias between predicted and measured results, and uncertainty owing to cross section can be reduced. Extension to other uncertainties such as thermal hydraulics properties will be a future task. (author)
DEFF Research Database (Denmark)
Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan
Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according to ... The implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful ...
Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M
2018-07-01
Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine and cannabis use. Two-sample MR was employed to estimate bidirectional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week) and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these were not supported by the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine and cannabis use. © 2018 Society for the Study of Addiction.
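The core of the two-sample MR approach described above can be illustrated with the inverse-variance-weighted (IVW) estimator: each genetic variant yields a Wald ratio (outcome effect divided by exposure effect), and the ratios are combined with first-order weights. A hedged sketch with made-up summary statistics, not the study's GWAS data:

```python
import numpy as np

def ivw_mr(beta_exp, se_exp, beta_out, se_out):
    """Inverse-variance-weighted two-sample MR estimate.

    Per-variant Wald ratios beta_out/beta_exp are combined using
    first-order weights (exposure uncertainty ignored, as in basic IVW)."""
    beta_exp, beta_out = np.asarray(beta_exp), np.asarray(beta_out)
    ratio = beta_out / beta_exp
    se_ratio = np.asarray(se_out) / np.abs(beta_exp)  # first-order delta method
    w = 1.0 / se_ratio ** 2
    return float(np.sum(w * ratio) / np.sum(w)), float(np.sqrt(1.0 / np.sum(w)))

# Hypothetical summary statistics for four variants, true causal effect = 0.5
bx = np.array([0.10, 0.20, 0.15, 0.08])
est, se = ivw_mr(bx, [0.01] * 4, 0.5 * bx, [0.02] * 4)
print(round(est, 2))  # → 0.5
```

MR-Egger and the weighted median, used as sensitivity analyses in the study, replace this weighted mean with regression-based or median-based combinations that are more robust to pleiotropic variants.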
Directory of Open Access Journals (Sweden)
Jennifer L Smith
Full Text Available Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF, generally collected using the recommended gold-standard cluster randomized surveys (CRS. Integrated Threshold Mapping (ITM has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters.Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i the district prevalence of TF; (ii the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii the enrollment rate in schools.Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates
Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim
2013-01-01
The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed squared η-μ variates, applicable to the output statistics of maximal ratio combining (MRC) receiver
International Nuclear Information System (INIS)
Kramer, S.J.; Milton, G.M.; Repta, C.J.W.
1995-06-01
The effect of variations in sample preparation and storage on the counting efficiency for 14C using a Carbo-Sorb/Permafluor E+ liquid scintillation cocktail has been studied, and optimum conditions are recommended. (author). 2 refs., 2 tabs., 4 figs.
Susukida, Ryoko; Crum, Rosa M; Stuart, Elizabeth A; Ebnesajjad, Cyrus; Mojtabai, Ramin
2016-07-01
To compare the characteristics of individuals participating in randomized controlled trials (RCTs) of treatments of substance use disorder (SUD) with individuals receiving treatment in usual care settings, and to provide a summary quantitative measure of differences between characteristics of these two groups of individuals using propensity score methods. Design: Analyses using data from RCT samples from the National Institute on Drug Abuse Clinical Trials Network (CTN) and target populations of patients drawn from the Treatment Episodes Data Set-Admissions (TEDS-A). Settings: Multiple clinical trial sites and nation-wide usual SUD treatment settings in the United States. Participants: A total of 3592 individuals from 10 CTN samples and 1,602,226 individuals selected from TEDS-A between 2001 and 2009. Measurements: The propensity scores for enrolling in the RCTs were computed based on the following nine observable characteristics: sex, race/ethnicity, age, education, employment status, marital status, admission to treatment through criminal justice, intravenous drug use and the number of prior treatments. Findings: The proportion of those with ≥ 12 years of education and the proportion of those who had full-time jobs were significantly higher among RCT samples than among target populations (in seven and nine trials, respectively). The difference in the mean propensity scores between the RCTs and the target population was 1.54 standard deviations and was statistically significant. Conclusions: RCT samples were significantly different from individuals receiving treatment in usual care settings. Notably, RCT participants tend to have more years of education and a greater likelihood of full-time work compared with people receiving care in usual care settings. © 2016 Society for the Study of Addiction.
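The summary measure used here, the difference in mean propensity scores expressed in pooled standard-deviation units, can be sketched as follows. The covariates and group sizes are hypothetical, not the CTN/TEDS-A data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical covariates: years of education and a full-time-work indicator
n_rct, n_pop = 300, 3000
X_rct = np.column_stack([rng.normal(13, 2, n_rct), rng.binomial(1, 0.60, n_rct)])
X_pop = np.column_stack([rng.normal(11, 2, n_pop), rng.binomial(1, 0.35, n_pop)])
X = np.vstack([X_rct, X_pop])
z = np.r_[np.ones(n_rct), np.zeros(n_pop)]  # 1 = RCT participant

# Propensity of RCT enrolment given covariates, then the standardized
# difference in mean propensity scores between the two groups
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
ps_rct, ps_pop = ps[z == 1], ps[z == 0]
pooled_sd = np.sqrt((ps_rct.var(ddof=1) + ps_pop.var(ddof=1)) / 2)
smd = (ps_rct.mean() - ps_pop.mean()) / pooled_sd
print(round(smd, 2))
```

A standardized difference well above conventional balance thresholds (often 0.1-0.25) indicates, as in the study, that trial participants differ systematically from the target population.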
Li, Ningzhi; Li, Shizhe; Shen, Jun
2017-06-01
In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radio frequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectral range of 10-65 ppm. The amplitude of decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH = 125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under an inhomogeneous B1 field. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields. In contrast, carboxylic/amide carbons are coupled to protons only via weak long-range 1H-13C scalar couplings, which can be decoupled using low RF power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low power random RF heteronuclear decoupling was safely applied to human brain studies at 7 T. Here, we review the two major decoupling methods and the carboxylic/amide 13C-MRS with low power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. Low RF power decoupling opens the possibility of performing in vivo 13C experiments on the human brain at very high magnetic fields (such as 11.7 T), where signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than at lower fields.
Directory of Open Access Journals (Sweden)
Nguyen Phuong H
2012-10-01
Full Text Available Abstract Background: Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned given many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The Preconcept trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of study design, methodology and sample characteristics from baseline survey data and key lessons learned. Methods/design: We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either: (1) 2800 μg folic acid; (2) 60 mg iron and 2800 μg folic acid; or (3) MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion: The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and ...
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating the more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce reliable estimation. The aim of this preliminary work is to compare the performances of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF, error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF, error); (b) AP on the training set (APS, error); and (c) ET on the respective test set (ETS, error). A good PLS2-DA model is expected to produce APS, error and ETS, error similar to the APF, error. Bearing that in mind, the similarities between (a) APS, error vs. APF, error; (b) ETS, error vs. APF, error; and (c) APS, error vs. ETS, error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests) on series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
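The Kennard-Stone algorithm referenced above selects training samples that successively maximize the minimum distance to the already-selected set, so the training set spans the data space. A minimal NumPy sketch (the toy 2-D points stand in for spectra; a real application would use the full spectral matrix):

```python
import numpy as np

def kennard_stone(X, n_train):
    """Select n_train rows of X by the Kennard-Stone algorithm: start with
    the two most distant samples, then repeatedly add the sample whose
    minimum distance to the selected set is largest."""
    X = np.asarray(X, dtype=float)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    i, j = np.unravel_index(np.argmax(d), d.shape)
    selected = [int(i), int(j)]
    while len(selected) < n_train:
        remaining = [k for k in range(len(X)) if k not in selected]
        # For each candidate, distance to its nearest already-selected sample
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        selected.append(remaining[int(np.argmax(nearest))])
    return selected

# Toy 2-D "spectra": two extreme pairs and two mid-range points
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0],
              [5.0, 0.5], [5.0, 0.0]])
print(kennard_stone(X, 4))
```

Because KS is deterministic and favors extremes, it tends to place outliers in the training set, which is one reason repeated random splits (IRS) can give more honest external error estimates.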
Directory of Open Access Journals (Sweden)
Alanis Kelly L
2006-02-01
Full Text Available Abstract Background: Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance that has long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for the licit substances alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and the risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results: Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test illustrated that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure ...
McKinney, Cushla; Fanciulli, Manuela; Merriman, Marilyn E.; Phipps-Green, Amanda; Alizadeh, Behrooz Z.; Koeleman, Bobby P. C.; Dalbeth, Nicola; Gow, Peter J.; Harrison, Andrew A.; Highton, John; Jones, Peter B.; Stamp, Lisa K.; Steer, Sophia; Barrera, Pilar; Coenen, Marieke J. H.; Franke, Barbara; van Riel, Piet L. C. M.; Vyse, Tim J.; Aitman, Tim J.; Radstake, Timothy R. D. J.; Merriman, Tony R.
2010-01-01
Objective There is increasing evidence that variation in gene copy number (CN) influences clinical phenotype. The low-affinity Fc gamma receptor 3B (FCGR3B) located in the FCGR gene cluster is a CN polymorphic gene involved in the recruitment to sites of inflammation and activation of
McKinney, C.; Fanciulli, M.; Merriman, M.E.; Phipps-Green, A.; Alizadeh, B.Z.; Koeleman, B.P.; Dalbeth, N.; Gow, P.J.; Harrison, A.A.; Highton, J.; Jones, P.B.; Stamp, L.K.; Steer, S.; Barrera, P.; Coenen, M.J.H.; Franke, B.; Riel, P.L.C.M. van; Vyse, T.J.; Aitman, T.J.; Radstake, T.R.D.J.; Merriman, T.R.
2010-01-01
OBJECTIVE: There is increasing evidence that variation in gene copy number (CN) influences clinical phenotype. The low-affinity Fc gamma receptor 3B (FCGR3B) located in the FCGR gene cluster is a CN polymorphic gene involved in the recruitment to sites of inflammation and activation of
Anhøj, Jacob; Olesen, Anne Vingaard
2014-01-01
A run chart is a line graph of a measure plotted over time with the median as a horizontal line. The main purpose of the run chart is to identify process improvement or degradation, which may be detected by statistical tests for non-random patterns in the data sequence. We studied the sensitivity to shifts and linear drifts in simulated processes using the shift, crossings and trend rules for detecting non-random variation in run charts. The shift and crossings rules are effective in detecting shifts and drifts in process centre over time while keeping the false signal rate constant around 5% and independent of the number of data points in the chart. The trend rule is virtually useless for detection of linear drift over time, the purpose it was intended for.
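The shift and crossings rules can be sketched in a few lines: points on the median are excluded, a shift signal is an unusually long run of points on one side of the median, and a crossings signal is fewer median crossings than expected by chance. This follows the rules as described above; the specific run limit and binomial bound used here are assumptions of the sketch, not taken from the paper:

```python
import math
from statistics import median

def longest_run(sides):
    """Length of the longest run of consecutive equal values."""
    best = cur = 1
    for a, b in zip(sides, sides[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def run_chart_signals(data):
    med = median(data)
    sides = [x > med for x in data if x != med]  # points on the median excluded
    n = len(sides)
    crossings = sum(a != b for a, b in zip(sides, sides[1:]))
    run_limit = round(math.log2(n) + 3)  # longest run expected from random data
    # Smallest c with P(Binomial(n-1, 0.5) <= c) >= 0.05; fewer crossings than
    # this is unlikely under purely random variation
    cum, c = 0.0, -1
    while cum < 0.05:
        c += 1
        cum += math.comb(n - 1, c) * 0.5 ** (n - 1)
    return {"shift": longest_run(sides) > run_limit,
            "too_few_crossings": crossings < c}

print(run_chart_signals([1, 1, 2, 1, 2, 10, 11, 10, 11, 10]))  # shifted process
print(run_chart_signals([3, 1, 4, 1, 5, 9, 2, 6, 5, 3]))       # random-looking
```

In the shifted example the crossings rule fires while the raw run length stays below the limit, illustrating why the two rules are used together.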
DEFF Research Database (Denmark)
Kristensen, Erling Lundager; Østergaard, Søren; Krogh, Mogens Agerbo
2008-01-01
The manager of a dairy herd and the affiliated consultants constantly need to judge whether financial performance of the production system is satisfactory and whether financial performance relates to real (systematic) effects of changes in management. This is no easy task because the dairy herd is a very complex system. Thus, it is difficult to obtain empirical data that allow a valid estimation of the random (within-herd) variation in financial performance corrected for management changes. Hence, simulation seems to be the only option. This study suggests that much caution must be recommended ...
Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy
2013-06-18
Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.
Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2015-04-10
A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.
Messiah, Antoine; Lacoste, Jérôme; Gokalsing, Erick; Shultz, James M; Rodríguez de la Vega, Pura; Castro, Grettel; Acuna, Juan M
2016-08-01
Studies on the mental health of families hosting disaster refugees are lacking. This study compares participants in households that hosted 2010 Haitian earthquake disaster refugees with their nonhost counterparts. A random sample survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants were assessed regarding their 2010 earthquake exposure and impact on family and friends and whether they hosted earthquake refugees. Using standardized scores and thresholds, they were evaluated for symptoms of three common mental disorders (CMDs): posttraumatic stress disorder, generalized anxiety disorder, and major depressive disorder (MDD). Participants who hosted refugees (n = 51) had significantly higher percentages of scores beyond thresholds for MDD than those who did not host refugees (n = 365) and for at least one CMD, after adjusting for participants' earthquake exposures and effects on family and friends. Hosting refugees from a natural disaster appears to elevate the risk for MDD and possibly other CMDs, independent of risks posed by exposure to the disaster itself. Families hosting refugees deserve special attention.
Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.
2013-01-01
Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-...
Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit
2016-02-01
The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.
Glasscock, David J; Carstensen, Ole; Dalgaard, Vita Ligaya
2018-05-28
Randomized controlled trials (RCTs) of interventions aimed at reducing work-related stress indicate that cognitive behavioural therapy (CBT) is more effective than other interventions. However, definitions of study populations are often unclear and there is a lack of interventions targeting both the individual and the workplace. The aim of this study was to determine whether a stress management intervention combining individual CBT and a workplace focus is superior to no treatment in the reduction of perceived stress and stress symptoms and time to lasting return to work (RTW) in a clinical sample. Patients with work-related stress reactions or adjustment disorders were randomly assigned to an intervention group (n = 57, 84.2% female) or a control group (n = 80, 83.8% female). Subjects were followed via questionnaires and register data. The intervention contained individual CBT and the offer of a workplace meeting. We examined intervention effects by analysing group differences in score changes on the Perceived Stress Scale (PSS-10) and the General Health Questionnaire (GHQ-30). We also tested whether intervention led to faster lasting RTW. Mean baseline values of PSS were 24.79 in the intervention group and 23.26 in the control group, while the corresponding values for GHQ were 21.3 and 20.27, respectively. There was a significant effect of time. 10 months after baseline, both groups reported less perceived stress and improved mental health. 4 months after baseline, we found significant treatment effects for both perceived stress and mental health. The difference in mean change in PSS after 4 months was −3.09 (−5.47, −0.72), while for GHQ it was −3.91 (−7.15, −0.68). There were no group differences in RTW. The intervention led to faster reductions in perceived stress and stress symptoms amongst patients with work-related stress reactions and adjustment disorders. 6 months after the intervention ended there were no longer differences between
Directory of Open Access Journals (Sweden)
Romain Guignard
Full Text Available OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system based on regular quota sampling surveys for smoking prevalence. DESIGN / OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey on 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% among 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to the smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.
Werner, Benjamin; Sottoriva, Andrea
2018-06-01
The immortal strand hypothesis poses that stem cells could produce differentiated progeny while conserving the original template strand, thus avoiding accumulating somatic mutations. However, quantitating the extent of non-random DNA strand segregation in human stem cells remains difficult in vivo. Here we show that the change of the mean and variance of the mutational burden with age in healthy human tissues allows estimating strand segregation probabilities and somatic mutation rates. We analysed deep sequencing data from healthy human colon, small intestine, liver, skin and brain. We found highly effective non-random DNA strand segregation in all adult tissues (mean strand segregation probability: 0.98, standard error bounds (0.97,0.99)). In contrast, non-random strand segregation efficiency is reduced to 0.87 (0.78,0.88) in neural tissue during early development, suggesting stem cell pool expansions due to symmetric self-renewal. Healthy somatic mutation rates differed across tissue types, ranging from 3.5 × 10⁻⁹/bp/division in small intestine to 1.6 × 10⁻⁷/bp/division in skin.
Directory of Open Access Journals (Sweden)
Benjamin Werner
2018-06-01
Full Text Available The immortal strand hypothesis poses that stem cells could produce differentiated progeny while conserving the original template strand, thus avoiding accumulating somatic mutations. However, quantitating the extent of non-random DNA strand segregation in human stem cells remains difficult in vivo. Here we show that the change of the mean and variance of the mutational burden with age in healthy human tissues allows estimating strand segregation probabilities and somatic mutation rates. We analysed deep sequencing data from healthy human colon, small intestine, liver, skin and brain. We found highly effective non-random DNA strand segregation in all adult tissues (mean strand segregation probability: 0.98, standard error bounds (0.97,0.99)). In contrast, non-random strand segregation efficiency is reduced to 0.87 (0.78,0.88) in neural tissue during early development, suggesting stem cell pool expansions due to symmetric self-renewal. Healthy somatic mutation rates differed across tissue types, ranging from 3.5 × 10⁻⁹/bp/division in small intestine to 1.6 × 10⁻⁷/bp/division in skin.
de Freitas, Patricia Moreira; Menezes, Andressa Nery; da Mota, Ana Carolina Costa; Simões, Alyne; Mendes, Fausto Medeiros; Lago, Andrea Dias Neves; Ferreira, Leila Soares; Ramos-Oliveira, Thayanne Monteiro
2016-01-01
The present study investigated how a hybrid light source (LED/laser) influences temperature variation on the enamel surface during 35% hydrogen peroxide (HP) bleaching. Effects on whitening effectiveness and tooth sensitivity were analyzed. Twenty-two volunteers were randomly assigned to two different treatments in a split-mouth experimental model: group 1 (control), 35% HP; group 2 (experimental), 35% HP + LED/laser. Color evaluation was performed before treatment, and 7 and 14 days after completion of bleaching, using a color shade scale. Tooth sensitivity was assessed using a visual analog scale (VAS; before, immediately, and 24 hours after bleaching). During the bleaching treatment, thermocouple channels positioned on the tooth surfaces recorded the temperature. Data on color and temperature changes were subjected to statistical analysis (α = 5%). Tooth sensitivity data were evaluated descriptively. Groups 1 and 2 showed mean temperatures (± standard deviation) of 30.7 ± 1.2 °C and 34.1 ± 1.3 °C, respectively. There were statistically significant differences between the groups, with group 2 showing higher mean temperature variation on the enamel surface. The color change results showed no differences in bleaching between the two treatment groups (P = .177). The variation of the average temperature during the treatments was not statistically associated with color variation (P = .079). Immediately after bleaching, 36.4% of the subjects in group 2 had mild to moderate sensitivity. In group 1, 45.5% showed moderate sensitivity. In both groups, the sensitivity ceased within 24 hours. The hybrid light source (LED/laser) influences temperature variation on the enamel surface during 35% HP bleaching and is not related to greater tooth sensitivity.
Goenka, Ajit H; Remer, Erick M; Veniero, Joseph C; Thupili, Chakradhar R; Klein, Eric A
2015-09-01
The objective of our study was to review our experience with CT-guided transgluteal prostate biopsy in patients without rectal access. Twenty-one CT-guided transgluteal prostate biopsy procedures were performed in 16 men (mean age, 68 years; age range, 60-78 years) under conscious sedation. The mean prostate-specific antigen (PSA) value was 11.4 ng/mL (range, 2.3-39.4 ng/mL). Six patients had undergone seven prior unsuccessful transperineal or transurethral biopsies. Biopsy results, complications, sedation time, and radiation dose were recorded. The mean PSA values and number of core specimens were compared between patients with malignant results and patients with nonmalignant results using the Student t test. The average procedural sedation time was 50.6 minutes (range, 15-90 minutes) (n = 20), and the mean effective radiation dose was 8.2 mSv (median, 6.6 mSv; range, 3.6-19.3 mSv) (n = 13). Twenty of the 21 (95%) procedures were technically successful. The only complication was a single episode of gross hematuria and penile pain in one patient, which resolved spontaneously. Of 20 successful biopsies, 8 (40%) yielded adenocarcinoma (Gleason score: mean, 8; range, 7-9). Twelve biopsies (60%) yielded nonmalignant results: high-grade prostatic intraepithelial neoplasia (n = 3) or benign prostatic tissue with or without inflammation (n = 9). Three patients had carcinoma diagnosed on subsequent biopsies (second biopsy, n = 2 patients; third biopsy, n = 1 patient). A malignant biopsy result was not significantly associated with the number of core specimens (p = 0.3) or the mean PSA value (p = 0.1). CT-guided transgluteal prostate biopsy is a safe and reliable technique for systematic random sampling of the prostate in patients without rectal access. In patients with initial negative biopsy results, repeat biopsy should be considered if there is a persistent rise in the PSA value.
Carobene, Anna; Strollo, Marta; Jonker, Niels; Barla, Gerhard; Bartlett, William A; Sandberg, Sverre; Sylte, Marit Sverresdotter; Røraas, Thomas; Sølvik, Una Ørvim; Fernandez-Calle, Pilar; Díaz-Garzón, Jorge; Tosato, Francesca; Plebani, Mario; Coşkun, Abdurrahman; Serteser, Mustafa; Unsal, Ibrahim; Ceriotti, Ferruccio
2016-10-01
Biological variation (BV) data have many fundamental applications in laboratory medicine. At the 1st Strategic Conference of the European Federation of Clinical Chemistry and Laboratory Medicine (EFLM), the reliability and limitations of current BV data were discussed. The EFLM Working Group on Biological Variation is working to increase the quality of BV data by developing a European project to establish a biobank of samples from healthy subjects to be used to produce high quality BV data. The project involved six European laboratories (Milan, Italy; Bergen, Norway; Madrid, Spain; Padua, Italy; Istanbul, Turkey; Assen, The Netherlands). Blood samples were collected from 97 volunteers (44 men, aged 20-60 years; 43 women, aged 20-50 years; 10 women, aged 55-69 years). Initial subject inclusion required that participants complete an enrolment questionnaire to verify their health status. The volunteers provided blood specimens once per week for 10 weeks. At each sampling, a short questionnaire was completed and some laboratory tests were performed; blood was collected under controlled conditions to provide serum, K2EDTA-plasma and citrated-plasma samples. Samples from six of the 97 enrolled subjects were discarded as a consequence of abnormal laboratory measurements. A biobank of 18,000 aliquots was established, consisting of 120 aliquots of serum, 40 of EDTA-plasma, and 40 of citrated-plasma from each subject. The samples were stored at -80 °C. A biobank of well-characterised samples collected under controlled conditions has been established, delivering a European resource to enable production of contemporary BV data.
DEFF Research Database (Denmark)
Rietschel, M; Mattheisen, M; Degenhardt, F
2012-01-01
the recruitment of very large samples of patients and controls (that is tens of thousands), or large, potentially more homogeneous samples that have been recruited from confined geographical areas using identical diagnostic criteria. Applying the latter strategy, we performed a genome-wide association study (GWAS...... between emotion regulation and cognition that is structurally and functionally abnormal in SCZ and bipolar disorder.Molecular Psychiatry advance online publication, 12 July 2011; doi:10.1038/mp.2011.80....
Startsev, V. O.; Il'ichev, A. V.
2018-05-01
The effect of mechanical impact energy on the sorption and diffusion of moisture in polymer composite samples of varying size was investigated. Square samples, with sides of 40, 60, 80, and 100 mm, made of KMKU-2m-120.E0,1 carbon-fiber and KMKS-2m.120.T10 glass-fiber plastics with different resistances to calibrated impacts, were compared. Impact loading diagrams of the samples in relation to their sizes and impact energy were analyzed. It is shown that the moisture saturation and moisture diffusion coefficient of the impact-damaged materials can be modeled by Fick's second law, accounting for impact energy and sample size.
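The Fickian description invoked above can be sketched numerically: the series solution of Fick's second law for one-dimensional uptake in a plate of thickness h gives the fractional moisture gain M(t)/M∞. The diffusivity and thickness values below are hypothetical placeholders, not the paper's measured data:

```python
import math

def fickian_uptake(t, D, h, terms=50):
    """Fractional moisture uptake M(t)/M_inf for a plate of thickness h,
    from the series solution of Fick's second law (1-D, both faces exposed):
    M/M_inf = 1 - (8/pi^2) * sum_n exp(-(2n+1)^2 pi^2 D t / h^2) / (2n+1)^2."""
    s = 0.0
    for n in range(terms):
        k = 2 * n + 1
        s += math.exp(-((k * math.pi / h) ** 2) * D * t) / k ** 2
    return 1.0 - (8.0 / math.pi ** 2) * s

# Illustrative (hypothetical) values: D in mm^2/day, h in mm.
D, h = 5e-4, 2.0
uptake_curve = [fickian_uptake(t, D, h) for t in (0, 10, 100, 1000)]
```

Fitting D to gravimetric uptake data, separately for each impact energy and sample size, is the kind of modelling the abstract refers to.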
International Nuclear Information System (INIS)
Makepeace, C.E.
1981-01-01
Sampling strategies for the monitoring of deleterious agents present in uranium mine air in underground and surface mining areas are described. These methods are designed to prevent overexposure of the lining of the respiratory system of uranium miners to ionizing radiation from radon and radon daughters, and whole-body overexposure to external gamma radiation. A detailed description is provided of the stratified random sampling monitoring methodology used for obtaining baseline data to serve as a reference for subsequent compliance assessment.
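The stratified random sampling methodology referred to above rests on the standard stratum-weighted estimator. A minimal sketch follows, with hypothetical stratum sizes and readings rather than monitoring data from the report:

```python
import statistics

def stratified_estimate(strata):
    """Stratified random sampling estimate of a population mean.
    `strata` maps stratum name -> (N_h, sample); N_h is the stratum size.
    Returns the weighted mean and its estimated variance
    (with finite-population correction)."""
    N = sum(N_h for N_h, _ in strata.values())
    mean, var = 0.0, 0.0
    for N_h, sample in strata.values():
        W = N_h / N                      # stratum weight
        n = len(sample)
        mean += W * statistics.fmean(sample)
        var += W ** 2 * (1 - n / N_h) * statistics.variance(sample) / n
    return mean, var

# Hypothetical readings by mining area (arbitrary units).
strata = {"underground": (120, [4.1, 3.8, 5.0, 4.4]),
          "surface": (80, [1.2, 0.9, 1.1, 1.0])}
m, v = stratified_estimate(strata)
```

Stratifying by area pays off exactly when, as here, within-stratum variability is much smaller than the between-stratum difference.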
Arnup, Sarah J; McKenzie, Joanne E; Hemming, Karla; Pilcher, David; Forbes, Andrew B
2017-08-15
In a cluster randomised crossover (CRXO) design, a sequence of interventions is assigned to a group, or 'cluster', of individuals. Each cluster receives each intervention in a separate period of time, forming 'cluster-periods'. Sample size calculations for CRXO trials need to account for both the cluster randomisation and crossover aspects of the design. Formulae are available for the two-period, two-intervention, cross-sectional CRXO design; however, implementation of these formulae is known to be suboptimal. The aims of this tutorial are to illustrate the intuition behind the design and to provide guidance on performing sample size calculations. Graphical illustrations are used to describe the effect of the cluster randomisation and crossover aspects of the design on the correlation between individual responses in a CRXO trial. Sample size calculations for binary and continuous outcomes are illustrated using parameters estimated from the Australia and New Zealand Intensive Care Society - Adult Patient Database (ANZICS-APD) for patient mortality and length of stay (LOS). The similarity between individual responses in a CRXO trial can be understood in terms of three components of variation: variation in the cluster mean response; variation in the cluster-period mean response; and variation between individual responses within a cluster-period; or equivalently in terms of the correlation between individual responses in the same cluster-period (within-cluster within-period correlation, WPC) and between individual responses in the same cluster but in different periods (within-cluster between-period correlation, BPC). The BPC lies between zero and the WPC. When the WPC and BPC are equal, the precision gained by the crossover aspect of the CRXO design equals the precision lost by cluster randomisation. When the BPC is zero there is no advantage in a CRXO over a parallel-group cluster randomised trial. Sample size calculations illustrate that small changes in the specification of
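The correlation structure described above maps onto a commonly quoted design effect for the two-period cross-sectional CRXO design, DE = 1 + (m − 1)·WPC − m·BPC, relative to an individually randomised trial; note that it reduces to the familiar parallel cluster-trial design effect when BPC = 0, matching the abstract. The formula and the normal-approximation sample size below are assumptions drawn from the general CRXO literature, not quoted from this tutorial:

```python
import math

def crxo_design_effect(m, wpc, bpc):
    """Design effect of a two-period cross-sectional CRXO trial relative to an
    individually randomised trial; m is the cluster-period size (assumed form)."""
    return 1 + (m - 1) * wpc - m * bpc

def crxo_n_per_arm(delta, sd, m, wpc, bpc):
    """Approximate subjects per arm for a continuous outcome,
    two-sided alpha = 0.05 and 80% power (normal approximation)."""
    z_a, z_b = 1.959964, 0.841621          # z_{0.975}, z_{0.80}
    n_ind = 2 * (sd * (z_a + z_b) / delta) ** 2
    return math.ceil(n_ind * crxo_design_effect(m, wpc, bpc))

# Illustrative inputs (hypothetical, not ANZICS-APD estimates):
n = crxo_n_per_arm(delta=5, sd=10, m=20, wpc=0.05, bpc=0.02)
```

With BPC = 0 the design effect equals 1 + (m − 1)·WPC, i.e. the parallel-group cluster trial, and larger BPC shrinks the required sample size.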
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
Ansari, Imran Shafique
2013-06-01
The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed squared η-μ variates, applicable to the output statistics of a maximal ratio combining (MRC) receiver operating over η-μ fading channels, which include the Hoyt and the Nakagami-m models as special cases, are presented in closed form in terms of the Fox H function. Further analysis, particularly of the bit error rate via a PDF-based approach, is also represented in closed form in terms of the extended Fox H function (H). The proposed new analytical results complement previous results and are illustrated by extensive numerical and Monte Carlo simulation results. © 2013 IEEE.
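As a hedged companion to the Monte Carlo results mentioned above, the output SNR of MRC, a sum of squared fading variates, can be simulated directly. The sketch below uses the Nakagami-m special case of the η-μ model (branch power gains are Gamma-distributed) with BPSK signalling; all parameter choices are illustrative, not taken from the paper:

```python
import math
import random

def mrc_ber_bpsk(L, m, snr_db, trials=50_000, seed=1):
    """Monte Carlo BER of BPSK with L-branch MRC over i.i.d. Nakagami-m fading
    (a special case of the eta-mu model). Per-branch average SNR is snr_db;
    each branch power gain is Gamma(m, 1/m), i.e. a squared Nakagami-m variate."""
    rng = random.Random(seed)
    snr = 10 ** (snr_db / 10)
    err = 0.0
    for _ in range(trials):
        # MRC combines branch SNRs: the sum of L squared fading variates.
        g = sum(rng.gammavariate(m, 1 / m) for _ in range(L))
        # Conditional BPSK error prob.: Q(sqrt(2*snr*g)) = 0.5*erfc(sqrt(snr*g)).
        err += 0.5 * math.erfc(math.sqrt(snr * g))
    return err / trials
```

Averaging the conditional error probability instead of counting bit errors keeps the estimator smooth, so far fewer trials are needed than in a bit-level simulation.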
Simuta-Champo, R.; Herrera-Zamarrón, G. S.
2010-01-01
The Monte Carlo technique provides a natural method for evaluating uncertainties. The uncertainty is represented by a probability distribution or by related quantities such as statistical moments. When the groundwater flow and transport governing equations are solved and the hydraulic conductivity field is treated as a random spatial function, the hydraulic head, velocities and concentrations also become random spatial functions. When that is the case, for the stochastic simulation of groundw...
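The idea of propagating a random conductivity field through the flow equations can be sketched in miniature: a steady one-dimensional column with independent lognormal cell conductivities, solved per realization, with the head's statistical moments accumulated across realizations. This toy setup is an assumption for illustration, not the study's model:

```python
import random
import statistics

def head_midpoint(K, h_left=10.0, h_right=0.0, dx=1.0):
    """Steady 1-D Darcy flow through cells in series (harmonic averaging):
    total flux q = (h_left - h_right) / sum(dx / K_i); head drops cell by cell."""
    R = [dx / k for k in K]                 # per-cell hydraulic resistance
    q = (h_left - h_right) / sum(R)
    h = h_left
    for r in R[:len(K) // 2]:               # walk to the column midpoint
        h -= q * r
    return h

def monte_carlo_head(n_cells=10, realizations=2000, seed=7):
    """Moments of the midpoint head when log-conductivity is N(0, 1) per cell."""
    rng = random.Random(seed)
    heads = []
    for _ in range(realizations):
        K = [rng.lognormvariate(0.0, 1.0) for _ in range(n_cells)]
        heads.append(head_midpoint(K))
    return statistics.fmean(heads), statistics.stdev(heads)

mean_h, sd_h = monte_carlo_head()
```

The same loop structure carries over to 2-D/3-D flow and transport solvers: only the per-realization forward model changes, which is what makes Monte Carlo a natural, if costly, uncertainty propagation method.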
Directory of Open Access Journals (Sweden)
Siwatt Pongpiachan
2012-01-01
Full Text Available This paper presents new results on the impacts of diurnal variation, vertical distribution, and emission source on the sulfur K-edge XANES spectra of aerosol samples. All aerosol samples used in the diurnal variation experiment were preserved using anoxic preservation stainless cylinders (APSCs) and pressure-controlled glove boxes (PCGBs), which were specially designed to prevent oxidation of the sulfur states in PM10. Further investigation of sulfur K-edge XANES spectra revealed that PM10 samples were dominated by S(VI), even when preserved in anoxic conditions. The “emission source effect” on the sulfur oxidation state of PM10 was examined by comparing sulfur K-edge XANES spectra collected from various emission sources in southern Thailand, while “vertical distribution effects” on the sulfur oxidation state of PM10 were assessed using samples collected at three different altitudes from the rooftops of the tallest buildings in three major cities in Thailand. The analytical results demonstrate that neither “emission source” nor “vertical distribution” appreciably contributes to the characteristic fingerprint of the sulfur K-edge XANES spectrum in PM10.
Directory of Open Access Journals (Sweden)
Ming-Hung Chien
Full Text Available Falls are common in older people and may lead to functional decline, disability, and death. Many risk factors have been identified, but studies evaluating effects of nutritional status are limited. To determine whether nutritional status is a predictor of falls in older people living in the community, we analyzed data collected through the Survey of Health and Living Status of the Elderly in Taiwan (SHLSET). SHLSET comprises a series of interview surveys conducted by the government on a random sample of people living in community dwellings in the nation. We included participants who received nutritional status assessment using the Mini Nutritional Assessment Taiwan Version 2 (MNA-T2) in the 1999 survey, when they were 53 years or older, and followed up on the cumulative incidence of falls in the one-year period before the interview in the 2003 survey. At the beginning of follow-up, the 4440 participants had a mean age of 69.5 (standard deviation = 9.1) years, and 467 participants were "not well-nourished," defined as having an MNA-T2 score of 23 or less. In the one-year study period, 659 participants reported having at least one fall. After adjusting for other risk factors, we found the associated odds ratio for falls was 1.73 (95% confidence interval, 1.23, 2.42) for "not well-nourished," 1.57 (1.30, 1.90) for female gender, 1.03 (1.02, 1.04) per one-year increase in age, 1.55 (1.22, 1.98) for history of falls, 1.34 (1.05, 1.72) for hospital stay during the past 12 months, 1.66 (1.07, 2.58) for difficulties in activities of daily living, and 1.53 (1.23, 1.91) for difficulties in instrumental activities of daily living. Nutritional status is an independent predictor of falls in older people living in the community. Further studies are warranted to identify nutritional interventions that can help prevent falls in the elderly.
DEFF Research Database (Denmark)
Fuglsang, Karsten; Pedersen, Niels Hald; Larsen, Anna Warberg
2014-01-01
A dedicated sampling and measurement method was developed for long-term measurements of biogenic and fossil-derived CO2 from thermal waste-to-energy processes. Based on long-term sampling of CO2 and 14C determination, plant-specific emission factors can be determined more accurately, and the annual...... emission of fossil CO2 from waste-to-energy plants can be monitored according to carbon trading schemes and renewable energy certificates. Weekly and monthly measurements were performed at five Danish waste incinerators. Significant variations between fractions of biogenic CO2 emitted were observed...... was ± 4.0 pmC (95 % confidence interval) at 62 pmC. The long-term sampling method was found to be useful for waste incinerators for determination of annual fossil and biogenic CO2 emissions with relatively low uncertainty....
International Nuclear Information System (INIS)
Lopes, Patricia da Costa
2005-01-01
We describe here an application of excess ²²²Rn to estimate submarine groundwater discharge (SGD) in a series of small embayments of Ubatuba, Sao Paulo State, Brazil, covering latitudes between 23°26'S and 23°46'S and longitudes between 45°02'W and 45°11'W. Excess ²²²Rn inventories obtained in 24 vertical profiles established from March 2003 to July 2005 varied from 345 ± 24 to 18,700 ± 4,900 dpm/m². The highest inventories of excess ²²²Rn were observed in both the Flamengo and Fortaleza embayments during summer campaigns (rainy season). The estimated total fluxes required to support the measured inventories varied from 62 ± 4 to 3,385 ± 880 dpm/m²/d. Considering these results, the SGD advective rates necessary to balance the fluxes calculated in Ubatuba embayments ranged from 0.1 × 10⁻¹ to 1.9 cm/d. Taking into account all SGD fluxes obtained, the percentage variability was 89% (seasonal variation over the 3-year period, n = 24 measurements). However, considering each year of study separately, the respective percentage variabilities are 72% in 2003 (n = 10 measurements), 127% in 2004 (n = 6 measurements) and 97% in 2005 (n = 8 measurements). (author)
Allton, J. H.; Kuhlman, K. R.; Allums, K. K.; Gonzalez, C. P.; Jurewicz, A. J. G.; Burnett, D. S.; Woolum, D. S.
2015-01-01
The recovered Genesis collector fragments are heavily contaminated with crash-derived particulate debris. However, megasonic treatment with ultra-pure water (UPW; resistivity > 18 MΩ·cm) removes essentially all particulate contamination greater than 5 microns in size [e.g., 1] and is thus of considerable importance. Optical imaging of Si sample 60336 revealed the presence of a large C-rich particle after UPW treatment that was not present prior to UPW. Such handling contamination is occasionally observed, but such contaminants are normally easily removed by UPW cleaning. The 60336 particle was exceptional in that, surprisingly, it was not removed by additional UPW, by hot xylene, or by aqua regia treatment. It was eventually removed by treatment with NH3-H2O2. Our best interpretation of the origin of the 60336 particle is that it was adhesive from the Post-It notes used to stabilize samples for transport from Utah after the hard landing. It is possible that the insoluble nature of the 60336 particle comes from interaction of the Post-It adhesive with UPW. An occasional bit of Post-It adhesive is not a major concern, but C particulate contamination also occurs from the heat shield of the Sample Return Capsule (SRC), and this is mixed with inorganic contamination from the SRC and the Utah landing site. If UPW exposure also produced an insoluble residue from SRC C, this would be a major problem in chemical treatments to produce clean surfaces for analysis. This paper reports experiments to test whether particulate contamination was removed more easily if UPW treatment was not used.
Mahoney, Patrick
2006-01-01
Dental microwear was recorded in a Bronze-Iron Age (3570–3000 BP) sample of modern humans recovered from Tell es-Sa'idiyeh in the Jordan Valley. Microwear patterns were compared between mandibular molars, and between the upper and lower part of facet 9. The comparison revealed a greater frequency of pits and shorter scratches on the second and third molars, compared to the first. Pit frequency also increased on the lower part of the facet on the first molar, compared to the upper part. These ...
Egede, Leonard E; Gebregziabher, Mulugeta; Hunt, Kelly J; Axon, Robert N; Echols, Carrae; Gilbert, Gregory E; Mauldin, Patrick D
2011-04-01
We performed a retrospective analysis of a national cohort of veterans with diabetes to better understand regional, geographic, and racial/ethnic variation in diabetes control as measured by HbA1c. A retrospective cohort study was conducted in a national cohort of 690,968 veterans with diabetes receiving prescriptions for insulin or oral hypoglycemic agents in 2002, who were followed over a 5-year period. The main outcome measures were HbA1c levels (as continuous and dichotomized at ≥8.0%). Relative to non-Hispanic whites (NHWs), HbA1c levels remained 0.25% higher in non-Hispanic blacks (NHBs), 0.31% higher in Hispanics, and 0.14% higher in individuals with other/unknown/missing racial/ethnic group after controlling for demographics, type of medication used, medication adherence, and comorbidities. Small but statistically significant geographic differences were also noted, with HbA1c being lowest in the South and highest in the Mid-Atlantic. Rural/urban location of residence was not associated with HbA1c levels. For the dichotomous outcome of poor control, results were similar, with race/ethnic group being strongly associated with poor control (i.e., odds ratios of 1.33 [95% CI 1.31-1.35] and 1.57 [1.54-1.61] for NHBs and Hispanics vs. NHWs, respectively), geographic region being weakly associated with poor control, and rural/urban residence being negligibly associated with poor control. In a national longitudinal cohort of veterans with diabetes, we found racial/ethnic disparities in HbA1c levels and HbA1c control; however, these disparities were largely, but not completely, explained by adjustment for demographic characteristics, medication adherence, type of medication used to treat diabetes, and comorbidities.
Energy Technology Data Exchange (ETDEWEB)
Albuquerque, Antonio Morais de Sa; Fragoso, Maria Conceicao de Farias; Oliveira, Mercia L. [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2011-07-01
In nuclear medicine practice, accurate knowledge of the activity of the radiopharmaceuticals to be administered to subjects is an important factor in ensuring the success of diagnosis or therapy. The instrument used for this purpose is the radionuclide calibrator. The radiopharmaceuticals are usually contained in glass vials or syringes. However, the radionuclide calibrator response is sensitive to the measurement geometry. In addition, the calibration factors supplied by manufacturers are valid only for a single sample geometry. To minimize the uncertainty associated with activity measurements, it is important to use the appropriate correction factors for each radionuclide in the specific geometry in which the measurement is to be made. The aims of this work were to evaluate the behavior of radionuclide calibrators under varying radioactive source geometries and to determine experimentally the correction factors for different volumes and container types commonly used in nuclear medicine practice. The measurements were made in two ionization chambers from different manufacturers (Capintec and Biodex), using four radionuclides with different photon energies: ¹⁸F, ⁹⁹ᵐTc, ¹³¹I and ²⁰¹Tl. The results confirm the significant dependence of the radionuclide calibrator reading on the sample geometry, showing the need to use correction factors in order to minimize the errors that affect activity measurements. (author)
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
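The cost-based allocation idea can be illustrated with the classic single-level square-root rule, n_t/n_c = √(cost_c/cost_t) under equal outcome variances; this is an assumed textbook analogue of the multilevel result, and the budget and cost figures below are hypothetical:

```python
import math

def optimal_allocation(budget, cost_t, cost_c):
    """Cost-optimal split of a fixed budget between treatment and control when
    per-unit costs differ: n_t / n_c = sqrt(cost_c / cost_t) (classic
    square-root rule, equal outcome variances assumed)."""
    r = math.sqrt(cost_c / cost_t)          # optimal n_t : n_c ratio
    n_c = budget / (cost_c + cost_t * r)    # spend the whole budget
    return int(n_c * r), int(n_c)

# Hypothetical costs: treatment units cost 40, control units cost 10.
n_t, n_c = optimal_allocation(10_000, cost_t=40, cost_c=10)
```

Because the treatment condition is four times as expensive here, the rule assigns half as many treatment units as control units, minimizing the variance of the mean difference (proportional to 1/n_t + 1/n_c) for the given budget.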
Ross, Donald S.; Bailey, Scott W; Briggs, Russell D; Curry, Johanna; Fernandez, Ivan J.; Fredriksen, Guinevere; Goodale, Christine L.; Hazlett, Paul W.; Heine, Paul R; Johnson, Chris E.; Larson, John T; Lawrence, Gregory B.; Kolka, Randy K; Ouimet, Rock; Pare, D; Richter, Daniel D.; Shirmer, Charles D; Warby, Richard A.F.
2015-01-01
Long-term forest soil monitoring and research often require a comparison of laboratory data generated at different times and in different laboratories. Quantifying the uncertainty associated with these analyses is necessary to assess temporal changes in soil properties. Forest soil chemical properties, and the methods to measure them, often differ from those of agronomic and horticultural soils. Soil proficiency programs do not generally include forest soil samples that are highly acidic, high in extractable Al, low in extractable Ca and often high in carbon. To determine the uncertainty associated with specific analytical methods for forest soils, we collected and distributed samples from two soil horizons (Oa and Bs) to 15 laboratories in the eastern United States and Canada. Soil properties measured included total organic carbon and nitrogen, pH and exchangeable cations. Overall, results were consistent despite some differences in methodology. We calculated the median absolute deviation (MAD) for each measurement and considered the acceptable range to be the median ± 2.5 × MAD. Variability among laboratories was usually as low as the typical variability within a laboratory. A few areas of concern include a lack of consistency in the measurement and expression of results on a dry-weight basis, relatively high variability in the C/N ratio in the Bs horizon, challenges associated with determining exchangeable cations at concentrations near the lower reporting range of some laboratories, and the operationally defined nature of aluminum extractability. Recommendations include a continuation of reference forest soil exchange programs to quantify the uncertainty associated with these analyses in conjunction with ongoing efforts to review and standardize laboratory methods.
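The median ± 2.5 × MAD screen described above can be sketched directly; the laboratory results below are hypothetical, not the study's measurements:

```python
import statistics

def mad_acceptable_range(values, k=2.5):
    """Median +/- k * MAD acceptance range, as used to flag between-laboratory
    outliers (k = 2.5 per the study). MAD is the median absolute deviation."""
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    return med - k * mad, med + k * mad

# Hypothetical exchangeable-cation results (cmol/kg) from 15 laboratories.
results = [0.21, 0.22, 0.20, 0.23, 0.22, 0.21, 0.24, 0.20,
           0.22, 0.21, 0.35, 0.22, 0.20, 0.23, 0.21]
lo, hi = mad_acceptable_range(results)
flagged = [r for r in results if not lo <= r <= hi]
```

Because both the centre and the spread are medians, a single discordant laboratory (0.35 here) is flagged without inflating the acceptance range the way a mean ± SD rule would.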
Talamo, Giampaolo; Mir Muhammad, A; Pandey, Manoj K; Zhu, Junjia; Creer, Michael H; Malysz, Jozef
2015-02-11
Measurement of daily proteinuria in patients with amyloidosis is recommended at the time of diagnosis for assessing renal involvement and for monitoring disease activity. Renal involvement is usually defined by proteinuria >500 mg/day. We evaluated the accuracy of the random urine protein-to-creatinine ratio (Pr/Cr) in predicting 24-hour proteinuria in patients with amyloidosis. We compared results of the random urine Pr/Cr ratio and concomitant 24-hour urine collections in 44 patients with amyloidosis. We found a strong correlation (Spearman's ρ=0.874) between the Pr/Cr ratio and the 24-hour urine protein excretion. For predicting renal involvement, the optimal cut-off point of the Pr/Cr ratio was 715 mg/g. The sensitivity and specificity for this point were 91.8% and 95.5%, respectively, and the area under the curve value was 97.4%. We conclude that the random urine Pr/Cr ratio could be useful in screening for renal involvement in patients with amyloidosis. If validated in a prospective study, the random urine Pr/Cr ratio could replace the 24-hour urine collection for the assessment of daily proteinuria and presence of nephrotic syndrome in patients with amyloidosis.
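The kind of cut-off evaluation reported above can be reproduced in form (not in value) with a small sketch; the patient pairs below are hypothetical, not the study's data:

```python
def sens_spec(pairs, cutoff):
    """Sensitivity and specificity of a spot Pr/Cr cutoff (mg/g) against the
    24-hour reference standard (>500 mg/day = renal involvement).
    `pairs` holds (pr_cr_ratio, daily_protein_mg) tuples."""
    tp = sum(1 for r, p in pairs if p > 500 and r >= cutoff)
    fn = sum(1 for r, p in pairs if p > 500 and r < cutoff)
    tn = sum(1 for r, p in pairs if p <= 500 and r < cutoff)
    fp = sum(1 for r, p in pairs if p <= 500 and r >= cutoff)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical (Pr/Cr mg/g, 24-h protein mg/day) pairs.
patients = [(1200, 2400), (900, 800), (760, 620), (300, 240), (680, 450),
            (150, 90), (820, 710), (400, 380), (600, 550)]
sens, spec = sens_spec(patients, cutoff=715)
```

Sweeping the cutoff over all observed Pr/Cr values and maximizing, e.g., Youden's index (sensitivity + specificity − 1) is the usual way an "optimal" point such as 715 mg/g is chosen from ROC analysis.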
Directory of Open Access Journals (Sweden)
Giampaolo Talamo
2015-02-01
Full Text Available Measurement of daily proteinuria in patients with amyloidosis is recommended at the time of diagnosis for assessing renal involvement and for monitoring disease activity. Renal involvement is usually defined by proteinuria >500 mg/day. We evaluated the accuracy of the random urine protein-to-creatinine ratio (Pr/Cr) in predicting 24-hour proteinuria in patients with amyloidosis. We compared results of the random urine Pr/Cr ratio and concomitant 24-hour urine collections in 44 patients with amyloidosis. We found a strong correlation (Spearman’s ρ=0.874) between the Pr/Cr ratio and the 24-hour urine protein excretion. For predicting renal involvement, the optimal cut-off point of the Pr/Cr ratio was 715 mg/g. The sensitivity and specificity for this point were 91.8% and 95.5%, respectively, and the area under the curve value was 97.4%. We conclude that the random urine Pr/Cr ratio could be useful in screening for renal involvement in patients with amyloidosis. If validated in a prospective study, the random urine Pr/Cr ratio could replace the 24-hour urine collection for the assessment of daily proteinuria and presence of nephrotic syndrome in patients with amyloidosis.
Dai, Ting; Davey, Adam; Woodard, John L; Miller, Lloyd Stephen; Gondo, Yasuyuki; Kim, Seock-Ho; Poon, Leonard W
2013-08-01
Centenarians represent a rare but rapidly growing segment of the oldest-old. This study presents item-level data from the Mini-Mental State Examination (MMSE) in a cross-sectional, population-based sample of 244 centenarians and near-centenarians (aged 98-108, 16% men, 21% African-American, 38% community dwelling) from the Georgia Centenarian Study (2001-2008) according to age, education, sex, race, and residential status. Multiple-Indicator Multiple-Cause (MIMIC) models were used to identify systematic domain-level differences in MMSE scores according to demographic characteristics in this age group. Indirect effects of age, educational attainment, race, and residential status on MMSE scores were found. Direct effects were limited to concentration for education and race, and to orientation for residential status. Mean levels of cognitive functioning in centenarians were low, with mean values below most commonly used cutoffs. Overall scores on the MMSE differed as a function of age, education, race, and residential status, with differences in scale performance limited primarily to concentration and orientation, and no evidence of interactions between centenarian characteristics. Adjusting for education was not sufficient to account for differences according to race, and adjusting for residential status was not sufficient to account for differences according to age. © 2013, Copyright the Authors Journal compilation © 2013, The American Geriatrics Society.
Jalava, P. I.; Wang, Q.; Kuuspalo, K.; Ruusunen, J.; Hao, L.; Fang, D.; Väisänen, O.; Ruuskanen, A.; Sippula, O.; Happo, M. S.; Uski, O.; Kasurinen, S.; Torvela, T.; Koponen, H.; Lehtinen, K. E. J.; Komppula, M.; Gu, C.; Jokiniemi, J.; Hirvonen, M.-R.
2015-11-01
Urban air particulate pollution is a known cause of adverse human health effects worldwide. China has encountered air quality problems in recent years due to rapid industrialization. Toxicological effects induced by particulate air pollution vary with particle size and season. However, it is not known how the distinctly different photochemical activity and emission sources during the day and the night affect the chemical composition of the PM size ranges, and how this is subsequently reflected in the toxicological properties of the PM exposures. The particulate matter (PM) samples were collected in four different size ranges (PM10-2.5; PM2.5-1; PM1-0.2 and PM0.2) with a high volume cascade impactor. The PM samples were extracted with methanol, dried and thereafter used in the chemical and toxicological analyses. RAW264.7 macrophages were exposed to the particulate samples in four different doses for 24 h. Cytotoxicity, inflammatory parameters, cell cycle and genotoxicity were measured after exposure of the cells to the particulate samples. Particles were characterized for their chemical composition, including ions, elements and PAH compounds, and transmission electron microscopy (TEM) was used to image the PM samples. The chemical composition and the induced toxicological responses of the size-segregated PM samples showed considerable size-dependent differences as well as day-to-night variation. The PM10-2.5 and the PM0.2 samples had the highest inflammatory potency among the size ranges. In contrast, almost all the PM samples were equally cytotoxic, and only minor differences were seen in genotoxicity and cell cycle effects. Overall, the PM0.2 samples had the highest toxic potential among the different size ranges in many parameters. PAH compounds in the samples were generally more abundant during the night than the day, indicating possible photo-oxidation of the PAH compounds due to solar radiation. This was reflected in different toxicity in the PM
Energy Technology Data Exchange (ETDEWEB)
Lind, Lars [Department of Medical Sciences, Cardiovascular Epidemiology, Uppsala University, Uppsala (Sweden); Penell, Johanna [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden); Syvänen, Anne-Christine; Axelsson, Tomas [Department of Medical Sciences, Molecular Medicine and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Ingelsson, Erik [Department of Medical Sciences, Molecular Epidemiology and Science for Life Laboratory, Uppsala University, Uppsala (Sweden); Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Morris, Andrew P.; Lindgren, Cecilia [Wellcome Trust Centre for Human Genetics, University of Oxford, Oxford (United Kingdom); Salihovic, Samira; Bavel, Bert van [MTM Research Centre, School of Science and Technology, Örebro University, Örebro (Sweden); Lind, P. Monica, E-mail: monica.lind@medsci.uu.se [Department of Medical Sciences, Occupational and Environmental Medicine, Uppsala University, Uppsala (Sweden)
2014-08-15
Several of the polychlorinated biphenyls (PCBs), i.e. the dioxin-like PCBs, are known to induce the P450 enzymes CYP1A1, CYP1A2 and CYP1B1 by activating the aryl hydrocarbon (Ah) receptor. We evaluated whether circulating levels of PCBs in a population sample were related to genetic variation in the genes encoding these CYPs. In the population-based Prospective Investigation of the Vasculature in Uppsala Seniors (PIVUS) study (1016 subjects, all aged 70), 21 SNPs in the CYP1A1, CYP1A2 and CYP1B1 genes were genotyped. Sixteen PCB congeners were analysed by high-resolution gas chromatography coupled to high-resolution mass spectrometry (HRGC/HRMS). Of the investigated relationships between SNPs in CYP1A1, CYP1A2 and CYP1B1 and the six PCBs (congeners 118, 126, 156, 169, 170 and 206) that capture >80% of the variation of all PCBs measured, only the relationship between CYP1A1 rs2470893 and PCB118 levels remained significant following strict adjustment for multiple testing (p=0.00011). However, several additional SNPs in CYP1A2 and CYP1B1 showed nominally significant associations with PCB118 levels (p-values in the 0.003–0.05 range). Further, several SNPs in the CYP1B1 gene were related to both PCB156 and PCB206, with p-values in the 0.005–0.05 range. Very few associations with p<0.05 were seen for PCB126, PCB169 or PCB170. Genetic variation in CYP1A1 was related to circulating PCB118 levels in the general elderly population. Genetic variation in CYP1A2 and CYP1B1 might also be associated with other PCBs. - Highlights: • We studied the relationship between PCBs and the genetic variation in the CYP genes. • Cross-sectional data from a cohort of elderly subjects were analysed. • The PCB levels were evaluated versus 21 SNPs in three CYP genes. • PCB 118 was related to variation in the CYP1A1 gene.
Brus, D.J.; Saby, N.P.A.
2016-01-01
In France, as in many other countries, the soil is monitored at the locations of a regular, square grid, thus forming a systematic sample (SY). This sampling design gives good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based
International Nuclear Information System (INIS)
Wandiga, S.O.; Jumba, I.O.
1982-01-01
An intercomparative analysis of the concentrations of heavy metals (zinc, cadmium, lead, copper, mercury, iron and calcium) in head hair of a randomly selected sample of Kenyan people, using the techniques of atomic absorption spectrophotometry (AAS) and differential pulse anodic stripping voltammetry (DPAS), has been undertaken. The percent relative standard deviations for the samples analysed by either technique show good sensitivity and good correlation between the techniques. The DPAS was found to be slightly more sensitive than the AAS instrument used. The recalculated body burden ratios of Cd to Zn and Pb to Fe reveal no unusual health impairment symptoms and suggest a relatively clean environment in Kenya.(author)
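The percent relative standard deviation used to compare the two instruments is a one-line computation. A minimal sketch, with hypothetical replicate readings in place of the study's measurements:

```python
from statistics import mean, stdev

def percent_rsd(values):
    """Relative standard deviation of replicate measurements, in percent."""
    return 100.0 * stdev(values) / mean(values)

# Hypothetical replicate zinc readings (ppm) from two instruments:
aas_reads = [2.10, 2.05, 2.15]
dpas_reads = [2.08, 2.09, 2.07]
aas_rsd = percent_rsd(aas_reads)    # larger scatter -> larger %RSD
dpas_rsd = percent_rsd(dpas_reads)  # tighter replicates -> smaller %RSD
```

A lower %RSD indicates better precision, which is the sense in which the abstract compares the DPAS and AAS instruments.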
Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L
2002-08-19
We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.
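The comparisons of immunity estimates between the two sampling strategies are two-proportion comparisons. A hedged Python sketch of a pooled-variance z-test; the counts below are illustrative values chosen only to echo the reported rubella percentages, not the study's actual denominators:

```python
from math import erf, sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions,
    using the pooled proportion for the standard error."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

# Hypothetical denominators giving ~98.7% vs ~93.6% rubella immunity:
z, p = two_prop_z(296, 300, 234, 250)
```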
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly reduce sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; then the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
Sager, Manfred; Chon, Hyo-Taek; Marton, Laszlo
2015-02-01
Roadside dusts were studied to explain the spatial variation and present levels of contaminant elements, including Pt, Pd and Ir, in the urban environment in and around Budapest (Hungary) and Seoul (Republic of Korea). The samples were collected from six sites of high traffic volume in the Seoul metropolitan city and from two control sites within the suburbs of Seoul, for comparison. Similarly, road dust samples were obtained twice from traffic focal points in Budapest, from the large bridges across the River Danube, from Margitsziget (an island in the Danube in the northern part of Budapest, used for recreation), as well as from main roads (not highways) outside Budapest. The samples were analysed for contaminant elements by ICP-AES and for Pt, Pd and Ir by ICP-MS. The highest Pt, Pd and Ir levels in road dusts were found at major roads with high traffic volume; correlations with other contaminant elements were low, however. This indicates that automobile catalytic converters are an important source. To summarize the multi-element results, the pollution index, contamination index and geo-accumulation index were calculated. Finally, the obtained data were compared with total concentrations encountered in dust samples from Madrid, Oslo, Tokyo and Muscat (Oman). Dust samples from Seoul reached top-level concentrations for Cd-Zn-As-Co-Cr-Cu-Mo-Ni-Sn. Only Pb was rather low, because unleaded gasoline became compulsory in 1993. Concentrations in Budapest dust samples were lower than in those from Seoul, except for Pb and Mg. Compared with Madrid, another continental site, Budapest was higher in Co-V-Zn. Dust from Oslo, a smaller city, contained more Mn-Na-Sr than dust from the other towns, but less of the other metals.
Kashdan, Todd B; Farmer, Antonina S
2014-06-01
The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and that these differences would translate into impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or by comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point in time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning.
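Emotion differentiation in such studies is commonly indexed by an intraclass correlation across emotion items within each sampling moment: a high ICC means the emotions move together (low differentiation). A minimal sketch, assuming a one-way random-effects ICC(1,1) and toy rating matrices (rows = moments, columns = emotion items):

```python
def one_way_icc(ratings):
    """One-way random-effects ICC(1,1) for a moments-by-items rating matrix.
    High ICC = items rise and fall together = low emotion differentiation."""
    n, k = len(ratings), len(ratings[0])
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-moment and within-moment mean squares
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - row_means[i]) ** 2
              for i, row in enumerate(ratings) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy data: undifferentiated (all emotions identical at each moment)
undiff = [[m] * 4 for m in [1, 2, 3, 4, 5]]
# Toy data: highly differentiated (items vary independently of each other)
diff = [[1, 5, 2, 4], [5, 1, 4, 2], [2, 4, 1, 5], [4, 2, 5, 1]]
icc_undiff = one_way_icc(undiff)
icc_diff = one_way_icc(diff)
```

The study's analysis then compares such per-person ICCs between groups; this sketch only shows the index itself.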
Directory of Open Access Journals (Sweden)
Emma Lightfoot
Full Text Available Oxygen isotope analysis of archaeological skeletal remains is an increasingly popular tool to study past human migrations. It is based on the assumption that human body chemistry preserves the δ18O of precipitation in such a way as to be a useful technique for identifying migrants and, potentially, their homelands. In this study, the first such global survey, we draw on published human tooth enamel and bone bioapatite data to explore the validity of using oxygen isotope analyses to identify migrants in the archaeological record. We use human δ18O results to show that there are large variations in human oxygen isotope values within a population sample. This may relate to physiological factors influencing the preservation of the primary isotope signal, or to human activities (such as brewing, boiling, stewing, differential access to water sources and so on) causing variation in ingested water and food isotope values. We compare the number of outliers identified using various statistical methods. We determine that the most appropriate method for identifying migrants is dependent on the data but is likely to be the IQR or the median absolute deviation from the median under most archaeological circumstances. Finally, through a spatial assessment of the dataset, we show that the degree of overlap in human isotope values from different locations across Europe is such that identifying individuals' homelands on the basis of oxygen isotope analysis alone is not possible for the regions analysed to date. Oxygen isotope analysis is a valid method for identifying first-generation migrants from an archaeological site when used appropriately; however, it is difficult to identify migrants using statistical methods for a sample size of less than c. 25 individuals. In the absence of local previous analyses, each sample should be treated as an individual dataset and statistical techniques can be used to identify migrants, but in most cases pinpointing a specific
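The IQR and median-absolute-deviation outlier rules recommended above are straightforward to apply. A sketch with hypothetical δ18O values (one simulated migrant at 29.5‰); the Tukey fence factor 1.5 and the modified z-score cutoff 3.5 are conventional defaults, not values from the study:

```python
from statistics import median, quantiles

def iqr_outliers(values, k=1.5):
    """Flag points outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4, method="inclusive")
    lo, hi = q1 - k * (q3 - q1), q3 + k * (q3 - q1)
    return [v for v in values if v < lo or v > hi]

def mad_outliers(values, cutoff=3.5):
    """Flag points whose MAD-based modified z-score exceeds the cutoff."""
    med = median(values)
    mad = median(abs(v - med) for v in values)
    return [v for v in values if 0.6745 * abs(v - med) / mad > cutoff]

# Hypothetical tooth-enamel δ18O values (per mil), one clear outlier:
d18o = [26.1, 26.4, 25.9, 26.2, 26.0, 26.3, 25.8, 26.1, 29.5]
```

Both rules isolate the 29.5‰ individual here; with real data the two methods can disagree, which is the comparison the study makes.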
International Nuclear Information System (INIS)
Lin, Emile van; Vight, Lisette van der; Witjes, J. Alfred; Huisman, HenkJan J.; Leer, Jan Willem; Visser, Andries G.
2005-01-01
Purpose: To investigate the effect of an endorectal balloon (ERB) and an off-line correction protocol on the day-to-day, interfraction prostate gland motion, in patients receiving external beam radiotherapy for prostate cancer. Methods and materials: In 22 patients, irradiated with an ERB in situ (ERB group) and in 30 patients without an ERB (No-ERB group), prostate displacements were measured daily in three orthogonal directions with portal images. Implanted gold markers and an off-line electronic portal imaging correction protocol were used for prostate position verification and correction. Movie loops were analyzed to evaluate prostate motion and rectal filling variations. Results: The off-line correction protocol reduced the systematic prostate displacements, equally for the ERB and No-ERB group, to 1.3-1.8 mm (1 SD). The mean 3D displacement was reduced to 2.8 mm and 2.4 mm for the ERB and No-ERB group, respectively. The random interfraction displacements, relative to the treatment isocenter, were not reduced by the ERB and remained nearly unchanged in all three directions: 3.1 mm (1 SD) left-right, 2.6 mm (1 SD) superior-inferior, and 4.7 mm (1 SD) for the anterior-posterior direction. These day-to-day prostate position variations can be explained by the presence of gas and stool beside the ERB. Conclusions: The off-line corrections on the fiducial markers are effective in reducing the systematic prostate displacements. The investigated ERB does not reduce the interfraction prostate motion. Although the overall mean displacement is low, the day-to-day interfraction motion, especially in anterior-posterior direction, remains high compared with the systematic displacements.
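The separation of systematic from random (day-to-day) displacement components can be sketched as follows. The per-patient displacement lists are hypothetical, and the decomposition shown, SD of per-patient means for the systematic component and RMS of per-patient SDs for the random component, is one common convention rather than necessarily the paper's exact formula:

```python
from statistics import mean, stdev

def systematic_and_random(displacements_by_patient):
    """Population systematic error = SD of per-patient mean displacements;
    population random error = RMS of per-patient day-to-day SDs."""
    patient_means = [mean(d) for d in displacements_by_patient]
    patient_sds = [stdev(d) for d in displacements_by_patient]
    systematic = stdev(patient_means)
    random_ = (sum(s * s for s in patient_sds) / len(patient_sds)) ** 0.5
    return systematic, random_

# Hypothetical daily AP displacements (mm) for three patients:
sys_err, rand_err = systematic_and_random([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
```

An off-line correction protocol targets the systematic component (stable per-patient offsets); the random component, as the abstract notes, is left essentially unchanged.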
Directory of Open Access Journals (Sweden)
Annhild Mosdøl
2009-11-01
Full Text Available Semi-quantitative food frequency data from a nation-wide, representative sample of 2677 Norwegian men and women were analysed to identify the food categories contributing most to absolute intake and to between-person variation in intake of energy and nine nutrients. The 149 food categories in the questionnaire were ranked according to their contribution to absolute nutrient intake, and categories contributing at least 0.5% to the average absolute intake were included in a stepwise regression model. The number of food categories explaining 90% of the between-person variation varied from 2 categories for β-carotene to 33 for α-tocopherol. The models accounted for 53–76% of the estimated absolute nutrient intakes. These analyses present a meaningful way of restricting the number of food categories in questionnaires aimed at capturing the between-person variation in energy or specific nutrient intakes.
Kerr, William C; Greenfield, Thomas K; Tujague, Jennifer; Brown, Stephan E
2005-11-01
Empirically based estimates of the mean alcohol content of beer, wine and spirits drinks from a national sample of US drinkers are not currently available. A sample of 310 drinkers from the 2000 National Alcohol Survey was re-contacted to participate in a telephone survey with specific questions about the drinks they consume. Subjects were instructed to prepare their usual drink of each beverage at home and to measure each alcoholic beverage and other ingredients with a provided beaker. Information on the brand or type of each beverage was used to specify the percentage of alcohol. The weighted mean alcohol content of respondents' drinks was 0.67 ounces overall: 0.56 ounces for beer, 0.66 ounces for wine and 0.89 ounces for spirits. Spirits and wine drink contents were particularly variable, with many high-alcohol drinks observed. While the 0.6-ounce alcohol drink standard appears to be a reasonable single standard, it cannot capture the substantial variation evident in this sample, and it underestimates average wine and spirits ethanol content. Direct measurement or beverage-specific mean ethanol content estimates would improve the precision of survey alcohol assessment.
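Converting a measured drink to ounces of ethanol is the volume × %ABV computation underlying these estimates; the drink sizes and strengths below are illustrative, not the study's measurements:

```python
def ethanol_oz(volume_oz, abv_percent):
    """Fluid ounces of pure ethanol in a drink of a given size and strength."""
    return volume_oz * abv_percent / 100.0

# A US "standard drink" is about 0.6 oz of ethanol, e.g.:
#   12 oz beer at 5% ABV, 5 oz wine at 12% ABV, 1.5 oz spirits at 40% ABV
beer = ethanol_oz(12, 5)
wine = ethanol_oz(5, 12)
spirits = ethanol_oz(1.5, 40)
```

The study's point is that self-poured drinks depart from these nominal sizes and strengths, so the computed ethanol per drink varies widely around 0.6 oz.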
Woodall, W Gill; Delaney, Harold D; Kunitz, Stephen J; Westerberg, Verner S; Zhao, Hongwei
2007-06-01
Randomized trial evidence on the effectiveness of incarceration and treatment of first-time driving while intoxicated (DWI) offenders who are primarily American Indian has yet to be reported in the literature on DWI prevention. Further, research has confirmed the association of antisocial personality disorder (ASPD) with alcohol problems, including DWI. A randomized clinical trial was conducted, in conjunction with 28 days of incarceration, of a treatment program incorporating motivational interviewing principles for first-time DWI offenders. The sample of 305 offenders, including 52 diagnosed with ASPD by the Diagnostic Interview Schedule, was assessed before assignment to conditions and at 6, 12, and 24 months after discharge. Self-reported frequency of drinking and driving, as well as various measures of drinking over the preceding 90 days, was available at all assessments for 244 participants. Further, DWI rearrest data for 274 participants were available for analysis. Participants randomized to receive the first-offender incarceration and treatment program reported greater reductions in alcohol consumption from baseline levels when compared with participants who were only incarcerated. ASPD participants reported heavier and more frequent drinking but showed significantly greater declines in drinking from intake to posttreatment assessments. Further, the treatment produced larger effects, relative to the control condition, for ASPD than for non-ASPD participants. Nonconfrontational treatment may significantly enhance outcomes for DWI offenders with ASPD when delivered in an incarcerated setting, and in the present study such effects were found in a primarily American-Indian sample.
Burt, Richard D; Thiede, Hanne
2014-11-01
Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%), and HIV serostatus. The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.
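The key design point, randomizing whole recruitment chains rather than individual participants, can be sketched as below; the chain data are hypothetical:

```python
import random

def split_chains(chains, seed=0):
    """Randomly assign whole recruitment chains to two groups, so the
    peer-recruitment structure that RDS inference relies on stays intact."""
    rng = random.Random(seed)
    shuffled = chains[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    group1 = [p for chain in shuffled[:half] for p in chain]
    group2 = [p for chain in shuffled[half:] for p in chain]
    return group1, group2

# Hypothetical chains: each inner list is one seed plus their recruits
chains = [[1, 2], [3], [4, 5, 6], [7, 8]]
g1, g2 = split_chains(chains)
```

Because chains are kept intact, the two groups can then be compared on any survey variable, as the study does with sociodemographics and HIV status.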
Energy Technology Data Exchange (ETDEWEB)
Muetzell, S. (Univ. Hospital of Uppsala (Sweden). Dept. of Family Medicine)
1992-01-01
Computed tomography (CT) of the brain was performed in a random sample of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years, from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed, and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption and use of hepatotoxic drugs; IIA, alcoholic inpatients with use of alcohol and no drugs; and IIB, alcoholic inpatients with use of alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and the two groups differed only in drug use. Groups IIB and IIA likewise differed only in drug use, and IIB had a higher incidence of brain damage except for the anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD, and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except for the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut high-grade cortical changes) and also of subcortical aberrations, expressed as increased widening of the third ventricle.
Jackson, George L.; Weinberger, Morris; Kirshner, Miriam A.; Stechuchak, Karen M.; Melnyk, Stephanie D.; Bosworth, Hayden B.; Coffman, Cynthia J.; Neelon, Brian; Van Houtven, Courtney; Gentry, Pamela W.; Morris, Isis J.; Rose, Cynthia M.; Taylor, Jennifer P.; May, Carrie L.; Han, Byungjoo; Wainwright, Christi; Alkon, Aviel; Powell, Lesa; Edelman, David
2016-01-01
Despite the availability of efficacious treatments, only half of patients with hypertension achieve adequate blood pressure (BP) control. This paper describes the protocol and baseline subject characteristics of a 2-arm, 18-month randomized clinical trial of titrated disease management (TDM) for patients with pharmaceutically-treated hypertension for whom systolic blood pressure (SBP) is not controlled (≥140mmHg for non-diabetic or ≥130mmHg for diabetic patients). The trial is being conducted among patients of four clinic locations associated with a Veterans Affairs Medical Center. The intervention arm has a TDM strategy in which patients' hypertension control at baseline, 6, and 12 months determines the resource intensity of disease management. Intensity levels include: a low-intensity strategy utilizing a licensed practical nurse to provide bi-monthly, non-tailored behavioral support calls to patients whose SBP comes under control; a medium-intensity strategy utilizing a registered nurse to provide monthly tailored behavioral support telephone calls plus home BP monitoring; and a high-intensity strategy utilizing a pharmacist to provide monthly tailored behavioral support telephone calls, home BP monitoring, and pharmacist-directed medication management. Control arm patients receive the low-intensity strategy regardless of BP control. The primary outcome is SBP. The 385 randomized veterans (192 intervention; 193 control) are predominantly older (mean age 63.5 years) men (92.5%); 61.8% are African American, and the mean baseline SBP for all subjects is 143.6mmHg. This trial will determine whether a disease management program that is titrated by matching the intensity of resources to patients' BP control leads to superior outcomes compared with a low-intensity management strategy. PMID:27417982
Billong, Serge Clotaire; Fokam, Joseph; Penda, Calixte Ida; Amadou, Salmon; Kob, David Same; Billong, Edson-Joan; Colizzi, Vittorio; Ndjolo, Alexis; Bisseck, Anne-Cecile Zoung-Kani; Elat, Jean-Bosco Nfetam
2016-11-15
Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, mastering patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we thus aimed to ascertain the national performance of, and determinants in, retention on ART at 12 months. Using a systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon, within the "reporting period" of October 2013-November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (75-85%), poor (<75%). ... The sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention perspectives.
Budiarto, E; Keijzer, M; Storchi, P R M; Heemink, A W; Breedveld, S; Heijmen, B J M
2014-01-20
Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements.
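The propagation of PCA-modelled geometric variation into the mean and variance of the dose can be illustrated with a crude Monte Carlo sketch. This is not the authors' closed-form dynamic dose deposition matrices; the 1-D geometry, Gaussian beam profile, mode shapes, and mode standard deviations below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D "organ" sampled at fixed points; static dose profile of a single beam.
points = np.linspace(-3.0, 3.0, 61)

def dose(x):
    return np.exp(-0.5 * x**2)  # Gaussian beam; steepest gradient near |x| = 1

# Two hypothetical PCA modes of inter-fraction displacement:
# mode 1 = rigid shift, mode 2 = stretch about the centre.
modes = np.stack([np.ones_like(points), 0.3 * points])
sigmas = np.array([0.4, 0.2])  # standard deviations of the mode coefficients

# Monte Carlo over mode coefficients c ~ N(0, diag(sigmas^2)).
n_draws = 20000
coeffs = rng.normal(0.0, sigmas, size=(n_draws, 2))
displaced = points + coeffs @ modes        # (n_draws, n_points)
doses = dose(displaced)

mean_dose = doses.mean(axis=0)             # blurred mean dose
var_dose = doses.var(axis=0)               # variance, largest near beam edges
print(points[np.argmax(var_dose)])
```

As the abstract states, the movements spread out the mean dose (the peak at the beam centre drops below the static value) while the dose variance peaks near the beam edges, where the gradient is steepest.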
International Nuclear Information System (INIS)
Budiarto, E; Keijzer, M; Heemink, A W; Storchi, P R M; Breedveld, S; Heijmen, B J M
2014-01-01
Radiotherapy dose delivery in the tumor and surrounding healthy tissues is affected by movements and deformations of the corresponding organs between fractions. The random variations may be characterized by non-rigid, anisotropic principal component analysis (PCA) modes. In this article new dynamic dose deposition matrices, based on established PCA modes, are introduced as a tool to evaluate the mean and the variance of the dose at each target point resulting from any given set of fluence profiles. The method is tested for a simple cubic geometry and for a prostate case. The movements spread out the distributions of the mean dose and cause the variance of the dose to be highest near the edges of the beams. The non-rigidity and anisotropy of the movements are reflected in both quantities. The dynamic dose deposition matrices facilitate the inclusion of the mean and the variance of the dose in the existing fluence-profile optimizer for radiotherapy planning, to ensure robust plans with respect to the movements. (paper)
Burger, Rulof P; McLaren, Zoë M
2017-09-01
The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.
DEFF Research Database (Denmark)
Puri, Rajesh; Vilmann, Peter; Saftoiu, Adrian
2009-01-01
... The samples were characterized for cellularity and bloodiness, with a final cytology diagnosis established blindly. The final diagnosis was reached either by EUS-FNA if malignancy was definite, or by surgery and/or clinical follow-up of a minimum of 6 months in the cases of non-specific benign lesions...
DEFF Research Database (Denmark)
Moeller, Niels C; Korsholm, Lars; Kristensen, Peter L
2008-01-01
BACKGROUND: Potentially, unit-specific in-vitro calibration of accelerometers could increase field data quality and study power. However, reduced inter-unit variability would only be important if random instrument variability contributes considerably to the total variation in field data. Therefor...
Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P
2008-10-21
Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well.
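The extrapolation step described above boils down to dividing the counted cells by the product of the sampling fractions (a fractionator-style estimate) and relating the total to the measured reference volume. The counts, fractions, and volume in this sketch are hypothetical, not the study's data.

```python
# Fractionator-style estimate: total = counted cells / product of the
# sampling fractions. All numbers below are invented for illustration.
counted_cells = 412                   # cells counted in the sampled probes
section_sampling_fraction = 1 / 50    # every 50th section examined
area_sampling_fraction = 0.02         # fraction of the layer area probed
thickness_sampling_fraction = 0.8     # disector height / section thickness

total = counted_cells / (section_sampling_fraction
                         * area_sampling_fraction
                         * thickness_sampling_fraction)

reference_volume_cm3 = 120.0          # hypothetical tissue volume
density = total / reference_volume_cm3   # cells per cm^3, as in the abstract
print(round(total), round(density))
```

The study's comparison of cell densities across specimens is exactly this second step: dividing each estimated total by the corresponding tissue volume.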
Directory of Open Access Journals (Sweden)
Karunamuni Nandini
2008-12-01
Background: Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective: The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods: A total of 244 individuals were recruited through a random national sample which was created by generating a random list of household phone numbers. The list was proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results: TPB explained 10% and 8% of the variance respectively for aerobic PA and resistance training; and accounted for 39% and 45% of the variance respectively for aerobic PA and resistance training intentions. Conclusion: These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.
Sluis-Cremer, G. K.; Walters, L. G.; Sichel, H. S.
1967-01-01
The ventilatory capacity of a random sample of men over the age of 35 years in the town of Carletonville was estimated by the forced expiratory volume and the peak expiratory flow rate. Five hundred and sixty-two persons were working or had worked in gold-mines and 265 had never worked in gold-mines. No difference in ventilatory function was found between the miners and non-miners other than that due to the excess of chronic bronchitis in miners. PMID:6017134
International Nuclear Information System (INIS)
Kanu, I.; Achi, O. K.; Ezeronye, O. U.; Anyanwu, E. C.
2006-01-01
Seasonal variation in bacterial heavy-metal biosorption from soap and brewery industrial effluent samples from the Eziama River in Abia State was analyzed for Pb, Hg, Fe, Zn, As, and Mn, using atomic absorption spectrophotometry. Bioaccumulation of the metals by bacteria showed the following trend: > Fe > Zn > As > Pb > Mn (rainy season) and Zn > Fe > Mn > As > Hg > Pb (dry season). Statistical analysis using analysis of variance (ANOVA) showed significant differences in concentrations of Pb, Hg, Fe, Zn, As, and Mn between the sampling zones at the Eziama River. Seasonal changes in heavy-metal concentrations showed increases in Pb, Fe, and As: Pb rose from 1.32 x 10^5 mg/L in the rainy season to 1.42 x 10^5 mg/L in the dry season; Fe increased from 40.35 x 10^5 mg/L to 42.1 x 10^5 mg/L, while As increased from 2.32 to 2.48 x 10^5 mg/L, with net increases of +56 and +69 x 10^5 mg/L respectively. However, Hg, Zn, and Mn concentrations decreased in the rainy season from 40.54 x 10^5 mg/L to 39.24 x 10^5 mg/L and from 1.65 to 0.62 x 10^5 mg/L respectively
Job strain and resting heart rate: a cross-sectional study in a Swedish random working sample
Directory of Open Access Journals (Sweden)
Peter Eriksson
2016-03-01
Background: Numerous studies have reported an association between stressful work conditions and cardiovascular disease. However, more evidence is needed, and the etiological mechanisms are unknown. Elevated resting heart rate has emerged as a possible risk factor for cardiovascular disease, but little is known about its relation to work-related stress. This study therefore investigated the association between job strain, job control, and job demands and resting heart rate. Methods: We conducted a cross-sectional survey of randomly selected men and women in Västra Götalandsregionen, Sweden (the west county of Sweden; n = 1552). Information about job strain, job demands, job control, heart rate and covariates was collected during the period 2001–2004 as part of the INTERGENE/ADONIX research project. Six different linear regression models were used, with adjustments for gender, age, BMI, smoking, education, and physical activity in the fully adjusted model. Job strain was operationalized as the log-transformed ratio of job demands over job control in the statistical analyses. Results: No associations were seen between resting heart rate and job demands. Job strain was associated with elevated resting heart rate in the unadjusted model (linear regression coefficient 1.26, 95 % CI 0.14 to 2.38), but not in any of the extended models. Low job control was associated with elevated resting heart rate after adjustments for gender, age, BMI, and smoking (linear regression coefficient −0.18, 95 % CI −0.30 to −0.02). However, there were no significant associations in the fully adjusted model. Conclusions: Low job control and job strain, but not job demands, were associated with elevated resting heart rate. However, the observed associations were modest and may be explained by confounding effects.
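The paper's operationalization of job strain as the log-transformed ratio of demands over control, entered as a predictor in a linear regression, can be sketched on synthetic data. The predictor distributions, the true effect size, and the noise level below are invented; only the log-ratio construction follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1552  # sample size matching the abstract

# Hypothetical questionnaire scores on a 1-5 scale.
demands = rng.uniform(1.0, 5.0, n)
control = rng.uniform(1.0, 5.0, n)
strain = np.log(demands / control)   # the paper's job-strain measure

# Synthetic resting heart rate with a small, made-up strain effect.
hr = 68.0 + 1.2 * strain + rng.normal(0.0, 8.0, n)

# Unadjusted linear model hr ~ 1 + strain, fitted by least squares.
X = np.column_stack([np.ones(n), strain])
beta, *_ = np.linalg.lstsq(X, hr, rcond=None)
print(beta)  # [intercept, strain coefficient]
```

The adjusted models in the study simply add further columns (age, BMI, smoking, etc.) to the design matrix X.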
Taylor, C. Barr; Kass, Andrea E.; Trockel, Mickey; Cunning, Darby; Weisman, Hannah; Bailey, Jakki; Sinton, Meghan; Aspen, Vandana; Schecthman, Kenneth; Jacobi, Corinna; Wilfley, Denise E.
2015-01-01
Objective Eating disorders (EDs) are serious problems among college-age women and may be preventable. An indicated on-line eating disorder (ED) intervention, designed to reduce ED and comorbid pathology, was evaluated. Method 206 women (M age = 20 ± 1.8 years; 51% White/Caucasian, 11% African American, 10% Hispanic, 21% Asian/Asian American, 7% other) at very high risk for ED onset (i.e., with high weight/shape concerns plus a history of being teased, current or lifetime depression, and/or non-clinical levels of compensatory behaviors) were randomized to a 10-week, Internet-based, cognitive-behavioral intervention or wait-list control. Assessments included the Eating Disorder Examination (EDE to assess ED onset), EDE-Questionnaire, Structured Clinical Interview for DSM Disorders, and Beck Depression Inventory-II. Results ED attitudes and behaviors improved more in the intervention than control group (p = 0.02, d = 0.31); although ED onset rate was 27% lower, this difference was not significant (p = 0.28, NNT = 15). In the subgroup with highest shape concerns, ED onset rate was significantly lower in the intervention than control group (20% versus 42%, p = 0.025, NNT = 5). For the 27 individuals with depression at baseline, depressive symptomatology improved more in the intervention than control group (p = 0.016, d = 0.96); although ED onset rate was lower in the intervention than control group, this difference was not significant (25% versus 57%, NNT = 4). Conclusions An inexpensive, easily disseminated intervention might reduce ED onset among those at highest risk. Low adoption rates need to be addressed in future research. PMID:26795936
Taylor, C Barr; Kass, Andrea E; Trockel, Mickey; Cunning, Darby; Weisman, Hannah; Bailey, Jakki; Sinton, Meghan; Aspen, Vandana; Schecthman, Kenneth; Jacobi, Corinna; Wilfley, Denise E
2016-05-01
Eating disorders (EDs) are serious problems among college-age women and may be preventable. An indicated online eating disorder (ED) intervention, designed to reduce ED and comorbid pathology, was evaluated. 206 women (M age = 20 ± 1.8 years; 51% White/Caucasian, 11% African American, 10% Hispanic, 21% Asian/Asian American, 7% other) at very high risk for ED onset (i.e., with high weight/shape concerns plus a history of being teased, current or lifetime depression, and/or nonclinical levels of compensatory behaviors) were randomized to a 10-week, Internet-based, cognitive-behavioral intervention or waitlist control. Assessments included the Eating Disorder Examination (EDE, to assess ED onset), EDE-Questionnaire, Structured Clinical Interview for DSM Disorders, and Beck Depression Inventory-II. ED attitudes and behaviors improved more in the intervention than control group (p = .02, d = 0.31); although ED onset rate was 27% lower, this difference was not significant (p = .28, NNT = 15). In the subgroup with highest shape concerns, ED onset rate was significantly lower in the intervention than control group (20% vs. 42%, p = .025, NNT = 5). For the 27 individuals with depression at baseline, depressive symptomatology improved more in the intervention than control group (p = .016, d = 0.96); although ED onset rate was lower in the intervention than control group, this difference was not significant (25% vs. 57%, NNT = 4). An inexpensive, easily disseminated intervention might reduce ED onset among those at highest risk. Low adoption rates need to be addressed in future research. (c) 2016 APA, all rights reserved).
Directory of Open Access Journals (Sweden)
Hallett Jonathan
2012-01-01
Full Text Available Abstract Background There is considerable interest in university student hazardous drinking among the media and policy makers. However there have been no population-based studies in Australia to date. We sought to estimate the prevalence and correlates of hazardous drinking and secondhand effects among undergraduates at a Western Australian university. Method We invited 13,000 randomly selected undergraduate students from a commuter university in Australia to participate in an online survey of university drinking. Responses were received from 7,237 students (56%, who served as participants in this study. Results Ninety percent had consumed alcohol in the last 12 months and 34% met criteria for hazardous drinking (AUDIT score ≥ 8 and greater than 6 standard drinks in one sitting in the previous month. Men and Australian/New Zealand residents had significantly increased odds (OR: 2.1; 95% CI: 1.9-2.3; OR: 5.2; 95% CI: 4.4-6.2 of being categorised as dependent (AUDIT score 20 or over than women and non-residents. In the previous 4 weeks, 13% of students had been insulted or humiliated and 6% had been pushed, hit or otherwise assaulted by others who were drinking. One percent of respondents had experienced sexual assault in this time period. Conclusions Half of men and over a third of women were drinking at hazardous levels and a relatively large proportion of students were negatively affected by their own and other students' drinking. There is a need for intervention to reduce hazardous drinking early in university participation. Trial registration ACTRN12608000104358
Castellano, Sergio; Cermelli, Paolo
2011-04-07
Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
Kiessling, Michael Karl-Heinz
2017-10-01
Let z\\in C, let σ ^2>0 be a variance, and for N\\in N define the integrals E_N^{}(z;σ ) := {1/σ } \\int _R\\ (x^2+z^2) e^{-{1/2σ^2 x^2}}{√{2π }}/dx \\quad if N=1, {1/σ } \\int _{R^N} \\prod \\prod \\limits _{1≤ k1. These are expected values of the polynomials P_N^{}(z)=\\prod _{1≤ n≤ N}(X_n^2+z^2) whose 2 N zeros ± i X_k^{}_{k=1,\\ldots ,N} are generated by N identically distributed multi-variate mean-zero normal random variables {X_k}N_{k=1} with co-variance {Cov}_N^{}(X_k,X_l)=(1+σ ^2-1/N)δ _{k,l}+σ ^2-1/N(1-δ _{k,l}). The E_N^{}(z;σ ) are polynomials in z^2, explicitly computable for arbitrary N, yet a list of the first three E_N^{}(z;σ ) shows that the expressions become unwieldy already for moderate N—unless σ = 1, in which case E_N^{}(z;1) = (1+z^2)^N for all z\\in C and N\\in N. (Incidentally, commonly available computer algebra evaluates the integrals E_N^{}(z;σ ) only for N up to a dozen, due to memory constraints). Asymptotic evaluations are needed for the large- N regime. For general complex z these have traditionally been limited to analytic expansion techniques; several rigorous results are proved for complex z near 0. Yet if z\\in R one can also compute this "infinite-degree" limit with the help of the familiar relative entropy principle for probability measures; a rigorous proof of this fact is supplied. Computer algebra-generated evidence is presented in support of a conjecture that a generalization of the relative entropy principle to signed or complex measures governs the N→ ∞ asymptotics of the regime iz\\in R. Potential generalizations, in particular to point vortex ensembles and the prescribed Gauss curvature problem, and to random matrix ensembles, are emphasized.
Control charts for location based on different sampling schemes
Mehmood, R.; Riaz, M.; Does, R.J.M.M.
2013-01-01
Control charts are the most important statistical process control tool for monitoring variations in a process. A number of articles are available in the literature for the X̄ control chart based on simple random sampling, ranked set sampling, median-ranked set sampling (MRSS), extreme-ranked set
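For the simple-random-sampling baseline mentioned above, the X̄ chart places control limits at the grand mean plus or minus 3σ̂/√n. A minimal sketch on synthetic in-control data (all numbers invented; the ranked-set variants the article studies replace only how each subgroup is drawn):

```python
import numpy as np

rng = np.random.default_rng(2)

# 25 subgroups of size n = 5 drawn by simple random sampling
# from a hypothetical in-control process N(10, 1).
samples = rng.normal(10.0, 1.0, size=(25, 5))
xbar = samples.mean(axis=1)            # subgroup means to be charted

center = xbar.mean()                   # grand mean (centre line)
sigma_hat = samples.std(ddof=1)        # crude pooled sigma for the sketch
n = samples.shape[1]
ucl = center + 3 * sigma_hat / np.sqrt(n)   # upper control limit
lcl = center - 3 * sigma_hat / np.sqrt(n)   # lower control limit

out_of_control = np.where((xbar > ucl) | (xbar < lcl))[0]
print(center, lcl, ucl, out_of_control)
```

With an in-control process, almost all subgroup means should fall inside the limits; points outside signal a shift in the monitored process.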
DEFF Research Database (Denmark)
Andersson, Anna-Maria; Carlsen, Elisabeth; Petersen, Jørgen Holm
2003-01-01
To obtain information on the scale of the intraindividual variation in testicular hormone, blood samples for inhibin B determination were collected monthly in 27 healthy male volunteers during a 17-month period. In addition, the traditional reproductive hormones FSH, LH, testosterone, estradiol....... A seasonal variation was observed in LH and testosterone levels, but not in the levels of the other hormones. The seasonal variation in testosterone levels could be explained by the variation in LH levels. The seasonal variation in LH levels seemed to be related to the mean air temperature during the month...... levels in men. The peak levels of both LH and testosterone were observed during June-July, with minimum levels present during winter-early spring. Air temperature, rather than light exposure, seems to be a possible climatic variable explaining the seasonal variation in LH levels....
Sulaiman, Nabil; Albadawi, Salah; Abusnana, Salah; Fikri, Mahmoud; Madani, Abdulrazzag; Mairghani, Maisoon; Alawadi, Fatheya; Zimmet, Paul; Shaw, Jonathan
2015-09-01
The prevalence of diabetes has risen rapidly in the Middle East, particularly in the Gulf Region. However, some prevalence estimates have not fully accounted for large migrant worker populations and have focused on minority indigenous populations. The objectives of the UAE National Diabetes and Lifestyle Study are to: (i) define the prevalence of, and risk factors for, T2DM; (ii) describe the distribution and determinants of T2DM risk factors; (iii) study health knowledge, attitudes, and (iv) identify gene-environment interactions; and (v) develop baseline data for evaluation of future intervention programs. Given the high burden of diabetes in the region and the absence of accurate data on non-UAE nationals in the UAE, a representative sample of the non-UAE nationals was essential. We used an innovative methodology in which non-UAE nationals were sampled when attending the mandatory biannual health check that is required for visa renewal. Such an approach could also be used in other countries in the region. Complete data were available for 2719 eligible non-UAE nationals (25.9% Arabs, 70.7% Asian non-Arabs, 1.1% African non-Arabs, and 2.3% Westerners). Most were men < 65 years of age. The response rate was 68%, and the non-response was greater among women than men; 26.9% earned less than UAE Dirham (AED) 24 000 (US$6500) and the most common areas of employment were as managers or professionals, in service and sales, and unskilled occupations. Most (37.4%) had completed high school and 4.1% had a postgraduate degree. This novel methodology could provide insights for epidemiological studies in the UAE and other Gulf States, particularly for expatriates. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
Directory of Open Access Journals (Sweden)
Christopher J. Rogers
2018-06-01
Introduction: Financial strain and discrimination are consistent predictors of negative health outcomes and maladaptive coping behaviors, including tobacco use. Although there is considerable information exploring stress and smoking, limited research has examined the relationship between patterns of stress domains and specific tobacco/nicotine product use. Even fewer studies have assessed ethnic variations in these relationships. Methods: This study investigated the relationship between discrimination and financial strain and current tobacco/nicotine product use and explored the ethnic variation in these relationships among a diverse sample of US adults (N = 1068). Separate logistic regression models assessed associations between stress domains and tobacco/nicotine product use, adjusting for covariates (e.g., age, gender, race/ethnicity, and household income). Due to statistically significant differences, the final set of models was stratified by race/ethnicity. Results: Higher levels of discrimination were associated with higher odds of all three tobacco/nicotine product categories. Financial strain was positively associated with combustible tobacco and combined tobacco/nicotine product use. Financial strain was especially risky for Non-Hispanic Whites (AOR: 1.191, 95% CI: 1.083–1.309) and Blacks/African Americans (AOR: 1.542, 95% CI: 1.106–2.148), as compared to other groups, whereas discrimination was most detrimental for Asians/Pacific Islanders (AOR: 3.827, 95% CI: 1.832–7.997) and Hispanics/Latinas/Latinos (AOR: 2.517, 95% CI: 1.603–3.952). Conclusions: Findings suggest discrimination and financial stressors are risk factors for use of multiple tobacco/nicotine products, highlighting the importance of prevention research that accounts for these stressors. Because ethnic groups may respond differently to stress/strain, prevention research needs to identify cultural values, beliefs, and coping strategies that can buffer the negative consequences of these stressors.
Lee, Jiwon; Kim, Won Ho; Ryu, Ho-Geol; Lee, Hyung-Chul; Chung, Eun-Jin; Yang, Seong-Mi; Jung, Chul-Woo
2017-08-01
We previously demonstrated the usefulness of milrinone for living donor hepatectomy. However, a less-invasive alternative to central venous catheterization and perioperative contributors to good surgical outcomes remain undetermined. The current study evaluated whether the stroke volume variation (SVV)-guided method can substitute for central venous catheterization during milrinone-induced profound vasodilation. We randomly assigned 42 living liver donors to receive either SVV guidance or central venous pressure (CVP) guidance to obtain milrinone-induced low CVP. A target SVV of 9% was used as a substitute for a CVP of 5 mm Hg. The surgical field grade evaluated by 2 attending surgeons on a 4-point scale was compared between the CVP- and SVV-guided groups (n = 19, total number of scores = 38 per group) as a primary outcome variable. Multivariable analysis was performed to identify independent factors associated with the best surgical field as a post hoc analysis. Surgical field grades, which were either 1 or 2, were not found to be different between the 2 groups via Mann-Whitney U test (P = .358). There was a very weak correlation between SVV and CVP during profound vasodilation such as CVP ≤ 5 mm Hg (R = -0.06; 95% confidence interval, -0.09 to -0.04). ... milrinone infusion might be helpful in providing the best surgical field. Milrinone-induced vasodilation resulted in a favorable surgical environment regardless of the guidance method for low CVP during living donor hepatectomy. However, SVV was not a useful indicator of low CVP because of the very weak correlation between SVV and CVP during profound vasodilation. In addition, factors contributing to the best surgical field, such as donor age, proactive fasting, and proper dosing of milrinone, need to be investigated further, ideally through prospective studies.
Directory of Open Access Journals (Sweden)
Smedslund Geir
2013-02-01
Background: Patient-reported outcomes are accepted as important outcome measures in rheumatology. The fluctuating symptoms in patients with rheumatic diseases have serious implications for sample size in clinical trials. We estimated the effects of measuring the outcome 1-5 times on the sample size required in a two-armed trial. Findings: In a randomized controlled trial that evaluated the effects of a mindfulness-based group intervention for patients with inflammatory arthritis (n=71), the outcome variables Numerical Rating Scales (NRS; pain, fatigue, disease activity, self-care ability, and emotional wellbeing) and the General Health Questionnaire (GHQ-20) were measured five times before and after the intervention. For each variable we calculated the necessary sample sizes for obtaining 80% power (α=.05) for one up to five measurements. Two, three, and four measures reduced the required sample sizes by 15%, 21%, and 24%, respectively. With three (and five) measures, the required sample size per group was reduced from 56 to 39 (32) for the GHQ-20, from 71 to 60 (55) for pain, from 96 to 71 (73) for fatigue, from 57 to 51 (48) for disease activity, from 59 to 44 (45) for self-care, and from 47 to 37 (33) for emotional wellbeing. Conclusions: Measuring the outcomes five times rather than once reduced the necessary sample size by an average of 27%. When planning a study, researchers should carefully compare the advantages and disadvantages of increasing sample size versus employing three to five repeated measurements in order to obtain the required statistical power.
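Reductions of this kind are what one expects from averaging k repeated measurements of an outcome: if a single measurement has variance σ² and test-retest correlation ρ, the mean of k measures has variance σ²(1 + (k − 1)ρ)/k, and the required sample size shrinks proportionally. A sketch using the standard two-sample normal-approximation formula (the effect size, SD, and ρ below are hypothetical, not the trial's values):

```python
import math

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Two-sample normal-approximation sample size per group."""
    z_a, z_b = 1.959964, 0.841621   # z_{1-alpha/2} and z_{power}
    return math.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2)

sigma, delta, rho = 2.0, 1.0, 0.6   # hypothetical SD, effect size, reliability
ns = []
for k in (1, 2, 3, 5):
    # SD of the mean of k correlated repeated measurements
    sigma_k = sigma * math.sqrt((1 + (k - 1) * rho) / k)
    ns.append(n_per_group(delta, sigma_k))
print(ns)  # -> [63, 51, 47, 43]
```

With these invented inputs, moving from one to five measurements cuts the per-group requirement by roughly a third, the same order of reduction the study reports.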
Energy Technology Data Exchange (ETDEWEB)
Donner, P.
2016-07-01
Citation counts of scientific research contributions are fundamental data in scientometrics. Accuracy and completeness of citation links are therefore crucial data quality issues (Moed, 2005, Ch. 13). However, despite the known flaws of reference matching algorithms, usually no attempts are made to incorporate uncertainty about citation counts into indicators. This study is a step towards that goal. Particular attention is paid to the question of whether publications from countries not using basic Latin script are differently affected by missed citations. The proprietary reference matching procedure of Web of Science (WoS) is based on (near) exact agreement of cited reference data (normalized during processing) to the target paper's bibliographic data. Consequently, the procedure has near-optimal precision but incomplete recall - it is known to miss some slightly inaccurate reference links (Olensky, 2015). However, there has been no attempt so far to estimate the rate of missed citations by a principled method for a random sample. For this study a simple random sample of WoS source papers was drawn and it was attempted to find all reference strings of WoS-indexed documents that refer to them, in particular inexact matches. The objective is to give a statistical estimate of the proportion of missed citations and to describe the relationship of the number of found citations to the number of missed citations, i.e. the conditional error distribution. The empirical error distribution is statistically analyzed and modelled. (Author)
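Estimating the proportion of missed citations from a simple random sample is, at its core, a binomial-proportion problem. A Wilson score interval sketch (the audit counts below are invented, not the study's data):

```python
import math

def wilson_ci(successes, n, z=1.959964):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical audit: 2000 randomly sampled reference strings checked
# by hand, of which 37 were citations the matcher had missed.
lo, hi = wilson_ci(37, 2000)
print(round(lo, 4), round(hi, 4))
```

The Wilson interval is preferable to the naive Wald interval here because missed-citation rates are small, where the Wald interval's coverage is poor.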
International Nuclear Information System (INIS)
Gogolak, C.V.
1986-11-01
The concentration of a contaminant measured in a particular medium might be distributed as a positive random variable when it is present, but it may not always be present. If there is a level below which the concentration cannot be distinguished from zero by the analytical apparatus, a sample from such a population will be censored on the left. The presence of both zeros and positive values in the censored portion of such samples complicates the problem of estimating the parameters of the underlying positive random variable and the probability of a zero observation. Using the method of maximum likelihood, it is shown that the solution to this estimation problem reduces largely to that of estimating the parameters of the distribution truncated at the point of censorship. The maximum likelihood estimate of the proportion of zero values follows directly. The derivation of the maximum likelihood estimates for a lognormal population with zeros is given in detail, and the asymptotic properties of the estimates are examined. The estimation method was used to fit several different distributions to a set of severely censored 85Kr monitoring data from six locations at the Savannah River Plant chemical separations facilities.
Clagett, Bartholt; Nathanson, Katherine L; Ciosek, Stephanie L; McDermoth, Monique; Vaughn, David J; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A
2013-12-01
Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18-55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we found ABS to be a comparably effective method for recruiting young males relative to landline RDD, we acknowledge the potential impact of selection bias on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS.
Zhang, Xin; Wu, Yuxia; Ren, Pengwei; Liu, Xueting; Kang, Deying
2015-10-30
To explore the relationship between the external validity and the internal validity of hypertension RCTs conducted in China. Comprehensive literature searches were performed in Medline, Embase, the Cochrane Central Register of Controlled Trials (CCTR), CBMdisc (Chinese biomedical literature database), CNKI (China National Knowledge Infrastructure/China Academic Journals Full-text Database) and VIP (Chinese scientific journals database), and advanced search strategies were used to locate hypertension RCTs. The risk of bias in RCTs was assessed by a modified scale and the Jadad scale, respectively, and studies with grading scores of 3 or more were included for the evaluation of external validity. A data extraction form comprising 4 domains and 25 items was used to explore the relationship between external and internal validity. Statistical analyses were performed using SPSS software, version 21.0 (SPSS, Chicago, IL). 226 hypertension RCTs were included in the final analysis. RCTs conducted in university affiliated hospitals (P internal validity. Multi-center studies (median = 4.0, IQR = 2.0) scored higher on internal validity than single-center studies (median = 3.0, IQR = 1.0) (P internal validity (P = 0.004). Multivariate regression indicated that sample size, industry funding, quality of life (QOL) taken as an outcome measure, and a university affiliated hospital as trial setting had statistical significance (P external validity of RCTs is associated with the internal validity, though the two do not stand in an easy relationship to each other. Given the poor reporting, other possible links between the two variables need to be traced in future methodological research.
Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M
2010-12-01
A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
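Selecting primary sampling units by generating random map coordinates, as described above for the absence of a holdings list, can be sketched as follows. The bounding-box coordinates below are purely hypothetical; a real survey would also discard points falling outside the study-area polygon:

```python
import random

def random_sampling_sites(n, lat_range, lon_range, seed=2002):
    """Draw n primary sampling units as uniform random coordinates
    inside a bounding box (a hypothetical stand-in for the survey's
    actual study-area boundaries)."""
    rng = random.Random(seed)
    return [(rng.uniform(*lat_range), rng.uniform(*lon_range))
            for _ in range(n)]

# 562 sampling sites inside a rough box over central/southern Somalia.
sites = random_sampling_sites(562, (-1.0, 5.0), (41.0, 48.0))
```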
Candel, Math J J M; Van Breukelen, Gerard J P
2010-06-30
Adjustments of sample size formulas are given for varying cluster sizes in cluster randomized trials with a binary outcome when testing the treatment effect with mixed effects logistic regression using second-order penalized quasi-likelihood estimation (PQL). Starting from first-order marginal quasi-likelihood (MQL) estimation of the treatment effect, the asymptotic relative efficiency of unequal versus equal cluster sizes is derived. A Monte Carlo simulation study shows this asymptotic relative efficiency to be rather accurate for realistic sample sizes, when employing second-order PQL. An approximate, simpler formula is presented to estimate the efficiency loss due to varying cluster sizes when planning a trial. In many cases sampling 14 per cent more clusters is sufficient to repair the efficiency loss due to varying cluster sizes. Since current closed-form formulas for sample size calculation are based on first-order MQL, planning a trial also requires a conversion factor to obtain the variance of the second-order PQL estimator. In a second Monte Carlo study, this conversion factor turned out to be 1.25 at most. (c) 2010 John Wiley & Sons, Ltd.
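The efficiency-loss calculation sketched above can be reproduced with the standard closed-form approximation from the authors' earlier MQL-based work, RE ≈ 1 − CV²·λ(1−λ) with λ = n·ICC / (n·ICC + 1 − ICC). This is a sketch under assumed inputs, not the paper's exact second-order PQL formula:

```python
def efficiency_loss(mean_size, cv, icc):
    """Approximate relative efficiency (RE) of unequal vs equal cluster
    sizes: RE ~= 1 - CV^2 * lam * (1 - lam), where
    lam = n*icc / (n*icc + 1 - icc).  A simplified approximation, not
    the second-order PQL version derived in the paper."""
    lam = mean_size * icc / (mean_size * icc + 1 - icc)
    re = 1 - cv ** 2 * lam * (1 - lam)
    extra_clusters = 1 / re - 1   # extra clusters needed to repair the loss
    return re, extra_clusters

# Hypothetical trial: mean cluster size 20, cluster-size CV 0.7, ICC 0.05.
re, extra = efficiency_loss(mean_size=20, cv=0.7, icc=0.05)
```

With these assumed inputs the extra-cluster fraction comes out near the 14 per cent figure quoted in the abstract.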
Tadić, Bosiljka
2018-03-01
We study dynamics of a built-in domain wall (DW) in 2-dimensional disordered ferromagnets with different sample shapes using the random-field Ising model on a square lattice rotated by 45 degrees. The saw-tooth DW of length Lx is created along one side and swept through the sample by slow ramping of the external field until the complete magnetisation reversal and the wall annihilation at the open top boundary at a distance Ly. By fixing the number of spins N = Lx × Ly = 10^6 and the random-field distribution at a value above the critical disorder, we vary the ratio of the DW length to the annihilation distance in the range Lx/Ly ∈ [1/16, 16]. The periodic boundary conditions are applied in the y-direction so that these ratios comprise different samples, i.e., surfaces of cylinders with the changing perimeter Lx and height Ly. We analyse the avalanches of the DW slips between following field updates, and the multifractal structure of the magnetisation fluctuation time series. Our main findings are that the domain-wall lengths materialised in different sample shapes have an impact on the dynamics at all scales. Moreover, the domain-wall motion at the beginning of the hysteresis loop (HLB) probes the disorder effects, resulting in fluctuations that are significantly different from the large avalanches in the central part of the loop (HLC), where the strong fields dominate. Specifically, the fluctuations in HLB exhibit a wide multifractal spectrum, which shifts towards higher values of the exponents when the DW length is reduced. The distributions of the avalanches in these segments of the loops obey power-law decay with exponential cutoffs, with the exponents firmly in the mean-field universality class for long DWs. In contrast, the avalanches in the HLC obey a Tsallis density distribution with power-law tails which indicate new categories of scale-invariant behaviour for different ratios Lx/Ly. The large fluctuations in the HLC, on the other
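Power-law avalanche-size exponents like those reported above are typically fitted by maximum likelihood rather than by regressing log-binned histograms. A minimal Clauset-style sketch for the continuous case (no exponential-cutoff handling, which the actual analysis would need), checked on synthetic data with a known exponent:

```python
import math
import random

def powerlaw_alpha_mle(samples, xmin):
    """Continuous power-law exponent MLE: alpha = 1 + n / sum(ln(x / xmin))."""
    tail = [x for x in samples if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Synthetic avalanche sizes with alpha = 2.5 via inverse-transform sampling:
# for p(x) ~ x^(-alpha) on x >= xmin, x = xmin * (1 - u)^(-1/(alpha - 1)).
rng = random.Random(42)
sizes = [(1 - rng.random()) ** (-1 / 1.5) for _ in range(20000)]
alpha_hat = powerlaw_alpha_mle(sizes, xmin=1.0)
```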
Directory of Open Access Journals (Sweden)
Catherine Mooney
MicroRNAs are a class of small non-coding RNA that regulate gene expression at a post-transcriptional level. MicroRNAs have been identified in various body fluids under normal conditions, and their stability as well as their dysregulation in disease opens up a new field for biomarker study. However, diurnal and day-to-day variation in plasma microRNA levels, and differential regulation between males and females, may affect biomarker stability. A QuantStudio 12K Flex Real-Time PCR System was used to profile plasma microRNA levels using OpenArray in male and female healthy volunteers, in the morning and afternoon, and at four time points over a one-month period. Using this system we were able to run four OpenArray plates in a single run, the equivalent of 32 traditional 384-well qPCR plates, or 12,000 data points. Up to 754 microRNAs can be identified in a single plasma sample in under two hours. 108 individual microRNAs were identified in at least 80% of all our samples, which compares favourably with other reports of microRNA profiles in serum or plasma in healthy adults. Many of these microRNAs, including miR-16-5p, miR-17-5p, miR-19a-3p, miR-24-3p, miR-30c-5p, miR-191-5p, miR-223-3p and miR-451a, are highly expressed and consistent with previous studies using other platforms. Overall, microRNA levels were very consistent between individuals, males and females, and time points, and we did not detect significant differences in levels of microRNAs. These results suggest the suitability of this platform for microRNA profiling and biomarker discovery and suggest minimal confounding influence of sex or sample timing. However, the platform has not been subjected to rigorous validation, which must be demonstrated in future biomarker studies where large differences may exist between disease and control samples.
International Nuclear Information System (INIS)
Ofoegbu, C.O.; Adjepong, S.K.
1987-11-01
Results of an investigation of the variation, with moisture content, of the specific heat capacity of samples of three texturally different types of soil (clayey, sandy and sandy loam) obtained from the Niger delta area of Nigeria, are presented. The results show that the specific heat capacities of the soils studied, increase with moisture content. This increase is found to be linear for the entire range of moisture contents considered (0-25%), in the case of the sandy loam soil while for the clayey and sandy soils the specific heat capacity is found to increase linearly with moisture content up to about 15% after which the increase becomes parabolic. The rate of increase of specific heat capacity with moisture content appears to be highest in the clayey soil and lowest in the sandy soil. It is thought that the differences in the rates of increase of specific heat capacity with moisture content, observed for the soils, reflect the soils' water-retention capacities. (author) 3 refs, 5 figs
Petpairote, Chayanut; Madarasmi, Suthep; Chamnongthai, Kosin
2018-01-01
The practical identification of individuals using facial recognition techniques requires the matching of faces with specific expressions to faces from a neutral face database. A method for facial recognition under varied expressions against neutral face samples of individuals via recognition of expression warping and the use of a virtual expression-face database is proposed. In this method, facial expressions are recognized and the input expression faces are classified into facial expression groups. To aid facial recognition, the virtual expression-face database is sorted into average facial-expression shapes and by coarse- and fine-featured facial textures. Wrinkle information is also employed in classification by using a process of masking to adjust input faces to match the expression-face database. We evaluate the performance of the proposed method using the CMU multi-PIE, Cohn-Kanade, and AR expression-face databases, and we find that it provides significantly improved results in terms of face recognition accuracy compared to conventional methods and is acceptable for facial recognition under expression variation.
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal to noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making the conventional methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. Firstly, the denoising method is performed for every time slice extracted from the 3D noisy data along the source and receiver directions, then the 2D non-equispaced fast Fourier transform (NFFT) is introduced in the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated by using the inversion algorithm of the spectral projected-gradient for ℓ1-norm problems. Then local threshold factors are chosen for the uniform curvelet coefficients for each decomposition scale, and effective curvelet coefficients are obtained respectively for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. The examples for synthetic data and real data reveal the effectiveness of the proposed approach in applications to noise attenuation for non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
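The core idea underlying the method, thresholding transform-domain coefficients and then inverting the transform, can be illustrated with a one-level Haar transform standing in for the curvelet/NFFT machinery. This is a deliberately simplified, stdlib-only sketch of transform-domain hard thresholding, not the paper's algorithm:

```python
def haar_forward(x):
    """One level of the Haar transform: averages (approximation) and
    half-differences (detail).  A stand-in for the curvelet transform."""
    s = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    d = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return s, d

def haar_inverse(s, d):
    """Exact inverse of haar_forward."""
    out = []
    for a, b in zip(s, d):
        out += [a + b, a - b]
    return out

def denoise(x, thresh):
    """Hard-threshold the detail coefficients and reconstruct."""
    s, d = haar_forward(x)
    d = [c if abs(c) > thresh else 0.0 for c in d]
    return haar_inverse(s, d)

noisy = [1.0, 1.1, 2.0, 1.9]
clean = denoise(noisy, thresh=0.2)   # small oscillations are suppressed
```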
Mi, Chunrong; Huettmann, Falk; Guo, Yumin; Han, Xuesong; Wen, Lijia
2017-01-01
Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the above four models. Commonly used model performance metrics (area under the ROC curve (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
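The two accuracy metrics used above, AUC and TSS, are simple to compute from model predictions at test sites. A stdlib-only sketch (the toy scores below are hypothetical, not values from the study):

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney statistic:
    fraction of (presence, absence) pairs ranked correctly, ties count 0.5."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def tss(tp, fn, tn, fp):
    """True skill statistic = sensitivity + specificity - 1."""
    return tp / (tp + fn) + tn / (tn + fp) - 1

# Toy evaluation: predicted suitability at presence vs. absence test sites.
auc_score = auc([0.9, 0.8, 0.7], [0.4, 0.6])   # perfect ranking
skill = tss(tp=30, fn=10, tn=50, fp=25)
```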
Heo, Moonseong; Litwin, Alain H; Blackstock, Oni; Kim, Namhee; Arnsten, Julia H
2017-02-01
We derived sample size formulae for detecting main effects in group-based randomized clinical trials with different levels of data hierarchy between experimental and control arms. Such designs are necessary when experimental interventions need to be administered to groups of subjects whereas control conditions need to be administered to individual subjects. This type of trial, often referred to as a partially nested or partially clustered design, has been implemented for management of chronic diseases such as diabetes and is beginning to emerge more commonly in wider clinical settings. Depending on the research setting, the level of hierarchy of data structure for the experimental arm can be three or two, whereas that for the control arm is two or one. Such different levels of data hierarchy assume correlation structures of outcomes that are different between arms, regardless of whether research settings require two or three level data structure for the experimental arm. Therefore, the different correlations should be taken into account for statistical modeling and for sample size determinations. To this end, we considered mixed-effects linear models with different correlation structures between experimental and control arms to theoretically derive and empirically validate the sample size formulae with simulation studies.
Zhang, Fujin; He, Jiang; Yao, Yiping; Hou, Dekun; Jiang, Cai; Zhang, Xinxin; Di, Caixia; Otgonbayar, Khureldavaa
2013-08-01
The spatial variability and temporal trend in concentrations of the organochlorine pesticides (OCPs), hexachlorocyclohexane (HCH) and dichlorodiphenyltrichloroethane (DDT), in soils and agricultural crops were investigated in an intensive horticulture area in Hohhot, North-West China, from 2008 to 2011. The most frequently found and abundant pesticides were the metabolites of DDT (p,p'-DDE, p,p'-DDT, o,p'-DDT and p,p'-DDD). Total DDT concentrations ranged from ND (not detectable) to 507.41 ng/g and were higher than the concentrations of total HCHs, measured in the range of 4.84-281.44 ng/g. There were significantly positive correlations between the ∑DDT and ∑HCH concentrations (r (2)>0.74) in soils, but no significant correlation was found between the concentrations of OCPs in soils and clay content, while a relatively strong correlation was found between total OCP concentrations and total organic carbon (TOC). β-HCH was the main isomer of HCHs and was detected in all samples; the maximum proportion of β-HCH relative to ∑HCHs (mean value 54%) was found, suggesting its persistence. The α/γ-HCH ratio was between 0.89 and 5.39, which signified the combined influence of technical HCHs and lindane. Low p,p'-DDE/p,p'-DDT ratios in N1, N3 and N9 were found, reflecting the fresh input of DDTs, while the relatively high o,p'-DDT/p,p'-DDT ratios indicated the agricultural application of dicofol. Ratios of DDT/(DDE+DDD) in soils do not indicate recent inputs of DDT into the Hohhot farmland soil environment. Seasonal variations of OCPs featured higher concentrations in autumn and lower concentrations in spring. This was likely associated with their temperature-driven re-volatilization and application of dicofol in late spring.
Cross, Simon S; Stephenson, Timothy J; Harrison, Robert F
2011-10-01
To investigate the role of random temporal order of patient arrival at screening centres in the variability seen in rates of node positivity and breast cancer grade between centres in the NHS Breast Screening Programme. Computer simulations were performed of the variation in node positivity and breast cancer grade with the random temporal arrival of patients at screening centres based on national UK audit data. Cumulative mean graphs of these data were plotted. Confidence intervals for the parameters were generated, using the binomial distribution. UK audit data were plotted on these control limit graphs. The results showed that much of the variability in the audit data could be accounted for by the effects of random order of arrival of cases at the screening centres. Confidence intervals of 99.7% identified true outliers in the data. Much of the variation in breast pathology quality assurance data in the UK can be explained by the random order in which cases arrive at individual centres. Control charts with confidence intervals of 99.7% plotted against the number of reported cases are useful tools for identification of true outliers. 2011 Blackwell Publishing Limited.
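Control limits of the kind described above follow from the binomial distribution of node-positive cases among reported cases. A minimal sketch using the three-sigma normal approximation (~99.7% coverage; the paper's exact binomial limits would differ slightly, and the 21% rate below is hypothetical):

```python
import math

def control_limits(p, n, z=3.0):
    """Three-sigma (~99.7%) control limits for an observed proportion
    when n cases have been reported, via the normal approximation to
    the binomial."""
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical national node-positivity rate of 21%, centre reporting 150 cases.
lo, hi = control_limits(0.21, 150)
```

Centres whose observed rate falls outside these limits for their case count are flagged as true outliers rather than victims of random arrival order.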
Kløvstad, Hilde; Natås, Olav; Tverdal, Aage; Aavitsland, Preben
2013-01-23
As most genital Chlamydia trachomatis infections are asymptomatic, many patients do not seek health care for testing. Infections remain undiagnosed and untreated. We studied whether screening with information and home sampling resulted in more young people getting tested, diagnosed and treated for chlamydia in the three months following the intervention compared to the current strategy of testing in the health care system. We conducted a population based randomized controlled trial among all persons aged 18-25 years in one Norwegian county (41 519 persons). 10 000 persons (intervention) received an invitation by mail with chlamydia information and a mail-back urine sampling kit. 31 519 persons received no intervention and continued with usual care (control). All samples from both groups were analysed in the same laboratory. Information on treatment was obtained from the Norwegian Prescription Database (NorPD). We estimated risk ratios and risk differences of being tested, diagnosed and treated in the intervention group compared to the control group. In the intervention group 16.5% got tested and in the control group 3.4%, risk ratio 4.9 (95% CI 4.5-5.2). The intervention led to 2.6 (95% CI 2.0-3.4) times as many individuals being diagnosed and 2.5 (95% CI 1.9-3.4) times as many individuals receiving treatment for chlamydia compared to no intervention in the three months following the intervention. In Norway, systematic screening with information and home sampling results in more young people being tested, diagnosed and treated for chlamydia in the three months following the intervention than the current strategy of testing in the health care system. However, the study has not established that the intervention will reduce the chlamydia prevalence or the risk of complications from chlamydia.
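The risk ratios above are straightforward to compute with a log-scale confidence interval. A sketch in which the event counts are reconstructed from the reported percentages (16.5% of 10,000 and 3.4% of 31,519, so the counts are approximations, not the study's raw data):

```python
import math

def risk_ratio(a, n1, b, n2):
    """Risk ratio of event rates a/n1 vs b/n2, with a 95% CI computed
    on the log scale (standard error of log RR)."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Approximate counts: 1650/10000 tested (intervention) vs 1072/31519 (control).
rr, lo, hi = risk_ratio(1650, 10000, 1072, 31519)
```

With these reconstructed counts the result lands close to the reported risk ratio of 4.9 (95% CI 4.5-5.2).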
Vanmarcke, Erik
1983-03-01
Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.
Szirovicza, Leonóra; López, Pilar; Kopena, Renáta; Benkő, Mária; Martín, José; Pénzes, Judit J
2016-01-01
Here, we report the results of a large-scale PCR survey on the prevalence and diversity of adenoviruses (AdVs) in samples collected randomly from free-living reptiles. On the territories of the Guadarrama Mountains National Park in Central Spain and of the Chafarinas Islands in North Africa, cloacal swabs were taken from 318 specimens of eight native species representing five squamate reptilian families. The healthy-looking animals had been captured temporarily for physiological and ethological examinations, after which they were released. We found 22 AdV-positive samples in representatives of three species, all from Central Spain. Sequence analysis of the PCR products revealed the existence of three hitherto unknown AdVs in 11 Carpetane rock lizards (Iberolacerta cyreni), nine Iberian worm lizards (Blanus cinereus), and two Iberian green lizards (Lacerta schreiberi), respectively. Phylogeny inference showed every novel putative virus to be a member of the genus Atadenovirus. This is the very first description of the occurrence of AdVs in amphisbaenian and lacertid hosts. Unlike all squamate atadenoviruses examined previously, two of the novel putative AdVs had A+T rich DNA, a feature generally deemed to mirror previous host switch events. Our results shed new light on the diversity and evolution of atadenoviruses.
Energy Technology Data Exchange (ETDEWEB)
Kim, Hojin; Li, Ruijiang; Lee, Rena; Goldstein, Thomas; Boyd, Stephen; Candes, Emmanuel; Xing, Lei (Departments of Electrical Engineering, Statistics and Radiation Oncology, Stanford University, Stanford, California, United States; Department of Radiation Oncology, Ehwa University, Seoul, Republic of Korea)
2012-07-15
Purpose: A new treatment scheme coined dense angularly sampled and sparse intensity modulated radiation therapy (DASSIM-RT) has recently been proposed to bridge the gap between IMRT and VMAT. By increasing the angular sampling of radiation beams while eliminating dispensable segments of the incident fields, DASSIM-RT is capable of providing improved conformity in dose distributions while maintaining high delivery efficiency. The fact that DASSIM-RT utilizes a large number of incident beams represents a major computational challenge for the clinical application of this powerful treatment scheme. The purpose of this work is to provide a practical solution to the DASSIM-RT inverse planning problem. Methods: The inverse planning problem is formulated as a fluence-map optimization problem with total-variation (TV) minimization. A newly released L1 solver, TFOCS (Templates for First-Order Conic Solvers), was adopted in this work. TFOCS achieves faster convergence with less memory usage than conventional quadratic programming (QP) for the TV form through the effective use of conic forms, dual-variable updates, and optimal first-order approaches. As such, it is tailored to specifically address the computational challenges of large-scale optimization in DASSIM-RT inverse planning. Two clinical cases (a prostate and a head and neck case) are used to evaluate the effectiveness and efficiency of the proposed planning technique. DASSIM-RT plans with 15 and 30 beams are compared with conventional IMRT plans with 7 beams in terms of plan quality and delivery efficiency, which are quantified by conformation number (CN), the total number of segments and modulation index, respectively. For optimization efficiency, the QP-based approach was compared with the proposed algorithm for the DASSIM-RT plans with 15 beams for both cases. Results: Plan quality improves with an increasing number of incident beams, while the total number of segments is maintained to be about the
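The TV-regularized fluence-map formulation can be illustrated on a toy 1D problem: minimize 0.5·||Ax − b||² + λ·TV(x). The sketch below uses plain subgradient descent on an identity "dose matrix", which is only a stand-in for the TFOCS-based conic solver described in the abstract:

```python
def sign(v):
    """Subgradient choice for |.|: returns -1, 0, or 1."""
    return (v > 0) - (v < 0)

def tv_lsq(A, b, lam, steps=3000, lr=0.01):
    """Minimize 0.5*||Ax - b||^2 + lam*TV(x) by subgradient descent,
    where TV(x) = sum_j |x[j+1] - x[j]| (1D total variation)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # residual and gradient of the quadratic data-fidelity term
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # subgradient of the total-variation penalty
        for j in range(n):
            if j > 0:
                g[j] += lam * sign(x[j] - x[j - 1])
            if j < n - 1:
                g[j] -= lam * sign(x[j + 1] - x[j])
        x = [xj - lr * gj for xj, gj in zip(x, g)]
    return x

# Identity "dose matrix" and a piecewise-constant target fluence profile.
I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
x_hat = tv_lsq(I4, [1.0, 1.0, 0.0, 0.0], lam=0.05)
```

The TV term favours piecewise-constant fluence, which is what suppresses dispensable segments; TFOCS-style first-order conic solvers reach the same minimizer far more efficiently at clinical scale.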
Caballero-Ortega, Heriberto; Castillo-Cruz, Rocío; Murieta, Sandra; Ortíz-Alegría, Luz Belinda; Calderón-Segura, Esther; Conde-Glez, Carlos J; Cañedo-Solares, Irma; Correa, Dolores
2014-05-14
There are few articles on the evaluation of Toxoplasma gondii serological tests. Moreover, commercially available tests are not always useful and are expensive for studies in the open population. The aim of this study was to evaluate an in-house ELISA and western blot for IgG antibodies in a representative sample of people living in Mexico. Three hundred and five serum samples were randomly selected from two national seroepidemiological survey banks; they were taken from men and women of all ages and from all areas of the country. The ELISA cut-off was established using the mean plus three standard deviations of negative samples. Western blots were analysed by two experienced technicians, and positivity was established according to the presence of at least three diagnostic bands. A commercial ELISA kit was used as a third test. Two reference standards were built up: one using concordant results of two assays leaving the evaluated test out (OUT), and the other in which the evaluated test was included (IN), with at least two concordant results defining diagnosis. The lowest values of the diagnostic parameters were obtained with the OUT reference standard: the in-house ELISA had 96.9% sensitivity, 62.1% specificity, 49.6% PPV, 98.1% NPV and 71.8% accuracy, while western blot presented values of 81.8%, 89.7%, 84.0%, 88.2% and 86.6%, respectively, and the best kappa coefficient (0.72-0.82). The in-house ELISA is useful for screening people of Mexico, due to its high sensitivity, while western blot may be used to confirm diagnosis. These techniques might prove useful in other Latin American countries.
Nicklas, Jacinda M; Skurnik, Geraldine; Zera, Chloe A; Reforma, Liberty G; Levkoff, Sue E; Seely, Ellen W
2016-02-01
The postpartum period is a window of opportunity for diabetes prevention in women with recent gestational diabetes (GDM), but recruitment for clinical trials during this period of life is a major challenge. We adapted a social-ecologic model to develop a multi-level recruitment strategy at the macro (high or institutional level), meso (mid or provider level), and micro (individual) levels. Our goal was to recruit 100 women with recent GDM into the Balance after Baby randomized controlled trial over a 17-month period. Participants were asked to attend three in-person study visits at 6 weeks, 6, and 12 months postpartum. They were randomized into a control arm or a web-based intervention arm at the end of the baseline visit at six weeks postpartum. At the end of the recruitment period, we compared population characteristics of our enrolled subjects to the entire population of women with GDM delivering at Brigham and Women's Hospital (BWH). We successfully recruited 107 of 156 (69 %) women assessed for eligibility, with the majority (92) recruited during pregnancy at a mean 30 (SD ± 5) weeks of gestation, and 15 recruited postpartum, at a mean 2 (SD ± 3) weeks postpartum. 78 subjects attended the initial baseline visit, and 75 subjects were randomized into the trial at a mean 7 (SD ± 2) weeks postpartum. The recruited subjects were similar in age and race/ethnicity to the total population of 538 GDM deliveries at BWH over the 17-month recruitment period. Our multilevel approach allowed us to successfully meet our recruitment goal and recruit a representative sample of women with recent GDM. We believe that our most successful strategies included using a dedicated in-person recruiter, integrating recruitment into clinical flow, allowing for flexibility in recruitment, minimizing barriers to participation, and using an opt-out strategy with providers. Although the majority of women were recruited while pregnant, women recruited in the early postpartum period were
Variate generation for probabilistic fracture mechanics and fitness-for-service studies
International Nuclear Information System (INIS)
Walker, J.R.
1987-01-01
Atomic Energy of Canada Limited is conducting studies in Probabilistic Fracture Mechanics. These studies are being conducted as part of a fitness-for-service programme in support of CANDU reactors. The Monte Carlo analyses, which form part of the Probabilistic Fracture Mechanics studies, require that variates can be sampled from probability density functions. Accurate pseudo-random numbers are necessary for accurate variate generation. This report details the principles of variate generation, and describes the production and testing of pseudo-random numbers. A new algorithm has been produced for the correct performance of the lattice test for the independence of pseudo-random numbers. Two new pseudo-random number generators have been produced. These generators have excellent randomness properties and can be made fully machine-independent. Versions, in FORTRAN, for VAX and CDC computers are given. Accurate and efficient algorithms for the generation of variates from the specialized probability density functions of Probabilistic Fracture Mechanics are given. 38 refs
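The two ingredients described above (pseudo-random number production and variate generation from a target density) can be sketched as follows. The Lehmer multiplicative congruential parameters (a = 48271, m = 2^31 - 1) are a standard, well-tested choice, not the report's machine-independent generators, and the Weibull target is only an assumed example of a density used in probabilistic fracture mechanics:

```python
import math

M = 2**31 - 1      # Mersenne prime modulus
A_MULT = 48271     # Lehmer multiplier (standard choice, not the report's)

def lehmer_stream(seed, n):
    # multiplicative congruential generator, scaled to uniforms in (0, 1)
    x = seed
    out = []
    for _ in range(n):
        x = (A_MULT * x) % M
        out.append(x / M)
    return out

def weibull_inverse(u, shape, scale):
    # inverse-transform method: F^{-1}(u) = scale * (-ln(1 - u))^(1/shape)
    return scale * (-math.log(1.0 - u)) ** (1.0 / shape)

uniforms = lehmer_stream(seed=12345, n=10000)
variates = [weibull_inverse(u, shape=2.0, scale=1.0) for u in uniforms]
```

In Python the product a*x stays well below 2^63, so no overflow handling is needed; in fixed-precision FORTRAN of the report's era, Schrage's decomposition is the classical remedy.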
Variation: Use It or Misuse It--Replication and Its Variants
Drummond, Gordon B.; Vowler, Sarah L.
2012-01-01
In this article, the authors talk about variation and how variation between measurements may be reduced if sampling is not random. They also talk about replication and its variants. A replicate is a repeated measurement from the same experimental unit. An experimental unit is the smallest part of an experiment or a study that can be subject to a…
Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R
2016-12-01
MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary-data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse-variance weighted (IVW) approach, which assumes all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I² statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it I²GX. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of I²GX), the stronger the dilution. When, additionally, all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
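The adapted I² statistic described above can be illustrated with the generic meta-analysis form, I² = max(0, (Q - (k - 1))/Q), where Q is Cochran's heterogeneity statistic over the SNP-exposure estimates. Treat this as an approximation for illustration only, since the paper's exact I²GX is defined there; the variable names below are assumptions:

```python
import numpy as np

def i_squared(gamma_hat, se):
    # gamma_hat: SNP-exposure estimates; se: their standard errors (assumed names)
    gamma_hat = np.asarray(gamma_hat, dtype=float)
    w = 1.0 / np.asarray(se, dtype=float) ** 2        # inverse-variance weights
    gbar = np.sum(w * gamma_hat) / np.sum(w)          # weighted mean
    Q = np.sum(w * (gamma_hat - gbar) ** 2)           # Cochran's Q
    k = gamma_hat.size
    # floor at 0: I^2 is conventionally truncated below zero
    return max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
```

Values near 1 (estimates well separated relative to their standard errors) indicate little expected regression dilution; values near 0 indicate substantial dilution of the MR-Egger estimate.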
Neutron monitor generated data distributions in quantum variational Monte Carlo
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential applications of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from acquisition channels with no extreme changes in signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate; the distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, we could always settle for an efficient seed generator to feed into a faster algorithmic random number generator, or create a buffer.
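The pipeline described above (spline fit, residual extraction, standardization, and the probability-transform route to uniforms) might be sketched as follows, using a synthetic series in place of real neutron-monitor data:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.stats import norm

# synthetic stand-in for a count series: slow trend plus noise (assumption)
rng = np.random.default_rng(42)
t = np.linspace(0.0, 10.0, 4000)
counts = 100.0 + 5.0 * np.sin(t) + rng.normal(0.0, 2.0, t.size)

spline = UnivariateSpline(t, counts, s=4.0 * t.size)   # smoothing-spline fit of the trend
resid = counts - spline(t)                             # extracted stochastic component
z = (resid - resid.mean()) / resid.std()               # scaled: ~ standard normal
u = norm.cdf(z)                                        # probability transform: ~ Uniform(0, 1)
```

The smoothing parameter s is set near (noise variance) × (number of points) so the spline follows the trend rather than the noise; the final step maps approximately normal residuals through the standard normal CDF to obtain approximately uniform variates.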
Garboś, Sławomir; Święcicka, Dorota
2015-11-01
The random daytime (RDT) sampling method was used for the first time in the assessment of average weekly exposure to uranium through drinking water in a large water supply zone. The data set of uranium concentrations determined in 106 RDT samples, collected in three runs from the water supply zone in Wroclaw (Poland), cannot be adequately described by a normal or log-normal distribution. Therefore, a numerical method designed for the detection and characterization of bimodal distributions was applied. The two extracted distributions, containing data from the summer season of 2011 and the winter season of 2012 (nI=72) and from the summer season of 2013 (nII=34), allowed estimation of the mean U concentrations in drinking water: 0.947 μg/L and 1.23 μg/L, respectively. As the removal efficiency of uranium during the applied treatment process is negligible, the increase in uranium concentration can be explained by a higher U concentration in the surface-infiltration water used for the production of drinking water. During the summer season of 2013, heavy rains were observed in the Lower Silesia region, causing floods over the entire region. Fluctuations in uranium concentrations in surface-infiltration water can be attributed to releases of uranium from specific sources - migration from phosphate fertilizers and leaching from mineral deposits. Thus, exposure to uranium through drinking water may increase during extreme rainfall events. The average chronic weekly intakes of uranium through drinking water, estimated on the basis of the central values of the extracted normal distributions, accounted for 3.2% and 4.1% of the tolerable weekly intake. Copyright © 2015 Elsevier Ltd. All rights reserved.
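One generic way to separate a bimodal data set into two normal components, as described above, is a two-component 1-D Gaussian mixture fitted by EM. The hand-rolled sketch below is not the paper's numerical method, and the data are simulated with assumed, well-separated means:

```python
import numpy as np

def fit_bimodal(x, n_iter=200):
    # crude EM for a two-component 1-D Gaussian mixture; median-split init (assumption)
    x = np.asarray(x, dtype=float)
    lo, hi = x[x <= np.median(x)], x[x > np.median(x)]
    mu = np.array([lo.mean(), hi.mean()])
    sd = np.array([lo.std() + 1e-6, hi.std() + 1e-6])
    mix = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        pdf = np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
        r = mix * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights, means, and standard deviations
        nk = r.sum(axis=0)
        mix = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-9
    return mix, mu, sd

# simulated bimodal data with assumed component means of 1.0 and 2.0
rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(1.0, 0.10, 720), rng.normal(2.0, 0.15, 340)])
mix, mu, sd = fit_bimodal(data)
```

The recovered component means play the role of the seasonal central values the abstract reports; in practice, closely overlapping components (as in the real concentration data) make the fit far more delicate than this toy case.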
Directory of Open Access Journals (Sweden)
Gaëtan Sossauer
OBJECTIVE: Human papillomavirus (HPV) self-sampling (Self-HPV) may be used as a primary cervical cancer screening method in a low-resource setting. Our aim was to evaluate whether an educational intervention would improve women's knowledge of and confidence in the Self-HPV method. METHOD: Women aged between 25 and 65 years old, eligible for cervical cancer screening, were randomly chosen to receive standard information (control group) or standard information followed by an educational intervention (intervention group). Standard information included explanations about what the test detects (HPV), the link between HPV and cervical cancer, and how to perform HPV self-sampling. The educational intervention consisted of a culturally tailored video about HPV, cervical cancer, Self-HPV and its relevance as a screening test. All participants completed a questionnaire that assessed sociodemographic data, women's knowledge about cervical cancer and acceptability of Self-HPV. RESULTS: A total of 302 women were enrolled in 4 health care centers in Yaoundé and the surrounding countryside. 301 women (149 in the control group and 152 in the intervention group) completed the full process and were included in the analysis. Participants who received the educational intervention had significantly higher knowledge about HPV and cervical cancer than the control group (p<0.05), but no significant difference in Self-HPV acceptability and confidence in the method was noticed between the two groups. CONCLUSION: Educational intervention promotes an increase in knowledge about HPV and cervical cancer. Further investigation should be conducted to determine whether this intervention can be sustained beyond the short term and influences screening behavior. TRIAL REGISTRATION: International Standard Randomised Controlled Trial Number (ISRCTN) Register: ISRCTN78123709.
Guo, Yan; Chen, Xinguang; Gong, Jie; Li, Fang; Zhu, Chaoyang; Yan, Yaqiong; Wang, Liang
2016-01-01
Millions of people move from rural areas to urban areas in China to pursue new opportunities while leaving their spouses and children at rural homes. Little is known about the impact of migration-related separation on the mental health of these rural migrants in urban China. Survey data from a random sample of rural-to-urban migrants (n = 1113, aged 18-45) from Wuhan were analyzed. The Domestic Migration Stress Questionnaire (DMSQ), an instrument with four subconstructs, was used to measure migration-related stress. The relationship between spouse/child separation and stress was assessed using survey estimation methods to account for the multi-level sampling design. 16.46% of couples were separated from their spouses (spouse separation only), and 25.81% of parents were separated from their children (child separation only). Among the participants who were married and had children, 5.97% were separated from both their spouses and children (double separation). Participants with spouse separation only or double separation did not score significantly higher on the DMSQ than those with no separation. Compared to parents without child separation, parents with child separation scored significantly higher on the DMSQ (mean score = 2.88, 95% CI: [2.81, 2.95] vs. 2.60 [2.53, 2.67], p < .05). Stratified analysis by separation type and by gender indicated that the association was stronger for child separation only and for female participants. Child separation is an important source of migration-related stress, and the effect is particularly strong for migrant women. Public policies and intervention programs should consider these factors to encourage and facilitate the co-migration of parents with their children to mitigate migration-related stress.
Analysis of the variation of the activity of a 99mTc sample after dilution with saline solution
International Nuclear Information System (INIS)
Kuahara, L.T.; Correa, E.L.; Potiens, M.P.A.
2016-01-01
The activity meter is essential equipment in nuclear medicine services. Ensuring its good operation, and knowing the factors that may influence its readings, is vital for the activity administered to the patient to be correct. Many factors may influence the activity meter accuracy, such as the type of container, the geometry, and the radioactive material volume. The aim of this study was to analyze the measurement variations in 0.5 ml and 1.0 ml of 99mTc, pure and diluted in 2.5 ml of saline solution, in containers used in nuclear medicine. Variations of up to 4% in measured values were found. (author)
Directory of Open Access Journals (Sweden)
Orgül Selim
2010-06-01
Background: The aim of this epidemiological study was to investigate the relationship of thermal discomfort with cold extremities (TDCE) to age, gender, and body mass index (BMI) in a Swiss urban population. Methods: In a random population sample of Basel city, 2,800 subjects aged 20-40 years were asked to complete a questionnaire evaluating the extent of cold extremities. Values of cold extremities were based on questionnaire-derived scores. The correlation of age, gender, and BMI to TDCE was analyzed using multiple regression analysis. Results: A total of 1,001 women (72.3% response rate) and 809 men (60% response rate) returned a completed questionnaire. Statistical analyses revealed the following findings: younger subjects suffered more intensely from cold extremities than the elderly, and women suffered more than men (particularly younger women). Slimmer subjects suffered significantly more often from cold extremities than subjects with higher BMIs. Conclusions: Thermal discomfort with cold extremities (a relevant symptom of primary vascular dysregulation) occurs at highest intensity in younger, slimmer women and at lowest intensity in elderly, stouter men.
Thorslund, Karin; Johansson Hanse, Jan; Axberg, Ulf
2017-07-01
Universal parental support intended to enhance parents' capacity for parenting is an important aspect of public health strategies. However, support has mostly been aimed at parents, especially mothers, of younger children; there is a gap in the research concerning parents of adolescents and fathers' interest in parenting support. The aims were to investigate and compare the interest in parenting support of parents of adolescents and of younger children, potential differences between mothers and fathers, their knowledge of what is already offered to them, and their requirements for future universal parental support. Telephone interviews were conducted with a random sample of 1336 parents. Quantitative methods were used to analyze differences between groups, and qualitative methods were used to analyze open-ended questions regarding parents' requirements for future universal parental support. About 82% of the parents of adolescents interviewed think that offering universal parental support is most important during a child's adolescence. There is substantial interest, particularly among mothers, in most forms of support. Despite their interest, parents have limited awareness of the support available. Only 7% knew about the local municipality website, although 70% reported a possible interest in such a website. Similarly, 3% knew that a parent phone line was available to them, while 59% reported a possible interest. Developing support targeted at parents of adolescents and tailored to their needs, and reaching out with information, poses a challenge for municipalities but is nevertheless important.
General inverse problems for regular variation
DEFF Research Database (Denmark)
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Beaver, Kevin M.; Wright, John Paul
2011-01-01
Research has consistently revealed that average IQ scores vary significantly across macro-level units, such as states and nations. The reason for this variation in IQ, however, has remained at the center of much controversy. One of the more provocative explanations is that IQ across macro-level units is the result of genetic differences, but…
Directory of Open Access Journals (Sweden)
Serge Clotaire Billong
2016-11-01
Background: Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, keeping track of patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we aimed to ascertain the national performance of, and determinants of, retention on ART at 12 months. Methods: Using systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon within the reporting period of October 2013-November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (75-85%), poor (<75%); factors with p-value < 0.01 were considered statistically significant. Results: The majority (74.4%) of patients were in urban settings, and 50.9% were managed in reference treatment centres. Nationwide, retention on ART at 12 months was 60.4% (2023/3349); only six sites and one region achieved acceptable performance. Retention performance varied between reference treatment centres (54.2%) and management units (66.8%, p < 0.0001); men (57.1%) vs. women (62.0%, p = 0.007); and WHO clinical stage I (63.3%) vs. other stages (55.6%, p = 0.007); but neither by age (adults [60.3%] vs. children [58.8%], p = 0.730) nor by immune status (CD4 351-500 [65.9%] vs. other CD4 strata [59.86%], p = 0.077). Conclusions: Poor retention in care within 12 months of ART initiation urges active search for patients lost to follow-up, preferentially targeting male and symptomatic patients, especially within reference ART clinics. Such a sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention.
Directory of Open Access Journals (Sweden)
Faria AD
2014-06-01
Augusto Duarte Faria,1 Luciano Dias de Mattos Souza,2 Taiane de Azevedo Cardoso,2 Karen Amaral Tavares Pinheiro,2 Ricardo Tavares Pinheiro,2 Ricardo Azevedo da Silva,2 Karen Jansen.2 1Department of Clinical and Health Psychology, Universidade Federal do Rio Grande (FURG), Rio Grande, RS, Brazil; 2Health and Behavior Postgraduate Program, Universidade Católica de Pelotas (UCPEL), Pelotas, RS, Brazil. Introduction: Changes in biological rhythm are among the various characteristics of bipolar disorder, and have long been associated with the functional impairment of the disease. There are only a few viable options of psychosocial interventions that deal with this specific topic; one of them is psychoeducation, a model that, although used by practitioners for some time, has only recently been shown to be effective in clinical practice. Aim: To assess whether patients undergoing psychosocial intervention in addition to pharmacological treatment achieve better regulation of their biological rhythm than those using medication only. Method: This study is a randomized clinical trial that compares a standard medication intervention to an intervention combining drugs and psychoeducation. The evaluation of the biological rhythm was made using the Biological Rhythm Interview of Assessment in Neuropsychiatry, an 18-item scale divided into four areas (sleep, activity, social rhythm, and eating pattern). The combined intervention consisted of medication and a short-term psychoeducation model summarized in a protocol of six individual sessions of 1 hour each. Results: The sample consisted of 61 patients with bipolar II disorder, but during the study there were 14 losses to follow-up. Therefore, the final sample consisted of 45 individuals (26 for the standard intervention and 19 for the combined). The results showed that, in this sample and time period evaluated, the combined treatment of medication and psychoeducation had no statistically significant impact on the
Directory of Open Access Journals (Sweden)
Jaishri Mehraj
OBJECTIVE: The findings from truly randomized community-based studies on Staphylococcus aureus nasal colonization are scarce. Therefore we have examined the point prevalence and risk factors of S. aureus nasal carriage in a non-hospitalized population of Braunschweig, northern Germany. METHODS: A total of 2026 potential participants were randomly selected through the resident's registration office and invited by mail. They were requested to collect a nasal swab at home and return it by mail. S. aureus was identified by culture and PCR. Logistic regression was used to determine risk factors of S. aureus carriage. RESULTS: Among the invitees, 405 individuals agreed to participate and 389 provided complete data which was included in the analysis. The median age of the participants was 49 years (IQR: 39-61) and 61% were female. S. aureus was isolated in 85 (21.9%; 95% CI: 18.0-26.2%) of the samples, five of which were MRSA (1.29%; 95% CI: 0.55-2.98%). In multiple logistic regression, male sex (OR = 3.50; 95% CI: 2.01-6.11) and presence of allergies (OR = 2.43; 95% CI: 1.39-4.24) were found to be associated with S. aureus nasal carriage. Fifty-five different spa types were found, which clustered into nine distinct groups. MRSA belonged to the hospital-associated spa types t032 and t025 (corresponding to MLST CC 22), whereas MSSA spa types varied and mostly belonged to spa-CC 012 (corresponding to MLST CC 30) and spa-CC 084 (corresponding to MLST CC 15). CONCLUSION: This first point prevalence study of S. aureus in a non-hospitalized population of Germany revealed a prevalence consistent with other European countries and supports previous findings on male sex and allergies as risk factors of S. aureus carriage. The detection of hospital-associated MRSA spa types in the community indicates possible spread of these strains from hospitals into the community.
International Nuclear Information System (INIS)
Matsubayashi, H.; Mukai, Y.; Shin, J.K.; Ochiai, S.; Okuda, H.; Osamura, K.; Otto, A.; Malozemoff, A.
2008-01-01
Using a high-critical-current BSCCO composite tape fabricated at American Superconductor Corporation, the relation of the overall critical current to the distribution of local critical currents, and the dependence of the overall critical current on sample length, were studied experimentally and analytically for bent samples. The measured overall critical current was described well by the distribution of local critical current and n-value of the constituent short elements, by regarding the overall sample as composed of local series circuits and applying the voltage summation model. The dependence of overall critical current on sample length could also be reproduced satisfactorily in computer simulation by the proposed method.
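The voltage summation model mentioned above can be sketched as follows: treat the tape as a series circuit of short elements, each obeying the power law E = Ec·(I/Ic_i)^(n_i), and take the overall critical current to be the current at which the summed voltage reaches the Ec criterion over the full length. All parameter values below are assumptions for illustration, not the paper's measurements:

```python
import numpy as np
from scipy.optimize import brentq

Ec = 1e-4      # electric-field criterion, V/m (assumed)
Lel = 0.01     # element length, m (assumed)
rng = np.random.default_rng(0)
Ic_local = rng.normal(100.0, 5.0, 50)    # assumed local critical currents, A
n_local = np.full(50, 20.0)              # assumed local n-values

def total_voltage(I):
    # series circuit: sum of element voltages under E = Ec*(I/Ic_i)^n_i
    return np.sum(Ec * (I / Ic_local) ** n_local * Lel)

def overall_Ic():
    # overall Ic: current where the average field over the sample reaches Ec
    target = Ec * Lel * Ic_local.size
    return brentq(lambda I: total_voltage(I) - target, 1.0, 1.2 * Ic_local.max())

Ic_overall = overall_Ic()
```

Because the power law is so steep (n ~ 20), the weaker elements dominate the summed voltage, which is why the overall critical current falls between the weakest local value and the mean, and drifts downward as sample length (and hence the chance of including a weak element) grows.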
Bahamondes, Luis; Brache, Vivian; Ali, Moazzam; Habib, Ndema
2018-05-16
To evaluate weight changes in women randomized to either the etonogestrel (ENG)- or the levonorgestrel (LNG)-releasing contraceptive implant, and to compare them with users of the TCu380A intrauterine device (IUD). A multi-center randomized trial with a 1:1 allocation ratio of the ENG- and the LNG-implants, with a non-randomized, age-matched control group of women choosing the TCu380A IUD. The primary objective was to assess contraceptive efficacy and method continuation rates, and secondarily the incidence of common complaints and side effects (including weight changes) associated with use of the three contraceptives. All women were enrolled in nine centers in seven countries. Weight change was evaluated from the time of device placement. Confounders were socio-demographics, baseline weight and body mass index, center, and time from insertion. We used linear mixed effects regression modeling with random intercept and slope. Weight was compared between the two implant groups, and between the implants and the IUD group, through a linear mixed multivariable regression model. A total of 995, 997 and 971 users in the ENG-, LNG-implant and IUD groups, respectively, were included. At 36 months of use, ENG- and LNG-implant users had similar significant mean weight increases of 3.0 kg (95% CI 2.5-3.5) and 2.9 kg (95% CI 2.4-3.4), respectively (p than 50 kg. These findings may help clinicians counsel implant users, which could improve method continuation. Copyright © 2018. Published by Elsevier Inc.
CSIR Research Space (South Africa)
Raju, M
2011-11-01
[Figure/table residue: variation in δ13C (axis values 16.0-20.5) among Eucalyptus species E. camaldulensis, E. urophylla, E. grandis, E. pellita and E. globulus; species effect significant at P<0...]
Directory of Open Access Journals (Sweden)
Vicky Stergiopoulos
Housing First (HF) is being widely disseminated in efforts to end homelessness among homeless adults with psychiatric disabilities. This study evaluates the effectiveness of HF with Intensive Case Management (ICM) among ethnically diverse homeless adults in an urban setting. 378 participants were randomized to HF with ICM or treatment-as-usual (TAU) in Toronto (Canada) and followed for 24 months. Measures of effectiveness included housing stability, physical (EQ5D-VAS) and mental (CSI, GAIN-SS) health, social functioning (MCAS), quality of life (QoLI20), and health service use. Two-thirds of the sample (63%) was from racialized groups and half (50%) were born outside Canada. Over the 24 months of follow-up, HF participants spent a significantly greater percentage of time in stable residences compared to TAU participants (75.1%, 95% CI 70.5 to 79.7, vs. 39.3%, 95% CI 34.3 to 44.2, respectively). Similarly, community functioning (MCAS) improved significantly from baseline in HF compared to TAU participants (change in mean difference = +1.67, 95% CI 0.04 to 3.30). There was a significant reduction in the number of days spent experiencing alcohol problems among HF compared to TAU participants at 24 months (ratio of rate ratios = 0.47, 95% CI 0.22 to 0.99) relative to baseline, a reduction of 53%. Although the number of emergency department visits and days in hospital over 24 months did not differ significantly between HF and TAU participants, fewer HF participants than TAU participants had 1 or more hospitalizations during this period (70.4% vs. 81.1%, respectively; P=0.044). Compared to non-racialized HF participants, racialized HF participants saw an increase in the amount of money spent on alcohol (change in mean difference = $112.90, 95% CI 5.84 to 219.96) and a reduction in physical community integration (ratio of rate ratios = 0.67, 95% CI 0.47 to 0.96) from baseline to 24 months. Secondary analyses found a significant reduction in the number of days
Chaudhuri, Arijit
2014-01-01
Contents: Exposure to Sampling: concepts of population, sample, and sampling. Initial Ramifications: sampling design and sampling scheme; random numbers and their uses in simple random sampling (SRS); drawing simple random samples with and without replacement; estimation of mean, total, and ratio of totals/means; variance and variance estimation; determination of sample sizes. Appendix to Chapter 2: more on equal probability sampling; Horvitz-Thompson estimator; sufficiency; likelihood; non-existence theorem. More Intricacies: unequal probability sampling strategies; PPS sampling. Exploring Improved Ways: stratified sampling; cluster sampling; multi-stage sampling; multi-phase sampling; ratio and regression estimation; controlled sampling. Modeling: super-population modeling; prediction approach; model-assisted approach; Bayesian methods; spatial smoothing; sampling on successive occasions; panel rotation; non-response and not-at-homes; weighting adj...
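One of the central estimators in the contents above, the Horvitz-Thompson estimator, can be sketched briefly: the population total is estimated by the sample sum of y_i/π_i, where π_i is unit i's inclusion probability (π_i = n/N under simple random sampling without replacement). The data below are made up for illustration:

```python
import numpy as np

def horvitz_thompson_total(y_sample, pi):
    # HT estimator of the population total: sum of y_i / pi_i over the sample
    return float(np.sum(np.asarray(y_sample) / np.asarray(pi)))

# made-up population of N values; SRSWOR sample of size n, so pi_i = n/N
rng = np.random.default_rng(7)
N, n = 1000, 100
population = rng.gamma(2.0, 10.0, N)
sample = rng.choice(population, size=n, replace=False)
estimate = horvitz_thompson_total(sample, np.full(n, n / N))
```

Under SRSWOR this reduces to N times the sample mean, but the same formula remains design-unbiased under unequal-probability (e.g. PPS) designs, which is why the estimator anchors the book's treatment of both.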
Directory of Open Access Journals (Sweden)
Anne H Berman
Full Text Available The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11-16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. A random population sample consisting of 600 children aged 11-16, 100 per age group, and one of their parents (N = 1200) were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being using the PABAK, and total, continuous scores were evaluated using Bland-Altman plots. Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences occurred and parent ratings
International Nuclear Information System (INIS)
Sulaiti, H.A.; Rega, P.H.; Bradley, D.; Dahan, N.A.; Mugren, K.A.; Dosari, M.A.
2014-01-01
Correlation between grain size and activity concentrations of soils and concentrations of various radionuclides in surface and subsurface soils has been measured for samples taken in the State of Qatar by gamma-spectroscopy using a high purity germanium detector. From the obtained gamma-ray spectra, the activity concentrations of the 238U (226Ra) and 232Th (228Ac) natural decay series, the long-lived naturally occurring radionuclide 40K and the fission product radionuclide 137Cs have been determined. Gamma dose rate, radium equivalent, radiation hazard index and annual effective dose rates have also been estimated from these data. In order to observe the effect of grain size on the radioactivity of soil, three grain sizes were used, i.e., smaller than 0.5 mm; smaller than 1 mm and greater than 0.5 mm; and smaller than 2 mm and greater than 1 mm. The weighted activity concentrations of the 238U series nuclides in the 0.5-2 mm grain size were found to vary from 2.5±0.2 to 28.5±0.5 Bq/kg, whereas the weighted activity concentration of 40K varied from 21±4 to 188±10 Bq/kg. The weighted activity concentrations of the 238U series and 40K have been found to be higher in the finest grain size. However, for the 232Th series, the activity concentrations in the 1-2 mm grain size of one sample were found to be higher than in the 0.5-1 mm grain size. In the study of surface and subsurface soil samples, the activity concentration levels of the 238U series have been found to range from 15.9±0.3 to 24.1±0.9 Bq/kg in the surface soil samples (0-5 cm) and 14.5±0.3 to 23.6±0.5 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of the 232Th series have been found to lie in the range 5.7±0.2 to 13.7±0.5 Bq/kg in the surface soil samples (0-5 cm) and 4.1±0.2 to 15.6±0.3 Bq/kg in the subsurface soil samples (5-25 cm). The activity concentrations of 40K were in the range 150±8 to 290±17 Bq/kg, in the surface
Hofmann, D; Gehre, M; Jung, K
2003-09-01
In order to identify natural nitrogen isotope variations of biologically important amino acids, four derivatization reactions (t-butylmethylsilylation, esterification with subsequent trifluoroacetylation, acetylation and pivaloylation) were tested with standard mixtures of 17 proteinogenic amino acids and plant (moss) samples using GC-C-IRMS. The possible fractionation of the nitrogen isotopes, caused for instance by the formation of multiple reaction products, was investigated. For biological samples, the esterification of the amino acids with subsequent trifluoroacetylation is recommended for nitrogen isotope ratio analysis. A sample preparation technique is described for the isotope ratio mass spectrometric analysis of amino acids from the non-protein (NPN) fraction of terrestrial moss. 14N/15N ratios from moss (Scleropodium spec.) samples from different anthropogenically polluted areas were studied with respect to ecotoxicological bioindication.
Variation in Incentive Effects across Neighbourhoods
Directory of Open Access Journals (Sweden)
Mark J Hanly
2014-03-01
Full Text Available Small monetary incentives increase survey cooperation rates; however, evidence suggests that the appeal of incentives may vary across sample subgroups. Fieldwork budgets can be most effectively distributed by targeting those subgroups where incentives will have the strongest appeal. We examine data from a randomised experiment implemented in the pilot phase of the Irish Longitudinal Study of Ageing, which randomly assigned households to receive a higher (€25) or lower (€10) incentive amount. Using a random effects logistic regression model, we observe a variable effect of the higher incentive across geographic neighbourhoods. The higher incentive has the largest impact in neighbourhoods where baseline cooperation is low, as predicted by Leverage-Saliency theory. Auxiliary neighbourhood-level variables are linked to the sample frame to explore this variation further; however, none of these moderate the incentive effect, suggesting that richer information is needed to identify sample subgroups where incentive budgets should be directed.
Chin, Lijin; Chung, Arthur Y C; Clarke, Charles
2014-01-01
Pitcher plants of the genus Nepenthes capture a wide range of arthropod prey for nutritional benefit, using complex combinations of visual and olfactory signals and gravity-driven pitfall trapping mechanisms. In many localities throughout Southeast Asia, several different Nepenthes species occur in mixed populations. Often, the species present at any given location have strongly divergent trap structures, and preliminary surveys indicate that different species trap different combinations of arthropod prey, even when growing at the same locality. On this basis, it has been proposed that co-existing Nepenthes species may be engaged in niche segregation with regards to arthropod prey, avoiding direct competition with congeners by deploying traps that have modifications that enable them to target specific prey types. We examined prey capture among 3 multi-species Nepenthes populations in Borneo, finding that co-existing Nepenthes species do capture different combinations of prey, but that significant interspecific variations in arthropod prey combinations can often be detected only at sub-ordinal taxonomic ranks. In all lowland Nepenthes species examined, the dominant prey taxon is Formicidae, but montane Nepenthes trap few (or no) ants and 2 of the 3 species studied have evolved to target alternative sources of nutrition, such as tree shrew feces. Using similarity and null model analyses, we detected evidence for niche segregation with regards to formicid prey among 5 lowland, sympatric Nepenthes species in Sarawak. However, we were unable to determine whether these results provide support for the niche segregation hypothesis, or whether they simply reflect unquantified variation in heterogeneous habitats and/or ant communities in the study sites. These findings are used to propose improvements to the design of field experiments that seek to test hypotheses about targeted prey capture patterns in Nepenthes.
Directory of Open Access Journals (Sweden)
Nuria eRuffini
2015-08-01
Full Text Available Context: Heart Rate Variability (HRV) indicates how heart rate changes in response to inner and external stimuli. HRV is linked to health status and is an indirect marker of autonomic nervous system (ANS) function. Objective: To investigate the influence of osteopathic manipulative treatment (OMT) on ANS activity through changes in High Frequency, a heart rate variability index indicating parasympathetic activity, in healthy subjects, compared with sham therapy and a control group. Methods: Sixty-six healthy subjects, both male and female, were included in the present 3-armed randomized placebo-controlled within-subject cross-over single-blinded study. Participants were asymptomatic adults, both smokers and non-smokers, and not on medications. At enrollment subjects were randomized into 3 groups: A, B, C. A standardized structural evaluation followed by a patient need-based osteopathic treatment was performed in the first session of group A and in the second session of group B. A standardized evaluation followed by a protocoled sham treatment was provided in the second session of group A and in the first session of group B. No intervention was performed in the two sessions of group C, which acted as a time control. The trial was registered on clinicaltrials.gov, identifier: NCT01908920. Main Outcome Measures: HRV was calculated from electrocardiography before, during and after the intervention, for a total time of 25 minutes. Results: OMT engendered a statistically significant increase of parasympathetic activity, as shown by the High Frequency rate (p<0.001), and a decrease of sympathetic activity, as revealed by the Low Frequency rate (p<0.01); results also showed a reduction of the Low Frequency/High Frequency ratio (p<0.001) and of the detrended fluctuation scaling exponent (p<0.05). Conclusions: Findings suggested that OMT can influence ANS activity, increasing parasympathetic function and decreasing sympathetic activity, compared to sham therapy and control group.
Digital Repository Service at National Institute of Oceanography (India)
Rao, P.P.S.
. The extracts from the June to October months showed antibacterial activity, while the samples from November to January and May did not show any activity against the test bacterium Proteus vulgaris, a gram-negative organism which has shown high sensitivity towards...
Bendell, Leah I; Feng, Cindy
2009-08-01
Oysters from the north-west coast of Canada contain high levels of cadmium, a toxic metal, in amounts that exceed food safety guidelines for international markets. A first required step to determine the sources of cadmium is to identify possible spatial and temporal trends in the accumulation of cadmium by the oyster. To meet this objective, rather than sample wild and cultured oysters of unknown age and origin, an oyster "grow-out" experiment was initiated. Cultured oyster seed was suspended in the water column up to a depth of 7 m and the oyster seed allowed to mature for a period of 3 years until market size. Oysters were sampled bimonthly and at time of sampling, temperature, chlorophyll-a, turbidity and salinity were measured. Oyster total shell length, dry tissue weights, cadmium concentrations (μg g⁻¹) and burdens (μg of cadmium oyster⁻¹) were determined. Oyster cadmium concentrations and burdens were then interpreted with respect to the spatial and temporal sampling design as well as to the measured physio-chemical and biotic variables. When expressed as a concentration, there was a marked seasonality, with concentrations being greater in winter as compared to summer; however, no spatial trend was evident. When expressed as a burden, which corrects for differences in tissue mass, there was no seasonality; however, cadmium oyster burdens increased from south to north. Comparison of cadmium accumulation rates oyster⁻¹ among sites indicated three locations, Webster Island, on the west side of Vancouver Island, and two within Desolation Sound, Teakerne Arm and Redonda Bay, where point sources of cadmium which are not present at all other sampling locations may be contributing to overall oyster cadmium burdens. Of the four physio-chemical factors measured, only temperature and turbidity weakly correlated with tissue cadmium concentrations (r² = -0.13; p < 0.05). By expressing oyster cadmium both as concentration and burden, regional and temporal patterns were
Energy Technology Data Exchange (ETDEWEB)
Polivka, Karl; Bennett, Rita L. [USDA Forest Service, Pacific Northwest Research Station, Wenatchee, WA
2009-03-31
We studied variation in productivity in headwater reaches of the Wenatchee subbasin over multiple field seasons, with the objective of developing methods for monitoring headwater stream conditions at the subcatchment and stream levels, assigning a landscape-scale context via the effects of geoclimatic parameters on biological productivity (macroinvertebrates and fish), and using this information to identify how variability in productivity measured in fishless headwaters is transmitted to fish communities in downstream habitats. In 2008, we addressed this final objective. In collaboration with the University of Alaska Fairbanks we found some broad differences in the production of aquatic macroinvertebrates and in fish abundance across categories that combine the effects of climate and management intensity within the subbasin (ecoregions). From a monitoring standpoint, production of benthic macroinvertebrates was not a good predictor of drifting macroinvertebrates and therefore might be a poor predictor of food resources available to fish. Indeed, there is occasionally a correlation between drifting macroinvertebrate abundance and fish abundance, which suggests that headwater-derived resources are important. However, fish in the headwaters appeared to be strongly food-limited and there was no evidence that fishless headwaters provided a consistent subsidy to fish in reaches downstream. Fish abundance and population dynamics in first order headwaters may be linked with similar metrics further down the watershed. The relative strength of local dynamics and inputs into productivity may be constrained or augmented by large-scale biogeoclimatic control. Headwater streams are nested within watersheds, which are in turn nested within ecological subregions; thus, we hypothesized that local effects would not necessarily be mutually exclusive from large-scale influence. To test this we examined the density of primarily salmonid fishes at several spatial and temporal scales
International Nuclear Information System (INIS)
Bendell, Leah I.; Feng, Cindy
2009-01-01
Oysters from the north-west coast of Canada contain high levels of cadmium, a toxic metal, in amounts that exceed food safety guidelines for international markets. A first required step to determine the sources of cadmium is to identify possible spatial and temporal trends in the accumulation of cadmium by the oyster. To meet this objective, rather than sample wild and cultured oysters of unknown age and origin, an oyster 'grow-out' experiment was initiated. Cultured oyster seed was suspended in the water column up to a depth of 7 m and the oyster seed allowed to mature for a period of 3 years until market size. Oysters were sampled bimonthly and at time of sampling, temperature, chlorophyll-a, turbidity and salinity were measured. Oyster total shell length, dry tissue weights, cadmium concentrations (μg g⁻¹) and burdens (μg of cadmium oyster⁻¹) were determined. Oyster cadmium concentrations and burdens were then interpreted with respect to the spatial and temporal sampling design as well as to the measured physio-chemical and biotic variables. When expressed as a concentration, there was a marked seasonality, with concentrations being greater in winter as compared to summer; however, no spatial trend was evident. When expressed as a burden, which corrects for differences in tissue mass, there was no seasonality; however, cadmium oyster burdens increased from south to north. Comparison of cadmium accumulation rates oyster⁻¹ among sites indicated three locations, Webster Island, on the west side of Vancouver Island, and two within Desolation Sound, Teakerne Arm and Redonda Bay, where point sources of cadmium which are not present at all other sampling locations may be contributing to overall oyster cadmium burdens. Of the four physio-chemical factors measured, only temperature and turbidity weakly correlated with tissue cadmium concentrations (r² = -0.13; p < 0.05). By expressing oyster cadmium both as concentration and burden, regional and temporal patterns were
Khan, Mustafa; Kazi, Tasneem Gul; Afridi, Hassan Imran; Sirajuddin; Bilal, Muhammad; Akhtar, Asma; Khan, Sabir; Kadar, Salma
2017-08-01
Epidemiological data among the human population have shown a significantly increased incidence of gallstone (GS) disease worldwide. It has been reported that some essential (calcium) and transition elements (iron and copper) in bile play an important role in the development of GS. The estimation of calcium, copper and iron was carried out in the serum, gall bladder bile and different types of GS (cholesterol, mixed and pigmented) of 172 patients, aged 20-55 years. For comparative purposes, age-matched referents not suffering from GS disease were also selected. Biliary concentrations of calcium (Ca), iron (Fe) and copper (Cu) were correlated with their concentrations in serum and different types of GS samples. The ratio of Ca, Fe and Cu in bile to serum was also calculated. The metals under study were determined by flame atomic absorption spectroscopy after acid decomposition of the matrices of selected samples. The Ca concentrations in serum samples were significantly higher in patients with pigmented GS as compared to controls (p < 0.001). The contents of Cu and Fe in serum and bile of all patients (except female cholesterol GS patients, who had low serum iron concentrations) were found to be higher than controls, but the difference was significant in those patients who had pigmented GS. The concentrations of Ca, Fe and Cu in different types of GS were found in the order pigmented > mixed > cholesterol. The bile/serum ratio for Ca, Cu and Fe was found to be significantly higher in pigmented GS patients. Gall bladder bile was slightly alkaline in patients as compared to referents. The density of bile was found to be higher in patients as compared to the referents. Various functional groups present in different types of GS samples were confirmed by Fourier transform infra-red spectroscopy. The higher density and pH of bile, and elevated concentrations of transition elements in all types of biological samples (serum, bile and GS), could be an important factor for the formation of different types of
Directory of Open Access Journals (Sweden)
M. Dall'Osto
2013-04-01
Full Text Available Hourly-resolved aerosol chemical speciation data can be a highly powerful tool to determine the source origin of atmospheric pollutants in urban environments. Aerosol mass concentrations of seventeen elements (Na, Mg, Al, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Ni, Cu, Zn, Sr and Pb) were obtained by time- (1 h) and size- (PM2.5, the particulate matter 2.5 mass fraction) resolved measurements made simultaneously at the UB and RS sites: (1) the regional aerosol sources impact both monitoring sites at similar concentrations regardless of their different ventilation conditions; (2) by contrast, local industrial aerosol plumes associated with shipping oil combustion and smelter activities have a higher impact on the more ventilated UB site; (3) a unique source of Pb-Cl (associated with combustion emissions) is found to be the major (82%) source of fine Cl in the urban agglomerate; (4) the mean diurnal variation of PM2.5 primary traffic non-exhaust brake dust (Fe-Cu) suggests that this source is mainly emitted and not resuspended, whereas PM2.5 urban dust (Ca) is found to be mainly resuspended by both the traffic vortex and the sea breeze; (5) urban dust (Ca) is found to be the aerosol source most affected by land wetness, being reduced by a factor of eight during rainy days and suggesting that wet roads may be a solution for reducing urban dust concentrations.
Austin, Peter C; Manca, Andrea; Zwarenstein, Merrick; Juurlink, David N; Stanbrook, Matthew B
2010-02-01
Statisticians have criticized the use of significance testing to compare the distribution of baseline covariates between treatment groups in randomized controlled trials (RCTs). Furthermore, some have advocated for the use of regression adjustment to estimate the effect of treatment after adjusting for potential imbalances in prognostically important baseline covariates between treatment groups. We examined 114 RCTs published in the New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, and the British Medical Journal between January 1, 2007 and June 30, 2007. Significance testing was used to compare baseline characteristics between treatment arms in 38% of the studies. The practice was very rare in British journals and more common in the U.S. journals. In 29% of the studies, the primary outcome was continuous, whereas in 65% of the studies, the primary outcome was either dichotomous or time-to-event in nature. Adjustment for baseline covariates was reported when estimating the treatment effect in 34% of the studies. Our findings suggest the need for greater editorial consistency across journals in the reporting of RCTs. Furthermore, there is a need for greater debate about the relative merits of unadjusted vs. adjusted estimates of treatment effect. Copyright 2010 Elsevier Inc. All rights reserved.
Ansari, Imran Shafique
2012-09-08
The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed gamma variates, applicable to maximal ratio combining receiver outputs, or in other words to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels, is presented in closed form in terms of the Meijer G-function and Fox H-bar function for integer-valued and non-integer-valued fading parameters, respectively. Further analysis, particularly of the bit error rate via a PDF-based approach, is also presented in closed form in terms of the Meijer G-function and Fox H-bar function for integer-order fading parameters, and the extended Fox H-bar function (H-hat) for non-integer-order fading parameters. The proposed results complement previous results that are either derived in closed form or expressed in terms of infinite sums or higher-order derivatives of the fading parameter m.
Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim; Kucur, Oguz
2012-01-01
The probability density function (PDF) and cumulative distribution function of the sum of L independent but not necessarily identically distributed gamma variates, applicable to maximal ratio combining receiver outputs, or in other words to the performance analysis of diversity combining receivers operating over Nakagami-m fading channels, is presented in closed form in terms of the Meijer G-function and Fox H-bar function for integer-valued and non-integer-valued fading parameters, respectively. Further analysis, particularly of the bit error rate via a PDF-based approach, is also presented in closed form in terms of the Meijer G-function and Fox H-bar function for integer-order fading parameters, and the extended Fox H-bar function (H-hat) for non-integer-order fading parameters. The proposed results complement previous results that are either derived in closed form or expressed in terms of infinite sums or higher-order derivatives of the fading parameter m.
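A quick numerical sanity check of one special case is possible without special functions: when the L gamma variates share a common scale, their sum is again gamma distributed with the shapes added, so Monte Carlo draws can be checked against closed-form moments. The sketch below uses illustrative branch shapes, scale, and sample size (assumptions, not values from the paper); the general distinct-scale case treated in the paper is what requires the Meijer G / Fox H-bar machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shapes (fading parameters m_i) and a common scale for L = 3 branches.
# With a COMMON scale, the sum of independent gamma variates is again
# gamma distributed with shape sum(shapes).
shapes = np.array([1.5, 2.0, 3.5])
scale = 0.8
n = 200_000

# Draw each branch and sum across branches (column-wise).
samples = rng.gamma(shape=shapes, scale=scale, size=(n, 3)).sum(axis=1)

# Moment check against the closed-form Gamma(sum(shapes), scale) result.
expected_mean = shapes.sum() * scale          # 7.0 * 0.8 = 5.6
expected_var = shapes.sum() * scale ** 2      # 7.0 * 0.64 = 4.48
print(samples.mean(), samples.var())
```

With 200,000 draws the empirical mean and variance land very close to the closed-form values, which is a useful cross-check before trusting a special-function implementation of the general case.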
International Nuclear Information System (INIS)
Bengtson, P.; Larsson, C.M.; Simenstad, P.; Suomela, J.
1995-09-01
Marine samples from the vicinity of the plants show elevated radionuclide concentrations, caused by discharges from the plants. Very low concentrations are noted in terrestrial samples. At several locations, the effects of the Chernobyl disaster still dominate. Control samples measured by SSI have confirmed the measurements performed by the operators. 8 refs, 6 tabs, 46 figs
Estimation of population mean under systematic sampling
Noor-ul-amin, Muhammad; Javaid, Amjad
2017-11-01
In this study we propose a generalized ratio estimator under non-response for systematic random sampling. We also generate a class of estimators through special cases of generalized estimator using different combinations of coefficients of correlation, kurtosis and variation. The mean square errors and mathematical conditions are also derived to prove the efficiency of proposed estimators. Numerical illustration is included using three populations to support the results.
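The classical ratio estimator that such generalized estimators extend can be sketched directly. Below is a minimal simulation of linear systematic sampling with a ratio estimate of the population mean; the population, sample size, and the linear y-x relationship are hypothetical choices for illustration (the paper's generalized non-response estimator is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical population: auxiliary variable x known for all N units,
# study variable y observed only on the sampled units.
N, n = 1000, 50
x = rng.uniform(10, 50, size=N)
y = 2.0 * x + rng.normal(0, 5, size=N)   # y roughly proportional to x

# Linear systematic sample: random start in [0, k), then every k-th unit.
k = N // n
start = rng.integers(k)
idx = np.arange(start, N, k)[:n]

# Classical ratio estimator of the population mean of y:
#   ybar_R = (ybar_s / xbar_s) * Xbar,  with Xbar known for the population.
ratio = y[idx].mean() / x[idx].mean()
ybar_ratio = ratio * x.mean()
print(ybar_ratio, y.mean())
```

Because y is nearly proportional to x, the ratio estimator exploits the known population mean of x and typically tracks the true mean of y more tightly than the plain sample mean would.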
Semaan, T.; Hubert, A. M.; Zorec, J.; Gutiérrez-Soto, J.; Frémat, Y.; Martayan, C.; Fabregat, J.; Eggenberger, P.
2018-06-01
Context. The class of Be stars is the epitome of rapid rotators on the main sequence. These stars are privileged candidates for studying the incidence of rotation on the stellar internal structure and on non-radial pulsations. Pulsations are considered possible mechanisms to trigger the mass-ejection phenomena required to build up the circumstellar disks of Be stars. Aims: Time series analyses of the light curves of 15 faint Be stars observed with the CoRoT satellite were performed to obtain the distribution of non-radial pulsation (NRP) frequencies in their power spectra at epochs with and without light outbursts, and to discriminate pulsations from rotation-related photometric variations. Methods: Standard Fourier techniques were employed to analyze the CoRoT light curves. Fundamental parameters corrected for rapid-rotation effects were used to study the power spectrum as a function of the stellar location in the instability domains of the Hertzsprung-Russell (H-R) diagram. Results: Frequencies are concentrated in separate groups, as predicted for g-modes in rapid B-type rotators, except for the two stars that are outside the H-R instability domain. In five objects the variations in the power spectrum are correlated with the time-dependent outburst characteristics. Time-frequency analysis showed that during the outbursts the amplitudes of stable main frequencies within 0.03 c d⁻¹ intervals strongly change, while transients and/or frequencies of low amplitude appear, separated or not from the stellar frequencies. The frequency patterns and activities depend on evolution phases: (i) the average separations between groups of frequencies are larger at the zero-age main sequence (ZAMS) than at the terminal-age main sequence (TAMS) and are the largest in the middle of the MS phase; (ii) a poor frequency spectrum with f ≲ 1 c d⁻¹ of low amplitude characterizes the stars beyond the TAMS; and (iii) outbursts are seen in stars hotter than B4 spectral type and in the
Quantum chemistry by random walk: Higher accuracy
International Nuclear Information System (INIS)
Anderson, J.B.
1980-01-01
The random walk method of solving the Schroedinger equation is extended to allow the calculation of eigenvalues of atomic and molecular systems with higher accuracy. The combination of direct calculation of the difference δ between a true wave function ψ and a trial wave function ψ₀ with importance sampling greatly reduces systematic and statistical error. The method is illustrated with calculations for ground-state hydrogen and helium atoms using trial wave functions from variational calculations. The energies obtained are 20 to 100 times more accurate than those of the corresponding variational calculations
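The importance-sampling idea behind such quantum Monte Carlo schemes can be illustrated with a much simpler variational Monte Carlo sketch (not the paper's Green's-function random-walk algorithm): for a hydrogen-atom trial wave function ψ_α = e^(-αr) in atomic units, the radial density |ψ|²r² is a gamma density, so configurations can be sampled exactly and the local energy averaged. At α = 1 the trial function is exact, the local energy is constant, and the estimator's variance vanishes, which is the point the abstract makes about error reduction.

```python
import numpy as np

rng = np.random.default_rng(2)

def vmc_energy(alpha, n=400_000):
    # For psi = exp(-alpha*r), |psi|^2 * r^2 dr is a Gamma(3, 1/(2*alpha))
    # density, so we can importance-sample r directly.
    r = rng.gamma(shape=3.0, scale=1.0 / (2.0 * alpha), size=n)
    # Local energy E_L = -(1/2) (laplacian psi)/psi - 1/r
    #              = -alpha**2/2 + (alpha - 1)/r     (atomic units)
    e_local = -alpha**2 / 2.0 + (alpha - 1.0) / r
    return e_local.mean()

# At alpha = 1 the trial function is exact: E_L = -1/2 for every walker,
# so the estimator has zero variance.
print(vmc_energy(1.0))   # -0.5 exactly
print(vmc_energy(0.8))   # above -0.5, as the variational principle requires
```

The analytic expectation here is E(α) = α²/2 - α, minimized at α = 1 with E = -0.5 hartree, so the Monte Carlo estimates can be checked term by term against the closed form.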
Ziegler, Tom; Krykunov, Mykhaylo; Autschbach, Jochen
2014-09-09
The random phase approximation (RPA) equation of adiabatic time-dependent density functional ground state response theory (ATDDFT) has been used extensively in studies of excited states. It extracts information about excited states from frequency-dependent ground state response properties and thus elegantly avoids direct Kohn-Sham calculations on excited states, in accordance with the status of DFT as a ground state theory. Excitation energies can thus be found as resonance poles of the frequency-dependent ground state polarizability, from the eigenvalues of the RPA equation. ATDDFT is approximate in that it makes use of a frequency-independent energy kernel derived from the ground state functional. It is shown in this study that one can derive the RPA equation of ATDDFT from a purely variational approach in which stationary states above the ground state are located using our constricted variational DFT (CV-DFT) method and the ground state functional. Thus, locating stationary states above the ground state due to one-electron excitations with a ground state functional is completely equivalent to solving the RPA equation of TDDFT employing the same functional. The present study is an extension of a previous work in which we demonstrated the equivalence between ATDDFT and CV-DFT within the Tamm-Dancoff approximation.
Willan, Andrew R; Eckermann, Simon
2012-10-01
Previous applications of value of information methods for determining optimal sample size in randomized clinical trials have assumed no between-study variation in mean incremental net benefit. By adopting a hierarchical model, we provide a solution for determining optimal sample size with this assumption relaxed. The solution is illustrated with two examples from the literature. Expected net gain increases with increasing between-study variation, reflecting the increased uncertainty in incremental net benefit and reduced extent to which data are borrowed from previous evidence. Hence, a trial can become optimal where current evidence is sufficient assuming no between-study variation. However, despite the expected net gain increasing, the optimal sample size in the illustrated examples is relatively insensitive to the amount of between-study variation. Further percentage losses in expected net gain were small even when choosing sample sizes that reflected widely different between-study variation. Copyright © 2011 John Wiley & Sons, Ltd.
Modeling stimulus variation in three common implicit attitude tasks.
Wolsiefer, Katie; Westfall, Jacob; Judd, Charles M
2017-08-01
We explored the consequences of ignoring the sampling variation due to stimuli in the domain of implicit attitudes. A large literature in psycholinguistics has examined the statistical treatment of random stimulus materials, but the recommendations from this literature have not been applied to the social psychological literature on implicit attitudes. This is partly because of inherent complications in applying crossed random-effect models to some of the most common implicit attitude tasks, and partly because no work to date has demonstrated that random stimulus variation is in fact consequential in implicit attitude measurement. We addressed this problem by laying out statistically appropriate and practically feasible crossed random-effect models for three of the most commonly used implicit attitude measures-the Implicit Association Test, affect misattribution procedure, and evaluative priming task-and then applying these models to large datasets (average N = 3,206) that assess participants' implicit attitudes toward race, politics, and self-esteem. We showed that the test statistics from the traditional analyses are substantially (about 60 %) inflated relative to the more-appropriate analyses that incorporate stimulus variation. Because all three tasks used the same stimulus words and faces, we could also meaningfully compare the relative contributions of stimulus variation across the tasks. In an appendix, we give syntax in R, SAS, and SPSS for fitting the recommended crossed random-effects models to data from all three tasks, as well as instructions on how to structure the data file.
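The inflation mechanism can be demonstrated with a small hypothetical simulation (not the authors' datasets or models): when the "condition" is carried by the stimuli and the stimuli have their own random effects, a by-participant analysis that ignores stimulus variation rejects a true null far more often than the nominal 5%, whereas treating stimuli as the random units restores the nominal rate. All effect sizes and sample sizes below are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Simulated null: the condition effect is ZERO, but each stimulus carries
# its own random effect. Analyses that ignore stimulus variation treat
# that stimulus noise as replicable signal and inflate test statistics.
n_subj, n_stim, n_rep = 40, 20, 500
reject_naive = reject_by_stim = 0
for _ in range(n_rep):
    stim_effect = rng.normal(0, 0.5, n_stim)   # random stimulus effects
    subj_effect = rng.normal(0, 0.5, n_subj)   # random participant effects
    noise = rng.normal(0, 1.0, (n_subj, n_stim))
    scores = subj_effect[:, None] + stim_effect[None, :] + noise
    cond = np.arange(n_stim) < n_stim // 2     # stimulus-level "condition"
    # Naive: average over stimuli within subject, paired t-test over subjects.
    d_subj = scores[:, cond].mean(1) - scores[:, ~cond].mean(1)
    if stats.ttest_1samp(d_subj, 0).pvalue < 0.05:
        reject_naive += 1
    # Treating stimuli as the random units (two-sample t over stimulus means).
    m_stim = scores.mean(0)
    if stats.ttest_ind(m_stim[cond], m_stim[~cond]).pvalue < 0.05:
        reject_by_stim += 1
print(reject_naive / n_rep, reject_by_stim / n_rep)
```

A full crossed random-effects model (as recommended in the paper) generalizes over participants and stimuli simultaneously; this sketch only shows why ignoring the stimulus side is anticonservative.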
Dall'Osto, M.; Querol, X.; Amato, F.; Karanasiou, A.; Lucarelli, F.; Nava, S.; Calzolai, G.; Chiari, M.
2013-04-01
combustion emissions) is found to be the major (82%) source of fine Cl in the urban agglomerate; (4) the mean diurnal variation of PM2.5 primary traffic non-exhaust brake dust (Fe-Cu) suggests that this source is mainly directly emitted rather than resuspended, whereas PM2.5 urban dust (Ca) is found to be mainly resuspended by both the traffic vortex and the sea breeze; (5) urban dust (Ca) is found to be the aerosol source most affected by land wetness, being reduced by a factor of eight during rainy days, suggesting that wet roads may be a solution for reducing urban dust concentrations.
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have traditionally been viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view attributes to these models an important mathematical role in the theoretical formulation of personalized medicine, because these models not only have parameters that represent average patients, but also parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
Modeling response variation for radiometric calorimeters
International Nuclear Information System (INIS)
Mayer, R.L. II.
1986-01-01
Radiometric calorimeters are widely used in the DOE complex for accountability measurements of plutonium and tritium. Proper characterization of response variation for these instruments is, therefore, vital for accurate assessment of measurement control as well as for propagation of error calculations. This is not difficult for instruments used to measure items within a narrow range of power values; however, when a single instrument is used to measure items over a wide range of power values, improper estimates of uncertainty can result since traditional error models for radiometric calorimeters assume that uncertainty is not a function of sample power. This paper describes methods which can be used to accurately estimate random response variation for calorimeters used to measure items over a wide range of sample powers. The model is applicable to the two most common modes of calorimeter operation: heater replacement and servo control. 5 refs., 4 figs., 1 tab
Brito, Barbara P; Gardner, Ian A; Hietala, Sharon K; Crossley, Beate M
2011-07-01
Bluetongue is a vector-borne viral disease that affects domestic and wild ruminants. The epidemiology of this disease has recently changed, with occurrence in new geographic areas. Various real-time quantitative reverse transcription polymerase chain reaction (real-time qRT-PCR) assays are used to detect Bluetongue virus (BTV); however, these assays were initially validated for domestic ruminant samples, and the impact of biologic differences between New World camelid and domestic ruminant samples on PCR efficiency is unknown. New World camelids are known to have important biologic differences in whole blood composition, including hemoglobin concentration, which can alter PCR performance. In the present study, sheep, cattle, and alpaca blood were spiked with BTV serotypes 10, 11, 13, and 17 and analyzed in 10-fold dilutions by real-time qRT-PCR to determine if species affected nucleic acid recovery and assay performance. A separate experiment was performed using spiked alpaca blood subsequently diluted in a 10-fold series in sheep blood to assess the influence of alpaca blood on the performance efficiency of the BTV real-time qRT-PCR assay. Results showed that BTV-specific nucleic acid detection from alpaca blood was consistently 1-2 logs lower than from sheep and cattle blood, and results were similar for each of the 4 BTV serotypes analyzed.
Energy Technology Data Exchange (ETDEWEB)
D. Thompson [University of Sheffield (United Kingdom). Department of Engineering Materials
2010-08-15
The thermodynamic equilibrium phases formed under ash fusion test and excess air combustion conditions by 30 coals of the BCURA Coal Sample Bank have been predicted from 1100 to 2000 K using the MTDATA computational suite and the MTOX database for silicate melts and associated phases. Predicted speciation and degree of melting varied widely from coal to coal. Melting under an ash fusion test atmosphere of CO{sub 2}:H{sub 2} 1:1 was essentially the same as under excess air combustion conditions for some coals, and markedly different for others. For those ashes which flowed below the fusion test maximum temperature of 1773 K, flow coincided with 75-100% melting in most cases. Flow at low predicted melt formation (46%) for one coal cannot be attributed to any one cause. The difference between predicted fusion behaviours under excess air and fusion test atmospheres becomes greater with decreasing silica and alumina, and increasing iron, calcium and alkali metal content in the coal mineral. 22 refs., 7 figs., 3 tabs.
Van Calster, Laurens; D'Argembeau, Arnaud; Salmon, Eric; Peters, Frédéric; Majerus, Steve
2017-01-01
Neuroimaging studies have revealed the recruitment of a range of neural networks during the resting state, which might reflect a variety of cognitive experiences and processes occurring in an individual's mind. In this study, we focused on the default mode network (DMN) and attentional networks and investigated their association with distinct mental states when participants are not performing an explicit task. To investigate the range of possible cognitive experiences more directly, this study proposes a novel method of resting-state fMRI experience sampling, informed by a phenomenological investigation of the fluctuation of mental states during the resting state. We hypothesized that DMN activity would increase as a function of internal mentation and that the activity of dorsal and ventral networks would indicate states of top-down versus bottom-up attention at rest. Results showed that dorsal attention network activity fluctuated as a function of subjective reports of attentional control, providing evidence that activity of this network reflects the perceived recruitment of controlled attentional processes during spontaneous cognition. Activity of the DMN increased when participants reported to be in a subjective state of internal mentation, but not when they reported to be in a state of perception. This study provides direct evidence for a link between fluctuations of resting-state neural activity and fluctuations in specific cognitive processes.
Directory of Open Access Journals (Sweden)
Amir H Pakpour
Full Text Available The investigation of short-term changes in female sexual functioning has received little attention so far. The aims of the study were to gain empirical knowledge on within-subject and within- and across-variable fluctuations in women's sexual functioning over time; more specifically, to investigate the stability of women's self-reported sexual functioning and the moderating effects of contextual and interpersonal factors. A convenience sample of 206 women was recruited across eight health care clinics in Rasht, Iran. Ecological momentary assessment was used to examine fluctuations of sexual functioning over a six-week period. A shortened version of the Female Sexual Function Index (FSFI) was applied to assess sexual functioning. Self-constructed questions were included to assess relationship satisfaction, partner's sexual performance and stress levels. Mixed linear two-level model analyses revealed a link between orgasm and relationship satisfaction (Beta = 0.125, P = 0.074), with this link varying significantly between women. Analyses further revealed a significant negative association between stress and all six domains of women's sexual functioning. Women not only reported differing levels of stress over the course of the assessment period, but further differed from each other in how much stress they experienced and how much this influenced their sexual response. Orgasm and sexual satisfaction were both significantly associated with all other domains of sexual function (P < 0.001). Finally, a link between partner performance and all domains of women's sexual functioning (P < 0.001) could be detected. Except for lubrication (P = 0.717), relationship satisfaction had a significant effect on all domains of the sexual response (P < 0.001). Overall, our findings support the new group of criteria introduced in the DSM-5, called "associated features", such as partner factors and relationship factors. Consideration of these criteria is important and necessary for
Forkert, Nils Daniel; Fiehler, Jens
2015-03-01
The tissue outcome prediction in acute ischemic stroke patients is highly relevant for clinical and research purposes. It has been shown that the combined analysis of diffusion and perfusion MRI datasets using high-level machine learning techniques leads to an improved prediction of final infarction compared to single perfusion parameter thresholding. However, most high-level classifiers require previous training and, until now, it has been unclear how many subjects are required for this, which is the focus of this work. 23 MRI datasets of acute stroke patients with known tissue outcome were used in this work. Relative values of diffusion and perfusion parameters as well as the binary tissue outcome were extracted on a voxel-by-voxel level for all patients and used for training of a random forest classifier. The number of patients used for training set definition was iteratively and randomly reduced from using all 22 other patients to only one other patient. Thus, 22 tissue outcome predictions were generated for each patient using the trained random forest classifiers and compared to the known tissue outcome using the Dice coefficient. Overall, a logarithmic relation between the number of patients used for training set definition and tissue outcome prediction accuracy was found. Quantitatively, a mean Dice coefficient of 0.45 was found for the prediction using the training set consisting of the voxel information from only one other patient, which increases to 0.53 if using all other patients (n=22). Based on extrapolation, 50-100 patients appear to be a reasonable tradeoff between tissue outcome prediction accuracy and the effort required for data acquisition and preparation.
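The Dice coefficient used above to compare predicted and observed infarct masks is straightforward to compute for binary voxel arrays; the following is a minimal sketch (not the authors' code), with the empty-mask convention chosen here as an explicit assumption:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention assumed here: two empty masks agree perfectly
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred = np.array([1, 1, 0, 0, 1])
truth = np.array([1, 0, 0, 1, 1])
print(round(dice(pred, truth), 3))  # 2*2/(3+3) = 0.667
```

A value of 0.45-0.53, as reported in the abstract, thus means roughly half of the predicted and actual lesion voxels overlap.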
Random Intercept and Random Slope 2-Level Multilevel Models
Directory of Open Access Journals (Sweden)
Rehan Ahmad Khan
2012-11-01
Full Text Available Random intercept models and random intercept & random slope models carrying two levels of hierarchy in the population are presented and compared with the traditional regression approach. The impact of students' satisfaction on their grade point average (GPA) was explored with and without controlling for teacher influence. The variation at level 1 can be controlled by introducing higher levels of hierarchy into the model. The fanning pattern of the fitted lines illustrates the variation of student grades across teachers.
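The 2-level structure described above can be made concrete with a small simulation (illustrative only; the teacher counts, variance components and coefficients below are arbitrary assumptions, not the paper's data). Each teacher contributes a random intercept and a random slope around the fixed GPA-on-satisfaction relationship:

```python
import numpy as np

rng = np.random.default_rng(42)
n_teachers, n_students = 30, 40
beta0, beta1 = 2.0, 0.5          # fixed intercept and slope (assumed values)

rows = []
for j in range(n_teachers):
    u0 = rng.normal(0, 0.4)      # teacher-level random intercept
    u1 = rng.normal(0, 0.2)      # teacher-level random slope (lines "fan out")
    x = rng.uniform(1, 5, n_students)                  # satisfaction scores
    y = beta0 + u0 + (beta1 + u1) * x + rng.normal(0, 0.3, n_students)
    rows.append(np.column_stack([x, y]))
data = np.vstack(rows)

# Pooled OLS ignores the hierarchy but still recovers the average slope;
# what it cannot do is separate teacher-level from student-level variation.
x, y = data[:, 0], data[:, 1]
slope = np.polyfit(x, y, 1)[0]
print(round(slope, 2))  # close to the fixed slope 0.5
```

Fitting a per-teacher regression to this data and plotting the 30 lines reproduces the fanning pattern the abstract mentions.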
International Nuclear Information System (INIS)
Carnet, Bernard; Delhumeau, Michel
1971-06-01
The principles of binary analysis applied to the investigation of sequential circuits were used to design a two-way coincidence circuit whose inputs may be random or periodic variables of constant or variable duration. The output signal strictly reproduces the characteristics of the input signal triggering the coincidence. A coincidence between input signals does not produce any output signal if one of the signals has already triggered the output signal. The characteristics of the output signal in relation to those of the input signal are: minimum time jitter, excellent duration reproducibility and maximum efficiency. Some rules are given for achieving these results. The symmetry, transitivity and non-transitivity characteristics of the edges on the primitive graph are analyzed and lead to some rules for positioning the states on a secondary graph. It is from this graph that the equations of the circuits can be calculated. The development of the circuit and its dynamic testing are discussed. For this testing, the functioning of the circuit is simulated by feeding randomly generated signals into the input.
Loban, Amanda; Mandefield, Laura; Hind, Daniel; Bradburn, Mike
2017-12-01
The objective of this study was to compare the response rates, data completeness, and representativeness of survey data produced by online and postal surveys. A randomized trial was nested within a cohort study in Yorkshire, United Kingdom. Participants were randomized to receive either an electronic (online) survey questionnaire with paper reminder (N = 2,982) or a paper questionnaire with electronic reminder (N = 2,855). Response rates were similar for electronic and postal contacts (50.9% vs. 49.7%, difference = 1.2%, 95% confidence interval: -1.3% to 3.8%). The characteristics of those responding in the two groups were similar. Participants nevertheless demonstrated an overwhelming preference for postal questionnaires, with the majority responding by post in both groups. Online survey questionnaire systems need to be supplemented with a postal reminder to achieve acceptable uptake, but doing so provides a similar response rate and case mix when compared to postal questionnaires alone. For large surveys, online survey systems may be cost saving. Copyright © 2017 Elsevier Inc. All rights reserved.
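The reported difference and interval can be approximately reproduced with a standard Wald confidence interval for two independent proportions (a sketch from the rounded percentages; the authors presumably worked from exact counts, so the lower bound differs slightly in the last decimal):

```python
import math

def two_prop_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for the difference between two independent proportions."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Electronic-first arm: 50.9% of 2,982; postal-first arm: 49.7% of 2,855
diff, lo, hi = two_prop_ci(0.509, 2982, 0.497, 2855)
print(f"{diff:+.1%} (95% CI {lo:+.1%} to {hi:+.1%})")  # → +1.2% (95% CI -1.4% to +3.8%)
```

Since the interval straddles zero, the two contact strategies cannot be distinguished at the 5% level, which is the abstract's conclusion.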
Directory of Open Access Journals (Sweden)
Helmut Prodinger
2007-01-01
Full Text Available In words generated by independent geometrically distributed random variables, we study the lth descent, which is, roughly speaking, the lth occurrence of a neighbouring pair ab with a > b. The value a is called the initial height, and b the end height. We study these two random variables (and some similar ones) by combinatorial and probabilistic tools. We find in all instances a generating function Ψ(v,u), where the coefficient of v^j u^i refers to the jth descent (ascent), and i to the initial (end) height. From this, various conclusions can be drawn, in particular expected values. In the probabilistic part, a Markov chain model is used, which allows us to obtain explicit expressions for the heights of the second descent. In principle, one could go further, but the complexity of the results forbids it. This is extended to permutations of a large number of elements. Methods from q-analysis are used to simplify the expressions. This is the reason that we confine ourselves to the geometric distribution only. For general discrete distributions, no such tools are available.
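The object studied, the first descent (a neighbouring pair ab with a > b) in a word of i.i.d. geometric letters, is easy to explore empirically; the sketch below (the parameter p, word length, and sample size are arbitrary choices, not values from the paper) estimates the mean initial and end heights of the first descent:

```python
import random

random.seed(7)
p = 0.5  # success probability of the geometric letter distribution (assumed)

def geometric_word(length):
    """Word of i.i.d. geometric(p) letters with support {1, 2, 3, ...}."""
    word = []
    for _ in range(length):
        k = 1
        while random.random() > p:
            k += 1
        word.append(k)
    return word

def first_descent(word):
    """Return (initial height a, end height b) of the first pair with a > b."""
    for a, b in zip(word, word[1:]):
        if a > b:
            return a, b
    return None  # monotone word: no descent

pairs = [d for d in (first_descent(geometric_word(50)) for _ in range(2000)) if d]
mean_a = sum(a for a, _ in pairs) / len(pairs)
mean_b = sum(b for _, b in pairs) / len(pairs)
print(round(mean_a, 2), round(mean_b, 2))  # initial height exceeds end height
```

Such simulated expectations are what the paper's generating function Ψ(v,u) yields exactly.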
Randomness at the root of things 1: Random walks
Ogborn, Jon; Collins, Simon; Brown, Mick
2003-09-01
This is the first of a pair of articles about randomness in physics. In this article, we use some variations on the idea of a `random walk' to consider first the path of a particle in Brownian motion, and then the random variation to be expected in radioactive decay. The arguments are set in the context of the general importance of randomness both in physics and in everyday life. We think that the ideas could usefully form part of students' A-level work on random decay and quantum phenomena, as well as being good for their general education. In the second article we offer a novel and simple approach to Poisson sequences.
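Both ideas in the article, a random walk standing in for Brownian motion and Poisson-like fluctuation in decay counts, can be demonstrated in a few lines. This is a classroom-style sketch (step counts and decay probabilities are arbitrary choices, not the authors' materials):

```python
import random

random.seed(2)

# 1) Symmetric random walk: RMS displacement after N steps grows like sqrt(N).
N, trials = 400, 2000
sq = 0
for _ in range(trials):
    x = sum(random.choice((-1, 1)) for _ in range(N))
    sq += x * x
rms = (sq / trials) ** 0.5
print(round(rms, 1))  # ≈ sqrt(400) = 20

# 2) Radioactive decay: each of n atoms decays with small probability q per
#    interval, so counts fluctuate with variance ≈ mean (Poisson behaviour).
n, q = 2000, 0.05
counts = [sum(random.random() < q for _ in range(n)) for _ in range(500)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(round(mean), round(var))  # both near n*q = 100
```

The variance-to-mean ratio near 1 is the signature students can check against their own simulated (or measured) count data.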
Marumoto, Kohji; Sudo, Yasuaki; Nagamatsu, Yoshizumi
2017-07-01
During 2014-2016, the Aso volcano, located in the center of the Kyushu Islands, Japan, erupted and emitted large amounts of volcanic gases and ash. Two episodes of eruption were observed: firstly, Strombolian magmatic eruptive episodes from 25 November 2014 to the middle of May 2015, and secondly, phreatomagmatic and phreatic eruptive episodes from September 2015 to February 2016. Bulk chemical analyses of total mercury (Hg) and major ions in the water-soluble fraction of volcanic ash fall samples were conducted. During the Strombolian magmatic eruptive episodes, total Hg concentrations averaged 1.69 ± 0.87 ng g-1 (N = 33), with a range from 0.47 to 3.8 ng g-1. In addition, the temporal variation of total Hg concentrations in volcanic ash varied with the amplitude change of seismic signals. In the Aso volcano, volcanic tremors are always observed during eruptive stages and quiet interludes, and the amplitudes of tremors increase at eruptive stages. The temporal variation of total Hg concentrations could therefore provide an indication of the level of volcanic activity. During the phreatomagmatic and phreatic eruptive episodes, on the other hand, total Hg concentrations in the volcanic ash fall samples averaged 220 ± 88 ng g-1 (N = 5), roughly 100 times higher than during the Strombolian eruptive episodes. It is therefore possible that total Hg concentrations in volcanic ash samples vary greatly depending on the eruptive type. In addition, the ash fall amounts were also largely different between the two eruptive episodes. This may also be one of the factors controlling Hg concentrations in volcanic ash.
Abebe, Kaleab Z; Jones, Kelley A; Rofey, Dana; McCauley, Heather L; Clark, Duncan B; Dick, Rebecca; Gmelin, Theresa; Talis, Janine; Anderson, Jocelyn; Chugani, Carla; Algarroba, Gabriela; Antonio, Ashley; Bee, Courtney; Edwards, Clare; Lethihet, Nadia; Macak, Justin; Paley, Joshua; Torres, Irving; Van Dusen, Courtney; Miller, Elizabeth
2018-02-01
Sexual violence (SV) on college campuses is common, especially alcohol-related SV. This is a 2-arm cluster randomized controlled trial to test a brief intervention to reduce risk for alcohol-related sexual violence among students receiving care from college health centers (CHCs). Intervention CHC staff are trained to deliver universal SV education to all students seeking care, to facilitate patient and provider comfort in discussing SV and related abusive experiences (including the role of alcohol). Control sites provide participants with information about drinking responsibly. Across 28 participating campuses (12 randomized to intervention and 16 to control), 2292 students seeking care at CHCs complete surveys prior to their appointment (baseline), immediately after (exit), 4 months later (T2) and one year later (T3). The primary outcome is change in recognition of SV and sexual risk. Among those reporting SV exposure at baseline, changes in SV victimization, disclosure, and use of SV services are additional outcomes. Intervention effects will be assessed using generalized linear mixed models that account for clustering of repeated observations both within CHCs and within students. Slightly more than half of the participating colleges have undergraduate enrollment of ≥3000 students; two-thirds are public and almost half are urban. Among participants there were relatively more Asian (10% vs. 1%) and Black/African American (13% vs. 7%) and fewer White (58% vs. 74%) participants in the intervention group compared to control. This study will offer the first formal assessment of SV prevention in the CHC setting. Clinical Trials #: NCT02355470. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Ricardo de Amorim Corrêa
2014-12-01
Full Text Available OBJECTIVE: To compare 28-day mortality rates and clinical outcomes in ICU patients with ventilator-associated pneumonia according to the diagnostic strategy used. METHODS: This was a prospective randomized clinical trial. Of the 73 patients included in the study, 36 and 37 were randomized to undergo BAL or endotracheal aspiration (EA), respectively. Antibiotic therapy was based on guidelines and was adjusted according to the results of quantitative cultures. RESULTS: The 28-day mortality rate was similar in the BAL and EA groups (25.0% and 37.8%, respectively; p = 0.353). There were no differences between the groups regarding the duration of mechanical ventilation, antibiotic therapy, secondary complications, VAP recurrence, or length of ICU and hospital stay. Initial antibiotic therapy was deemed appropriate in 28 (77.8%) and 30 (83.3%) of the patients in the BAL and EA groups, respectively (p = 0.551). The 28-day mortality rate was not associated with the appropriateness of initial therapy in the BAL and EA groups (appropriate therapy: 35.7% vs. 43.3%; p = 0.553; inappropriate therapy: 62.5% vs. 50.0%; p = 1.000). Previous use of antibiotics did not affect the culture yield in the EA or BAL group (p = 0.130 and p = 0.484, respectively). CONCLUSIONS: In the context of this study, the management of VAP patients based on the results of quantitative endotracheal aspirate cultures led to clinical outcomes similar to those obtained with the results of quantitative BAL fluid cultures.
Pearce, Peter; Sewell, Ros; Cooper, Mick; Osman, Sarah; Fugard, Andrew J B; Pybis, Joanne
2017-06-01
The aim of this study was to pilot a test of the effectiveness of school-based humanistic counselling (SBHC) in an ethnically diverse group of young people (aged 11-18 years old), with follow-up assessments at 6 and 9 months. Pilot randomized controlled trial, using linear mixed-effects modelling and intention-to-treat analysis to compare changes in levels of psychological distress for participants in SBHC against usual care (UC). ISRCTN44253140. In total, 64 young people were randomized to either SBHC or UC. Participants were aged between 11 and 18 (M = 14.2, SD = 1.8), with 78.1% of a non-white ethnicity. The primary outcome was psychological distress at 6 weeks (mid-therapy), 12 weeks (end of therapy), 6-month follow-up and 9-month follow-up. Secondary measures included emotional symptoms, self-esteem and attainment of personal goals. Recruitment and retention rates for the study were acceptable. Participants in the SBHC condition, as compared with participants in the UC condition, showed greater reductions in psychological distress and emotional symptoms, and greater improvements in self-esteem, over time. However, at follow-up, only emotional symptoms showed significant differences across groups. The study adds to the pool of evidence suggesting that SBHC can be tested and that it brings about short-term reductions in psychological and emotional distress in young people, across ethnicities. However, there is no evidence of longer-term effects. School-based humanistic counselling can be an effective means of reducing the psychological distress experienced by young people with emotional symptoms in the short term. The short-term effectiveness of school-based humanistic counselling is not limited to young people of a White ethnicity. There is no evidence that school-based humanistic counselling has effects beyond the end of therapy. © 2016 The British Psychological Society.
Techniques to assess biological variation in destructive data
Tijskens, L.M.M.; Schouten, R.E.; Jongbloed, G.; Konopacki, P.J.
2018-01-01
Variation is present in all measured data, due to variation between individuals (biological variation) and variation induced by the measuring system (technical variation). Biological variation present in experimental data is not the result of a random process but strictly subject to deterministic
Mix, Joseph A; Crews, W David
2002-08-01
There appears to be an absence of large-scaled clinical trials that have examined the efficacy of Ginkgo biloba extract on the neuropsychological functioning of cognitively intact older adults. The importance of such clinical research appears paramount in light of the plethora of products containing Ginkgo biloba that are currently being widely marketed to predominantly cognitively intact adults with claims of enhanced cognitive performances. The purpose of this research was to conduct the first known, large-scaled clinical trial of the efficacy of Ginkgo biloba extract (EGb 761) on the neuropsychological functioning of cognitively intact older adults. Two hundred and sixty-two community-dwelling volunteers (both male and female) 60 years of age and older, who reported no history of dementia or significant neurocognitive impairments and obtained Mini-Mental State Examination total scores of at least 26, were examined via a 6-week, randomized, double-blind, fixed-dose, placebo-controlled, parallel-group, clinical trial. Participants were randomly assigned to receive either Ginkgo biloba extract EGb 761(n = 131; 180 mg/day) or placebo (n = 131) for 6 weeks. Efficacy measures consisted of participants' raw change in performance scores from pretreatment baseline to those obtained just prior to termination of treatment on the following standardized neuropsychological measures: Selective Reminding Test (SRT), Wechsler Adult Intelligence Scale-III Block Design (WAIS-III BD) and Digit Symbol-Coding (WAIS-III DS) subtests, and the Wechsler Memory Scale-III Faces I (WMS-III FI) and Faces II (WMS-III FII) subtests. A subjective Follow-up Self-report Questionnaire was also administered to participants just prior to termination of the treatment phase. Analyses of covariance indicated that cognitively intact participants who received 180 mg of EGb 761 daily for 6 weeks exhibited significantly more improvement on SRT tasks involving delayed (30 min) free recall (p visual material
Directory of Open Access Journals (Sweden)
Gorini Alessandra
2008-05-01
Full Text Available Abstract Background Generalized anxiety disorder (GAD) is a psychiatric disorder characterized by a constant and unspecific anxiety that interferes with daily-life activities. Its high prevalence in the general population and the severe limitations it causes point out the necessity of finding new efficient strategies to treat it. Together with cognitive-behavioural treatments, relaxation represents a useful approach for the treatment of GAD, but it has the limitation that it is difficult to learn. To overcome this limitation we propose the use of virtual reality (VR) to facilitate the relaxation process by visually presenting key relaxing images to the subjects. The visual presentation of a virtual calm scenario can facilitate patients' practice and mastery of relaxation, making the experience more vivid and real than the one that most subjects can create using their own imagination and memory, and triggering a broad empowerment process within the experience induced by a high sense of presence. According to these premises, the aim of the present study is to investigate the advantages of using a VR-based relaxation protocol in reducing anxiety in patients affected by GAD. Methods/Design The trial is based on a randomized controlled study, including three groups of 25 patients each (for a total of 75 patients): (1) the VR group, (2) the non-VR group and (3) the waiting list (WL) group. Patients in the VR group will be taught to relax using a VR relaxing environment and audio-visual mobile narratives; patients in the non-VR group will be taught to relax using the same relaxing narratives proposed to the VR group, but without the VR support; and patients in the WL group will not receive any kind of relaxation training. Psychometric and psychophysiological outcomes will serve as quantitative dependent variables, while subjective reports of participants will be used as qualitative dependent variables. Conclusion We argue that the use of VR for relaxation
Vikram, Deepti S.; Bratasz, Anna; Ahmad, Rizwan; Kuppusamy, Periannan
2015-01-01
Methods currently available for the measurement of oxygen concentrations (oximetry) in viable tissues differ widely from each other in their methodological basis and applicability. The goal of this study was to compare two novel methods, particulate-based electron paramagnetic resonance (EPR) and OxyLite oximetry, in an experimental tumor model. EPR oximetry uses implantable paramagnetic particulates, whereas OxyLite uses fluorescent probes affixed on a fiber-optic cable. C3H mice were transplanted with radiation-induced fibrosarcoma (RIF-1) tumors in their hind limbs. Lithium phthalocyanine (LiPc) microcrystals were used as EPR probes. The pO2 measurements were taken from random locations at a depth of ~3 mm within the tumor either immediately or 48 h after implantation of LiPc. Both methods revealed significant hypoxia in the tumor. However, there were striking differences between the EPR and OxyLite readings. The differences were attributed to the volume of tissue under examination and the effect of needle invasion at the site of measurement. This study recognizes the unique benefits of EPR oximetry in terms of robustness, repeatability and minimal invasiveness. PMID:17705635
Directory of Open Access Journals (Sweden)
Dagmar Sigmundová
2014-07-01
Full Text Available This study investigates whether more physically active parents bring up more physically active children and whether parents' level of physical activity helps children achieve step count recommendations on weekdays and weekends. The participants (388 parents aged 35–45 and their 485 children aged 9–12) were randomly recruited from 21 Czech government-funded primary schools. The participants recorded pedometer step counts for seven days (≥10 h a day) during April–May and September–October of 2013. Logistic regression (Enter method) was used to examine the achievement of the international recommendations of 11,000 steps/day for girls and 13,000 steps/day for boys. The children of fathers and mothers who met the weekend recommendation of 10,000 steps were 5.48 times (95% confidence interval: 1.65; 18.19; p < 0.01) and 3.60 times (95% confidence interval: 1.21; 10.74; p < 0.05), respectively, more likely to achieve the international weekend recommendation than the children of less active parents. The children of mothers who reached the weekday pedometer-based step count recommendation were 4.94 times (95% confidence interval: 1.45; 16.82; p < 0.05) more likely to fulfil the step count recommendation on weekdays than the children of less active mothers.
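Odds ratios like the 5.48 (95% CI 1.65 to 18.19) reported above are symmetric on the log scale, so the implied standard error of the underlying logistic-regression coefficient can be recovered from the interval alone. A quick consistency check (our arithmetic on the published numbers, not the authors' computation):

```python
import math

or_hat, lo, hi = 5.48, 1.65, 18.19       # reported odds ratio and 95% CI
beta = math.log(or_hat)                  # logistic coefficient, ≈ 1.70
se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # implied standard error

# Rebuilding the interval from beta and se recovers the reported bounds,
# confirming the CI is a standard Wald interval on the log-odds scale.
lo2, hi2 = math.exp(beta - 1.96 * se), math.exp(beta + 1.96 * se)
print(round(se, 2), round(lo2, 2), round(hi2, 2))
```

The wide interval (1.65 to 18.19) reflects the small number of parent-child pairs in which both met the weekend recommendation.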
Lipson, Sarah Ketchen; Zhou, Sasha; Wagner, Blake, III; Beck, Katie; Eisenberg, Daniel
2016-01-01
This article explores variations in mental health and service utilization across academic disciplines using a random sample of undergraduate and graduate students (N = 64,519) at 81 colleges and universities. We report prevalence of depression, anxiety, suicidality, and self-injury, and rates of help-seeking across disciplines, including results…
Extreme values, regular variation and point processes
Resnick, Sidney I
1987-01-01
Extreme Values, Regular Variation and Point Processes is a readable and efficient account of the fundamental mathematical and stochastic-process techniques needed to study the behavior of extreme values of phenomena based on independent and identically distributed random variables and vectors. It presents a coherent treatment of the distributional and sample-path fundamental properties of extremes and records. It emphasizes the core primacy of three topics necessary for understanding extremes: the analytical theory of regularly varying functions; the probabilistic theory of point processes and random measures; and the link to asymptotic distribution approximations provided by the theory of weak convergence of probability measures in metric spaces. The book is self-contained and requires an introductory measure-theoretic course in probability as a prerequisite. Almost all sections have an extensive list of exercises which extend developments in the text, offer alternate approaches, test mastery and provide for enj...
DEFF Research Database (Denmark)
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2015-01-01
The mechanical responses of an offshore monopile foundation mounted in over-consolidated clay are calculated by employing a stochastic approach where a nonlinear p–y curve is incorporated with a finite element scheme. The random field theory is applied to represent a spatial variation for undrained shear strength of clay. Normal and Sobol sampling are employed to provide the asymptotic sampling method to generate the probability distribution of the foundation stiffnesses. Monte Carlo simulation is used as a benchmark. Asymptotic sampling accompanied with Sobol quasi random sampling demonstrates an efficient method for estimating the probability distribution of stiffnesses for the offshore monopile foundation.
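The contrast between Sobol quasi-random sampling and plain Monte Carlo can be illustrated on a toy integration problem. A minimal sketch using SciPy's quasi-Monte Carlo module (the integrand and sample sizes are illustrative, not taken from the study):

```python
import numpy as np
from scipy.stats import qmc

# Sobol quasi-random sample on the unit square, used to estimate
# E[X*Y] for independent U(0,1) variables (true value 0.25).
sobol = qmc.Sobol(d=2, scramble=True, seed=0)
pts = sobol.random_base2(m=10)            # 2**10 = 1024 low-discrepancy points
est_sobol = (pts[:, 0] * pts[:, 1]).mean()

# Plain pseudo-random (Monte Carlo) sample of the same size, as a benchmark.
rng = np.random.default_rng(0)
pts_mc = rng.random((1024, 2))
est_mc = (pts_mc[:, 0] * pts_mc[:, 1]).mean()

print(est_sobol, est_mc)  # both near 0.25; the Sobol estimate is typically closer
```

Quasi-random sequences fill the sample space more evenly than pseudo-random draws, which is the reason they accelerate convergence in methods like the asymptotic sampling described above.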
Winer, Rachel L; Tiro, Jasmin A; Miglioretti, Diana L; Thayer, Chris; Beatty, Tara; Lin, John; Gao, Hongyuan; Kimbel, Kilian; Buist, Diana S M
2018-01-01
Women who delay or do not attend Papanicolaou (Pap) screening are at increased risk for cervical cancer. Trials in countries with organized screening programs have demonstrated that mailing high-risk (hr) human papillomavirus (HPV) self-sampling kits to under-screened women increases participation, but U.S. data are lacking. HOME is a pragmatic randomized controlled trial set within a U.S. integrated healthcare delivery system to compare two programmatic approaches for increasing cervical cancer screening uptake and effectiveness in under-screened women (≥3.4 years since last Pap) aged 30-64 years: 1) usual care (annual patient reminders and ad hoc outreach by clinics) and 2) usual care plus mailed hrHPV self-screening kits. Over 2.5 years, eligible women were identified through electronic medical record (EMR) data and randomized 1:1 to the intervention or control arm. Women in the intervention arm were mailed kits with pre-paid envelopes to return samples to the central clinical laboratory for hrHPV testing. Results were documented in the EMR to notify women's primary care providers of appropriate follow-up. Primary outcomes are detection and treatment of cervical neoplasia. Secondary outcomes are cervical cancer screening uptake, abnormal screening results, and women's experiences and attitudes towards hrHPV self-sampling and follow-up of hrHPV-positive results (measured through surveys and interviews). The trial was designed to evaluate whether a programmatic strategy incorporating hrHPV self-sampling is effective in promoting adherence to the complete screening process (including follow-up of abnormal screening results and treatment). The objective of this report is to describe the rationale and design of this pragmatic trial. Copyright © 2017 Elsevier Inc. All rights reserved.
Investigating the Randomness of Numbers
Pendleton, Kenn L.
2009-01-01
The use of random numbers is pervasive in today's world. Random numbers have practical applications in such far-flung arenas as computer simulations, cryptography, gambling, the legal system, statistical sampling, and even the war on terrorism. Evaluating the randomness of extremely large samples is a complex, intricate process. However, the…
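One of the simplest checks in the evaluation of randomness mentioned above is a chi-square goodness-of-fit test for equal digit frequencies. A minimal sketch (real test batteries, such as NIST SP 800-22, combine many complementary checks; this single test is only illustrative):

```python
import numpy as np
from scipy.stats import chisquare

# Are the digits 0-9 equally frequent in a large sample?
rng = np.random.default_rng(42)
digits = rng.integers(0, 10, size=100_000)
observed = np.bincount(digits, minlength=10)

# chisquare defaults to a uniform expected distribution.
stat, p = chisquare(observed)
print(f"chi2 = {stat:.1f}, p = {p:.3f}")  # large p: no evidence against uniformity
```

A small p-value would indicate that some digits appear significantly more often than chance allows; a high-quality generator should pass this test for most seeds.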
DEFF Research Database (Denmark)
Khandige, Surabhi; Møller-Jensen, Jakob
2016-01-01
Surface fimbriae of pathogenic Escherichia coli facilitate sensing, adhesion and even invasion of host epithelial cells. While it is known that the pathogen has the potential to express a plethora of fimbrial variants susceptible to rapid phase ON/OFF variation, it is an open question if the fimbrial diversity seen at the population level is the product of random stochasticity or a concerted effort based on active communication. Here we discuss the possibility of a mechanism alternative to a stochastic fimbrial phase variation model affecting the dynamics of a heterogeneous population.
Directory of Open Access Journals (Sweden)
Jung-Nien Lai
Full Text Available BACKGROUND: Hormonal therapy (HT), either estrogen alone (E-alone) or estrogen plus progesterone (E+P), appears to increase the risk for breast cancer in Western countries. However, limited information is available on the association between HT and breast cancer in Asian women, characterized mainly by dietary phytoestrogen intake and a low prevalence of contraceptive pill prescription. METHODOLOGY: A total of 65,723 women (20-79 years of age) without cancer or the use of Chinese herbal products were recruited from a nation-wide one-million representative sample of the National Health Insurance of Taiwan and followed from 1997 to 2008. Seven hundred and eighty incidents of invasive breast cancer were diagnosed. Using a reference group that comprised 40,052 women who had never received a hormone prescription, Cox proportional hazard models were constructed to determine the hazard ratios for receiving different types of HT and the occurrence of breast cancer. RESULTS: 5,156 (20%) women ever used E+P, 2,798 (10.8%) ever used E-alone, and 17,717 (69%) ever used other preparation types. The Cox model revealed adjusted hazard ratios (HRs) of 2.05 (95% CI 1.37-3.07) for current users of E-alone and 8.65 (95% CI 5.45-13.70) for current users of E+P. Using women who had ceased to take hormonal medication for 6 years or more as the reference group, the adjusted HRs were significantly elevated for current users and women who had discontinued hormonal medication for less than 6 years. CONCLUSIONS: Current users of either E-alone or E+P have an increased risk for invasive breast cancer in Taiwan, and precautions should be taken when such agents are prescribed.
International Nuclear Information System (INIS)
Sultana, Farhana; Gertig, Dorota M; English, Dallas R; Simpson, Julie A; Brotherton, Julia ML; Drennan, Kelly; Mullins, Robyn; Heley, Stella; Wrede, C David; Saville, Marion
2014-01-01
Organized screening based on Pap tests has substantially reduced deaths from cervical cancer in many countries, including Australia. However, the impact of the program depends upon the degree to which women participate. A new method of screening, testing for human papillomavirus (HPV) DNA to detect the virus that causes cervical cancer, has recently become available. Because women can collect their own samples for this test at home, it has the potential to overcome some of the barriers to Pap tests. The iPap trial will evaluate whether mailing an HPV self-sampling kit increases participation by never- and under-screened women within a cervical screening program. The iPap trial is a parallel randomized controlled, open label, trial. Participants will be Victorian women aged 30–69 years, for whom there is either no record on the Victorian Cervical Cytology Registry (VCCR) of a Pap test (never-screened) or the last recorded Pap test was between five and fifteen years ago (under-screened). Enrolment information from the Victorian Electoral Commission will be linked to the VCCR to determine the never-screened women. Variables that will be used for record linkage include full name, address and date of birth. Never- and under-screened women will be randomly allocated to either receive an invitation letter with an HPV self-sampling kit or a reminder letter to attend for a Pap test, which is standard practice for women overdue for a test in Victoria. All resources have been focus group tested. The primary outcome will be the proportion of women who participate, by returning an HPV self-sampling kit for women in the self-sampling arm, and notification of a Pap test result to the Registry for women in the Pap test arm at 3 and 6 months after mailout. The most important secondary outcome is the proportion of test-positive women who undergo further investigations at 6 and 12 months after mailout of results.
The iPap trial will provide strong evidence about whether HPV self-sampling
Mokhles, Sahar; Macbeth, Fergus; Treasure, Tom; Younes, Riad N; Rintoul, Robert C; Fiorentino, Francesca; Bogers, Ad J J C; Takkenberg, Johanna J M
2017-06-01
To re-examine the evidence for recommendations for complete dissection versus sampling of ipsilateral mediastinal lymph nodes during lobectomy for cancer. We searched for randomized trials of systematic mediastinal lymphadenectomy versus mediastinal sampling. We performed a textual analysis of the authors' own starting assumptions and conclusion. We analysed the trial designs and risk of bias. We extracted data on early mortality, perioperative complications, overall survival, local recurrence and distant recurrence for meta-analysis. We found five randomized controlled trials recruiting 1980 patients spanning 1989-2007. The expressed starting position in 3/5 studies was a conviction that systematic dissection was effective. Long-term survival was better with lymphadenectomy compared with sampling (Hazard Ratio 0.78; 95% CI 0.69-0.89) as was perioperative survival (Odds Ratio 0.59; 95% CI 0.25-1.36, non-significant). But there was an overall high risk of bias and a lack of intention to treat analysis. There were higher rates (non-significant) of perioperative complications including bleeding, chylothorax and recurrent nerve palsy with lymphadenectomy. The high risk of bias in these trials makes the overall conclusion insecure. The finding of clinically important surgically related morbidities but lower perioperative mortality with lymphadenectomy seems inconsistent. The multiple variables in patients, cancers and available treatments suggest that large pragmatic multicentre trials, testing currently available strategies, are the best way to find out which are more effective. The number of patients affected with lung cancer makes trials feasible. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Directory of Open Access Journals (Sweden)
Frömel Karel
2011-09-01
Full Text Available Abstract Background An optimal level of physical activity (PA) in adolescence influences the level of PA in adulthood. Although PA declines with age have been demonstrated repeatedly, few studies have been carried out on secular trends. The present study assessed levels, types and secular trends of PA and sedentary behaviour of a sample of adolescents in the Czech Republic. Methods The study comprised two cross-sectional cohorts of adolescents ten years apart. The analysis compared data collected through a week-long monitoring of adolescents' PA in 1998-2000 and 2008-2010. Adolescents wore either a Yamax SW-701 or Omron HJ-105 pedometer continuously for 7 days (at least 10 hours per day, excluding sleeping, hygiene and bathing). They also recorded their number of steps per day, the type and duration of PA and sedentary behaviour (in minutes) on record sheets. In total, 902 adolescents (410 boys; 492 girls) aged 14-18 were eligible for analysis. Results Overweight and obesity in Czech adolescents participating in this study increased from 5.5% (older cohort, 1998-2000) to 10.4% (younger cohort, 2008-2010). There were no inter-cohort significant changes in the total amount of sedentary behaviour in boys. However in girls, on weekdays, there was a significant increase in the total duration of sedentary behaviour of the younger cohort (2008-2010) compared with the older one (1998-2000). Studying and screen time (television and computer) were among the main sedentary behaviours in Czech adolescents. The types of sedentary behaviour also changed: watching TV (1998-2000) was replaced by time spent on computers (2008-2010). The Czech health-related criterion (achieving 11,000 steps per day) decreased only in boys, from 68% (1998-2000) to 55% (2008-2010). Across both genders, 55%-75% of Czech adolescents met the health-related criterion of recommended steps per day, however fewer participants in the younger cohort (2008-2010) met this criterion than in the older cohort
Meldrum, R J; Garside, J; Mannion, P; Charles, D; Ellis, P
2012-12-01
The Welsh Food Microbiological Forum "shopping basket" survey is a long running, structured surveillance program examining ready-to-eat food randomly sampled from the point of sale or service in Wales, United Kingdom. The annual unsatisfactory rates for selected indicators and pathogens for 1998 through 2008 were examined. All the annual unsatisfactory rates for the selected pathogens were <0.5%, and no trend in the annual rates was observed. There was also no discernible trend observed for the annual rates of Listeria spp. (not monocytogenes), with all rates <0.5%. However, there was a trend observed for Escherichia coli, with a decrease in rate between 1998 and 2003, rapid in the first few years, and then a gradual increase in rate up to 2008. It was concluded that there was no discernible pattern to the annual unsatisfactory rates for Listeria spp. (not monocytogenes), L. monocytogenes, Staphylococcus aureus, and Bacillus cereus, but that a definite trend had been observed for E. coli.
Visualizing the Sample Standard Deviation
Sarkar, Jyotirmoy; Rashid, Mamunur
2017-01-01
The standard deviation (SD) of a random sample is defined as the square-root of the sample variance, which is the "mean" squared deviation of the sample observations from the sample mean. Here, we interpret the sample SD as the square-root of twice the mean square of all pairwise half deviations between any two sample observations. This…
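The pairwise interpretation in the abstract states an exact algebraic identity: the sample SD equals the square root of twice the mean square of all pairwise half deviations (x_i - x_j)/2 over the n(n-1)/2 unordered pairs. A short numerical check:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)
x = rng.normal(size=50)

# Classical definition: sqrt of the mean squared deviation from the mean
# (with the n-1 denominator).
sd_classic = x.std(ddof=1)

# Pairwise form: sqrt of twice the mean square of all pairwise half deviations.
half_devs = [(a - b) / 2 for a, b in combinations(x, 2)]
sd_pairwise = np.sqrt(2 * np.mean(np.square(half_devs)))

print(sd_classic, sd_pairwise)  # identical up to floating-point rounding
```

The identity follows from expanding the double sum: the sum of (x_i - x_j)^2 over all ordered pairs equals 2n times the sum of squared deviations from the mean, so the mean square of the half deviations is exactly half the sample variance.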
Victorson, David; Hankin, Vered; Burns, James; Weiland, Rebecca; Maletich, Carly; Sufrin, Nathaniel; Schuette, Stephanie; Gutierrez, Bruriah; Brendler, Charles
2017-08-01
In a pilot randomized controlled trial, examine the feasibility and preliminary efficacy of an 8-week, mindfulness training program (Mindfulness Based Stress Reduction) in a sample of men on active surveillance on important psychological outcomes including prostate cancer anxiety, uncertainty intolerance and posttraumatic growth. Men were randomized to either mindfulness (n = 24) or an attention control arm (n = 19) and completed self-reported measures of prostate cancer anxiety, uncertainty intolerance, global quality of life, mindfulness and posttraumatic growth at baseline, 8 weeks, 6 months and 12 months. Participants in the mindfulness arm demonstrated significant decreases in prostate cancer anxiety and uncertainty intolerance, and significant increases in mindfulness, global mental health and posttraumatic growth. Participants in the control condition also demonstrated significant increases in mindfulness over time. Longitudinal increases in posttraumatic growth were significantly larger in the mindfulness arm than they were in the control arm. While mindfulness training was found to be generally feasible and acceptable among participants who enrolled in the 8-week intervention as determined by completion rates and open-ended survey responses, the response rate between initial enrollment and the total number of men approached was lower than desired (47%). While larger sample sizes are necessary to examine the efficacy of mindfulness training on important psychological outcomes, in this pilot study posttraumatic growth was shown to significantly increase over time for men in the treatment group. Mindfulness training has the potential to help men cope more effectively with some of the stressors and uncertainties associated with active surveillance. Copyright © 2016 John Wiley & Sons, Ltd.
Chen, Xinguang; Yu, Bin; Zhou, Dunjin; Zhou, Wang; Gong, Jie; Li, Shiyue; Stanton, Bonita
2015-01-01
Background Mobile populations and men who have sex with men (MSM) play an increasing role in the current HIV epidemic in China and across the globe. While considerable research has addressed both of these at-risk populations, more effective HIV control requires accurate data on the number of MSM at the population level, particularly MSM among migrant populations. Methods Survey data from a random sample of male rural-to-urban migrants (aged 18-45, n=572) in Wuhan, China were analyzed and compared with those of randomly selected non-migrant urban (n=566) and rural counterparts (580). The GIS/GPS technologies were used for sampling and the survey estimation method was used for data analysis. Results HIV-related risk behaviors among rural-to-urban migrants were similar to those among the two comparison groups. The estimated proportion of MSM among migrants [95% CI] was 5.8% [4.7, 6.8], higher than 2.8% [1.2, 4.5] for rural residents and 1.0% [0.0, 2.4] for urban residents, respectively. Among these migrants, the MSM were more likely than non-MSM to be older in age, married, and migrated to more cities. They were also more likely to co-habit with others in rental properties located in new town and neighborhoods with fewer old acquaintances and more entertainment establishments. In addition, they were more likely to engage in commercial sex and less likely to consistently use condoms. Conclusion Findings of this study indicate that compared to rural and urban populations, the migrant population in Wuhan consists of a higher proportion of MSM who also exhibit higher levels of HIV-related risk behaviors. More effective interventions should target this population with a focus on neighborhood factors, social capital and collective efficacy for risk reduction. PMID:26241900
Moiseiwitsch, B L
2004-01-01
This graduate-level text's primary objective is to demonstrate the expression of the equations of the various branches of mathematical physics in the succinct and elegant form of variational principles (and thereby illuminate their interrelationship). Its related intentions are to show how variational principles may be employed to determine the discrete eigenvalues for stationary state problems and to illustrate how to find the values of quantities (such as the phase shifts) that arise in the theory of scattering. Chapter-by-chapter treatment consists of analytical dynamics; optics, wave mecha
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, Clara M.; Buchhave, Preben; K. George, William
2014-01-01
proportional to velocity magnitude that consist of well-defined frequency content, which makes bias easy to spot. The idea is that if the algorithms are not able to produce correct statistics from this simple signal, then they will certainly not be able to function well for a more complex measured LDA signal...
Control of Randomly Sampled Robotic Systems
1989-05-01
task is so cumbersome and complicated that we would not be able to do without lots of mistakes. To avoid this formidable business, a Lisp program is... Artificial Intelligence Laboratory, 1972. [OCR residue of C source file PumA260.c, Wed Mar 8 17:51:04 1989: #include <math.h>; #define real float; #define mm 6; #define G 9.8 ...]
Directory of Open Access Journals (Sweden)
Marin-Garcia Pablo
2010-05-01
Full Text Available Abstract Background The maturing field of genomics is rapidly increasing the number of sequenced genomes and producing more information from those previously sequenced. Much of this additional information is variation data derived from sampling multiple individuals of a given species with the goal of discovering new variants and characterising the population frequencies of the variants that are already known. These data have immense value for many studies, including those designed to understand evolution and connect genotype to phenotype. Maximising the utility of the data requires that it be stored in an accessible manner that facilitates the integration of variation data with other genome resources such as gene annotation and comparative genomics. Description The Ensembl project provides comprehensive and integrated variation resources for a wide variety of chordate genomes. This paper provides a detailed description of the sources of data and the methods for creating the Ensembl variation databases. It also explores the utility of the information by explaining the range of query options available, from using interactive web displays, to online data mining tools and connecting directly to the data servers programmatically. It gives a good overview of the variation resources and future plans for expanding the variation data within Ensembl. Conclusions Variation data is an important key to understanding the functional and phenotypic differences between individuals. The development of new sequencing and genotyping technologies is greatly increasing the amount of variation data known for almost all genomes. The Ensembl variation resources are integrated into the Ensembl genome browser and provide a comprehensive way to access this data in the context of a widely used genome bioinformatics system. All Ensembl data is freely available at http://www.ensembl.org and from the public MySQL database server at ensembldb.ensembl.org.
Hilário, M.; Hollander, den W.Th.F.; Sidoravicius, V.; Soares dos Santos, R.; Teixeira, A.
2014-01-01
In this paper we study a random walk in a one-dimensional dynamic random environment consisting of a collection of independent particles performing simple symmetric random walks in a Poisson equilibrium with density ρ ∈ (0, ∞). At each step the random walk performs a nearest-neighbour jump, moving to
RNA-seq: technical variability and sampling
2011-01-01
Background RNA-seq is revolutionizing the way we study transcriptomes. mRNA can be surveyed without prior knowledge of gene transcripts. Alternative splicing of transcript isoforms and the identification of previously unknown exons are being reported. Initial reports of differences in exon usage, and splicing between samples as well as quantitative differences among samples are beginning to surface. Biological variation has been reported to be larger than technical variation. In addition, technical variation has been reported to be in line with expectations due to random sampling. However, strategies for dealing with technical variation will differ depending on the magnitude. The size of technical variance, and the role of sampling are examined in this manuscript. Results In this study three independent Solexa/Illumina experiments containing technical replicates are analyzed. When coverage is low, large disagreements between technical replicates are apparent. Exon detection between technical replicates is highly variable when the coverage is less than 5 reads per nucleotide, and estimates of gene expression are more likely to disagree when coverage is low, although large disagreements in the estimates of expression are observed at all levels of coverage. Conclusions Technical variability is too high to ignore. Technical variability results in inconsistent detection of exons at low levels of coverage. Further, the estimate of the relative abundance of a transcript can substantially disagree, even when coverage levels are high. This may be due to the low sampling fraction and if so, it will persist as an issue needing to be addressed in experimental design even as the next wave of technology produces larger numbers of reads. We provide practical recommendations for dealing with the technical variability, without dramatic cost increases. PMID:21645359
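The random-sampling argument above can be illustrated by modeling two technical replicates as independent multinomial draws of reads over the same transcript abundances. A minimal simulation sketch (the abundance distribution and depths are hypothetical, not the paper's data):

```python
import numpy as np

rng = np.random.default_rng(7)
true_props = rng.dirichlet(np.ones(1000))   # hypothetical transcript abundances

def replicate_disagreement(depth, n_rep=50):
    """Mean absolute disagreement between two replicates' estimated
    transcript proportions, averaged over n_rep simulated pairs."""
    diffs = []
    for _ in range(n_rep):
        r1 = rng.multinomial(depth, true_props) / depth
        r2 = rng.multinomial(depth, true_props) / depth
        diffs.append(np.abs(r1 - r2).sum())
    return np.mean(diffs)

low_depth, high_depth = replicate_disagreement(10_000), replicate_disagreement(1_000_000)
print(low_depth, high_depth)  # disagreement shrinks roughly as 1/sqrt(depth)
```

This reproduces the qualitative finding: replicate estimates disagree far more at low coverage, purely as a consequence of sampling, with no technical artifact needed.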
International Nuclear Information System (INIS)
Tahir-Kheli, R.A.
1975-01-01
A few simple problems relating to random magnetic systems are presented. Translational symmetry, only on the macroscopic scale, is assumed for these systems. A random set of parameters, on the microscopic scale, is also assumed for the various regions of these systems; the random parameters obey a probability distribution. Knowledge of the form of these probability distributions is assumed in all cases [pt
Geographic variation in expenditures for Workers' Compensation hospitalized claims.
Miller, T R; Levy, D T
1999-02-01
Past literature finds considerable variation in the cost of physician care and in the utilization of medical procedures. Variation in the cost of hospitalized care has received little attention. We examine injury costs of hospitalized claims across states. Multivariate regression analysis is used to isolate state variations, while controlling for personal and injury characteristics, and state characteristics. Injuries to workers filing Workers' Compensation lost workday claims. About 35,000 randomly sampled Workers' Compensation claims from 17 states filed between 1979 and 1988. Medical payments per episode of three injury groups: upper and lower extremity fractures and dislocations, other upper extremity injuries, and back strains and sprains. Statistical analyses reveal considerable variation in expenditures for hospitalized injuries across states, even after controlling for case mix and state characteristics. A substantial portion of the variation is explained by state rate regulations; regulated states have lower costs. The large variation in costs suggests a potential to affect the costs of hospitalized care. Efforts should be directed at those areas that have higher costs without sufficient input price, quality, or case mix justification.
Pernot, Dominique
2014-01-01
Gabriel Josipovici's most recent novels offer great variety, ranging from parody and light comic fiction, in Only Joking and Making Mistakes, to graver, more personal, ontological subjects. In a short novel, Everything Passes, and in a major novel, Goldberg: Variations, the reader is led to question the mysterious nature of a reality that is, all too often, accepted without challenge by many novel...
Directory of Open Access Journals (Sweden)
Elena Zubieta
2008-12-01
Full Text Available From a psychosocial standpoint, work consists of a set of beliefs and values about work that individuals and social groups develop before and during the process of socialization at work. It is a flexible set of cognitions subject to change depending on personal experiences and contextual shifts (Salanova, Gracia & Peiró, 1996). Taking socialization at work as a starting point, and with the aim of exploring probable sources of variation in terms of sociodemographic, contextual and psychosocial variables, a descriptive group-differences study was carried out on a non-probabilistic, intentional quota sample of 290 employed participants from the City of Buenos Aires and Greater Buenos Aires. The results show the presence of beliefs associated with the Protestant Work Ethic and Competitiveness, values of Openness to Change and Self-Transcendence, and particular configurations arising from variables such as sex, age, educational level, and aspects of work history such as years of work, tenure in the organization and position, interruptions in working activity, and work modality.
Application of random effects to the study of resource selection by animals.
Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L
2006-07-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions
Generalization of Random Intercept Multilevel Models
Directory of Open Access Journals (Sweden)
Rehan Ahmad Khan
2013-10-01
Full Text Available The concept of random intercept models in a multilevel model developed by Goldstein (1986) has been extended for k levels. The random variation in intercepts at the individual level is marginally split into components by incorporating higher levels of hierarchy in the single level model. So, one can control the random variation in intercepts by incorporating the higher levels in the model.
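The splitting of intercept variation across levels can be illustrated numerically for the simplest two-level case. A toy sketch with simulated data and method-of-moments estimates (this is not Goldstein's iterative estimator; group counts and variances are hypothetical):

```python
import numpy as np

# Two-level model: y_ij = mu + u_j + e_ij, with random intercepts u_j
# of variance sigma_u^2 = 4 and residuals e_ij of variance sigma_e^2 = 1.
rng = np.random.default_rng(3)
n_groups, n_per = 200, 30
u = rng.normal(0, 2.0, n_groups)                         # random intercepts
y = 10 + u[:, None] + rng.normal(0, 1.0, (n_groups, n_per))

# Within-group variance estimates sigma_e^2; the variance of group means
# equals sigma_u^2 + sigma_e^2 / n_per, so subtracting recovers sigma_u^2.
within = y.var(axis=1, ddof=1).mean()
between = y.mean(axis=1).var(ddof=1) - within / n_per
print(f"sigma_e^2 ~ {within:.2f}, sigma_u^2 ~ {between:.2f}")
```

Adding further levels of hierarchy splits the intercept variance into further components in the same fashion, which is the generalization the abstract describes.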
Randomized random walk on a random walk
International Nuclear Information System (INIS)
Lee, P.A.
1983-06-01
This paper discusses generalizations of the model introduced by Kehr and Kunter of the random walk of a particle on a one-dimensional chain which in turn has been constructed by a random walk procedure. The superimposed random walk is randomised in time according to the occurrences of a stochastic point process. The probability of finding the particle in a particular position at a certain instant is obtained explicitly in the transform domain. It is found that the asymptotic behaviour for large time of the mean-square displacement of the particle depends critically on the assumed structure of the basic random walk, giving a diffusion-like term for an asymmetric walk or a square root law if the walk is symmetric. Many results are obtained in closed form for the Poisson process case, and these agree with those given previously by Kehr and Kunter. (author)
Variation in extinction risk among birds: chance or evolutionary predisposition?
Bennett, P. M.; Owens, I. P. F.
1997-01-01
Collar et al. (1994) estimate that of the 9,672 extant species of bird, 1,111 are threatened by extinction. Here, we test whether these threatened species are simply a random sample of birds, or whether there is something about their biology that predisposes them to extinction. We ask three specific questions. First, is extinction risk randomly distributed among families? Second, which families, if any, contain more, or less, threatened species than would be expected by chance? Third, is variation between taxa in extinction risk associated with variation in either body size or fecundity? Extinction risk is not randomly distributed among families. The families which contain significantly more threatened species than expected are the parrots (Psittacidae), pheasants and allies (Phasianidae), albatrosses and allies (Procellariidae), rails (Rallidae), cranes (Gruidae), cracids (Cracidae), megapodes (Megapodidae) and pigeons (Columbidae). The only family which contains significantly fewer threatened species than expected is the woodpeckers (Picidae). Extinction risk is also not distributed randomly with respect to fecundity or body size. Once phylogeny has been controlled for, increases in extinction risk are independently associated with increases in body size and decreases in fecundity. We suggest that this is because low rates of fecundity, which evolved many tens of millions of years ago, predisposed certain lineages to extinction. Low-fecundity populations take longer to recover if they are reduced to small sizes and are, therefore, more likely to go extinct if an external force causes an increase in the rate of mortality, thereby perturbing the natural balance between fecundity and mortality.
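The null hypothesis tested above, that threatened species are a random sample of all birds, implies a simple binomial model: with p = 1111/9672, the threatened count in a family of n species is roughly Binomial(n, p). A minimal sketch (the family size and threatened count here are hypothetical, for illustration only):

```python
from scipy.stats import binom

p = 1111 / 9672          # overall proportion of threatened bird species
n_family = 350           # hypothetical family size
observed = 90            # hypothetical threatened count in that family

expected = n_family * p
# Probability of seeing at least `observed` threatened species by chance
# alone, under the random-sample null.
p_tail = binom.sf(observed - 1, n_family, p)
print(f"expected ~ {expected:.1f}, P(X >= {observed}) = {p_tail:.2e}")
```

A tail probability this small flags a family with a significant excess of threatened species, the same logic the study applies before asking whether body size and fecundity explain the non-random pattern.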
Latent spatial models and sampling design for landscape genetics
Hanks, Ephraim M.; Hooten, Mevin B.; Knick, Steven T.; Oyler-McCance, Sara J.; Fike, Jennifer A.; Cross, Todd B.; Schwartz, Michael K.
2016-01-01
We propose a spatially-explicit approach for modeling genetic variation across space and illustrate how this approach can be used to optimize spatial prediction and sampling design for landscape genetic data. We propose a multinomial data model for categorical microsatellite allele data commonly used in landscape genetic studies, and introduce a latent spatial random effect to allow for spatial correlation between genetic observations. We illustrate how modern dimension reduction approaches to spatial statistics can allow for efficient computation in landscape genetic statistical models covering large spatial domains. We apply our approach to propose a retrospective spatial sampling design for greater sage-grouse (Centrocercus urophasianus) population genetics in the western United States.
Directory of Open Access Journals (Sweden)
Fernando Sergio Leitão Filho
2009-12-01
OBJECTIVE: To present the results of a randomized cross-sectional study conducted in 2001 by the Brazilian Center for Information on Psychotropic Drugs. METHODS: The survey covered a random sample of individuals aged 12-65 years residing in the 107 largest Brazilian municipalities (over 200,000 inhabitants), which represented 27.7% of the Brazilian population, estimated at 169,799,170 inhabitants at the time. A total of 8,589 interviews were conducted using the Substance Abuse and Mental Health Services Administration questionnaire, translated and adapted for use in Brazil. RESULTS: Of the sample as a whole, 41.1% of interviewees reported having used tobacco products at least once in their lives. The prevalence of daily tobacco use was 17.4% (20.3% among men and 14.8% among women). According to the criteria of the National Household Surveys on Drug Abuse, 9% of the population (10.1% of men and 7.9% of women) were nicotine dependent. CONCLUSIONS: The prevalence of daily tobacco use in the largest Brazilian municipalities is significantly lower in the present decade than the national prevalence at the end of the last century.
DEFF Research Database (Denmark)
Kobayashi, Sofie; Berge, Maria; Grout, Brian William Wilson
2017-01-01
This study contributes towards a better understanding of learning dynamics in doctoral supervision by analysing how learning opportunities are created in the interaction between supervisors and PhD students, using the notion of experiencing variation as a key to learning. Empirically, we based the study on four video-recorded sessions, with four different PhD students and their supervisors, all from the life sciences. Our analysis revealed that learning opportunities in the supervision sessions concerned either the content matter of research (for instance, understanding soil structure) or the research methods, more specifically how to produce valid results. Our results illustrate how supervisors and PhD students create a space of learning together in their particular discipline by varying critical aspects of their research in their discussions. Situations where more open-ended research issues...
Sampling Large Graphs for Anticipatory Analytics
2015-05-15
Lauren Edwards, Luke Johnson, Maja Milosavljevic, Vijay Gadepally, Benjamin A. Miller (Lincoln...). Random area sampling [8] is a "snowball" sampling method in which a set of random seed vertices is selected and areas around them are sampled... These challenges can be addressed through larger systems, greater human-in-the-loop involvement, or through complex algorithms. We are investigating the use of sampling to mitigate these challenges.
Variation-aware advanced CMOS devices and SRAM
Shin, Changhwan
2016-01-01
This book provides a comprehensive overview of contemporary issues in complementary metal-oxide semiconductor (CMOS) device design, describing how to overcome process-induced random variations such as line-edge-roughness, random-dopant-fluctuation, and work-function variation, and the applications of novel CMOS devices to cache memory (or Static Random Access Memory, SRAM). The author places emphasis on the physical understanding of process-induced random variation as well as the introduction of novel CMOS device structures and their application to SRAM. The book outlines the technical predicament facing state-of-the-art CMOS technology development, due to the effect of ever-increasing process-induced random/intrinsic variation in transistor performance at the sub-30-nm technology nodes. Therefore, the physical understanding of process-induced random/intrinsic variations and the technical solutions to address these issues play a key role in new CMOS technology development. This book aims to provide the reader...
Coordination of Conditional Poisson Samples
Directory of Open Access Journals (Sweden)
Grafström Anton
2015-12-01
Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. A Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first uses permanent random numbers and the list-sequential implementation of CP sampling. The second uses a CP sample in the first selection and provides an approximate one in the second selection, because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
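The permanent-random-number idea behind positive coordination can be sketched in a few lines. This is a hedged illustration using plain (not Conditional) Poisson sampling with made-up inclusion probabilities, not the authors' list-sequential CP implementation: each unit keeps one uniform random number across occasions, so samples drawn on different occasions overlap as much as possible.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000                       # population size (illustrative)
u = rng.random(N)              # permanent random numbers, one per unit

pi_1 = np.full(N, 0.10)        # inclusion probabilities, occasion 1
pi_2 = np.full(N, 0.12)        # occasion 2, slightly larger expected sample

s1 = u < pi_1                  # Poisson sample, occasion 1
s2 = u < pi_2                  # Poisson sample, occasion 2, same PRNs

overlap = int(np.sum(s1 & s2))
# Sharing PRNs nests the samples here: every unit selected on occasion 1
# is also selected on occasion 2, so the overlap is maximal.
```

Because the same `u` drives both selections and `pi_1 <= pi_2` elementwise, the occasion-1 sample is a subset of the occasion-2 sample; independent draws would overlap far less on average.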
Coupling methods for multistage sampling
Chauvet, Guillaume
2015-01-01
Multistage sampling is commonly used for household surveys when there exists no sampling frame, or when the population is scattered over a wide area. Multistage sampling usually introduces complex dependence in the selection of the final units, which makes asymptotic results quite difficult to prove. In this work, we consider multistage sampling with simple random sampling without replacement at the first stage, and with an arbitrary sampling design for further stages. We consider coupling ...
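A minimal two-stage sketch of the design described above (simple random sampling without replacement at both stages; the function name, cluster layout, and sample sizes are illustrative, not from the paper):

```python
import random

def two_stage_sample(clusters, m, k, seed=11):
    """Stage 1: SRSWOR of m clusters; stage 2: SRSWOR of k units per cluster."""
    rng = random.Random(seed)
    chosen = rng.sample(clusters, m)        # primary sampling units
    return [u for c in chosen
            for u in rng.sample(c, min(k, len(c)))]  # secondary units
```

For example, with five clusters of ten households each, `two_stage_sample(clusters, 2, 3)` yields six distinct units drawn from exactly two clusters.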
Blocked Randomization with Randomly Selected Block Sizes
Directory of Open Access Journals (Sweden)
Jimmy Efird
2010-12-01
When planning a randomized clinical trial, careful consideration must be given to how participants are selected for the various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has an equal likelihood of being assigned to the treatment versus the referent group. However, by chance an unequal number of individuals may be assigned to each arm of the study, decreasing the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Even so, the allocation process may be predictable, for example, when the investigator is not blinded and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
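The scheme described in the abstract can be sketched as follows. This is a hedged toy implementation, assuming a two-arm trial and block sizes that are random multiples of the number of arms; the function name and parameters are illustrative.

```python
import random

def blocked_allocation(n, arms=("A", "B"), multiples=(1, 2, 3), seed=7):
    """Assign n participants using balanced blocks of randomly chosen size.

    Each block contains every arm m times, with m drawn at random, so
    imbalance never exceeds one partial block, while the varying block
    size keeps the next assignment hard to predict.
    """
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n:
        m = rng.choice(multiples)       # block size = m * number of arms
        block = list(arms) * m          # perfectly balanced block
        rng.shuffle(block)              # random order within the block
        sequence.extend(block)
    return sequence[:n]
```

With a fixed block size an unblinded investigator can often deduce the last assignments in a block; drawing `m` at random removes that predictability while keeping the arms near-balanced.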
Desu, M M
2012-01-01
One of the most important problems in designing an experiment or a survey is sample size determination and this book presents the currently available methodology. It includes both random sampling from standard probability distributions and from finite populations. Also discussed is sample size determination for estimating parameters in a Bayesian setting by considering the posterior distribution of the parameter and specifying the necessary requirements. The determination of the sample size is considered for ranking and selection problems as well as for the design of clinical trials. Appropria
Statistical sampling strategies
International Nuclear Information System (INIS)
Andres, T.H.
1987-01-01
Systems assessment codes use mathematical models to simulate natural and engineered systems. Probabilistic systems assessment codes carry out multiple simulations to reveal the uncertainty in values of output variables due to uncertainty in the values of the model parameters. In this paper, methods are described for sampling sets of parameter values to be used in a probabilistic systems assessment code. Three Monte Carlo parameter selection methods are discussed: simple random sampling, Latin hypercube sampling, and sampling using two-level orthogonal arrays. Three post-selection transformations are also described: truncation, importance transformation, and discretization. Advantages and disadvantages of each method are summarized
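Of the three Monte Carlo selection methods listed above, Latin hypercube sampling admits a particularly compact sketch. This is a minimal illustration on the unit hypercube (uniform margins only; real assessment codes would then map each margin through the parameter's inverse CDF):

```python
import numpy as np

def latin_hypercube(n, d, rng):
    """n points in [0, 1)^d with each 1-D margin stratified into n equal cells."""
    strata = np.column_stack([rng.permutation(n) for _ in range(d)])
    return (strata + rng.random((n, d))) / n   # one jittered point per cell
```

Unlike simple random sampling, which can leave whole strata of a parameter range empty, every one-dimensional margin here receives exactly one point per stratum.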
International Nuclear Information System (INIS)
Citanovic, M.; Bezlaj, H.
1994-01-01
This presentation describes the essential boat sampling activities: on-site boat sampling process optimization and qualification; boat sampling of base material (beltline region); boat sampling of weld material (weld No. 4); and problems associated with weld crown variations, RPV shell inner radius tolerance, local corrosion pitting, and water clarity. The equipment used for boat sampling is also described. 7 pictures
Zhang, L.-C.; Patone, M.
2017-01-01
We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.
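A toy version of the T-stage snowball sampling reformulated above can be written directly on an adjacency-set representation. This is a hedged sketch (the dict-of-sets graph encoding and function signature are assumptions, not the authors' notation):

```python
import random

def snowball_sample(adj, n_seeds, stages, seed=3):
    """T-stage snowball: pick random seed vertices, then repeatedly add
    every neighbour of the current wave until `stages` waves are done."""
    rng = random.Random(seed)
    sampled = set(rng.sample(sorted(adj), n_seeds))
    wave = set(sampled)
    for _ in range(stages):
        wave = {v for u in wave for v in adj[u]} - sampled  # next wave
        sampled |= wave
    return sampled
```

On a complete graph a single seed and one stage already reach every vertex, while on a path graph each stage grows the sample by at most two vertices; the inclusion probabilities this induces are exactly what a Horvitz-Thompson treatment must account for.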
PCT Uncertainty Analysis Using Unscented Transform with Random Orthogonal Matrix
Energy Technology Data Exchange (ETDEWEB)
Fynana, Douglas A.; Ahn, Kwang-Il [KAERI, Daejeon (Korea, Republic of); Lee, John C. [Univ. of Michigan, Michigan (United States)
2015-05-15
Most Best Estimate Plus Uncertainty (BEPU) methods employ nonparametric order statistics through Wilks' formula to quantify uncertainties of best estimate simulations of nuclear power plant (NPP) transients. 95%/95% limits, the 95th percentile at a 95% confidence level, are obtained by randomly sampling all uncertainty contributors through conventional Monte Carlo (MC). Advantages are the simple implementation of MC sampling of input probability density functions (pdfs) and the limited computational expense of 1st, 2nd, and 3rd order Wilks' formula, requiring only 59, 93, or 124 simulations, respectively. A disadvantage of the small sample size is large sample-to-sample variation of statistical estimators. This paper presents a new efficient sampling-based algorithm for accurate estimation of the mean and variance of the output parameter pdf. The algorithm combines a deterministic sampling method, the unscented transform (UT), with random sampling through the generation of a random orthogonal matrix (ROM). The UT guarantees that the mean, covariance, and 3rd order moments of the multivariate input parameter distributions are exactly preserved by the sampled input points, and the orthogonal transformation of the points by a ROM guarantees that the sample errors of all 4th order and higher moments are unbiased. The UT with ROM algorithm is applied to the uncertainty quantification of the peak clad temperature (PCT) during a large break loss-of-coolant accident (LBLOCA) in an OPR1000 NPP to demonstrate the applicability of the new algorithm to BEPU. This paper presented a new algorithm combining the UT with ROM for efficient multivariate parameter sampling that ensures sample input covariance and 3rd order moments are exactly preserved and 4th moment errors are small and unbiased. The advantageous sample properties guarantee higher order accuracy and
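The moment-preserving property of the deterministic UT step can be illustrated on its own. This is a minimal sketch of basic symmetric sigma points only, not the authors' full UT-with-ROM algorithm; the 2n points below reproduce a target mean and covariance exactly:

```python
import numpy as np

def ut_sigma_points(mean, cov):
    """2n symmetric sigma points whose weighted sample mean and
    covariance match the target mean and covariance exactly."""
    mean = np.asarray(mean, dtype=float)
    n = mean.size
    L = np.linalg.cholesky(n * np.asarray(cov))   # matrix square root of n*cov
    pts = np.array([mean + L[:, i] for i in range(n)] +
                   [mean - L[:, i] for i in range(n)])
    weights = np.full(2 * n, 1.0 / (2 * n))       # equal weights
    return pts, weights
```

Each +/- pair contributes the same rank-one term to the weighted covariance, so the sum telescopes to L L^T / n = cov, with no Monte Carlo error at all in the first two moments.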
Dufour, Claire M. S.; Meynard, Christine; Watson, Johan; Rioux, Camille; Benhamou, Simon; Perez, Julie; du Plessis, Jurie J.; Avenant, Nico; Pillay, Neville; Ganem, Guila
2015-01-01
Coexistence often involves niche differentiation either as the result of environmental divergence, or in response to competition. Disentangling the causes of such divergence requires that environmental variation across space is taken into account, which is rarely done in empirical studies. We address the role of environmental variation versus competition in coexistence between two rodent species: Rhabdomys bechuanae (bechuanae) and Rhabdomys dilectus dilectus (dilectus) comparing their habitat preference and home range (HR) size in areas with similar climates, where their distributions abut (allopatry) or overlap (sympatry). Using Outlying Mean Index analyses, we test whether habitat characteristics of the species deviate significantly from a random sample of available habitats. In allopatry, results suggest habitat selection: dilectus preferring grasslands with little bare soil while bechuanae occurring in open shrublands. In sympatry, shrubland type habitats dominate and differences are less marked, yet dilectus selects habitats with more cover than bechuanae. Interestingly, bechuanae shows larger HRs than dilectus, and both species display larger HRs in sympatry. Further, HR overlaps between species are lower than expected. We discuss our results in light of data on the phylogeography of the genus and propose that evolution in allopatry resulted in adaptation leading to different habitat preferences, even at their distribution margins, a divergence expected to facilitate coexistence. However, since sympatry occurs in sites where environmental characteristics do not allow complete species separation, competition may explain reduced inter-species overlap and character displacement in HR size. This study reveals that both environmental variation and competition may shape species coexistence. PMID:25693176
680 SPATIAL VARIATION IN GROUNDWATER POLLUTION BY ...
African Journals Online (AJOL)
Osondu
...higher in Group A water samples, and reduced slightly in the Group B and then the Group C samples... Keywords: Spatial variation, Groundwater, Pollution, Abattoir, Effluents, Water quality. ...a situation which may likely pose a threat to the...
Cui, J.; Galand, M.; Yelle, R. V.; Vuitton, V.; Wahlund, J.-E.; Lavvas, P. P.; Mueller-Wodarg, I. C. F.; Kasprzak, W. T.; Waite, J. H.
2009-04-01
We present our analysis of the diurnal variations of Titan's ionosphere (between 1,000 and 1,400 km) based on a sample of Ion Neutral Mass Spectrometer (INMS) measurements in the Open Source Ion (OSI) mode obtained from 8 close encounters of the Cassini spacecraft with Titan. Though there is an overall ion depletion well beyond the terminator, the ion content on Titan's nightside is still appreciable, with a density plateau of ~700 cm-3 below ~1,300 km. Such a plateau is associated with the combination of distinct diurnal variations of light and heavy ions. Light ions (e.g. CH5+, HCNH+, C2H5+) show strong diurnal variation, with clear bite-outs in their nightside distributions. In contrast, heavy ions (e.g. c-C3H3+, C2H3CNH+, C6H7+) present modest diurnal variation, with significant densities observed on the nightside. We propose that the distinctions between light and heavy ions are associated with their different chemical loss pathways, with the former primarily through "fast" ion-neutral chemistry and the latter through "slow" electron dissociative recombination. The INMS data suggest day-to-night transport as an important source of ions on Titan's nightside, to be distinguished from the conventional scenario of auroral ionization by magnetospheric particles as the only ionizing source on the nightside. This is supported by the strong correlation between the observed night-to-day ion density ratios and the associated ion lifetimes. We construct a time-dependent ion chemistry model to investigate the effects of day-to-night transport on the ionospheric structures of Titan. The predicted diurnal variation has similar general characteristics to those observed, with some apparent discrepancies which could be reconciled by imposing fast horizontal thermal winds in Titan's upper atmosphere.
Process variation in electron beam sterilization
International Nuclear Information System (INIS)
Beck, Jeffrey A.
2012-01-01
The qualification and control of electron beam sterilization can be improved by the application of proven statistical analysis techniques such as Analysis of Variance (ANOVA) and statistical tolerance limits. These statistical techniques can be useful tools in: •Locating and quantifying the minimum and maximum absorbed dose in a product. •Estimating the expected process maximum dose, given a minimum sterilizing dose. •Setting a process minimum dose target, based on an allowance for random measurement and process variation. •Determining the dose relationship between a reference dosimeter and process minimum and maximum doses. This study investigates and demonstrates the application of these tools in qualifying electron beam sterilization, and compares the conclusions obtained with those obtained using practices recommended in the Guide for Process Control in Radiation Sterilization. The study supports the following conclusions for electron beam processes: 1. ANOVA is a more effective tool for evaluating the equivalency of absorbed doses than the methods suggested in the Guide. 2. Process limits computed using statistical tolerance limits more accurately reflect actual process variability than the AAMI method, which applies ±2 sample standard deviations (s) regardless of sample size. 3. The use of reference dose ratios lends itself to qualification using statistical tolerance limits. The current AAMI recommended approach may result in an overly optimistic estimate of the reference dose adjustment factor, as it is based on the application of ±2(s) tolerances regardless of sample size.
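Why ±2(s) limits are optimistic at small sample sizes can be shown with a short Monte Carlo sketch. This is a hedged illustration assuming a standard normal dose distribution (not the study's dosimetry data): it estimates how often an x̄ ± 2s interval actually contains at least 95% of the population.

```python
import math
import random

def interval_content(lo, hi):
    """Fraction of a standard normal population falling inside [lo, hi]."""
    phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return phi(hi) - phi(lo)

def coverage_of_2s_rule(n, trials=4000, seed=1):
    """Share of size-n samples whose x-bar +/- 2s interval contains
    at least 95% of the (standard normal) population."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
        m = sum(xs) / n
        s = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
        if interval_content(m - 2 * s, m + 2 * s) >= 0.95:
            hits += 1
    return hits / trials
```

At small n the coverage falls well short of the nominal level, because s itself is a noisy (and on average low) estimate of the true spread; a proper statistical tolerance limit widens the factor beyond 2 as n shrinks.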
Energy Technology Data Exchange (ETDEWEB)
Macias B, L.R.; Garcia C, R.M.; De Ita de la Torre, A.; Chavez R, A. [Instituto Nacional de Investigaciones Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico)
2000-07-01
In this work, diffraction and fluorescence techniques were used to determine the presence of elements in a known compound, ZrSiO4, under different pressure conditions. In preparing the samples, pressures ranging from 1,600 down to 350 kN/m² were applied, and apparent variations in the concentrations of the Zr and Si elements were detected. (Author)
Brus, D.J.
2015-01-01
In balanced sampling a linear relation between the soil property of interest and one or more covariates with known means is exploited in selecting the sampling locations. Recent developments make this sampling design attractive for statistical soil surveys. This paper introduces balanced sampling
Hanson, Jeffrey O; Rhodes, Jonathan R; Riginos, Cynthia; Fuller, Richard A
2017-11-28
Protected areas buffer species from anthropogenic threats and provide places for the processes that generate and maintain biodiversity to continue. However, genetic variation, the raw material for evolution, is difficult to capture in conservation planning, not least because genetic data require considerable resources to obtain and analyze. Here we show that freely available environmental and geographic distance variables can be highly effective surrogates in conservation planning for representing adaptive and neutral intraspecific genetic variation. We obtained occurrence