Independent random sampling methods
Martino, Luca; Míguez, Joaquín
2018-01-01
This book systematically addresses the design and analysis of efficient techniques for independent random sampling. Both general-purpose approaches, which can be used to generate samples from arbitrary probability distributions, and tailored techniques, designed to efficiently address common real-world practical problems, are introduced and discussed in detail. In turn, the monograph presents fundamental results and methodologies in the field, elaborating and developing them into the latest techniques. The theory and methods are illustrated with a varied collection of examples, which are discussed in detail in the text and supplemented with ready-to-run computer code. The main problem addressed in the book is how to generate independent random samples from an arbitrary probability distribution with the weakest possible constraints or assumptions in a form suitable for practical implementation. The authors review the fundamental results and methods in the field, address the latest methods, and emphasize the li...
Chu, Hui-May; Ette, Ene I
2005-09-02
This study was performed to develop a new nonparametric approach for the estimation of robust tissue-to-plasma ratio from extremely sparsely sampled paired data (i.e., one sample each from plasma and tissue per subject). Tissue-to-plasma ratio was estimated from paired/unpaired experimental data using the independent time points approach, area under the curve (AUC) values calculated with the naive data averaging approach, and AUC values calculated using sampling-based approaches (e.g., the pseudoprofile-based bootstrap [PpbB] approach and the random sampling approach [our proposed approach]). The random sampling approach involves the use of a 2-phase algorithm. The convergence of the sampling/resampling approaches was investigated, as well as the robustness of the estimates produced by the different approaches. To evaluate the latter, new data sets were generated by introducing outlier(s) into the real data set. One or two concentration values were inflated by 10% to 40% from their original values to produce the outliers. Tissue-to-plasma ratios computed using the independent time points approach varied between 0 and 50 across time points. The ratio obtained from AUC values acquired using the naive data averaging approach was not associated with any measure of uncertainty or variability. Calculating the ratio without regard to pairing yielded poorer estimates. The random sampling and pseudoprofile-based bootstrap approaches yielded tissue-to-plasma ratios with uncertainty and variability. However, the random sampling approach, because of the 2-phase nature of its algorithm, yielded more robust estimates and required fewer replications. Therefore, a 2-phase random sampling approach is proposed for the robust estimation of tissue-to-plasma ratio from extremely sparsely sampled data.
Systematic versus random sampling in stereological studies.
West, Mark J
2012-12-01
The sampling that takes place at all levels of an experimental design must be random if the estimate is to be unbiased in a statistical sense. There are two fundamental ways by which one can make a random sample of the sections and positions to be probed on the sections. Using a card-sampling analogy, one can pick any card at all out of a deck of cards. This is referred to as independent random sampling because the sampling of any one card is made without reference to the position of the other cards. The other approach to obtaining a random sample would be to pick a card within a set number of cards and others at equal intervals within the deck. Systematic sampling along one axis of many biological structures is more efficient than random sampling, because most biological structures are not randomly organized. This article discusses the merits of systematic versus random sampling in stereological studies.
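The card-deck contrast above can be made concrete. A minimal sketch (illustrative, not from the article), assuming a "deck" of 52 numbered sections: independent random sampling picks any k items freely, while systematic sampling picks a random start and then every interval-th item.

```python
import random

def independent_sample(population, k, rng):
    """Independent random sampling: any k 'cards', each chosen without
    reference to the positions of the others."""
    return rng.sample(population, k)

def systematic_sample(population, k, rng):
    """Systematic sampling: a random start, then picks at equal intervals."""
    interval = len(population) // k
    start = rng.randrange(interval)
    return [population[start + i * interval] for i in range(k)]

rng = random.Random(0)
deck = list(range(52))
ind = independent_sample(deck, 4, rng)
sys_picks = systematic_sample(deck, 4, rng)
gaps = {b - a for a, b in zip(sys_picks, sys_picks[1:])}
print(sorted(ind), sys_picks, gaps)  # systematic gaps are all equal
```

Both schemes give every item the same inclusion probability; the systematic one simply spreads the picks evenly along the axis, which is why it is more efficient for structures that are not randomly organized.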
Freeman, Lindsay M; Pang, Lin; Fainman, Yeshaiahu
2018-05-09
The analysis of DNA has led to revolutionary advancements in the fields of medical diagnostics, genomics, prenatal screening, and forensic science, with the global DNA testing market expected to reach revenues of USD 10.04 billion per year by 2020. However, the current methods for DNA analysis remain dependent on fluorophores or conjugated proteins, leading to high costs associated with consumable materials and manual labor. Here, we demonstrate a potential label-free DNA composition detection method using surface-enhanced Raman spectroscopy (SERS) in which we identify the composition of cytosine and adenine within single strands of DNA. This approach depends on the fact that there is one phosphate backbone per nucleotide, which we use as a reference to compensate for systematic measurement variations. We utilize plasmonic nanomaterials with random Raman sampling to perform label-free detection of the nucleotide composition within DNA strands, generating a calibration curve from standard samples of DNA and demonstrating the capability of resolving the nucleotide composition. The work represents an innovative way to detect the DNA composition within DNA strands without the necessity of attached labels, offering a highly sensitive and reproducible method that factors in random sampling to minimize error.
A Bayesian Justification for Random Sampling in Sample Survey
Directory of Open Access Journals (Sweden)
Glen Meeden
2012-07-01
In the usual Bayesian approach to survey sampling the sampling design plays a minimal role, at best. Although a close relationship between exchangeable prior distributions and simple random sampling has been noted, it is not clear how to formally integrate simple random sampling into the Bayesian paradigm. Recently it has been argued that the sampling design can be thought of as part of a Bayesian's prior distribution. We will show here that under this scenario simple random sampling can be given a Bayesian justification in survey sampling.
Agashiwala, Rajiv M; Louis, Elan D; Hof, Patrick R; Perl, Daniel P
2008-10-21
Non-biased systematic sampling using the principles of stereology provides accurate quantitative estimates of objects within neuroanatomic structures. However, the basic principles of stereology are not optimally suited for counting objects that selectively exist within a limited but complex and convoluted portion of the sample, such as occurs when counting cerebellar Purkinje cells. In an effort to quantify Purkinje cells in association with certain neurodegenerative disorders, we developed a new method for stereologic sampling of the cerebellar cortex, involving calculating the volume of the cerebellar tissues, identifying and isolating the Purkinje cell layer and using this information to extrapolate non-biased systematic sampling data to estimate the total number of Purkinje cells in the tissues. Using this approach, we counted Purkinje cells in the right cerebella of four human male control specimens, aged 41, 67, 70 and 84 years, and estimated the total Purkinje cell number for the four entire cerebella to be 27.03, 19.74, 20.44 and 22.03 million cells, respectively. The precision of the method is seen when comparing the density of the cells within the tissue: 266,274, 173,166, 167,603 and 183,575 cells/cm3, respectively. Prior literature documents Purkinje cell counts ranging from 14.8 to 30.5 million cells. These data demonstrate the accuracy of our approach. Our novel approach, which offers an improvement over previous methodologies, is of value for quantitative work of this nature. This approach could be applied to morphometric studies of other similarly complex tissues as well.
Sulaiman, Nabil; Albadawi, Salah; Abusnana, Salah; Fikri, Mahmoud; Madani, Abdulrazzag; Mairghani, Maisoon; Alawadi, Fatheya; Zimmet, Paul; Shaw, Jonathan
2015-09-01
The prevalence of diabetes has risen rapidly in the Middle East, particularly in the Gulf Region. However, some prevalence estimates have not fully accounted for large migrant worker populations and have focused on minority indigenous populations. The objectives of the UAE National Diabetes and Lifestyle Study are to: (i) define the prevalence of, and risk factors for, T2DM; (ii) describe the distribution and determinants of T2DM risk factors; (iii) study health knowledge and attitudes; (iv) identify gene-environment interactions; and (v) develop baseline data for evaluation of future intervention programs. Given the high burden of diabetes in the region and the absence of accurate data on non-UAE nationals in the UAE, a representative sample of non-UAE nationals was essential. We used an innovative methodology in which non-UAE nationals were sampled when attending the mandatory biannual health check that is required for visa renewal. Such an approach could also be used in other countries in the region. Complete data were available for 2719 eligible non-UAE nationals (25.9% Arabs, 70.7% Asian non-Arabs, 1.1% African non-Arabs, and 2.3% Westerners). Most were men < 65 years of age. The response rate was 68%, and non-response was greater among women than men; 26.9% earned less than UAE Dirham (AED) 24 000 (US$6500), and the most common areas of employment were as managers or professionals, in service and sales, and in unskilled occupations. The largest group (37.4%) had completed high school, and 4.1% had a postgraduate degree. This novel methodology could provide insights for epidemiological studies in the UAE and other Gulf States, particularly for expatriates. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
Systematic random sampling of the comet assay.
McArt, Darragh G; Wasson, Gillian R; McKerr, George; Saetzler, Kurt; Reed, Matt; Howard, C Vyvyan
2009-07-01
The comet assay is a technique used to quantify DNA damage and repair at a cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity, which can then be electrophoresed. The damaged DNA can enter the agarose and migrate while the undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA compared to the total DNA in the cell. The fundamental basis of these arbitrary values is obtained in the comet acquisition phase using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current methods deployed in such an acquisition are expected to be both objective and random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers both objective and unbiased comet selection. In order to achieve this, we have adopted a survey sampling approach widely used in stereology, which offers a method of systematic random sampling (SRS). This is desirable as it offers an impartial and reproducible method of comet analysis that can be used both manually and in automated form. By making use of an unbiased sampling frame and using microscope verniers, we are able to increase the precision of estimates of DNA damage. Results obtained from a multiple-user pooled variation experiment showed that the SRS technique attained a lower variability than the traditional approach. The analysis of a single-user repetition experiment showed greater individual variances while not being detrimental to overall averages. This suggests that the SRS method offers a better reflection of DNA damage for a given slide and also offers better user reproducibility.
k-Means: Random Sampling Procedure
Indian Academy of Sciences (India)
k-Means: Random Sampling Procedure. Optimal 1-Mean is Approximation of Centroid (Inaba et al). S = random sample of size O(1/ε); centroid of S is a (1+ε)-approx centroid of P with constant probability.
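The quoted result (Inaba et al.) can be checked numerically. A hedged sketch, with the point set and sample size chosen for illustration: the centroid of a small random sample S is a near-optimal center for the 1-mean cost over the full set P.

```python
import random

def one_mean_cost(points, center):
    """Sum of squared distances from all points to a candidate center."""
    cx, cy = center
    return sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in points)

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

rng = random.Random(42)
P = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(5000)]
opt = one_mean_cost(P, centroid(P))   # the centroid is the optimal 1-mean center
S = rng.sample(P, 40)                 # small sample, size on the order of 1/eps
ratio = one_mean_cost(P, centroid(S)) / opt
print(ratio)  # a (1+eps)-style approximation factor, slightly above 1
```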
Nicklas, Jacinda M; Skurnik, Geraldine; Zera, Chloe A; Reforma, Liberty G; Levkoff, Sue E; Seely, Ellen W
2016-02-01
The postpartum period is a window of opportunity for diabetes prevention in women with recent gestational diabetes (GDM), but recruitment for clinical trials during this period of life is a major challenge. We adapted a social-ecologic model to develop a multi-level recruitment strategy at the macro (high or institutional level), meso (mid or provider level), and micro (individual) levels. Our goal was to recruit 100 women with recent GDM into the Balance after Baby randomized controlled trial over a 17-month period. Participants were asked to attend three in-person study visits at 6 weeks, 6, and 12 months postpartum. They were randomized into a control arm or a web-based intervention arm at the end of the baseline visit at six weeks postpartum. At the end of the recruitment period, we compared population characteristics of our enrolled subjects to the entire population of women with GDM delivering at Brigham and Women's Hospital (BWH). We successfully recruited 107 of 156 (69 %) women assessed for eligibility, with the majority (92) recruited during pregnancy at a mean 30 (SD ± 5) weeks of gestation, and 15 recruited postpartum, at a mean 2 (SD ± 3) weeks postpartum. 78 subjects attended the initial baseline visit, and 75 subjects were randomized into the trial at a mean 7 (SD ± 2) weeks postpartum. The recruited subjects were similar in age and race/ethnicity to the total population of 538 GDM deliveries at BWH over the 17-month recruitment period. Our multilevel approach allowed us to successfully meet our recruitment goal and recruit a representative sample of women with recent GDM. We believe that our most successful strategies included using a dedicated in-person recruiter, integrating recruitment into clinical flow, allowing for flexibility in recruitment, minimizing barriers to participation, and using an opt-out strategy with providers. Although the majority of women were recruited while pregnant, women recruited in the early postpartum period were
Sampling problems for randomly broken sticks
Energy Technology Data Exchange (ETDEWEB)
Huillet, Thierry [Laboratoire de Physique Theorique et Modelisation, CNRS-UMR 8089 et Universite de Cergy-Pontoise, 5 mail Gay-Lussac, 95031, Neuville sur Oise (France)
2003-04-11
Consider the random partitioning model of a population (represented by a stick of length 1) into n species (fragments) with identically distributed random weights (sizes). Upon ranking the fragments' weights according to ascending sizes, let S_{m:n} be the size of the mth smallest fragment. Assume that some observer is sampling such populations as follows: drop at random k points (the sample size) onto this stick and record the corresponding numbers of visited fragments. We shall investigate the following sampling problems: (1) What is the sample size if the sampling is carried out until the first visit of the smallest fragment (size S_{1:n})? (2) For a given sample size, have all the fragments of the stick been visited at least once or not? This question is related to Feller's random coupon collector problem. (3) In what order are new fragments being discovered, and what is the random number of samples separating the discovery of consecutive new fragments until exhaustion of the list? For this problem, the distribution of the size-biased permutation of the species' weights (the sequence of their weights in their order of appearance) is needed and studied.
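Sampling problem (1) above is straightforward to simulate. A hedged sketch (fragment weights from uniform cut points on [0, 1]; function names are ours, not the paper's):

```python
import bisect
import random

def break_stick(n, rng):
    """Partition [0, 1] into n fragments using n - 1 uniform cut points."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    edges = [0.0] + cuts + [1.0]
    return edges, [b - a for a, b in zip(edges, edges[1:])]

def samples_until_smallest_visited(edges, weights, rng):
    """Drop uniform points on the stick until the smallest fragment is hit;
    return how many samples that took (sampling problem (1))."""
    target = min(range(len(weights)), key=weights.__getitem__)
    k = 0
    while True:
        k += 1
        frag = bisect.bisect_right(edges, rng.random()) - 1
        if frag == target:
            return k

rng = random.Random(1)
edges, weights = break_stick(10, rng)
trials = [samples_until_smallest_visited(edges, weights, rng) for _ in range(100)]
print(min(weights), sum(trials) / len(trials))  # waiting time grows as the smallest fragment shrinks
```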
Padilla, Alberto
2009-01-01
Systematic sampling is a commonly used technique due to its simplicity and ease of implementation. The drawback of this simplicity is that it is not possible to estimate the design variance without bias. There are several ways to circumvent this problem. One method is to suppose that the variable of interest has a random order in the population, so the sample variance of simple random sampling without replacement is used. By means of a mixed random - systematic sample, an unbiased estimator o...
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, which indicates signal sampling points in time. Practical random sampling patterns are constrained by ADC characteristics and application requirements. In this paper, we introduce statistical methods which evaluate random sampling pattern generators with emphasis on practical applications. Furthermore, we propose ... algorithm generates random sampling patterns dedicated to event-driven ADCs better than existing sampling pattern generators. Finally, implementation issues of random sampling patterns are discussed.
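The paper's own generator is not reproduced here. As an illustrative stand-in, the sketch below draws a random sampling pattern on a time grid subject to a minimum-spacing constraint (a proxy for an event-driven ADC's minimum conversion time), using the standard shrink-and-reexpand construction so every feasible pattern is equally likely. All names and parameter values are assumptions.

```python
import random

def constrained_pattern(n_grid, n_samples, min_gap, rng):
    """Draw n_samples sampling points on a grid of n_grid time slots,
    keeping at least min_gap empty slots between consecutive points."""
    # Choose positions in a shrunken grid, then re-expand: this bijection
    # makes each pattern satisfying the spacing constraint equally likely.
    slack = n_grid - (n_samples - 1) * min_gap
    if slack < n_samples:
        raise ValueError("constraint infeasible for this grid")
    base = sorted(rng.sample(range(slack), n_samples))
    return [p + i * min_gap for i, p in enumerate(base)]

rng = random.Random(7)
pat = constrained_pattern(n_grid=100, n_samples=10, min_gap=3, rng=rng)
gaps = [b - a for a, b in zip(pat, pat[1:])]
print(pat, min(gaps))  # every gap exceeds the ADC dead time
```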
Tempia, S; Salman, M D; Keefe, T; Morley, P; Freier, J E; DeMartini, J C; Wamwayi, H M; Njeumi, F; Soumaré, B; Abdi, A M
2010-12-01
A cross-sectional sero-survey, using a two-stage cluster sampling design, was conducted between 2002 and 2003 in ten administrative regions of central and southern Somalia, to estimate the seroprevalence and geographic distribution of rinderpest (RP) in the study area, as well as to identify potential risk factors for the observed seroprevalence distribution. The study was also used to test the feasibility of the spatially integrated investigation technique in nomadic and semi-nomadic pastoral systems. In the absence of a systematic list of livestock holdings, the primary sampling units were selected by generating random map coordinates. A total of 9,216 serum samples were collected from cattle aged 12 to 36 months at 562 sampling sites. Two apparent clusters of RP seroprevalence were detected. Four potential risk factors associated with the observed seroprevalence were identified: the mobility of cattle herds, the cattle population density, the proximity of cattle herds to cattle trade routes and cattle herd size. Risk maps were then generated to assist in designing more targeted surveillance strategies. The observed seroprevalence in these areas declined over time. In subsequent years, similar seroprevalence studies in neighbouring areas of Kenya and Ethiopia also showed a very low seroprevalence of RP or the absence of antibodies against RP. The progressive decline in RP antibody prevalence is consistent with virus extinction. Verification of freedom from RP infection in the Somali ecosystem is currently in progress.
Acceptance sampling using judgmental and randomly selected samples
Energy Technology Data Exchange (ETDEWEB)
Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl
2010-09-01
We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling, where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
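A much simpler special case of the setting above (a single group, all observed samples acceptable, a conjugate Beta prior; this is not the authors' two-group model) can be sketched with a posterior calculation: after n clean samples, what is the probability that at least a fraction p of the population is acceptable?

```python
import math

def beta_pdf(theta, a, b):
    """Density of the Beta(a, b) distribution at theta."""
    logc = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(logc + (a - 1) * math.log(theta) + (b - 1) * math.log(1 - theta))

def prob_acceptable_fraction_at_least(p, n, a=1.0, b=1.0, steps=20000):
    """P(theta >= p | n sampled items, all acceptable), where theta is the
    acceptable fraction. Beta(a, b) prior -> Beta(a + n, b) posterior;
    the tail probability is computed with the trapezoid rule."""
    a_post, b_post = a + n, b
    h = (1.0 - p) / steps
    xs = [p + i * h for i in range(steps + 1)]
    ys = [beta_pdf(min(max(x, 1e-12), 1 - 1e-12), a_post, b_post) for x in xs]
    return h * (sum(ys) - 0.5 * (ys[0] + ys[-1]))

# Uniform prior, 58 clean samples: P(theta >= 0.95 | data) = 1 - 0.95**59.
p95 = prob_acceptable_fraction_at_least(0.95, 58)
print(round(p95, 3))  # -> 0.951
```

With zero observed defects the posterior tail has the closed form 1 - p**(n+1) under a uniform prior, which the numerical integral reproduces.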
A random sampling procedure for anisotropic distributions
International Nuclear Information System (INIS)
Nagrajan, P.S.; Sethulakshmi, P.; Raghavendran, C.P.; Bhatia, D.P.
1975-01-01
A procedure is described for sampling the scattering angle of neutrons as per specified angular distribution data. The cosine of the scattering angle is written as a double Legendre expansion in the incident neutron energy and a random number. The coefficients of the expansion are given for C, N, O, Si, Ca, Fe and Pb and these elements are of interest in dosimetry and shielding. (author)
BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal
International Nuclear Information System (INIS)
Sagar, B.
1989-01-01
1 - Description of program or function: Random samples for different distribution types are generated. Distribution types as required for performance assessment modeling of geologic nuclear waste disposal are provided. These are: - Uniform, - Log-uniform (base 10 or natural), - Normal, - Lognormal (base 10 or natural), - Exponential, - Bernoulli, - User defined continuous distribution. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions, whose usual definition has an infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
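The two ingredients this record describes, a linear congruential generator for uniforms plus transforms to the other distributions, can be sketched as follows (constants, names, and the inverse-transform choice are illustrative, not BWIP's actual implementation):

```python
import math

class LCG:
    """Minimal linear congruential generator for uniforms on [0, 1)."""
    def __init__(self, seed=12345):
        self.state = seed

    def uniform(self):
        # Numerical-Recipes-style constants; modulus 2**32.
        self.state = (1664525 * self.state + 1013904223) % 2 ** 32
        return self.state / 2 ** 32

def exponential(rng, rate):
    """Inverse transform: X = -ln(1 - U)/rate has an Exponential(rate) law."""
    return -math.log(1.0 - rng.uniform()) / rate

def bernoulli(rng, p):
    """Bernoulli(p) from a single uniform draw."""
    return 1 if rng.uniform() < p else 0

rng = LCG(seed=1)
draws = [exponential(rng, rate=2.0) for _ in range(10000)]
mean = sum(draws) / len(draws)
print(mean)  # close to 1/rate = 0.5
```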
Phobos Sample Return: Next Approach
Zelenyi, Lev; Martynov, Maxim; Zakharov, Alexander; Korablev, Oleg; Ivanov, Alexey; Karabadzak, George
The Martian moons still remain a mystery after numerous studies by Mars orbiting spacecraft. Their study covers three major topics related to (1) the Solar system in general (formation and evolution, origin of planetary satellites, origin and evolution of life); (2) small bodies (captured asteroid, or remnants of Mars formation, or reaccreted Mars ejecta); (3) Mars (formation and evolution of Mars; Mars ejecta at the satellites). As reviewed by Galimov [2010], most of the above questions require sample return from a Martian moon, while some (e.g. the characterization of the organic matter) could also be answered by in situ experiments. There is the possibility to obtain a sample of Mars material by sampling Phobos: following Chappaz et al. [2012], a 200-g sample could contain 10^-7 g of Mars surface material launched during the past 1 million years, or 5x10^-5 g of Mars material launched during the past 10 million years, or 5x10^10 individual particles from Mars, quantities suitable for accurate laboratory analyses. The studies of Phobos have been of high priority in the Russian program on planetary research for many years. The Phobos-88 mission consisted of two spacecraft (Phobos-1, Phobos-2) and aimed at an approach to Phobos at 50 m, remote studies, and the release of small landers (long-living DAS stations). This mission implemented its program incompletely: it returned information about the Martian environment and atmosphere. The next project, Phobos Sample Return (Phobos-Grunt), initially planned for early 2000, was delayed several times owing to budget difficulties; the spacecraft failed to leave Earth orbit in 2011. The recovery of the science goals of this mission and the delivery of samples of Phobos to Earth remain of highest priority for the Russian scientific community. The next Phobos SR mission, named Boomerang, was postponed following the ExoMars cooperation, but is considered the next in the line of planetary exploration, suitable for launch around 2022.
Decompounding random sums: A nonparametric approach
DEFF Research Database (Denmark)
Hansen, Martin Bøgsted; Pitts, Susan M.
Observations from sums of random variables with a random number of summands, known as random, compound or stopped sums, arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably ...
A random spatial sampling method in a rural developing nation
Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas
2014-01-01
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...
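A generic stratified random sampling step of the kind described above can be sketched as follows (the sampling frame, strata, and fractions are hypothetical):

```python
import random

def stratified_sample(strata, fractions, rng):
    """Sample each stratum independently at its own sampling fraction,
    so hard-to-reach strata can be deliberately oversampled."""
    out = {}
    for name, units in strata.items():
        k = max(1, round(fractions[name] * len(units)))
        out[name] = rng.sample(units, k)
    return out

rng = random.Random(3)
# Hypothetical frame: villages grouped by region; the small, hard-to-enumerate
# inland stratum is sampled at a higher fraction.
strata = {"coastal": [f"c{i}" for i in range(200)],
          "inland":  [f"i{i}" for i in range(50)]}
sample = stratified_sample(strata, {"coastal": 0.05, "inland": 0.20}, rng)
print({k: len(v) for k, v in sample.items()})  # {'coastal': 10, 'inland': 10}
```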
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, C. M.; Buchhave, P.; K. George, W.
... algorithms: sample-and-hold and the direct spectral estimator without residence time weighting. The computer-generated signal is a Poisson process with a sample rate proportional to velocity magnitude that consists of well-defined frequency content, which makes bias easy to spot. The idea ...
A random matrix approach to VARMA processes
International Nuclear Information System (INIS)
Burda, Zdzislaw; Jarosz, Andrzej; Nowak, Maciej A; Snarska, Malgorzata
2010-01-01
We apply random matrix theory to derive the spectral density of large sample covariance matrices generated by multivariate VMA(q), VAR(q) and VARMA(q1, q2) processes. In particular, we consider a limit where the number of random variables N and the number of consecutive time measurements T are large but the ratio N/T is fixed. In this regime, the underlying random matrices are asymptotically equivalent to free random variables (FRV). We apply the FRV calculus to calculate the eigenvalue density of the sample covariance for several VARMA-type processes. We explicitly solve the VARMA(1, 1) case and demonstrate perfect agreement between the analytical result and the spectra obtained by Monte Carlo simulations. The proposed method is purely algebraic and can be easily generalized to q1 > 1 and q2 > 1.
Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker
2016-01-01
Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...
Biro, Peter A
2013-02-01
Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
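The catchability bias described above is easy to reproduce in a toy simulation. A hedged sketch assuming, per the two-fold magnitude reported, that fast growers are twice as trappable (population sizes and trait values are illustrative):

```python
import random

def trap_population(growth_rates, catchability, n_caught, rng):
    """Capture animals one at a time, without replacement, with probability
    proportional to each animal's hidden catchability; return the growth
    rates of the captured animals."""
    pool = list(range(len(growth_rates)))
    caught = []
    while len(caught) < n_caught:
        i = rng.choices(pool, weights=[catchability[j] for j in pool])[0]
        pool.remove(i)
        caught.append(growth_rates[i])
    return caught

rng = random.Random(0)
# Equal numbers of slow (1.0) and fast (2.0) growers; fast growers are
# assumed twice as catchable.
growth = [1.0] * 500 + [2.0] * 500
catch_weights = [1.0] * 500 + [2.0] * 500
sample = trap_population(growth, catch_weights, 100, rng)
true_mean = sum(growth) / len(growth)        # 1.5
sample_mean = sum(sample) / len(sample)
print(true_mean, sample_mean)  # the trapped sample overestimates mean growth
```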
Statistical sampling approaches for soil monitoring
Brus, D.J.
2014-01-01
This paper describes three statistical sampling approaches for regional soil monitoring, a design-based, a model-based and a hybrid approach. In the model-based approach a space-time model is exploited to predict global statistical parameters of interest such as the space-time mean. In the hybrid
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies: choosing seed node (CSN) random walk and no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, respectively. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within limited steps. Thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir networks, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
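A minimal sketch of the no-retracing (NR) strategy on a toy graph (the CSN strategy and the ER/BA/WS/USAir test networks are not reproduced; the graph and names are illustrative):

```python
import random

def nr_random_walk(adj, seed_node, n_steps, rng):
    """No-retracing random walk: never step straight back along the edge
    just traversed, unless retracing is the only option (a dead end)."""
    current, previous = seed_node, None
    visited = [current]
    for _ in range(n_steps):
        choices = [v for v in adj[current] if v != previous]
        if not choices:          # dead end: retracing is unavoidable
            choices = adj[current]
        nxt = rng.choice(choices)
        previous, current = current, nxt
        visited.append(current)
    return visited

rng = random.Random(5)
# Small undirected test graph: a 5-cycle with one chord (0-2).
adj = {0: [1, 4, 2], 1: [0, 2], 2: [1, 3, 0], 3: [2, 4], 4: [3, 0]}
walk = nr_random_walk(adj, seed_node=0, n_steps=50, rng=rng)
backtracks = sum(1 for a, b, c in zip(walk, walk[1:], walk[2:]) if a == c)
print(len(set(walk)), backtracks)  # coverage of the subnet; no backtracking here
```

Because every node in this graph has degree at least two, the walk never revisits the node it just left, which is exactly the path-overlap reduction the NR strategy targets.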
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Directory of Open Access Journals (Sweden)
Xiaomeng Zhang
2006-01-01
The Monte Carlo (MC) method is a popular simulation of photon propagation in turbid media, but its main problem is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast algorithm of the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. Also, we present a reconstruction approach to estimate the position of the fluorescent source based on trial-and-error as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
Sampling large random knots in a confined space
International Nuclear Information System (INIS)
Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M
2007-01-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is at the order of O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
Sampling large random knots in a confined space
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
Sampling large random knots in a confined space
Energy Technology Data Exchange (ETDEWEB)
Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)
2007-09-28
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, the two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
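The uniform random polygon model described in the entries above is simple to sketch: sample n vertices independently and uniformly from a confined region, close the polygon, and examine a planar projection. The helper names and the naive O(n^2) crossing count below are illustrative only, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(42)

def uniform_random_polygon(n):
    """Sample n vertices independently and uniformly from the unit cube;
    the closed polygon joins consecutive vertices (and the last to the first)."""
    return rng.random((n, 3))

def projected_crossings(poly):
    """Naive O(n^2) count of crossings in the diagram obtained by
    projecting the polygon onto the xy-plane."""
    n = len(poly)
    pts = poly[:, :2]
    segs = [(pts[i], pts[(i + 1) % n]) for i in range(n)]

    def crosses(a, b):
        # Proper intersection test via signed-area orientations.
        (p1, p2), (p3, p4) = a, b
        d = lambda p, q, r: (q[0]-p[0])*(r[1]-p[1]) - (q[1]-p[1])*(r[0]-p[0])
        return d(p1, p2, p3) * d(p1, p2, p4) < 0 and d(p3, p4, p1) * d(p3, p4, p2) < 0

    count = 0
    for i in range(n):
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # edges adjacent through the closure
            if crosses(segs[i], segs[j]):
                count += 1
    return count

poly = uniform_random_polygon(50)
c = projected_crossings(poly)
```

Averaging `c` over many polygons for increasing n is how one would observe numerically the O(n^2) growth in average crossing number stated in the abstracts.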
SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS
Sampath Sundaram; Ammani Sivaraman
2010-01-01
In this paper an attempt is made to extend linear systematic sampling using multiple random starts, due to Gautschi (1957), to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods are compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic samplin...
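Linear systematic sampling with multiple random starts, as referenced in this abstract, can be sketched as follows (a minimal illustration of the Gautschi-style scheme, with invented parameter values; not the authors' code): with m starts, each systematic sub-sample uses stride m*k and contributes n/m units.

```python
import random

random.seed(7)

def lss_multiple_starts(N, n, m):
    """Linear systematic sampling with m random starts: the single-start
    interval is k = N // n; with m starts the stride becomes m*k, and each
    of the m distinct random starts contributes n // m units."""
    k = N // n
    starts = random.sample(range(m * k), m)  # distinct starts in [0, m*k)
    sample = []
    for s in starts:
        sample.extend(range(s, N, m * k))
    return sorted(sample)

units = lss_multiple_starts(N=100, n=10, m=2)
```

With N=100, n=10 and m=2 this draws two interleaved systematic samples of 5 units each; using several starts is what makes an unbiased variance estimate possible, unlike single-start systematic sampling.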
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
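The comparison in this abstract can be mimicked with a toy simulation (population size, error rate and dollar amounts are invented, chosen only to resemble the skewed error distributions described; this is not the authors' data or method): a 300-claim random sample captures only the sampled fraction of total error dollars, while a 100%-of-claims audit recovers them all.

```python
import random

random.seed(1)

# Hypothetical claim population: 20,000 claims, ~1% carry error dollars
# drawn from a skewed (lognormal) distribution; the rest are error-free.
claims = [random.lognormvariate(6, 1.5) if random.random() < 0.01 else 0.0
          for _ in range(20_000)]
total_error = sum(claims)  # what a 100%-of-claims audit would find

def audit_sample(claims, n):
    """Error dollars found by auditing a simple random sample of n claims."""
    return sum(random.sample(claims, n))

found_300 = audit_sample(claims, 300)   # random-sample audit
found_all = sum(claims)                 # full audit
```

The gap between `found_300` and `found_all` illustrates why the study favors 100%-of-claims auditing for error recovery (a sample estimates the error *rate*, but does not identify the unsampled errors themselves).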
Optimizing sampling approaches along ecological gradients
DEFF Research Database (Denmark)
Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel
2016-01-01
1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying patterns in a statistically solid and reproducible manner, given the normal restrictions in labour, time and money. However, a technical guideline about an adequate sampling design to maximize prediction success under restricted resources is lacking. This study aims at developing such a solid and reproducible guideline for sampling along gradients in all fields of ecology and science in general. 2. We conducted simulations with artificial data for five common response types known in ecology, each represented by a simple function (no response, linear, exponential, symmetric unimodal and asymmetric...
Random sampling of evolution time space and Fourier transform processing
International Nuclear Information System (INIS)
Kazimierczuk, Krzysztof; Zawadzka, Anna; Kozminski, Wiktor; Zhukov, Igor
2006-01-01
Application of the Fourier transform for processing 3D NMR spectra with random sampling of the evolution time space is presented. The 2D FT is calculated for pairs of frequencies, instead of the conventional sequence of one-dimensional transforms. Signal-to-noise ratios and linewidths for different random distributions were investigated by simulations and experiments. The experimental examples include 3D HNCA, HNCACB and 15N-edited NOESY-HSQC spectra of a 13C/15N-labeled ubiquitin sample. The results reveal the general applicability of the proposed method and a significant improvement in resolution in comparison with conventional spectra recorded in the same time.
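The core idea of the abstract, evaluating the multidimensional Fourier transform directly at frequency pairs from randomly sampled evolution times rather than on a regular grid, can be sketched numerically (signal frequencies, time window and sample count below are invented for illustration; this is not the authors' processing code):

```python
import numpy as np

rng = np.random.default_rng(1)

f1_true, f2_true = 120.0, 80.0          # Hz, a single illustrative resonance
n_points = 400
t1 = rng.uniform(0, 0.05, n_points)     # randomly sampled evolution times (s)
t2 = rng.uniform(0, 0.05, n_points)
signal = np.exp(2j * np.pi * (f1_true * t1 + f2_true * t2))

def ft2_at(f1, f2):
    """2D Fourier sum at one frequency pair, taken directly over the
    randomly sampled (t1, t2) points."""
    return np.abs(np.sum(signal * np.exp(-2j * np.pi * (f1 * t1 + f2 * t2))))

on_peak = ft2_at(f1_true, f2_true)
off_peak = ft2_at(f1_true + 40.0, f2_true + 40.0)
```

On resonance all terms add coherently; off resonance the random phases largely cancel, which is why random time-domain sampling still localizes peaks while allowing far fewer points than a full grid.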
Adaptive importance sampling of random walks on continuous state spaces
International Nuclear Information System (INIS)
Baggerly, K.; Cox, D.; Picard, R.
1998-01-01
The authors consider adaptive importance sampling for a random walk with scoring in a general state space. Conditions under which exponential convergence occurs to the zero-variance solution are reviewed. These results generalize previous work for finite, discrete state spaces in Kollman (1993) and in Kollman, Baggerly, Cox, and Picard (1996). This paper is intended for nonstatisticians and includes considerable explanatory material
Strong disorder RG approach of random systems
International Nuclear Information System (INIS)
Igloi, Ferenc; Monthus, Cecile
2005-01-01
There is a large variety of quantum and classical systems in which quenched disorder plays a dominant role over quantum, thermal, or stochastic fluctuations: these systems display strong spatial heterogeneities, and many averaged observables are actually governed by rare regions. A unifying approach to treat the dynamical and/or static singularities of these systems has emerged recently, following the pioneering RG idea by Ma and Dasgupta and the detailed analysis by Fisher, who showed that the Ma-Dasgupta RG rules yield asymptotically exact results if the broadness of the disorder grows indefinitely at large scales. Here we report these new developments, starting with an introduction of the main ingredients of the strong disorder RG method. We describe the basic properties of infinite disorder fixed points, which are realized at critical points, and of strong disorder fixed points, which control the singular behaviors in the Griffiths phases. We then review in detail applications of the RG method to various disordered models, either (i) quantum models, such as random spin chains, ladders and higher dimensional spin systems, or (ii) classical models, such as diffusion in a random potential, equilibrium at low temperature and coarsening dynamics of classical random spin chains, trap models, delocalization transition of a random polymer from an interface, driven lattice gases and reaction-diffusion models in the presence of quenched disorder. For several one-dimensional systems, the Ma-Dasgupta RG rules yield very detailed analytical results, whereas for other, mainly higher dimensional problems, the RG rules have to be implemented numerically. Where available, the strong disorder RG results are compared with other exact or numerical calculations.
An integrate-over-temperature approach for enhanced sampling.
Gao, Yi Qin
2008-02-14
A simple method is introduced to achieve efficient random walking in the energy space in molecular dynamics simulations which thus enhances the sampling over a large energy range. The approach is closely related to multicanonical and replica exchange simulation methods in that it allows configurations of the system to be sampled in a wide energy range by making use of Boltzmann distribution functions at multiple temperatures. A biased potential is quickly generated using this method and is then used in accelerated molecular dynamics simulations.
Random phase approximation in relativistic approach
International Nuclear Information System (INIS)
Ma Zhongyu; Yang Ding; Tian Yuan; Cao Ligang
2009-01-01
Some special issues of the random phase approximation (RPA) in the relativistic approach are reviewed. Full consistency and a proper treatment of the coupling to the continuum are responsible for the successful application of the RPA in the description of dynamical properties of finite nuclei. The fully consistent relativistic RPA (RRPA) requires that the relativistic mean field (RMF) wave function of the nucleus and the RRPA correlations be calculated from the same effective Lagrangian, with a consistent treatment of the Dirac sea of negative-energy states. The proper treatment of the single-particle continuum with scattering asymptotic conditions in the RMF and RRPA is discussed. The full continuum spectrum can be described by the single-particle Green's function, and the relativistic continuum RPA is thus established. A separable form of the pairing force is introduced in the relativistic quasi-particle RPA. (authors)
Random vs. systematic sampling from administrative databases involving human subjects.
Hagino, C; Lo, R J
1998-09-01
Two sampling techniques, simple random sampling (SRS) and systematic sampling (SS), were compared to determine whether they yield similar and accurate distributions for the following four factors: age, gender, geographic location and years in practice. Any point estimate within 7 yr or 7 percentage points of its reference standard (SRS or the entire data set, i.e., the target population) was considered "acceptably similar" to the reference standard. The sampling frame was the entire membership database of the Canadian Chiropractic Association. The two sampling methods were tested using eight different sample sizes (n = 50, 100, 150, 200, 250, 300, 500, 800). From the profile summaries of the four known factors [gender, average age, number (%) of chiropractors in each province and years in practice], between- and within-method chi-squared tests and unpaired t-tests were performed to determine whether any of the differences (descriptively greater than 7% or 7 yr) were also statistically significant. The strength of agreement between the provincial distributions was quantified by calculating the percent agreement for each (provincial pairwise-comparison method). Any percent agreement less than 70% was judged unacceptable. Our assessments of the two sampling methods (SRS and SS) for the different sample sizes tested suggest that SRS and SS yield acceptably similar results. Both methods started to yield "correct" sample profiles at approximately the same sample size (n > 200). SS is not only convenient, it can be recommended for sampling from large databases in which the data are listed without any inherent order biases other than alphabetical listing by surname.
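The two techniques compared in this abstract are easy to contrast in code (a minimal sketch with an invented synthetic membership list; not the authors' data): SRS draws n members at random, while SS takes every k-th member after a random start.

```python
import random
import statistics

random.seed(0)

# Hypothetical membership list: ages stored in alphabetical-by-surname
# order, i.e. with no ordering related to the variable of interest.
population = [random.gauss(45, 12) for _ in range(6000)]

def srs(pop, n):
    """Simple random sampling: n members drawn without replacement."""
    return random.sample(pop, n)

def systematic(pop, n):
    """Systematic sampling: random start in the first interval, then
    every k-th member, where k = len(pop) // n."""
    k = len(pop) // n
    start = random.randrange(k)
    return pop[start::k][:n]

n = 250
srs_mean = statistics.mean(srs(population, n))
sys_mean = statistics.mean(systematic(population, n))
true_mean = statistics.mean(population)
```

When the list order is unrelated to the study variables, as the abstract concludes for alphabetical ordering, both estimators land close to the population value.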
A random matrix approach to language acquisition
Nicolaidis, A.; Kosmidis, Kosmas; Argyrakis, Panos
2009-12-01
Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N~exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity.
A random matrix approach to language acquisition
International Nuclear Information System (INIS)
Nicolaidis, A; Kosmidis, Kosmas; Argyrakis, Panos
2009-01-01
Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N∼exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity
Melvin, Neal R; Poda, Daniel; Sutherland, Robert J
2007-10-01
When properly applied, stereology is a very robust and efficient method to quantify a variety of parameters from biological material. A common sampling strategy in stereology is systematic random sampling, which involves choosing a random start point outside the structure of interest and sampling relevant objects at sites placed at pre-determined, equidistant intervals. This has proven to be a very efficient sampling strategy and is used widely in stereological designs. At the microscopic level, this is most often achieved through the use of a motorized stage that facilitates systematic random stepping across the structure of interest. Here, we report a simple, precise and cost-effective software-based alternative for accomplishing systematic random sampling under the microscope. We believe that this approach will facilitate the use of stereological designs that employ systematic random sampling in laboratories that lack the resources to acquire costly, fully automated systems.
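The sampling scheme described here, a random start followed by equidistant sites, can be sketched in a few lines (region extents and grid spacings below are invented for illustration; this is not the published software):

```python
import random

random.seed(3)

def srs_grid(x_extent, y_extent, dx, dy):
    """Systematic random sampling sites: a random start point inside the
    first grid cell, then sites stepped at fixed intervals (dx, dy)
    across the region of interest."""
    x0 = random.uniform(0, dx)
    y0 = random.uniform(0, dy)
    sites = []
    y = y0
    while y < y_extent:
        x = x0
        while x < x_extent:
            sites.append((x, y))
            x += dx
        y += dy
    return sites

# e.g. a 1000 x 800 um region sampled on a 100 um grid
sites = srs_grid(x_extent=1000.0, y_extent=800.0, dx=100.0, dy=100.0)
```

Because only the start point is random, every location in the region has the same inclusion probability, which is what makes the resulting stereological estimates unbiased.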
RandomSpot: A web-based tool for systematic random sampling of virtual slides.
Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E
2015-01-01
This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
The Bootstrap, the Jackknife, and the Randomization Test: A Sampling Taxonomy.
Rodgers, J L
1999-10-01
A simple sampling taxonomy is defined that shows the differences between and relationships among the bootstrap, the jackknife, and the randomization test. Each method has as its goal the creation of an empirical sampling distribution that can be used to test statistical hypotheses, estimate standard errors, and/or create confidence intervals. Distinctions between the methods can be made based on the sampling approach (with replacement versus without replacement) and the sample size (replacing the whole original sample versus replacing a subset of the original sample). The taxonomy is useful for teaching the goals and purposes of resampling schemes. An extension of the taxonomy implies other possible resampling approaches that have not previously been considered. Univariate and multivariate examples are presented.
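The taxonomy in this abstract maps directly onto three small resampling routines (a sketch with invented toy data; groupings follow the abstract's two axes of with/without replacement and whole-sample/subset):

```python
import random
import statistics

random.seed(5)
data = [random.gauss(10, 2) for _ in range(30)]

def bootstrap_means(data, reps=1000):
    """Bootstrap: resample WITH replacement, full sample size n."""
    n = len(data)
    return [statistics.mean(random.choices(data, k=n)) for _ in range(reps)]

def jackknife_means(data):
    """Jackknife: resample WITHOUT replacement, n-1 of n (leave-one-out)."""
    return [statistics.mean(data[:i] + data[i+1:]) for i in range(len(data))]

def randomization_test(x, y, reps=1000):
    """Randomization test: permute group labels (without replacement,
    whole pooled sample) to build the null distribution of a statistic."""
    observed = statistics.mean(x) - statistics.mean(y)
    pooled = x + y
    count = 0
    for _ in range(reps):
        random.shuffle(pooled)
        diff = statistics.mean(pooled[:len(x)]) - statistics.mean(pooled[len(x):])
        if abs(diff) >= abs(observed):
            count += 1
    return count / reps  # empirical two-sided p-value

boot = bootstrap_means(data)
jack = jackknife_means(data)
p = randomization_test(data[:15], data[15:])
```

Each routine builds exactly the "empirical sampling distribution" the abstract describes; the distinctions live only in the two sampling choices highlighted in the docstrings.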
LOD score exclusion analyses for candidate genes using random population samples.
Deng, H W; Li, J; Recker, R R
2001-05-01
While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes with random population samples. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed and if a LOD score is < or = - 2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of vitamin D receptor and estrogen receptor genes underlying the differential risk to osteoporotic fractures.
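The decision rule quoted in this abstract (exclude a locus when LOD <= -2.0) is a log10 likelihood-ratio criterion, which can be stated in a few lines (the helper names and the example log-likelihood values are hypothetical, not the authors' implementation):

```python
import math

def lod_score(loglik_model, loglik_null):
    """LOD = log10 of the likelihood ratio between the specified genetic
    model at the candidate locus and the null model, given natural-log
    likelihoods."""
    return (loglik_model - loglik_null) / math.log(10)

def excluded(lod, threshold=-2.0):
    """Exclusion rule from the abstract: with LOD <= -2.0 the locus is
    excluded from having an effect larger than the one specified."""
    return lod <= threshold

# Illustrative values: the specified model fits 6 natural-log units worse.
lod = lod_score(loglik_model=-110.0, loglik_null=-104.0)
```

A LOD of -2 corresponds to the data being 100 times more likely under the null than under the specified genetic effect, mirroring the classical +3 threshold used for linkage in the opposite direction.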
CONSISTENCY UNDER SAMPLING OF EXPONENTIAL RANDOM GRAPH MODELS.
Shalizi, Cosma Rohilla; Rinaldo, Alessandro
2013-04-01
The growing availability of network data and of scientific interest in distributed systems has led to the rapid development of statistical models of network structure. Typically, however, these are models for the entire network, while the data consists only of a sampled sub-network. Parameters for the whole network, which is what is of interest, are estimated by applying the model to the sub-network. This assumes that the model is consistent under sampling , or, in terms of the theory of stochastic processes, that it defines a projective family. Focusing on the popular class of exponential random graph models (ERGMs), we show that this apparently trivial condition is in fact violated by many popular and scientifically appealing models, and that satisfying it drastically limits ERGM's expressive power. These results are actually special cases of more general results about exponential families of dependent random variables, which we also prove. Using such results, we offer easily checked conditions for the consistency of maximum likelihood estimation in ERGMs, and discuss some possible constructive responses.
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
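The filtering step described in this abstract, keeping a random, equal-probability subset of the newly generated candidates at each iteration, reduces to uniform sampling without replacement (a toy sketch with invented candidate labels; the real algorithm operates on flux-mode vectors inside the canonical-basis computation):

```python
import random

random.seed(11)

def filtered_step(candidates, sample_size):
    """Keep a random subset of the candidate modes generated at one
    iteration; every candidate has the same selection probability, so
    the retained subset is unbiased."""
    if len(candidates) <= sample_size:
        return list(candidates)
    return random.sample(candidates, sample_size)

# Toy stand-in for the combinations of modes produced at one iteration.
candidates = [f"mode_{i}" for i in range(10_000)]
kept = filtered_step(candidates, sample_size=500)
```

Capping the working set at `sample_size` per iteration is what prevents the combinatorial explosion in the number of modes while preserving a uniform sample of the full set.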
International Nuclear Information System (INIS)
Maziero, Jonas
2015-01-01
The numerical generation of random quantum states (RQS) is an important procedure for investigations in quantum information science. Here, we review some methods that may be used for performing that task. We start by presenting a simple procedure for generating random state vectors, for which the main tool is the random sampling of unbiased discrete probability distributions (DPD). Afterwards, the creation of random density matrices is addressed. In this context, we first present the standard method, which consists in using the spectral decomposition of a quantum state for getting RQS from random DPDs and random unitary matrices. Next, the Bloch vector parametrization method is described. This approach, despite being useful in several instances, is not in general convenient for RQS generation. In the last part of the article, we consider the overparametrized method (OPM) and the related Ginibre and Bures techniques. The OPM can be used to create random positive semidefinite matrices with unit trace from randomly produced general complex matrices, in a simple way that is friendly for numerical implementations. We consider a physically relevant issue related to the possible domains that may be used for the real and imaginary parts of the elements of such general complex matrices. Finally, we note an overly fast concentration of measure in the quantum state space that appears in this parametrization. (author)
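Two of the constructions surveyed in this abstract are short enough to sketch directly (a minimal NumPy illustration; function names are ours, and the Ginibre route shown is one standard recipe among those the review covers):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_state_vector(d):
    """Random pure state: a complex Gaussian vector, normalized to unit
    length."""
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    return v / np.linalg.norm(v)

def ginibre_density_matrix(d):
    """Ginibre construction: for a general complex Gaussian matrix G,
    rho = G G^dagger / tr(G G^dagger) is a random positive semidefinite
    matrix with unit trace, i.e. a valid density matrix."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = g @ g.conj().T
    return m / np.trace(m)

psi = random_state_vector(4)
rho = ginibre_density_matrix(4)
```

Both outputs satisfy the defining constraints by construction: the state vector is normalized, and the density matrix is Hermitian, positive semidefinite, and trace one.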
Soil sampling strategies: Evaluation of different approaches
International Nuclear Information System (INIS)
De Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia
2008-01-01
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies
Soil sampling strategies: Evaluation of different approaches
Energy Technology Data Exchange (ETDEWEB)
De Zorzi, Paolo [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy)], E-mail: paolo.dezorzi@apat.it; Barbizzi, Sabrina; Belli, Maria [Agenzia per la Protezione dell' Ambiente e per i Servizi Tecnici (APAT), Servizio Metrologia Ambientale, Via di Castel Romano, 100-00128 Roma (Italy); Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia [Agenzia Regionale per la Prevenzione e Protezione dell' Ambiente del Veneto, ARPA Veneto, U.O. Centro Qualita Dati, Via Spalato, 14-36045 Vicenza (Italy)
2008-11-15
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.
Soil sampling strategies: evaluation of different approaches.
de Zorzi, Paolo; Barbizzi, Sabrina; Belli, Maria; Mufato, Renzo; Sartori, Giuseppe; Stocchero, Giulia
2008-11-01
The National Environmental Protection Agency of Italy (APAT) performed a soil sampling intercomparison, inviting 14 regional agencies to test their own soil sampling strategies. The intercomparison was carried out at a reference site, previously characterised for metal mass fraction distribution. A wide range of sampling strategies, in terms of sampling patterns, type and number of samples collected, were used to assess the mean mass fraction values of some selected elements. The different strategies led in general to acceptable bias values (D) less than 2σ, calculated according to ISO 13258. Sampling on arable land was relatively easy, with comparable results between different sampling strategies.
A random matrix approach to credit risk.
Münnix, Michael C; Schäfer, Rudi; Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
A random matrix approach to credit risk.
Directory of Open Access Journals (Sweden)
Michael C Münnix
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided.
International Nuclear Information System (INIS)
Skalski, J.R.; Hoffman, A.; Ransom, B.H.; Steig, T.W.
1993-01-01
Five alternate sampling designs are compared using 15 d of 24-h continuous hydroacoustic data to identify the most favorable approach to fixed-location hydroacoustic monitoring of salmonid outmigrants. Four alternative approaches to systematic sampling are compared among themselves and with stratified random sampling (STRS). Stratifying systematic sampling (STSYS) on a daily basis is found to reduce sampling error in multiday monitoring studies. Although sampling precision was predictable with varying levels of effort in STRS, neither the magnitude nor the direction of change in precision was predictable when effort was varied in systematic sampling (SYS). Furthermore, modifying systematic sampling to include replicated (e.g., nested) sampling (RSYS) is shown to provide unbiased point and variance estimates, as does STRS. Numerous short sampling intervals (e.g., 12 samples of 1-min duration per hour) must be monitored hourly using RSYS to provide efficient, unbiased point and interval estimates. For equal levels of effort, STRS outperformed all variations of SYS examined. Parametric approaches to confidence interval estimation are found to be superior to nonparametric interval estimates (i.e., bootstrap and jackknife) in estimating total fish passage. 10 refs., 1 fig., 8 tabs
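The STRS design favored in this abstract, sampling minutes at random within each hour and expanding each stratum mean, can be sketched as follows (the counts, strata and sampling fractions below are invented to echo the "12 one-minute samples per hour" example; this is not the authors' analysis code):

```python
import random
import statistics

random.seed(2)

def strs_estimate(minute_counts_by_hour, n_per_hour=12):
    """Stratified random sampling estimate of total passage: draw
    n_per_hour one-minute counts at random within each hourly stratum
    and expand the stratum mean to the full 60 minutes."""
    total = 0.0
    for minutes in minute_counts_by_hour:
        sampled = random.sample(minutes, n_per_hour)
        total += statistics.mean(sampled) * len(minutes)
    return total

# Hypothetical day of data: 24 hourly strata x 60 one-minute counts.
day = [[random.randint(0, 20) for _ in range(60)] for _ in range(24)]
true_total = sum(sum(h) for h in day)
estimate = strs_estimate(day)
```

Because minutes are drawn at random within every stratum, the estimator is unbiased and its variance can be estimated from the within-stratum spread, which is the property the abstract finds systematic sampling lacks without replication.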
Monitoring oil persistence on beaches : SCAT versus stratified random sampling designs
International Nuclear Information System (INIS)
Short, J.W.; Lindeberg, M.R.; Harris, P.M.; Maselko, J.M.; Pella, J.J.; Rice, S.D.
2003-01-01
In the event of a coastal oil spill, shoreline clean-up assessment teams (SCAT) commonly rely on visual inspection of the entire affected area to monitor the persistence of the oil on beaches. Occasionally, pits are excavated to evaluate the persistence of subsurface oil. This approach is practical for directing clean-up efforts directly following a spill. However, sampling of the 1989 Exxon Valdez oil spill in Prince William Sound 12 years later has shown that visual inspection combined with pit excavation does not offer estimates of contaminated beach area or stranded oil volumes. This information is needed to statistically evaluate the significance of change with time. Assumptions regarding the correlation of visually evident surface oil and cryptic subsurface oil are usually not evaluated as part of the SCAT mandate. Stratified random sampling can avoid such problems and could produce precise estimates of oiled area and volume that allow for statistical assessment of major temporal trends and the extent of the impact. The 2001 sampling of the shoreline of Prince William Sound showed that 15 per cent of surface oil occurrences were associated with subsurface oil. This study demonstrates the usefulness of the stratified random sampling method and shows how sampling design parameters impact the statistical outcome. Power analysis based on the study results indicates that optimum power is derived when unnecessary stratification is avoided. It was emphasized that sampling effort should be balanced between choosing sufficient beaches for sampling and the intensity of sampling
LOD score exclusion analyses for candidate QTLs using random population samples.
Deng, Hong-Wen
2003-11-01
While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach to exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed, and if an LOD score is ≤ -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of the vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
A random probabilistic approach to seismic nuclear power plant analysis
International Nuclear Information System (INIS)
Romo, M.P.
1985-01-01
A probabilistic method for the seismic analysis of structures which takes into account the random nature of earthquakes and of the soil parameter uncertainties is presented in this paper. The method was developed by combining elements of perturbation theory, random vibration theory, and the complex response method. The probabilistic method is evaluated by comparing the responses of a single degree of freedom system computed with this approach and with the Monte Carlo method. (orig.)
Sampling Polya-Gamma random variates: alternate and approximate techniques
Windle, Jesse; Polson, Nicholas G.; Scott, James G.
2014-01-01
Efficiently sampling from the P\\'olya-Gamma distribution, ${PG}(b,z)$, is an essential element of P\\'olya-Gamma data augmentation. Polson et al. (2013) show how to efficiently sample from the ${PG}(1,z)$ distribution. We build two new samplers that offer improved performance when sampling from the ${PG}(b,z)$ distribution when $b$ is not unity.
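A naive (and much less efficient) alternative to the samplers discussed above is the truncated infinite-sum representation of the Polya-Gamma distribution. The sketch below is illustrative only: the truncation level is an arbitrary choice, and this is not the rejection sampler of Polson et al. (2013) nor the methods of this paper:

```python
import numpy as np

def rpg_truncated(b, z, trunc=200, size=1, rng=None):
    """Approximate PG(b, z) draws via the truncated series representation
    PG(b, z) = (1/(2*pi^2)) * sum_k g_k / ((k - 1/2)^2 + z^2/(4*pi^2)),
    with g_k ~ Gamma(b, 1). Truncating the sum introduces a small
    downward bias that shrinks as `trunc` grows."""
    rng = rng or np.random.default_rng()
    k = np.arange(1, trunc + 1)
    denom = (k - 0.5) ** 2 + z ** 2 / (4 * np.pi ** 2)
    g = rng.gamma(shape=b, scale=1.0, size=(size, trunc))
    return (g / denom).sum(axis=1) / (2 * np.pi ** 2)

# Sanity check against the known mean E[PG(b,z)] = (b/(2z)) * tanh(z/2)
draws = rpg_truncated(b=1.0, z=1.0, size=20000, rng=np.random.default_rng(1))
exact_mean = (1.0 / 2.0) * np.tanh(0.5)
```

The closed-form mean makes this representation convenient for validating any PG sampler, whatever its internals.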
Estimates of Inequality Indices Based on Simple Random, Ranked Set, and Systematic Sampling
Bansal, Pooja; Arora, Sangeeta; Mahajan, Kalpana K.
2013-01-01
The Gini index, Bonferroni index, and Absolute Lorenz index are popular indices of inequality capturing different features of inequality measurement. In general, the simple random sampling procedure is commonly used to estimate the inequality indices and for their related inference. Although the key condition that the samples be drawn via a simple random sampling procedure makes calculations much simpler, this assumption is often violated in practice, as the data does not always yield simple random ...
Importance sampling of heavy-tailed iterated random functions
B. Chen (Bohan); C.H. Rhee (Chang-Han); A.P. Zwart (Bert)
2016-01-01
textabstractWe consider a stochastic recurrence equation of the form $Z_{n+1} = A_{n+1} Z_n+B_{n+1}$, where $\\mathbb{E}[\\log A_1]<0$, $\\mathbb{E}[\\log^+ B_1]<\\infty$ and $\\{(A_n,B_n)\\}_{n\\in\\mathbb{N}}$ is an i.i.d. sequence of positive random vectors. The stationary distribution of this Markov
Random Matrix Approach for Primal-Dual Portfolio Optimization Problems
Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi
2017-12-01
In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
An effective Hamiltonian approach to quantum random walk
Indian Academy of Sciences (India)
2017-02-09
In this article we present an effective Hamiltonian approach for discrete-time quantum random walk. A form of the Hamiltonian for the one-dimensional quantum walk has been prescribed, utilizing the fact that Hamiltonians are generators of time translations. Then an attempt has been made to ...
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Directory of Open Access Journals (Sweden)
CODRUŢA DURA
2010-01-01
Full Text Available The sample represents a particular segment of the statistical population chosen to represent it as a whole. The representativeness of the sample determines the accuracy of estimations made on the basis of calculating the research indicators and the inferential statistics. The method of random sampling is part of the probabilistic methods which can be used within marketing research, and it is characterized by the fact that it imposes the requirement that each unit belonging to the statistical population should have an equal chance of being selected for the sampling process. When simple random sampling is meant to be rigorously put into practice, it is recommended to use the technique of random number tables in order to configure the sample which will provide the information that the marketer needs. The paper also details the practical procedure implemented in order to create a sample for a marketing research by generating random numbers using the facilities offered by Microsoft Excel.
Relative efficiency and sample size for cluster randomized trials with variable cluster sizes.
You, Zhiying; Williams, O Dale; Aban, Inmaculada; Kabagambe, Edmond Kato; Tiwari, Hemant K; Cutter, Gary
2011-02-01
The statistical power of cluster randomized trials depends on two sample size components, the number of clusters per group and the numbers of individuals within clusters (cluster size). Variable cluster sizes are common and this variation alone may have significant impact on study power. Previous approaches have taken this into account by either adjusting total sample size using a designated design effect or adjusting the number of clusters according to an assessment of the relative efficiency of unequal versus equal cluster sizes. This article defines a relative efficiency of unequal versus equal cluster sizes using noncentrality parameters, investigates properties of this measure, and proposes an approach for adjusting the required sample size accordingly. We focus on comparing two groups with normally distributed outcomes using the t-test, and use the noncentrality parameter to define the relative efficiency of unequal versus equal cluster sizes and show that statistical power depends only on this parameter for a given number of clusters. We calculate the sample size required for an unequal cluster sizes trial to have the same power as one with equal cluster sizes. Relative efficiency based on the noncentrality parameter is straightforward to calculate and easy to interpret. It connects the required mean cluster size directly to the required sample size with equal cluster sizes. Consequently, our approach first determines the sample size requirements with equal cluster sizes for a pre-specified study power and then calculates the required mean cluster size while keeping the number of clusters unchanged. Our approach allows adjustment in mean cluster size alone or simultaneous adjustment in mean cluster size and number of clusters, and is a flexible alternative to and a useful complement to existing methods. Comparison indicated that we have defined a relative efficiency that is greater than the relative efficiency in the literature under some conditions. Our measure
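As a rough illustration of why variable cluster sizes inflate sample size requirements, here is a sketch using the textbook design-effect adjustment for unequal cluster sizes (not the noncentrality-parameter measure this paper defines); all numbers are invented:

```python
import math

def design_effect(mean_cluster_size, icc, cv=0.0):
    """Standard variance inflation (design effect) for cluster randomized
    trials, with the common adjustment for variable cluster sizes:
    DE = 1 + ((1 + cv**2) * m - 1) * icc, where m is the mean cluster size,
    icc the intracluster correlation, and cv the coefficient of variation
    of cluster sizes. cv = 0 recovers the equal-cluster-size formula."""
    return 1.0 + ((1.0 + cv ** 2) * mean_cluster_size - 1.0) * icc

def clusters_required(n_individual, mean_cluster_size, icc, cv=0.0):
    """Clusters per arm needed to match an individually randomized trial
    requiring n_individual subjects per arm."""
    de = design_effect(mean_cluster_size, icc, cv)
    return math.ceil(n_individual * de / mean_cluster_size)

# Equal cluster sizes (cv = 0) vs. variable sizes (cv = 0.6)
k_equal = clusters_required(n_individual=128, mean_cluster_size=20, icc=0.05)
k_varied = clusters_required(n_individual=128, mean_cluster_size=20, icc=0.05, cv=0.6)
```

Even a moderate size variation (cv = 0.6) raises the required number of clusters, which is the loss of efficiency the abstract's noncentrality-based measure quantifies more precisely.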
A sampling-based approach to probabilistic pursuit evasion
Mahadevan, Aditya; Amato, Nancy M.
2012-01-01
Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented
Correlated random sampling for multivariate normal and log-normal distributions
International Nuclear Information System (INIS)
Žerovnik, Gašper; Trkov, Andrej; Kodeli, Ivan A.
2012-01-01
A method for correlated random sampling is presented. Representative samples for multivariate normal or log-normal distribution can be produced. Furthermore, any combination of normally and log-normally distributed correlated variables may be sampled to any requested accuracy. Possible applications of the method include sampling of resonance parameters which are used for reactor calculations.
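A minimal sketch of correlated log-normal sampling via the underlying Gaussian layer follows; the parameter values are illustrative, and the abstract's specific transformation of correlation coefficients is not reproduced:

```python
import numpy as np

def sample_correlated_lognormal(mu, cov, n, rng=None):
    """Draw n correlated log-normal vectors by sampling the underlying
    multivariate normal and exponentiating. mu and cov describe log(X)
    (the Gaussian layer), not X itself; converting target moments of X
    into (mu, cov) of log(X) is a separate step."""
    rng = rng or np.random.default_rng()
    z = rng.multivariate_normal(mu, cov, size=n)
    return np.exp(z)

mu = np.array([0.0, 0.0])
cov = np.array([[0.04, 0.03],
                [0.03, 0.09]])   # positive correlation in log-space
x = sample_correlated_lognormal(mu, cov, n=50000, rng=np.random.default_rng(2))
```

Because exponentiation is monotone, the positivity of the samples is guaranteed by construction, which is exactly why the log-normal layer suits inherently positive parameters such as resonance parameters.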
Small sample approach, and statistical and epidemiological aspects
Offringa, Martin; van der Lee, Hanneke
2011-01-01
In this chapter, the design of pharmacokinetic studies and phase III trials in children is discussed. Classical approaches and relatively novel approaches, which may be more useful in the context of drug research in children, are discussed. The burden of repeated blood sampling in pediatric
THE SAMPLING PROCESS IN THE FINANCIAL AUDIT .TECHNICAL PRACTICE APPROACH
Directory of Open Access Journals (Sweden)
Cardos Vasile-Daniel
2014-12-01
“Audit sampling” (sampling) means applying audit procedures to less than 100% of the elements within an account balance or a transaction class, such that all sampling units have a chance of being selected. This allows the auditor to obtain and evaluate audit evidence about some features of the selected elements, in order to assist in forming, or to express, a conclusion regarding the population from which the sample was extracted. Sampling in audit can use either a statistical or a non-statistical approach. (INTERNATIONAL STANDARD ON AUDITING 530 – AUDIT SAMPLING AND OTHER SELECTIVE TESTING PROCEDURES)
THE SAMPLING PROCESS IN THE FINANCIAL AUDIT .TECHNICAL PRACTICE APPROACH
Directory of Open Access Journals (Sweden)
GRIGORE MARIAN
2014-07-01
“Audit sampling” (sampling) means applying audit procedures to less than 100% of the elements within an account balance or a transaction class, such that all sampling units have a chance of being selected. This allows the auditor to obtain and evaluate audit evidence about some features of the selected elements, in order to assist in forming, or to express, a conclusion regarding the population from which the sample was extracted. Sampling in audit can use either a statistical or a non-statistical approach. (INTERNATIONAL STANDARD ON AUDITING 530 – AUDIT SAMPLING AND OTHER SELECTIVE TESTING PROCEDURES)
International Nuclear Information System (INIS)
Jeong, Hae-Yong; Park, Moon-Ghu
2015-01-01
In most existing evaluation methodologies, which follow a conservative approach, the most conservative initial conditions are searched for each transient scenario through exhaustive assessment of wide operating windows or limiting conditions for operation (LCO) allowed by the operating guidelines. In this procedure, a user effect could be involved and considerable time and human resources are consumed. In the present study, we investigated a more effective statistical method for the selection of the most conservative initial condition by the use of random sampling of operating parameters affecting the initial conditions. A method for the determination of initial conditions based on random sampling of plant design parameters is proposed. This method is expected to be applied for the selection of the most conservative initial plant conditions in the safety analysis using a conservative evaluation methodology. In the method, it is suggested that the initial conditions of reactor coolant flow rate, pressurizer level, pressurizer pressure, and SG level are adjusted by controlling the pump rated flow and the setpoints of the PLCS, PPCS, and FWCS, respectively. The proposed technique is expected to help eliminate the human factors introduced in the conventional safety analysis procedure and to reduce the human resources invested in the safety evaluation of nuclear power plants.
A systematic examination of a random sampling strategy for source apportionment calculations.
Andersson, August
2011-12-15
Estimating the relative contributions from multiple potential sources of a specific component in a mixed environmental matrix is a general challenge in diverse fields such as atmospheric, environmental and earth sciences. Perhaps the most common strategy for tackling such problems is by setting up a system of linear equations for the fractional influence of different sources. Even though an algebraic solution of this approach is possible for the common situation with N+1 sources and N source markers, such methodology introduces a bias, since it is implicitly assumed that the calculated fractions and the corresponding uncertainties are independent of the variability of the source distributions. Here, a random sampling (RS) strategy for accounting for such statistical bias is examined by investigating rationally designed synthetic data sets. This random sampling methodology is found to be robust and accurate with respect to reproducibility and predictability. This method is also compared to a numerical integration solution for a two-source situation where source variability also is included. A general observation from this examination is that the variability of the source profiles not only affects the calculated precision but also the mean/median source contributions. Copyright © 2011 Elsevier B.V. All rights reserved.
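The random sampling (RS) idea for source apportionment can be sketched for a hypothetical case with N + 1 = 3 sources and N = 2 markers; the source signatures, their variability, and the mixture value below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical two-marker, three-source example. Each source endmember
# has a mean marker signature with some variability; the observed
# mixture signature is fixed.
src_means = np.array([[0.0, 0.0],
                      [1.0, 0.0],
                      [0.5, 1.0]])      # rows: sources, columns: markers
src_sd = 0.05                           # endmember variability
mix = np.array([0.5, 0.4])              # observed mixture signature

n_draws = 10000
fracs = np.empty((n_draws, 3))
for i in range(n_draws):
    s = rng.normal(src_means, src_sd)   # random draw of source profiles
    # Solve f @ s = mix together with sum(f) = 1 as a 3x3 linear system.
    A = np.column_stack([s, np.ones(3)]).T   # marker rows + constraint row
    b = np.append(mix, 1.0)
    fracs[i] = np.linalg.solve(A, b)

mean_f = fracs.mean(axis=0)             # Monte Carlo mean source fractions
```

Repeating the algebraic solve over random draws of the source profiles yields a distribution of fractional contributions, whose spread (and shift of the mean relative to the fixed-profile solution) captures the statistical bias discussed in the abstract.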
Triangulation based inclusion probabilities: a design-unbiased sampling approach
Fehrmann, Lutz; Gregoire, Timothy; Kleinn, Christoph
2011-01-01
A probabilistic sampling approach for design-unbiased estimation of area-related quantitative characteristics of spatially dispersed population units is proposed. The developed field protocol includes a fixed number of 3 units per sampling location and is based on partial triangulations over their natural neighbors to derive the individual inclusion probabilities. The performance of the proposed design is tested in comparison to fixed area sample plots in a simulation with two forest stands. ...
U.S. Environmental Protection Agency — Figure. This dataset is associated with the following publication: Shah, S., S. Kane, A.M. Erler, and T. Alfaro. Sample Processing Approach for Detection of Ricin in...
Directory of Open Access Journals (Sweden)
Francesco Bonavolontà
2014-10-01
Full Text Available The paper deals with the problem of improving the maximum sample rate of analog-to-digital converters (ADCs) included in low cost wireless sensing nodes. To this aim, the authors propose an efficient acquisition strategy based on the combined use of a high-resolution time-basis and compressive sampling. In particular, the high-resolution time-basis is adopted to provide a proper sequence of random sampling instants, and a suitable software procedure, based on the compressive sampling approach, is exploited to reconstruct the signal of interest from the acquired samples. Thanks to the proposed strategy, the effective sample rate of the reconstructed signal can be as high as the frequency of the considered time-basis, thus significantly improving the inherent ADC sample rate. Several tests are carried out in simulated and real conditions to assess the performance of the proposed acquisition strategy in terms of reconstruction error. In particular, the results obtained in experimental tests with ADCs included in actual 8- and 32-bit microcontrollers highlight the possibility of achieving an effective sample rate up to 50 times higher than the original ADC sample rate.
Flow in Random Microstructures: a Multilevel Monte Carlo Approach
Icardi, Matteo
2016-01-06
In this work we are interested in the fast estimation of effective parameters of random heterogeneous materials using Multilevel Monte Carlo (MLMC). MLMC is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrisation of the input randomness is not available or too expensive. We propose a general-purpose algorithm and computational code for the solution of Partial Differential Equations (PDEs) on random heterogeneous materials. We make use of the key idea of MLMC, based on different discretization levels, extending it in a more general context, making use of a hierarchy of physical resolution scales, solvers, models and other numerical/geometrical discretisation parameters. Modifications of the classical MLMC estimators are proposed to further reduce variance in cases where analytical convergence rates and asymptotic regimes are not available. Spheres, ellipsoids and general convex-shaped grains are placed randomly in the domain with different placing/packing algorithms and the effective properties of the heterogeneous medium are computed. These are, for example, effective diffusivities, conductivities, and reaction rates. The implementation of the Monte-Carlo estimators, the statistical samples and each single solver is done efficiently in parallel. The method is tested and applied for pore-scale simulations of random sphere packings.
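The core MLMC telescoping estimator can be illustrated on a standard toy problem (geometric Brownian motion with coupled fine/coarse Euler paths) rather than the random-microstructure PDEs of the abstract; all parameters and sample allocations below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def gbm_level(l, n, mu=0.05, sigma=0.2, s0=1.0):
    """One MLMC level: simulate S(1) for dS = mu*S dt + sigma*S dW by
    Euler-Maruyama with 2**l steps (fine) and 2**(l-1) steps (coarse),
    driven by the SAME Brownian increments, and return P_fine - P_coarse.
    At level 0 only the fine path exists."""
    m = 2 ** l
    dt = 1.0 / m
    dw = rng.normal(0.0, np.sqrt(dt), size=(n, m))
    sf = np.full(n, s0)
    for k in range(m):
        sf = sf * (1 + mu * dt + sigma * dw[:, k])
    if l == 0:
        return sf
    sc = np.full(n, s0)
    dtc = 2 * dt
    for k in range(m // 2):
        dwc = dw[:, 2 * k] + dw[:, 2 * k + 1]   # coupled coarse increment
        sc = sc * (1 + mu * dtc + sigma * dwc)
    return sf - sc

# Telescoping sum of level corrections: cheap levels get many samples,
# expensive levels few, since the coupled corrections have small variance.
samples_per_level = [40000, 20000, 10000, 5000]
estimate = sum(gbm_level(l, n).mean() for l, n in enumerate(samples_per_level))
```

The coupling (shared Brownian increments) is what drives the level-correction variance down and lets most of the sampling effort stay on the coarse, cheap levels; here the target E[S(1)] = exp(mu) is known, so the estimator can be checked directly.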
Extension of the Multipole Approach to Random Metamaterials
Directory of Open Access Journals (Sweden)
A. Chipouline
2012-01-01
Full Text Available The influence of short-range lateral disorder in the positioning of meta-atoms on the effective parameters of metamaterials is investigated theoretically using the multipole approach. Random variation of the near-field quasi-static interaction between meta-atoms in the form of double wires is shown to be the reason for the changes in effective permittivity and permeability. The obtained analytical results are compared with the known experimental ones.
International Nuclear Information System (INIS)
Coleman, C.J.; Goode, S.R.
1996-01-01
A convenient and effective new approach for analyzing DWPF samples involves the use of inserts with volumes of 1.5--3 ml placed in the neck of 14 ml sample vials. The inserts have rims that conform to the rim of the vials so that they sit straight and stable in the vial. The DWPF tank sampling system fills the pre-weighed insert rather than the entire vial, so the vial functions only as the insert holder. The shielded cell operator then removes the vial cap and decants the insert containing the sample into a plastic bottle, crucible, etc., for analysis. Inert materials such as Teflon, plastic, and zirconium are used for the insert so it is unnecessary to separate the insert from the sample for most analyses. The key technique advantage of using inserts to take DWPF samples versus filling sample vials is that it provides a convenient and almost foolproof way of obtaining and handling small volumes of slurry samples in a shielded cell without corrupting the sample. Since the insert allows the entire sample to be analyzed, this approach eliminates the errors inherent with subsampling heterogeneous slurries that comprise DWPF samples. Slurry samples can then be analyzed with confidence. Analysis times are dramatically reduced by eliminating the drying and vitrification steps normally used to produce a homogeneous solid sample. Direct dissolution and elemental analysis of slurry samples are achieved in 8 hours or less compared with 40 hours for analysis of vitrified slurry samples. Comparison of samples taken in inserts versus full vials indicate that the insert does not significantly affect sample composition
An alternative procedure for estimating the population mean in simple random sampling
Directory of Open Access Journals (Sweden)
Housila P. Singh
2012-03-01
Full Text Available This paper deals with the problem of estimating the finite population mean using auxiliary information in simple random sampling. First, we suggest a correction to the mean squared error of the estimator proposed by Gupta and Shabbir [On improvement in estimating the population mean in simple random sampling. Jour. Appl. Statist. 35(5) (2008), pp. 559-566]. We then propose a ratio-type estimator and study its properties in simple random sampling. Numerically, we show that the proposed class of estimators is more efficient than different known estimators, including that of Gupta and Shabbir (2008).
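A quick simulation contrasts the classical ratio estimator with the plain sample mean; the paper's improved class of estimators is not reproduced here, and the synthetic population below only shows the baseline mechanism that auxiliary information exploits:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic finite population where y is roughly proportional to a known
# auxiliary variable x -- the setting in which ratio estimation pays off.
N, n = 5000, 100
x = rng.uniform(10, 50, size=N)
y = 2.0 * x + rng.normal(0, 4, size=N)
X_bar, Y_bar = x.mean(), y.mean()   # X_bar assumed known; Y_bar is the target

err_srs, err_ratio = [], []
for _ in range(2000):
    idx = rng.choice(N, size=n, replace=False)      # simple random sample
    y_s, x_s = y[idx], x[idx]
    err_srs.append(y_s.mean() - Y_bar)                          # plain mean
    err_ratio.append(y_s.mean() / x_s.mean() * X_bar - Y_bar)   # ratio estimator

mse_srs = np.mean(np.square(err_srs))
mse_ratio = np.mean(np.square(err_ratio))
```

When y and x are strongly positively correlated, the ratio estimator's error is driven only by the residual variation around the line y = Rx, so its MSE is far below that of the unadjusted sample mean.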
Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster
DEFF Research Database (Denmark)
Schou, Mads Fristrup
2013-01-01
When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column...... and diminishes environmental variance. This method was compared with a traditional egg collection method where eggs are collected directly from the medium. Within each method the observed and expected standard deviations of egg-to-adult viability were compared, whereby the difference in the randomness...... and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila....
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
2010-07-01
§ 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
Approach-Induced Biases in Human Information Sampling.
Directory of Open Access Journals (Sweden)
Laurence T Hunt
2016-11-01
Full Text Available Information sampling is often biased towards seeking evidence that confirms one's prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled ("positive evidence approach"), the selection of which information to sample ("sampling the favorite"), and the interaction between information sampling and subsequent choices ("rejecting unsampled options"). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action.
Accounting for randomness in measurement and sampling in studying cancer cell population dynamics.
Ghavami, Siavash; Wolkenhauer, Olaf; Lahouti, Farshad; Ullah, Mukhtar; Linnebacher, Michael
2014-10-01
Knowing the expected temporal evolution of the proportion of different cell types in sample tissues gives an indication about the progression of the disease and its possible response to drugs. Such systems have been modelled using Markov processes. We here consider an experimentally realistic scenario in which transition probabilities are estimated from noisy cell population size measurements. Using aggregated data of FACS measurements, we develop MMSE and ML estimators and formulate two problems to find the minimum number of required samples and measurements to guarantee the accuracy of predicted population sizes. Our numerical results show that the convergence mechanism of transition probabilities and steady states differ widely from the real values if one uses the standard deterministic approach for noisy measurements. This provides support for our argument that for the analysis of FACS data one should consider the observed state as a random variable. The second problem we address is about the consequences of estimating the probability of a cell being in a particular state from measurements of small population of cells. We show how the uncertainty arising from small sample sizes can be captured by a distribution for the state probability.
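The abstract's point that observed states should be treated as random variables can be illustrated with a toy two-state population: a naive least-squares fit of the transition matrix from noisy proportion measurements recovers it closely when the noise is small, but the fit attenuates as noise grows. The states, transition matrix, and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical two-state cell population (e.g. proliferating / quiescent).
P_true = np.array([[0.9, 0.1],
                   [0.3, 0.7]])          # row-stochastic transition matrix

# Many short trajectories from random initial mixes, observed with
# FACS-like additive measurement noise on the proportions.
pairs_X, pairs_Y = [], []
for _ in range(500):
    u = rng.uniform(0.05, 0.95)
    p = np.array([u, 1.0 - u])
    for _ in range(4):
        p_next = p @ P_true
        pairs_X.append(p + rng.normal(0, 0.02, 2))       # noisy obs of p_t
        pairs_Y.append(p_next + rng.normal(0, 0.02, 2))  # noisy obs of p_{t+1}
        p = p_next

# Naive least squares: obs[t+1] ~ obs[t] @ P. With noisy regressors this
# is slightly biased (the errors-in-variables effect the abstract warns
# about), but for small noise it stays close to P_true.
X, Y = np.array(pairs_X), np.array(pairs_Y)
P_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)
```

Raising the noise standard deviation in this sketch visibly pulls P_hat away from P_true, which is the motivation for the MMSE/ML estimators that model the measurement randomness explicitly.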
A Markov random field approach for microstructure synthesis
International Nuclear Information System (INIS)
Kumar, A; Nguyen, L; DeGraef, M; Sundararaghavan, V
2016-01-01
We test the notion that many microstructures have an underlying stationary probability distribution. The stationary probability distribution is ubiquitous: we know that different windows taken from a polycrystalline microstructure are generally ‘statistically similar’. To enable computation of such a probability distribution, microstructures are represented in the form of undirected probabilistic graphs called Markov Random Fields (MRFs). In the model, pixels take up integer or vector states and interact with multiple neighbors over a window. Using this lattice structure, algorithms are developed to sample the conditional probability density for the state of each pixel given the known states of its neighboring pixels. The sampling is performed using reference experimental images. 2D microstructures are artificially synthesized using the sampled probabilities. Statistical features such as grain size distribution and autocorrelation functions closely match with those of the experimental images. The mechanical properties of the synthesized microstructures were computed using the finite element method and were also found to match the experimental values. (paper)
Random Sampling of Correlated Parameters – a Consistent Solution for Unfavourable Conditions
Energy Technology Data Exchange (ETDEWEB)
Žerovnik, G., E-mail: gasper.zerovnik@ijs.si [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Trkov, A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Kodeli, I.A. [Jožef Stefan Institute, Jamova cesta 39, SI-1000 Ljubljana (Slovenia); Capote, R. [International Atomic Energy Agency, PO Box 100, A-1400 Vienna (Austria); Smith, D.L. [Argonne National Laboratory, 1710 Avenida del Mundo, Coronado, CA 92118-3073 (United States)
2015-01-15
Two methods for random sampling according to a multivariate lognormal distribution – the correlated sampling method and the method of transformation of correlation coefficients – are briefly presented. The methods are mathematically exact and enable consistent sampling of correlated, inherently positive parameters with given information on the first two distribution moments. Furthermore, a weighted sampling method to accelerate the convergence of parameters with extremely large relative uncertainties is described. However, that method is efficient only for a limited number of correlated parameters.
A sampling-based approach to probabilistic pursuit evasion
Mahadevan, Aditya
2012-05-01
Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
Time delay correlations in chaotic scattering and random matrix approach
International Nuclear Information System (INIS)
Lehmann, N.; Savin, D.V.; Sokolov, V.V.; Sommers, H.J.
1994-01-01
We study time-delay correlations in a model of chaotic resonance scattering based on the random matrix approach. Analytical formulae, valid for an arbitrary number of open channels and arbitrary coupling strength between resonances and channels, are obtained by the supersymmetry method. The time-delay correlation function, though not a Lorentzian, is characterized, similarly to that of the scattering matrix, by the gap between the cloud of complex poles of the S-matrix and the real energy axis.
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not clearly stated, and in these circumstances a judgment was made as to the sampling method employed, based on the indications given by the author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
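A two-sample randomization test of the kind proposed can be sketched as follows. This is a minimal Monte Carlo version; the test statistic (absolute difference of means) and the number of re-randomizations are illustrative choices.

```python
import random

def randomization_test(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample randomization test on the absolute difference of means.
    The reference distribution comes from re-randomizing group labels, so
    no assumption of random sampling from a population is required."""
    rng = random.Random(seed)
    pooled = list(group_a) + list(group_b)
    n_a, n_b = len(group_a), len(group_b)
    observed = abs(sum(group_a) / n_a - sum(group_b) / n_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / n_b)
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one avoids reporting p = 0
```

The P-value here is the proportion of label re-assignments producing a difference at least as extreme as the observed one, which is valid under random assignment even for convenience samples.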
New approaches to nanoparticle sample fabrication for atom probe tomography
International Nuclear Information System (INIS)
Felfer, P.; Li, T.; Eder, K.; Galinski, H.; Magyar, A.P.; Bell, D.C.; Smith, G.D.W.; Kruse, N.; Ringer, S.P.; Cairney, J.M.
2015-01-01
Due to their unique properties, nano-sized materials such as nanoparticles and nanowires are receiving considerable attention. However, little data is available about their chemical makeup at the atomic scale, especially in three dimensions (3D). Atom probe tomography is able to answer many important questions about these materials if the challenge of producing a suitable sample can be overcome. In order to achieve this, the nanomaterial needs to be positioned within the end of a tip and fixed there so the sample possesses sufficient structural integrity for analysis. Here we provide a detailed description of various techniques that have been used to position nanoparticles on substrates for atom probe analysis. In some of the approaches, this is combined with deposition techniques to incorporate the particles into a solid matrix, and focused ion beam processing is then used to fabricate atom probe samples from this composite. Using these approaches, data have been obtained from 10–20 nm core–shell nanoparticles that were extracted directly from suspension (i.e. with no chemical modification), with a resolution of better than ±1 nm. - Highlights: • Samples for APT of nanoparticles were fabricated from particle powders and dispersions. • Electrophoresis was suitable for producing samples from dispersions. • Powder lift-out successfully produced samples from particle agglomerates. • Dispersion application/coating delivered the highest quality results.
Generating Random Samples of a Given Size Using Social Security Numbers.
Erickson, Richard C.; Brauchle, Paul E.
1984-01-01
The purposes of this article are (1) to present a method by which social security numbers may be used to draw cluster samples of a predetermined size and (2) to describe procedures used to validate this method of drawing random samples. (JOW)
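One plausible mechanism of this kind can be sketched as follows. This is an illustrative sketch, not necessarily the authors' exact procedure: the last two digits of each number are treated as a cluster label, and random clusters are drawn until the predetermined sample size is reached.

```python
import random

def ssn_cluster_sample(population, target_size, seed=0):
    """Hypothetical ID-digit cluster sampling: group subjects by the last
    two digits of their identification number, then accumulate randomly
    chosen clusters until at least target_size subjects are included."""
    rng = random.Random(seed)
    clusters = {}
    for person, id_number in population:
        clusters.setdefault(id_number[-2:], []).append(person)
    labels = list(clusters)
    rng.shuffle(labels)  # random order of the up-to-100 clusters
    sample = []
    for label in labels:
        if len(sample) >= target_size:
            break
        sample.extend(clusters[label])
    return sample
```

Because terminal digits of such numbers are essentially arbitrary, each cluster behaves like a random subset of the population, which is what makes digit-based cluster draws defensible.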
Occupational position and its relation to mental distress in a random sample of Danish residents
DEFF Research Database (Denmark)
Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D
2010-01-01
PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self...
An integrated approach for multi-level sample size determination
International Nuclear Information System (INIS)
Lu, M.S.; Teichmann, T.; Sanborn, J.B.
1997-01-01
Inspection procedures involving the sampling of items in a population often require steps of increasingly sensitive measurements, with correspondingly smaller sample sizes; these are referred to as multilevel sampling schemes. In the case of nuclear safeguards inspections verifying that there has been no diversion of Special Nuclear Material (SNM), these procedures have been examined often and increasingly complex algorithms have been developed to implement them. The aim in this paper is to provide an integrated approach, and, in so doing, to describe a systematic, consistent method that proceeds logically from level to level with increasing accuracy. The authors emphasize that the methods discussed are generally consistent with those presented in the references mentioned, and yield comparable results when the error models are the same. However, because of its systematic, integrated approach the proposed method elucidates the conceptual understanding of what goes on, and, in many cases, simplifies the calculations. In nuclear safeguards inspections, an important aspect of verifying nuclear items to detect any possible diversion of nuclear fissile materials is the sampling of such items at various levels of sensitivity. The first step usually is sampling by 'attributes' involving measurements of relatively low accuracy, followed by further levels of sampling involving greater accuracy. This process is discussed in some detail in the references given; also, the nomenclature is described. Here, the authors outline a coordinated step-by-step procedure for achieving such multilevel sampling, and they develop the relationships between the accuracy of measurement and the sample size required at each stage, i.e., at the various levels. The logic of the underlying procedures is carefully elucidated; the calculations involved and their implications are clearly described, and the process is put in a form that allows systematic generalization.
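The first-level 'attributes' step can be illustrated with the standard hypergeometric sample-size calculation. This is a generic sketch, not the paper's integrated multi-level procedure: find the smallest sample that detects at least one of an assumed number of falsified items with a required probability.

```python
from math import comb

def attribute_sample_size(population, defectives, detect_prob):
    """Smallest attribute-sample size n such that the probability of drawing
    at least one of `defectives` falsified items among `population` items
    (hypergeometric, sampling without replacement) reaches `detect_prob`."""
    for n in range(population + 1):
        # P(no defective appears in a sample of size n)
        p_miss = comb(population - defectives, n) / comb(population, n)
        if 1.0 - p_miss >= detect_prob:
            return n
    return population
```

In a multilevel scheme, the same calculation is repeated at each level with the measurement accuracy (and hence the detectable defect size) changing from level to level.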
Kaolin Quality Prediction from Samples: A Bayesian Network Approach
International Nuclear Information System (INIS)
Rivas, T.; Taboada, J.; Ordonez, C.; Matias, J. M.
2009-01-01
We describe the results of an expert system applied to the evaluation of samples of kaolin for industrial use in paper or ceramic manufacture. Different machine learning techniques - classification trees, support vector machines and Bayesian networks - were applied with the aim of evaluating and comparing their interpretability and prediction capacities. The predictive capacity of these models for the samples analyzed was highly satisfactory, both for ceramic quality and paper quality. However, Bayesian networks generally proved to be the most useful technique for our study, as this approach combines good predictive capacity with excellent interpretability of the kaolin quality structure, as it graphically represents relationships between variables and facilitates what-if analyses.
L'Engle, Kelly; Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne
2018-01-01
Introduction: Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods: The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results: The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions: The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and a 10^{-4} probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
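One simple example of a distribution-free sparse-data bound (a generic illustration; the report's specific methods are not reproduced here) is the binomial upper confidence bound on a tail probability when none of the n available samples crossed the threshold of interest:

```python
def exceedance_upper_bound(n, confidence=0.95):
    """Distribution-free (binomial) upper confidence bound on an exceedance
    probability when 0 of n independent samples crossed the threshold:
    solve (1 - p)**n = 1 - confidence for p."""
    return 1.0 - (1.0 - confidence) ** (1.0 / n)
```

The bound shrinks roughly as 3/n for 95% confidence (the "rule of three"), which makes explicit how slowly confidence is bought when samples are expensive.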
International Nuclear Information System (INIS)
Ekins, R.P.; Sufi, S.; Malan, P.G.
1978-01-01
The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. Most of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. The objective of the paper is to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology make possible, may often enable savings in counter usage of the order of 5- to 10-fold to be made. (author)
International Nuclear Information System (INIS)
Ekins, R.P.; Sufi, S.; Malan, P.G.
1977-01-01
The enormous impact on medical science in the last two decades of microanalytical techniques employing radioisotopic labels has, in turn, generated a large demand for automatic radioisotopic sample counters. Such instruments frequently comprise the most important item of capital equipment required in the use of radioimmunoassay and related techniques and often form a principal bottleneck in the flow of samples through a busy laboratory. It is therefore particularly imperative that such instruments should be used 'intelligently' and in an optimal fashion to avoid both the very large capital expenditure involved in the unnecessary proliferation of instruments and the time delays arising from their sub-optimal use. The majority of the current generation of radioactive sample counters nevertheless rely on primitive control mechanisms based on a simplistic statistical theory of radioactive sample counting which preclude their efficient and rational use. The fundamental principle upon which this approach is based is that it is useless to continue counting a radioactive sample for a time longer than that required to yield a significant increase in precision of the measurement. Thus, since substantial experimental errors occur during sample preparation, these errors should be assessed and must be related to the counting errors for that sample. It is the objective of this presentation to demonstrate that the combination of a realistic statistical assessment of radioactive sample measurement, together with the more sophisticated control mechanisms that modern microprocessor technology make possible, may often enable savings in counter usage of the order of 5- to 10-fold to be made. (orig.)
A Fault Sample Simulation Approach for Virtual Testability Demonstration Test
Institute of Scientific and Technical Information of China (English)
ZHANG Yong; QIU Jing; LIU Guanjun; YANG Peng
2012-01-01
Virtual testability demonstration testing has many advantages, such as low cost, high efficiency, low risk and few restrictions. It brings new requirements to fault sample generation. A fault sample simulation approach for virtual testability demonstration tests, based on stochastic process theory, is proposed. First, the similarities and differences in fault sample generation between physical and virtual testability demonstration tests are discussed. Second, it is pointed out that the fault occurrence process under perfect repair is a renewal process. Third, the interarrival time distribution function of the next fault event is given, and the steps and flowcharts of fault sample generation are introduced. The number of faults and their occurrence times are obtained by statistical simulation. Finally, experiments are carried out on a stable tracking platform. Because a variety of life distributions and maintenance modes are considered and some assumptions are removed, the size and structure of the simulated fault samples are closer to the actual results and more reasonable. The proposed method can effectively guide fault injection in virtual testability demonstration tests.
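The renewal-process sampling step can be sketched as follows. This is an illustration assuming Weibull-distributed lifetimes and perfect repair; the paper considers more general life distributions and maintenance modes.

```python
import math
import random

def weibull_fault_times(shape, scale, horizon, seed=0):
    """Renewal-process fault sample: interarrival times are drawn i.i.d.
    from a Weibull life distribution (perfect repair restores the unit to
    as-good-as-new), accumulated until the test horizon is reached."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        u = rng.random()
        # Inverse-CDF draw from Weibull(shape, scale).
        t += scale * (-math.log(1.0 - u)) ** (1.0 / shape)
        if t > horizon:
            return times
        times.append(t)
```

Repeating this over many virtual test runs yields both the number of faults and their occurrence times, i.e., the simulated fault sample used for injection.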
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Ling-Yu Duan
2010-01-01
Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples; kernel weights and kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out on three benchmark datasets of different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.
A Variational Approach to Enhanced Sampling and Free Energy Calculations
Parrinello, Michele
2015-03-01
The presence of kinetic bottlenecks severely hampers the ability of widely used sampling methods like molecular dynamics or Monte Carlo to explore complex free energy landscapes. One of the most popular methods for addressing this problem is umbrella sampling, which is based on the addition of an external bias that helps overcome the kinetic barriers. The bias potential is usually taken to be a function of a restricted number of collective variables. However, constructing the bias is not simple, especially when the number of collective variables increases. Here we introduce a functional of the bias which, when minimized, allows us to recover the free energy. We demonstrate the usefulness and the flexibility of this approach on a number of examples which include the determination of a six-dimensional free energy surface. Besides the practical advantages, the existence of such a variational principle allows us to look at the enhanced sampling problem from a rather convenient vantage point.
Variational Approach to Enhanced Sampling and Free Energy Calculations
Valsson, Omar; Parrinello, Michele
2014-08-01
The ability of widely used sampling methods, such as molecular dynamics or Monte Carlo simulations, to explore complex free energy landscapes is severely hampered by the presence of kinetic bottlenecks. A large number of solutions have been proposed to alleviate this problem. Many are based on the introduction of a bias potential which is a function of a small number of collective variables. However, constructing such a bias is not simple. Here we introduce a functional of the bias potential and an associated variational principle. The bias that minimizes the functional relates in a simple way to the free energy surface. This variational principle can be turned into a practical, efficient, and flexible sampling method. A number of numerical examples are presented which include the determination of a three-dimensional free energy surface. We argue that, besides being numerically advantageous, our variational approach provides a convenient and novel standpoint for looking at the sampling problem.
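In the published form of this variational principle, with F(s) the free energy along the collective variables s, p(s) a preassigned target distribution, and β the inverse temperature, the functional reads

```latex
\Omega[V] = \frac{1}{\beta}
\ln \frac{\int \mathrm{d}s\, e^{-\beta\left[F(s) + V(s)\right]}}
         {\int \mathrm{d}s\, e^{-\beta F(s)}}
+ \int \mathrm{d}s\, p(s)\, V(s)
```

and is minimized by V(s) = -F(s) - (1/β) ln p(s), so the minimizing bias recovers the free energy surface up to an irrelevant constant.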
The redshift distribution of cosmological samples: a forward modeling approach
Energy Technology Data Exchange (ETDEWEB)
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina, E-mail: joerg.herbel@phys.ethz.ch, E-mail: tomasz.kacprzak@phys.ethz.ch, E-mail: adam.amara@phys.ethz.ch, E-mail: alexandre.refregier@phys.ethz.ch, E-mail: claudio.bruderer@phys.ethz.ch, E-mail: andrina.nicola@phys.ethz.ch [Institute for Astronomy, Department of Physics, ETH Zürich, Wolfgang-Pauli-Strasse 27, 8093 Zürich (Switzerland)
2017-08-01
Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
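The ABC ingredient of the pipeline can be illustrated with the simplest rejection variant. This is a toy sketch with a one-parameter Gaussian forward model, not the UFig/MCCL machinery: draw parameters from the prior, simulate, and keep the draws whose summary statistic lands close to the observed one.

```python
import random
import statistics

def abc_rejection(observed, simulate, prior_draw, eps, n_draws, seed=0):
    """Approximate Bayesian Computation by rejection: accept prior draws
    whose simulated summary statistic is within eps of the observed value."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed) < eps:
            accepted.append(theta)
    return accepted

# Toy forward model: the summary statistic is the mean of 50 Gaussian
# draws whose unknown location theta we want to infer.
def simulate(theta, rng):
    return statistics.fmean([rng.gauss(theta, 1.0) for _ in range(50)])
```

The accepted draws approximate the posterior over theta; in the paper's setting the "simulation" is a full image simulation and the summary statistics are survey observables, but the accept/reject logic is the same.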
The redshift distribution of cosmological samples: a forward modeling approach
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina
2017-08-01
Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \\textsc{UFig} (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
The redshift distribution of cosmological samples: a forward modeling approach
International Nuclear Information System (INIS)
Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina
2017-01-01
Determining the redshift distribution n ( z ) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n ( z ) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using \\textsc(UFig) (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n ( z ) distributions for the acceptable models. We demonstrate the method by determining n ( z ) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n ( z ) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.
Volatility of an Indian stock market: A random matrix approach
International Nuclear Information System (INIS)
Kulkarni, V.; Deo, N.
2006-07-01
We examine the volatility of an Indian stock market in terms of participation, synchronization of stocks, and quantification of volatility using the random matrix approach. The volatility pattern of the market is found using the BSE index for the three-year period 2000-2002. Random matrix analysis is carried out using daily returns of 70 stocks for several time windows of 85 days in 2001 to (i) carry out a brief comparative analysis of the statistics of eigenvalues and eigenvectors of the matrix C of correlations between price fluctuations, in time regimes of different volatilities: while the bulk of eigenvalues falls within RMT bounds in all time periods, the largest (deviating) eigenvalue correlates well with the volatility of the index, and the corresponding eigenvector clearly shows a shift in the distribution of its components from volatile to less volatile periods, verifying the qualitative association between participation and volatility; (ii) observe that the inverse participation ratio for the last eigenvector is sensitive to market fluctuations (the two quantities are observed to anti-correlate significantly); and (iii) set up a variability index V whose temporal evolution is found to be significantly correlated with the volatility of the overall market index. (author)
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method benefits from the commute time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
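The commute-time property described above can be sketched numerically: the commute time follows from the Moore-Penrose pseudoinverse of the graph Laplacian as C(i, j) = vol(G)·(L⁺ᵢᵢ + L⁺ⱼⱼ − 2L⁺ᵢⱼ). The toy graph below (two triangles joined by a bridge edge; an illustrative construction, not the paper's data) shows that nodes connected by many short paths have a smaller commute time.

```python
import numpy as np

# Toy graph: two triangles joined by a single bridge edge (2, 3).
A = np.zeros((6, 6))
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
for i, j in edges:
    A[i, j] = A[j, i] = 1.0

L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
L_pinv = np.linalg.pinv(L)          # Moore-Penrose pseudoinverse
vol = A.sum()                       # graph volume = 2 * |edges|

def commute_time(i, j):
    # C(i, j) = vol * (L+_ii + L+_jj - 2 * L+_ij)
    return vol * (L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j])

# Within a triangle two short paths connect each pair of nodes, so the
# commute time is smaller than between nodes separated by the bridge.
within = commute_time(0, 1)    # = vol * effective resistance = 14 * 2/3
across = commute_time(0, 5)
```

The commute time equals the graph volume times the effective resistance between the two nodes, which is why within-cluster pairs (resistance 2/3 inside a triangle) score lower than cross-bridge pairs.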
Azimi, Ehsan; Behrad, Alireza; Ghaznavi-Ghoushchi, Mohammad Bagher; Shanbehzadeh, Jamshid
2016-11-01
The projective model is an important mapping function for the calculation of global transformation between two images. However, its hardware implementation is challenging because of a large number of coefficients with different required precisions for fixed point representation. A VLSI hardware architecture is proposed for the calculation of a global projective model between input and reference images and for refining false matches using the random sample consensus (RANSAC) algorithm. To make the hardware implementation feasible, it is proved that the calculation of the projective model can be divided into four submodels comprising two translations, an affine model and a simpler projective mapping. This approach makes the hardware implementation feasible and considerably reduces the required number of bits for fixed point representation of model coefficients and intermediate variables. The proposed hardware architecture for the calculation of a global projective model using the RANSAC algorithm was implemented using Verilog hardware description language and the functionality of the design was validated through several experiments. The proposed architecture was synthesized by using an application-specific integrated circuit digital design flow utilizing 180-nm CMOS technology as well as a Virtex-6 field programmable gate array. Experimental results confirm the efficiency of the proposed hardware architecture in comparison with software implementation.
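As a reminder of how RANSAC refines false matches, here is a minimal software sketch fitting a 2-D line rather than the paper's projective model; the function name, parameters, and data are illustrative assumptions, not the paper's pipeline.

```python
import random
import numpy as np

def ransac_line(points, n_iters=200, inlier_tol=0.1, seed=0):
    """Minimal RANSAC sketch: repeatedly fit y = a*x + b to a random
    minimal sample and keep the hypothesis with the largest consensus.
    (Illustrative parameters; not the paper's projective model.)"""
    rng = random.Random(seed)
    best_model, best_inliers = None, 0
    for _ in range(n_iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                  # degenerate pair
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = sum(abs(y - (a * x + b)) < inlier_tol for x, y in points)
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

# 20 points on y = 2x + 1 plus two gross outliers ("false matches").
pts = [(float(x), 2.0 * x + 1.0) for x in np.linspace(0.0, 1.0, 20)]
pts += [(0.5, 9.0), (0.2, -7.0)]
(a, b), n_inliers = ransac_line(pts)
```

A hardware version replaces the minimal line fit with a four-correspondence projective fit, but the hypothesize-and-verify loop is the same.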
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
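The core differential-evolution proposal that DREAM builds on can be sketched as follows. The target (a standard bivariate Gaussian), chain count, and step counts are illustrative assumptions; DREAM's randomized-subspace updates and crossover adaptation are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    # Stand-in posterior: standard bivariate Gaussian (illustrative).
    return -0.5 * np.dot(x, x)

n_chains, dim, n_steps = 10, 2, 3000
gamma = 2.38 / np.sqrt(2 * dim)        # standard DE-MC jump scale
X = rng.normal(size=(n_chains, dim))   # initial population of chains
logp = np.array([log_post(x) for x in X])

samples = []
for t in range(n_steps):
    for i in range(n_chains):
        # The difference of two other chains sets the scale and
        # orientation of the proposal; DREAM adds randomized subspaces.
        others = [j for j in range(n_chains) if j != i]
        r1, r2 = rng.choice(others, size=2, replace=False)
        prop = X[i] + gamma * (X[r1] - X[r2]) + rng.normal(scale=1e-6, size=dim)
        lp = log_post(prop)
        if np.log(rng.random()) < lp - logp[i]:     # Metropolis rule
            X[i], logp[i] = prop, lp
    if t >= 1000:                                   # discard burn-in
        samples.append(X.copy())

samples = np.vstack(samples)
```

Because the proposal is built from the current population, its scale and orientation adapt automatically to the shape of the target as the chains explore it.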
Schmidt, Jennifer; Martin, Alexandra
2016-09-01
Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.
Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel
2017-06-01
Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.
Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling
Directory of Open Access Journals (Sweden)
Bo Yu
2015-01-01
This paper considers the problem of estimating binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
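Warner's classic design, the simplest instance of the randomized response technique described above, can be simulated to show how a sensitive proportion is recovered from masked answers. The design probability p and sample size below are illustrative, and the paper's successive-sampling refinement is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(42)

def warner_estimate(pi_true, p, n):
    """Simulate Warner's randomized response design (illustrative p and n).

    Each respondent is shown the sensitive statement with probability p,
    its negation otherwise, and answers truthfully; the interviewer sees
    only the yes/no answer, never which statement was drawn."""
    has_attr = rng.random(n) < pi_true       # true sensitive status
    direct = rng.random(n) < p               # which statement was drawn
    yes_rate = np.where(direct, has_attr, ~has_attr).mean()
    # Moment estimator from P(yes) = p*pi + (1 - p)*(1 - pi):
    return (yes_rate - (1 - p)) / (2 * p - 1)

pi_hat = warner_estimate(pi_true=0.15, p=0.7, n=100_000)
```

Privacy is preserved because a "yes" answer is uninformative about any individual, yet the population proportion is still identifiable as long as p ≠ 1/2.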
Application of the Sampling Selection Technique in Approaching Financial Audit
Directory of Open Access Journals (Sweden)
Victor Munteanu
2018-03-01
In his professional approach, the financial auditor has a wide range of working techniques at his disposal, including selection techniques. They are applied depending on the nature of the information available to the auditor, the manner in which it is presented (paper or electronic format) and, last but not least, the time available. Several techniques are applied, successively or in parallel, to increase the reliability of the expressed opinion and to give the audit report a solid basis of information. Sampling is used in the phase of control or clarification of the identified error, with the main purpose of corroborating or measuring the degree of risk detected following a pertinent analysis. Since the auditor has neither the time nor the means to rebuild the information thoroughly, the sampling technique can provide an effective response to the need to validate it.
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
2010-07-01
40 CFR 761.308 (2010-07-01 edition), under § 761.79(b)(3): sample selection by random number generation on any two-dimensional square grid. For each area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
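The two-random-number selection rule can be sketched as follows; the coordinate scheme and grid size are illustrative assumptions, not the regulation's exact procedure.

```python
import random

def select_grid_points(n_points, grid_size, seed=None):
    """For each sampling point draw two random numbers, one per axis,
    on a grid_size x grid_size square grid (coordinate scheme is an
    illustrative assumption, not the regulation's procedure)."""
    rng = random.Random(seed)
    return [(rng.randrange(grid_size), rng.randrange(grid_size))
            for _ in range(n_points)]

points = select_grid_points(3, grid_size=10, seed=7)
```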
The Dirichlet-Multinomial model for multivariate randomized response data and small samples
Avetisyan, Marianna; Fox, Gerardus J.A.
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The…
The Dirichlet-Multinomial Model for Multivariate Randomized Response Data and Small Samples
Avetisyan, Marianna; Fox, Jean-Paul
2012-01-01
In survey sampling the randomized response (RR) technique can be used to obtain truthful answers to sensitive questions. Although the individual answers are masked due to the RR technique, individual (sensitive) response rates can be estimated when observing multivariate response data. The beta-binomial model for binary RR data will be generalized…
A Nationwide Random Sampling Survey of Potential Complicated Grief in Japan
Mizuno, Yasunao; Kishimoto, Junji; Asukai, Nozomu
2012-01-01
To investigate the prevalence of significant loss, potential complicated grief (CG), and its contributing factors, we conducted a nationwide random sampling survey of Japanese adults aged 18 or older (N = 1,343) using a self-rating Japanese-language version of the Complicated Grief Brief Screen. Among them, 37.0% experienced their most significant…
A simple sample size formula for analysis of covariance in cluster randomized trials.
Teerenstra, S.; Eldridge, S.; Graff, M.J.; Hoop, E. de; Borm, G.F.
2012-01-01
For cluster randomized trials with a continuous outcome, the sample size is often calculated as if an analysis of the outcomes at the end of the treatment period (follow-up scores) would be performed. However, often a baseline measurement of the outcome is available or feasible to obtain. An…
Random selection of items. Selection of n1 samples among N items composing a stratum
International Nuclear Information System (INIS)
Jaech, J.L.; Lemaire, R.J.
1987-02-01
STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification. (author). 3 refs
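Selecting n1 items at random from a stratum of N items reduces to simple random sampling without replacement; a short sketch, with illustrative item numbering:

```python
import random

def select_items(n_items, sample_size, seed=None):
    """Draw a simple random sample of item numbers, without replacement,
    from a stratum of n_items items numbered 1..n_items (the numbering
    is an illustrative assumption)."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, n_items + 1), sample_size))

chosen = select_items(n_items=500, sample_size=20, seed=3)
```

In a verification setting the same drawing would be repeated per stratum and per measurement method, with the required sample sizes supplied by the procedures the report describes.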
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
An efficient method of randomly sampling the coherent angular scatter distribution
International Nuclear Information System (INIS)
Williamson, J.F.; Morin, R.L.
1983-01-01
Monte Carlo simulations of photon transport phenomena require random selection of an interaction process at each collision site along the photon track. Possible choices are usually limited to photoelectric absorption and incoherent scatter as approximated by the Klein-Nishina distribution. A technique is described for sampling the coherent angular scatter distribution, for the benefit of workers in medical physics. (U.K.)
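Rejection sampling is one standard way to draw a scattering angle at each collision site. A sketch for the Thomson angular factor p(μ) ∝ 1 + μ², with μ = cos θ, is below; the full coherent-scatter distribution also carries the atomic form factor, which is omitted here as an assumption of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_mu(n):
    """Rejection-sample mu = cos(theta) from p(mu) ∝ 1 + mu^2 on [-1, 1]
    (Thomson angular factor only; the atomic form factor is omitted)."""
    out = []
    while len(out) < n:
        mu = rng.uniform(-1.0, 1.0, size=n)
        u = rng.uniform(0.0, 2.0, size=n)    # envelope: max(1 + mu^2) = 2
        out.extend(mu[u < 1.0 + mu**2])      # accept points under the curve
    return np.array(out[:n])

mus = sample_mu(50_000)
# Exact moments of p: E[mu] = 0, E[mu^2] = (2/3 + 2/5) / (8/3) = 0.4
```

The acceptance rate is the area under the density over the envelope area, (8/3)/4 = 2/3 here, so only a few batches are needed.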
A random walk approach to stochastic neutron transport
International Nuclear Information System (INIS)
Mulatier, Clelia de
2015-01-01
One of the key goals of nuclear reactor physics is to determine the distribution of the neutron population within a reactor core. This population indeed fluctuates due to the stochastic nature of the interactions of the neutrons with the nuclei of the surrounding medium: scattering, emission of neutrons from fission events and capture by nuclear absorption. Due to these physical mechanisms, the stochastic process performed by neutrons is a branching random walk. For most applications, the neutron population considered is very large, and all physical observables related to its behaviour, such as the heat production due to fissions, are well characterised by their average values. Generally, these mean quantities are governed by the classical neutron transport equation, called the linear Boltzmann equation. During my PhD, using tools from branching random walks and anomalous diffusion, I have tackled two aspects of neutron transport that cannot be approached by the linear Boltzmann equation. First, thanks to the Feynman-Kac backward formalism, I have characterised the phenomenon of 'neutron clustering' that has been highlighted for low-density configurations of neutrons and results from strong fluctuations in space and time of the neutron population. Then, I focused on several properties of anomalous (non-exponential) transport, which can model neutron transport in strongly heterogeneous and disordered media, such as pebble-bed reactors. One of the novel aspects of this work is that problems are treated in the presence of boundaries. Indeed, even though real systems are finite (confined geometries), most previously existing results were obtained for infinite systems. (author)
Flow in Random Microstructures: a Multilevel Monte Carlo Approach
Icardi, Matteo; Tempone, Raul
2016-01-01
…where an explicit parametrisation of the input randomness is not available or too expensive. We propose a general-purpose algorithm and computational code for the solution of Partial Differential Equations (PDEs) on random heterogeneous materials. We…
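A toy multilevel Monte Carlo estimator conveys the coupling of coarse and fine levels that the record refers to. The SDE, level count, and sample allocations below are illustrative assumptions, not the paper's PDE setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm(dW, dt, mu=0.05, sigma=0.2):
    """Euler-Maruyama endpoint for dS = mu*S dt + sigma*S dW, S(0) = 1."""
    S = np.ones(dW.shape[0])
    for k in range(dW.shape[1]):
        S = S * (1 + mu * dt + sigma * dW[:, k])
    return S

def mlmc_mean(levels=4, n_samples=(200_000, 50_000, 20_000, 10_000)):
    """Toy MLMC estimate of E[S(1)]: level l uses 2^l timesteps, and the
    level corrections couple coarse and fine paths through shared
    Brownian increments (level/sample choices are illustrative)."""
    est = 0.0
    for l in range(levels):
        n, n_f = n_samples[l], 2 ** l
        dt_f = 1.0 / n_f
        dW = rng.normal(scale=np.sqrt(dt_f), size=(n, n_f))
        fine = euler_gbm(dW, dt_f)
        if l == 0:
            est += fine.mean()
        else:
            # Coarse path sums pairs of fine increments (same randomness),
            # so the correction has a small variance and needs few samples.
            dW_c = dW[:, 0::2] + dW[:, 1::2]
            coarse = euler_gbm(dW_c, 2 * dt_f)
            est += (fine - coarse).mean()
    return est

estimate = mlmc_mean()   # true value: E[S(1)] = exp(0.05)
```

Most of the work is spent on the cheap coarse level, while the telescoping corrections restore the fine-level accuracy, which is the point of the multilevel approach.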
An integrated sampling and analysis approach for improved biodiversity monitoring
DeWan, Amielle A.; Zipkin, Elise F.
2010-01-01
Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.
DEFF Research Database (Denmark)
Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan
Detailed soil information is often needed to support agricultural practices, environmental protection and policy decisions. Several digital approaches can be used to map soil properties based on field observations. When soil observations are sparse or missing, an alternative approach is to disaggregate existing conventional soil maps. At present, the DSMART algorithm represents the most sophisticated approach for disaggregating conventional soil maps (Odgers et al., 2014). The algorithm relies on classification trees trained from resampled points, which are assigned classes according… implementation generally improved the algorithm's ability to predict the correct soil class. The implementation of soil-landscape relationships and area-proportional sampling generally increased the calculation time, while the random forest implementation reduced the calculation time. In the most successful…
Additive non-uniform random sampling in superimposed fiber Bragg grating strain gauge
International Nuclear Information System (INIS)
Ma, Y C; Liu, H Y; Yan, S B; Li, J M; Tang, J; Yang, Y H; Yang, M W
2013-01-01
This paper demonstrates an additive non-uniform random sampling and interrogation method for dynamic and/or static strain gauge using a reflection spectrum from two superimposed fiber Bragg gratings (FBGs). The superimposed FBGs are designed to generate non-equidistant space of a sensing pulse train in the time domain during dynamic strain gauge. By combining centroid finding with smooth filtering methods, both the interrogation speed and accuracy are improved. A 1.9 kHz dynamic strain is measured by generating an additive non-uniform randomly distributed 2 kHz optical sensing pulse train from a mean 500 Hz triangular periodically changing scanning frequency. (paper)
Scope of Various Random Number Generators in ant System Approach for TSP
Sen, S. K.; Shaykhian, Gholam Ali
2007-01-01
Several quasi- and pseudo-random number generators are tested on a heuristic based on an ant system approach for the traveling salesman problem. The experiment explores whether any particular generator is most desirable. Such an experiment on large samples has the potential to rank the performance of the generators for the foregoing heuristic. This is mainly to seek an answer to the controversial issue of which generator is best in terms of quality of the result (accuracy) as well as the cost of producing the result (time/computational complexity) in a probabilistic/statistical sense.
Directory of Open Access Journals (Sweden)
Kai Yang
2016-01-01
This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected value programming without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.
Characterization of electron microscopes with binary pseudo-random multilayer test samples
Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.
2011-09-01
Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
International Nuclear Information System (INIS)
Bertschinger, E.
1987-01-01
Path integrals may be used to describe the statistical properties of a random field such as the primordial density perturbation field. In this framework the probability distribution is given for a Gaussian random field subjected to constraints such as the presence of a protovoid or supercluster at a specific location in the initial conditions. An algorithm has been constructed for generating samples of a constrained Gaussian random field on a lattice using Monte Carlo techniques. The method makes possible a systematic study of the density field around peaks or other constrained regions in the biased galaxy formation scenario, and it is effective for generating initial conditions for N-body simulations with rare objects in the computational volume. 21 references
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error as genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
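The overdispersion of sample eigenvalues under pure sampling error is easy to demonstrate: for white data, whose population eigenvalues all equal 1, the sample covariance spectrum spreads out toward the Marchenko-Pastur bulk edges (the dimensions below are illustrative, not the paper's genetic data).

```python
import numpy as np

rng = np.random.default_rng(0)

# White data: the true covariance is the identity, so every population
# eigenvalue equals 1 (dimensions are illustrative).
n, p = 200, 50
X = rng.normal(size=(n, p))
S = X.T @ X / n                        # sample covariance matrix
eig = np.sort(np.linalg.eigvalsh(S))

# Sampling error alone overdisperses the spectrum: the largest sample
# eigenvalue is biased upward and the smallest downward, roughly toward
# the Marchenko-Pastur edges (1 ± sqrt(p/n))^2 = 0.25 and 2.25 here.
smallest, largest = eig[0], eig[-1]
```

The Tracy-Widom distribution describes the fluctuation of `largest` around the upper edge, which is what the paper's scaling and centering exploit.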
Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.
2016-01-01
In this supplementary appendix we provide proofs and additional simulation results that complement the paper "Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory".
A Structural Modeling Approach to a Multilevel Random Coefficients Model.
Rovine, Michael J.; Molenaar, Peter C. M.
2000-01-01
Presents a method for estimating the random coefficients model using covariance structure modeling and allowing one to estimate both fixed and random effects. The method is applied to real and simulated data, including marriage data from J. Belsky and M. Rovine (1990). (SLD)
A Computerized Approach to Trickle-Process, Random Assignment.
Braucht, G. Nicholas; Reichardt, Charles S.
1993-01-01
Procedures for implementing random assignment with trickle processing and ways they can be corrupted are described. A computerized method for implementing random assignment with trickle processing is presented as a desirable alternative in many situations and a way of protecting against threats to assignment validity. (SLD)
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, separately and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two estimators that corrected for inter-transect correlation (ν₈ and ν_W) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
Iterative algorithm of discrete Fourier transform for processing randomly sampled NMR data sets
International Nuclear Information System (INIS)
Stanek, Jan; Kozminski, Wiktor
2010-01-01
Spectra obtained by application of multidimensional Fourier Transformation (MFT) to sparsely sampled nD NMR signals are usually corrupted due to missing data. In the present paper this phenomenon is investigated on simulations and experiments. An effective iterative algorithm for artifact suppression for sparse on-grid NMR data sets is discussed in detail. It includes automated peak recognition based on statistical methods. The results enable one to study NMR spectra of high dynamic range of peak intensities preserving benefits of random sampling, namely the superior resolution in indirectly measured dimensions. Experimental examples include 3D ¹⁵N- and ¹³C-edited NOESY-HSQC spectra of human ubiquitin.
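The iterative artifact-suppression idea can be illustrated in one dimension. The sketch below (our simplification; the paper's nD algorithm with statistical peak recognition is more elaborate, and the signal, sampling fraction, and iteration count are invented) zero-fills the unsampled grid points, identifies the strongest spectral peak, subtracts its contribution from the time-domain residue, and repeats:

```python
import numpy as np

# Hedged 1D toy of a CLEAN-style iteration for randomly sampled on-grid data.
rng = np.random.default_rng(1)
N = 256
t = np.arange(N)
# two-tone "FID": a strong and a weak component (high dynamic range)
signal = np.exp(2j * np.pi * 50 * t / N) + 0.3 * np.exp(2j * np.pi * 120 * t / N)
mask = np.zeros(N, bool)
mask[rng.choice(N, N // 4, replace=False)] = True   # 25% random on-grid sampling

residue = np.where(mask, signal, 0)                 # missing points zero-filled
model = np.zeros(N, complex)
for _ in range(10):                                 # fixed iteration count for the sketch
    spec = np.fft.fft(residue)
    k = np.argmax(np.abs(spec))                     # strongest remaining peak
    amp = spec[k] / mask.sum()                      # amplitude, corrected for coverage
    component = amp * np.exp(2j * np.pi * k * t / N)
    model += component
    residue = np.where(mask, residue - component, 0)
clean_spec = np.fft.fft(model)                      # artifact-suppressed spectrum
```

After a few iterations the sampling artifacts (sidelobes of the strong peak) no longer mask the weak peak, which is the benefit claimed for high-dynamic-range spectra.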
Randomized branch sampling to estimate fruit production in Pecan trees cv. 'Barton'
Directory of Open Access Journals (Sweden)
Filemom Manoel Mokochinski
ABSTRACT: Sampling techniques to quantify the production of fruits are still very scarce, creating a gap in crop development research. This study was conducted on a rural property in the county of Cachoeira do Sul - RS to estimate the efficiency of randomized branch sampling (RBS) in quantifying the production of pecan fruit at three different ages (5, 7, and 10 years). Two selection techniques were tested: probability proportional to diameter (PPD) and uniform probability (UP), performed on nine trees, three from each age, chosen at random. The RBS underestimated fruit production for all ages, and its main drawback was the high sampling error (125.17% for PPD and 111.04% for UP). The UP was regarded as more efficient than the PPD, though both techniques estimated similar production and similar experimental errors. In conclusion, branch sampling was inaccurate for this case study, and new studies are required to produce estimates with smaller sampling errors.
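The mechanics of the PPD variant of randomized branch sampling can be sketched as follows (a minimal illustration with an invented tree and diameters, not the study's data): walk one path from the trunk, choosing each branch with probability proportional to its diameter, and divide the terminal fruit count by the product of the path's selection probabilities.

```python
import random

# Hedged sketch of randomized branch sampling with selection probability
# proportional to branch diameter (the PPD variant). Tree values are made up.
tree = {
    "diam": 30, "fruit": 0,
    "children": [
        {"diam": 12, "fruit": 0, "children": [
            {"diam": 6, "fruit": 40, "children": []},
            {"diam": 5, "fruit": 25, "children": []},
        ]},
        {"diam": 15, "fruit": 0, "children": [
            {"diam": 9, "fruit": 60, "children": []},
            {"diam": 4, "fruit": 10, "children": []},
        ]},
    ],
}

def rbs_estimate(node, rng):
    """One RBS path: terminal fruit count / product of path selection probs."""
    prob = 1.0
    while node["children"]:
        total = sum(c["diam"] for c in node["children"])
        r = rng.random() * total
        for c in node["children"]:
            r -= c["diam"]
            if r <= 0:
                prob *= c["diam"] / total
                node = c
                break
    return node["fruit"] / prob

rng = random.Random(42)
est = sum(rbs_estimate(tree, rng) for _ in range(2000)) / 2000
true_total = 40 + 25 + 60 + 10   # the estimator is unbiased for this total
```

Averaging many replicated paths recovers the true total; the single-path variance, however, is large, which is consistent with the high sampling errors the study reports.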
Olekhno, N. A.; Beltukov, Y. M.
2018-05-01
Random impedance networks are widely used as a model to describe plasmon resonances in disordered metal-dielectric and other two-component nanocomposites. In the present work, the spectral properties of resonances in random networks are studied within the framework of the random matrix theory. We have shown that the appropriate ensemble of random matrices for the considered problem is the Jacobi ensemble (the MANOVA ensemble). The obtained analytical expressions for the density of states in such resonant networks show good agreement with the results of numerical simulations in a wide range of metal filling fractions 0
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat
Avrachenkov, Konstantin; Borkar, Vivek S; Kadavankandy, Arun; Sreedharan, Jithin K
2018-01-01
In the framework of network sampling, random walk (RW) based estimation techniques provide many pragmatic solutions while uncovering the unknown network as little as possible. Despite several theoretical advances in this area, RW based sampling techniques usually make a strong assumption that the samples are in stationary regime, and hence are impelled to leave out the samples collected during the burn-in period. This work proposes two sampling schemes without burn-in time constraint to estimate the average of an arbitrary function defined on the network nodes, for example, the average age of users in a social network. The central idea of the algorithms lies in exploiting regeneration of RWs at revisits to an aggregated super-node or to a set of nodes, and in strategies to enhance the frequency of such regenerations either by contracting the graph or by making the hitting set larger. Our first algorithm, which is based on reinforcement learning (RL), uses stochastic approximation to derive an estimator. This method can be seen as intermediate between purely stochastic Markov chain Monte Carlo iterations and deterministic relative value iterations. The second algorithm, which we call the Ratio with Tours (RT)-estimator, is a modified form of respondent-driven sampling (RDS) that accommodates the idea of regeneration. We study the methods via simulations on real networks. We observe that the trajectories of RL-estimator are much more stable than those of standard random walk based estimation procedures, and its error performance is comparable to that of respondent-driven sampling (RDS) which has a smaller asymptotic variance than many other estimators. Simulation studies also show that the mean squared error of RT-estimator decays much faster than that of RDS with time. The newly developed RW based estimators (RL- and RT-estimators) allow one to avoid the burn-in period, provide better control of stability along the sample path, and overall reduce the estimation time. Our
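The regeneration idea can be sketched concretely: a random walk regenerates at each visit to an anchor node, the tours between visits are i.i.d., and the node average of a function is a ratio of degree-corrected tour sums. This is our simplified illustration (a single anchor node and a toy graph), not the authors' exact RT-estimator with a super-node and RDS weighting.

```python
import random

# Hedged sketch of a tour-based (regenerative) random-walk estimator for the
# average of f over the nodes of an undirected graph. Graph and f are invented.
graph = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
f = {0: 10.0, 1: 20.0, 2: 30.0, 3: 40.0, 4: 50.0}

def tour_estimate(graph, f, anchor, n_tours, rng):
    num = den = 0.0
    for _ in range(n_tours):
        v = anchor
        while True:                       # one tour: anchor -> ... -> anchor
            num += f[v] / len(graph[v])   # 1/degree corrects the RW's bias
            den += 1.0 / len(graph[v])    # toward high-degree nodes
            v = rng.choice(graph[v])
            if v == anchor:
                break
    return num / den

rng = random.Random(7)
est = tour_estimate(graph, f, anchor=0, n_tours=4000, rng=rng)
true_avg = sum(f.values()) / len(f)       # 30.0
```

No burn-in is discarded: every step of every tour contributes, and the i.i.d. tours make confidence statements straightforward.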
Expectation-based approach for one-dimensional randomly disordered phononic crystals
International Nuclear Information System (INIS)
Wu, Feng; Gao, Qiang; Xu, Xiaoming; Zhong, Wanxie
2014-01-01
An expectation-based statistical approach is proposed for the one-dimensional randomly disordered phononic crystal. In the proposed approach, the expectations of the random eigenstates of randomly disordered phononic crystals are investigated. In terms of these expectations, wave propagation and localization phenomena in the random phononic crystal can be understood from a statistical perspective. Using the proposed approach, it is proved that for a randomly disordered phononic crystal, the Bloch theorem holds in the sense of expectation. A one-dimensional randomly disordered binary phononic crystal consisting of two materials with random geometry size or random physical parameters is addressed using the proposed approach. From the results, it can be observed that as the disorder degree increases, the localization of the expectations of the eigenstates is strengthened. The effect of the random disorder on the eigenstates at higher frequencies is more significant than at lower frequencies. Furthermore, after introducing random disorder into phononic crystals, some random divergent eigenstates change to localized eigenstates in the expectation sense.
A cluster expansion approach to exponential random graph models
International Nuclear Information System (INIS)
Yin, Mei
2012-01-01
The exponential family of random graphs is among the most widely studied network models. We show that any exponential random graph model may alternatively be viewed as a lattice gas model with a finite Banach space norm. The system may then be treated using cluster expansion methods from statistical mechanics. In particular, we derive a convergent power series expansion for the limiting free energy in the case of small parameters. Since the free energy is the generating function for the expectations of other random variables, this characterizes the structure and behavior of the limiting network in this parameter region.
Improving ambulatory saliva-sampling compliance in pregnant women: a randomized controlled study.
Directory of Open Access Journals (Sweden)
Julian Moeller
OBJECTIVE: Noncompliance with scheduled ambulatory saliva sampling is common and has been associated with biased cortisol estimates in nonpregnant subjects. This study is the first to investigate, in pregnant women, strategies to improve ambulatory saliva-sampling compliance and the association between sampling noncompliance and saliva cortisol estimates. METHODS: We instructed 64 pregnant women to collect eight scheduled saliva samples on each of two consecutive days. Objective compliance with scheduled sampling times was assessed with a Medication Event Monitoring System, and self-reported compliance with a paper-and-pencil diary. In a randomized controlled study, we estimated whether a disclosure intervention (informing women about objective compliance monitoring) and a reminder intervention (use of acoustical reminders) improved compliance. A mixed model analysis was used to estimate associations between women's objective compliance and their diurnal cortisol profiles, and between deviation from scheduled sampling and the cortisol concentration measured in the related sample. RESULTS: Self-reported compliance with the saliva-sampling protocol was 91%, and objective compliance was 70%. The disclosure intervention was associated with improved objective compliance (informed: 81%, noninformed: 60%; F(1,60) = 17.64, p < 0.001), but the reminder intervention was not (reminders: 68%, without reminders: 72%; F(1,60) = 0.78, p = 0.379). Furthermore, a woman's increased objective compliance was associated with a higher diurnal cortisol profile, F(2,64) = 8.22, p < 0.001. Altered cortisol levels were observed in less objectively compliant samples, F(1,705) = 7.38, p = 0.007, with delayed sampling associated with lower cortisol levels. CONCLUSIONS: The results suggest that in pregnant women, objective noncompliance with scheduled ambulatory saliva sampling is common and is associated with biased cortisol estimates. To improve sampling compliance, results suggest
Liquid Water from First Principles: Validation of Different Sampling Approaches
Energy Technology Data Exchange (ETDEWEB)
Mundy, C J; Kuo, W; Siepmann, J; McGrath, M J; Vondevondele, J; Sprik, M; Hutter, J; Parrinello, M; Mohamed, F; Krack, M; Chen, B; Klein, M
2004-05-20
A series of first principles molecular dynamics and Monte Carlo simulations were carried out for liquid water to assess the validity and reproducibility of different sampling approaches. These simulations include Car-Parrinello molecular dynamics simulations using the program CPMD with different values of the fictitious electron mass in the microcanonical and canonical ensembles, Born-Oppenheimer molecular dynamics using the programs CPMD and CP2K in the microcanonical ensemble, and Metropolis Monte Carlo using CP2K in the canonical ensemble. With the exception of one simulation for 128 water molecules, all other simulations were carried out for systems consisting of 64 molecules. It is found that the structural and thermodynamic properties of these simulations are in excellent agreement with each other as long as adiabatic sampling is maintained in the Car-Parrinello molecular dynamics simulations, either by choosing a sufficiently small fictitious mass in the microcanonical ensemble or by Nosé-Hoover thermostats in the canonical ensemble. Using the Becke-Lee-Yang-Parr exchange and correlation energy functionals and norm-conserving Troullier-Martins or Goedecker-Teter-Hutter pseudopotentials, simulations at a fixed density of 1.0 g/cm³ and a temperature close to 315 K yield a height of the first peak in the oxygen-oxygen radial distribution function of about 3.0, a classical constant-volume heat capacity of about 70 J K⁻¹ mol⁻¹, and a self-diffusion constant of about 0.1 Å²/ps.
Characteristics of quantum open systems: free random variables approach
International Nuclear Information System (INIS)
Gudowska-Nowak, E.; Papp, G.; Brickmann, J.
1998-01-01
Random Matrix Theory provides an interesting tool for modelling a number of phenomena where noises (fluctuations) play a prominent role. Various applications range from the theory of mesoscopic systems in nuclear and atomic physics to biophysical models, like Hopfield-type models of neural networks and protein folding. Random Matrix Theory is also used to study dissipative systems with broken time-reversal invariance, providing a setup for the analysis of dynamic processes in condensed, disordered media. In the paper we use Random Matrix Theory (RMT) within the formalism of Free Random Variables (alias Blue's functions), which allows one to characterize the spectral properties of non-Hermitian "Hamiltonians". The relevance of the Blue's function method is discussed in connection with the application of non-Hermitian operators in various problems of physical chemistry. (author)
Acute stress symptoms during the second Lebanon war in a random sample of Israeli citizens.
Cohen, Miri; Yahav, Rivka
2008-02-01
The aims of this study were to assess prevalence of acute stress disorder (ASD) and acute stress symptoms (ASS) in Israel during the second Lebanon war. A telephone survey was conducted in July 2006 of a random sample of 235 residents of northern Israel, who were subjected to missile attacks, and of central Israel, who were not subjected to missile attacks. Results indicate that ASS scores were higher in the northern respondents; 6.8% of the northern sample and 3.9% of the central sample met ASD criteria. Appearance of each symptom ranged from 15.4% for dissociative to 88.4% for reexperiencing, with significant differences between northern and central respondents only for reexperiencing and arousal. A low ASD rate and a moderate difference between areas subjected and not subjected to attack were found.
Distributed fiber sparse-wideband vibration sensing by sub-Nyquist additive random sampling
Zhang, Jingdong; Zheng, Hua; Zhu, Tao; Yin, Guolu; Liu, Min; Bai, Yongzhong; Qu, Dingrong; Qiu, Feng; Huang, Xianbing
2018-05-01
The round trip time of the light pulse limits the maximum detectable vibration frequency response range of phase-sensitive optical time domain reflectometry (φ-OTDR). Unlike the uniform laser pulse interval in conventional φ-OTDR, we randomly modulate the pulse interval, so that an equivalent sub-Nyquist additive random sampling (sNARS) is realized for every sensing point of the long interrogation fiber. For a φ-OTDR system with 10 km sensing length, the sNARS method is optimized by theoretical analysis and Monte Carlo simulation, and the experimental results verify that a wideband sparse signal can be identified and reconstructed. Such a method can broaden the vibration frequency response range of φ-OTDR, which is of great significance in sparse-wideband-frequency vibration signal detection, such as rail track monitoring and metal defect detection.
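The alias-suppressing property of additive random sampling can be shown in a toy example (our illustration with invented frequencies and rates, not the paper's φ-OTDR processing): when the inter-sample gaps are i.i.d. random, a tone far above half the mean sampling rate still produces an unambiguous peak in a direct nonuniform DFT, whereas uniform sampling at the same mean rate would alias it.

```python
import numpy as np

# Hedged toy of sub-Nyquist additive random sampling (sNARS).
rng = np.random.default_rng(3)
f0 = 420.0                           # vibration tone, Hz
mean_dt = 1 / 100.0                  # mean sampling rate 100 Hz << 2*f0
gaps = rng.uniform(0.2, 1.8, 2000) * mean_dt   # i.i.d. additive random gaps
t = np.cumsum(gaps)                  # nonuniform sample times
x = np.sin(2 * np.pi * f0 * t)

freqs = np.arange(1.0, 600.0, 1.0)
# direct nonuniform DFT magnitude at each trial frequency
spec = np.abs(np.exp(-2j * np.pi * freqs[:, None] * t[None, :]) @ x)
f_hat = freqs[np.argmax(spec)]       # recovered tone frequency
```

The randomness spreads would-be alias peaks into a low, noise-like floor, so the true 420 Hz line dominates even though the mean rate is far below Nyquist.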
Directory of Open Access Journals (Sweden)
Alireza Goli
2015-09-01
Distribution and optimum allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible; it is therefore important to find the optimum location and distribution of emergency relief resources. When a natural disaster occurs, it is not possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this study, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed study is implemented on randomly generated instances of different sizes. The preliminary results indicate that the proposed method is capable of reaching desirable solutions in a reasonable amount of time.
Efficient approach for reliability-based optimization based on weighted importance sampling approach
International Nuclear Information System (INIS)
Yuan, Xiukai; Lu, Zhenzhou
2014-01-01
An efficient methodology is presented to perform reliability-based optimization (RBO). It is based on an efficient weighted approach for constructing an approximation of the failure probability as an explicit function of the design variables, referred to as the 'failure probability function (FPF)'. It expresses the FPF as a weighted sum of sample values obtained in the simulation-based reliability analysis. The required computational effort for decoupling in each iteration is just a single reliability analysis. After the approximation of the FPF is established, the target RBO problem can be decoupled into a deterministic one. Meanwhile, the proposed weighted approach is combined with a decoupling approach and a sequential approximate optimization framework. Engineering examples are given to demonstrate the efficiency and accuracy of the presented methodology.
Random matrix approach to cross correlations in financial data
Plerou, Vasiliki; Gopikrishnan, Parameswaran; Rosenow, Bernd; Amaral, Luís A.; Guhr, Thomas; Stanley, H. Eugene
2002-06-01
We analyze cross correlations between price fluctuations of different stocks using methods of random matrix theory (RMT). Using two large databases, we calculate cross-correlation matrices C of returns constructed from (i) 30-min returns of 1000 US stocks for the 2-yr period 1994-1995, (ii) 30-min returns of 881 US stocks for the 2-yr period 1996-1997, and (iii) 1-day returns of 422 US stocks for the 35-yr period 1962-1996. We test the statistics of the eigenvalues λi of C against a "null hypothesis": a random correlation matrix constructed from mutually uncorrelated time series. We find that a majority of the eigenvalues of C fall within the RMT bounds [λ₋, λ₊] for the eigenvalues of random correlation matrices. We test the eigenvalues of C within the RMT bound for universal properties of random matrices and find good agreement with the results for the Gaussian orthogonal ensemble of random matrices, implying a large degree of randomness in the measured cross-correlation coefficients. Further, we find that the distribution of eigenvector components for the eigenvectors corresponding to the eigenvalues outside the RMT bound display systematic deviations from the RMT prediction. In addition, we find that these "deviating eigenvectors" are stable in time. We analyze the components of the deviating eigenvectors and find that the largest eigenvalue corresponds to an influence common to all stocks. Our analysis of the remaining deviating eigenvectors shows distinct groups, whose identities correspond to conventionally identified business sectors. Finally, we discuss applications to the construction of portfolios of stocks that have a stable ratio of risk to return.
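The null test against the RMT bounds can be sketched with synthetic data (our illustration, not the paper's stock data; the single common-factor model and its loading are assumptions): for T observations of N uncorrelated series, the eigenvalues of the correlation matrix fall, for large T and N, inside [λ₋, λ₊] with Q = T/N, and a common "market" factor pushes one eigenvalue far above λ₊.

```python
import numpy as np

# Hedged sketch: compare correlation-matrix eigenvalues with the RMT
# (Marchenko-Pastur) bounds for random correlation matrices.
rng = np.random.default_rng(2)
N, T = 100, 1000
Q = T / N
lam_minus = 1 + 1 / Q - 2 * np.sqrt(1 / Q)
lam_plus = 1 + 1 / Q + 2 * np.sqrt(1 / Q)

market = rng.standard_normal(T)                       # common factor
returns = 0.3 * market[None, :] + rng.standard_normal((N, T))
returns = (returns - returns.mean(1, keepdims=True)) / returns.std(1, keepdims=True)
C = returns @ returns.T / T                           # correlation matrix
eig = np.linalg.eigvalsh(C)                           # ascending order
# the common factor produces one large "deviating" eigenvalue above lam_plus,
# analogous to the market-wide mode found in the stock data
```

Eigenvalues inside the bounds are consistent with pure noise; only those outside carry genuine cross-correlation structure.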
Random On-Board Pixel Sampling (ROPS) X-Ray Camera
Energy Technology Data Exchange (ETDEWEB)
Wang, Zhehui [Los Alamos; Iaroshenko, O. [Los Alamos; Li, S. [Los Alamos; Liu, T. [Fermilab; Parab, N. [Argonne (main); Chen, W. W. [Purdue U.; Chu, P. [Los Alamos; Kenyon, G. [Los Alamos; Lipton, R. [Fermilab; Sun, K.-X. [Nevada U., Las Vegas
2017-09-25
Recent advances in compressed sensing theory and algorithms offer new possibilities for high-speed X-ray camera design. In many CMOS cameras, each pixel has an independent on-board circuit that includes an amplifier, noise rejection, signal shaper, an analog-to-digital converter (ADC), and optional in-pixel storage. When X-ray images are sparse, i.e., when one of the following cases is true: (a.) The number of pixels with true X-ray hits is much smaller than the total number of pixels; (b.) The X-ray information is redundant; or (c.) Some prior knowledge about the X-ray images exists, sparse sampling may be allowed. Here we first illustrate the feasibility of random on-board pixel sampling (ROPS) using an existing set of X-ray images, followed by a discussion about signal to noise as a function of pixel size. Next, we describe a possible circuit architecture to achieve random pixel access and in-pixel storage. The combination of a multilayer architecture, sparse on-chip sampling, and computational image techniques, is expected to facilitate the development and applications of high-speed X-ray camera technology.
Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël
2016-11-17
Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.
Composite Sampling Approaches for Bacillus anthracis Surrogate Extracted from Soil.
Directory of Open Access Journals (Sweden)
Brian France
Any release of anthrax spores in the U.S. would require action to decontaminate the site and restore its use and operations as rapidly as possible. The remediation activity would require environmental sampling, both initially to determine the extent of contamination (hazard mapping) and post-decontamination to determine that the site is free of contamination (clearance sampling). Whether the spore contamination is within a building or outdoors, collecting and analyzing what could be thousands of samples can become the factor that limits the pace of restoring operations. To address this sampling and analysis bottleneck and decrease the time needed to recover from an anthrax contamination event, this study investigates the use of composite sampling. Pooling or compositing of samples is an established technique to reduce the number of analyses required, and its use for anthrax spore sampling has recently been investigated. However, use of composite sampling in an anthrax spore remediation event will require well-documented and accepted methods. In particular, previous composite sampling studies have focused on sampling from hard surfaces; data on soil sampling are required to extend the procedure to outdoor use. Further, we must consider whether combining liquid samples, thus increasing the volume, lowers the sensitivity of detection and produces false negatives. In this study, methods to composite bacterial spore samples from soil are demonstrated. B. subtilis spore suspensions were used as a surrogate for anthrax spores. Two soils (Arizona Test Dust and sterilized potting soil) were contaminated, and spore recovery with composites was shown to match individual sample performance. Results show that dilution can be overcome by concentrating bacterial spores using standard filtration methods. This study shows that composite sampling can be a viable method of pooling samples to reduce the number of analyses that must be performed during anthrax spore remediation.
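The dilution concern can be made concrete with a small sketch (our illustration with invented spore counts and detection limit, not the study's assay): pooling k liquid extracts dilutes each by a factor of k, so detection must either tolerate the dilution or re-concentrate the pooled volume, as the filtration step in the study does.

```python
# Hedged sketch of the compositing arithmetic behind the dilution argument.
def composite_detects(spore_counts, limit_of_detection, reconcentrate=True):
    """Pool k samples; return True if the pooled material is detectable."""
    k = len(spore_counts)
    pooled = sum(spore_counts)
    # filtration recovers the original per-sample volume, undoing the 1/k dilution
    conc = pooled if reconcentrate else pooled / k
    return conc >= limit_of_detection

samples = [0, 0, 120, 0]   # spores per sample; one contaminated out of four
detected_with_filtration = composite_detects(samples, 100, reconcentrate=True)
detected_without = composite_detects(samples, 100, reconcentrate=False)
```

With re-concentration the single contaminated sample is still detected in the pool of four; without it, the same pool falls below the detection limit and yields a false negative.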
A Copula Based Approach for Design of Multivariate Random Forests for Drug Sensitivity Prediction.
Haider, Saad; Rahman, Raziur; Ghosh, Souparno; Pal, Ranadip
2015-01-01
Modeling sensitivity to drugs based on genetic characterizations is a significant challenge in the area of systems medicine. Ensemble based approaches such as Random Forests have been shown to perform well in both individual sensitivity prediction studies and team science based prediction challenges. However, Random Forests generate a deterministic predictive model for each drug based on the genetic characterization of the cell lines and ignores the relationship between different drug sensitivities during model generation. This application motivates the need for generation of multivariate ensemble learning techniques that can increase prediction accuracy and improve variable importance ranking by incorporating the relationships between different output responses. In this article, we propose a novel cost criterion that captures the dissimilarity in the output response structure between the training data and node samples as the difference in the two empirical copulas. We illustrate that copulas are suitable for capturing the multivariate structure of output responses independent of the marginal distributions and the copula based multivariate random forest framework can provide higher accuracy prediction and improved variable selection. The proposed framework has been validated on the Genomics of Drug Sensitivity in Cancer and the Cancer Cell Line Encyclopedia databases.
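The cost criterion can be sketched for a bivariate response (our simplified illustration; the grid, the L1 distance, and the synthetic data are our assumptions, not the paper's exact criterion): the empirical copula is the rank-transform of the sample, and a candidate node can be scored by how far its empirical copula sits from the parent's.

```python
import numpy as np

# Hedged sketch of an empirical-copula dissimilarity between a parent sample
# and a node sample of a 2-dimensional response.
def empirical_copula(y, grid):
    """Evaluate the empirical copula of 2-column y at a list of (u, v) points."""
    n = len(y)
    u = (np.argsort(np.argsort(y[:, 0])) + 1) / n   # pseudo-observations (ranks)
    v = (np.argsort(np.argsort(y[:, 1])) + 1) / n
    return np.array([np.mean((u <= a) & (v <= b)) for a, b in grid])

def copula_cost(y_parent, y_node, k=5):
    pts = [(a, b) for a in np.linspace(0.2, 1, k) for b in np.linspace(0.2, 1, k)]
    return np.abs(empirical_copula(y_parent, pts) - empirical_copula(y_node, pts)).sum()

rng = np.random.default_rng(4)
z = rng.standard_normal(500)
y_dep = np.column_stack([z, z + 0.1 * rng.standard_normal(500)])  # strongly dependent pair
y_ind = rng.standard_normal((500, 2))                             # independent pair
cost_same = copula_cost(y_dep, y_dep[:250])   # node drawn from the parent
cost_diff = copula_cost(y_dep, y_ind[:250])   # node with a different dependence
```

Because ranks are used, the score reflects only the dependence structure between the responses, not their marginal distributions, which is the property the article exploits.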
Algorithms for random generation and counting a Markov chain approach
Sinclair, Alistair
1993-01-01
This monograph studies two classical computational problems: counting the elements of a finite set of combinatorial structures, and generating them at random from some probability distribution. Apart from their intrinsic interest, these problems arise naturally in many branches of mathematics and the natural sciences.
International Nuclear Information System (INIS)
Amendola, A.; Astolfi, M.; Lisanti, B.
1983-01-01
The report describes how to use the codes MUP (Monte Carlo Uncertainty Propagation), for uncertainty analysis by Monte Carlo simulation, including correlation analysis, extreme value identification and study of selected ranges of the variable space; CEC-DES (Central Composite Design), for building experimental matrices according to the requirements of Central Composite and Factorial Experimental Designs; and STRADE (Stratified Random Design), for experimental designs based on Latin Hypercube Sampling techniques. Application fields of the codes are probabilistic risk assessment, experimental design, sensitivity analysis and system identification problems.
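The Latin Hypercube idea behind a stratified random design can be sketched in a few lines (a generic illustration, not the STRADE code): each of the d input ranges is cut into n equal-probability strata, and every stratum of every variable is sampled exactly once.

```python
import numpy as np

# Hedged sketch of Latin Hypercube Sampling on the unit hypercube.
def latin_hypercube(n, d, rng):
    # one uniform point inside each of the n strata, per dimension
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        rng.shuffle(u[:, j])   # shuffle columns independently to decouple them
    return u

rng = np.random.default_rng(5)
pts = latin_hypercube(10, 3, rng)
# every column has exactly one point in each interval [k/10, (k+1)/10)
```

Compared with plain random sampling, this guarantees full marginal coverage of every input with the same number of runs, which is why it suits the sensitivity-analysis applications listed above.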
From medium heterogeneity to flow and transport: A time-domain random walk approach
Hakoun, V.; Comolli, A.; Dentz, M.
2017-12-01
The prediction of flow and transport processes in heterogeneous porous media is based on the qualitative and quantitative understanding of the interplay between 1) spatial variability of hydraulic conductivity, 2) groundwater flow and 3) solute transport. Using a stochastic modeling approach, we study this interplay through direct numerical simulations of Darcy flow and advective transport in heterogeneous media. First, we study flow in correlated hydraulic permeability fields and shed light on the relationship between the statistics of log-hydraulic conductivity, a medium attribute, and the flow statistics. Second, we determine relationships between Eulerian and Lagrangian velocity statistics, that is, between flow and transport attributes. We show how Lagrangian statistics and thus transport behaviors such as late particle arrival times are influenced by the medium heterogeneity on the one hand and the initial particle velocities on the other. We find that equidistantly sampled Lagrangian velocities can be described by a Markov process that evolves on the characteristic heterogeneity length scale. We employ a stochastic relaxation model for the equidistantly sampled particle velocities, which is parametrized by the velocity correlation length. This description results in a time-domain random walk model for the particle motion, whose spatial transitions are characterized by the velocity correlation length and temporal transitions by the particle velocities. This approach relates the statistical medium and flow properties to large scale transport, and allows for conditioning on the initial particle velocities and thus to the medium properties in the injection region. The approach is tested against direct numerical simulations.
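The time-domain random walk described above can be sketched as follows (our minimal illustration; the lognormal velocity distribution, the AR(1) relaxation in log-space, and all parameter values are assumptions): particles take fixed space steps of one correlation length, velocities evolve as a Markov chain, and each time increment is step length over current velocity.

```python
import numpy as np

# Hedged sketch of a time-domain random walk (TDRW) with Markovian velocities.
rng = np.random.default_rng(6)
n_particles, n_steps = 5000, 100
dx = 1.0                      # space step = velocity correlation length
rho = 0.8                     # log-velocity correlation between successive steps

logv = rng.standard_normal(n_particles)   # stationary initial log-velocities
t = np.zeros(n_particles)
for _ in range(n_steps):
    v = np.exp(logv)                      # lognormal velocities
    t += dx / v                           # time to cross one correlation length
    # AR(1) (Ornstein-Uhlenbeck-like) relaxation of the log-velocity
    logv = rho * logv + np.sqrt(1 - rho**2) * rng.standard_normal(n_particles)
# t now samples the arrival-time distribution at x = n_steps * dx; particles
# that start slow stay slow for several steps, producing the late-arrival tail.
```

Conditioning on the initial `logv` values mimics conditioning on the medium properties in the injection region, as the abstract describes.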
Event-triggered synchronization for reaction-diffusion complex networks via random sampling
Dong, Tao; Wang, Aijuan; Zhu, Huiyun; Liao, Xiaofeng
2018-04-01
In this paper, the synchronization problem of reaction-diffusion complex networks (RDCNs) with Dirichlet boundary conditions is considered, where the data are sampled randomly. An event-triggered controller based on the sampled data is proposed, which can reduce the number of controller updates and the communication load. Under this strategy, the synchronization problem of the diffusion complex network is equivalently converted to the stability of a reaction-diffusion complex dynamical system with time delay. By using the matrix inequality technique and the Lyapunov method, the synchronization conditions of the RDCNs are derived, which are dependent on the diffusion term. Moreover, it is found that the proposed control strategy can get rid of the Zeno behavior naturally. Finally, a numerical example is given to verify the obtained results.
Random Valued Impulse Noise Removal Using Region Based Detection Approach
Directory of Open Access Journals (Sweden)
S. Banerjee
2017-12-01
Removal of random-valued impulse noise is extremely challenging when the noise density is above 50%, and existing filters are generally not capable of eliminating such noise when the density exceeds 70%. In this paper a region-wise, density-based detection algorithm for random-valued impulse noise is proposed. On the basis of their intensity values, the pixels of a particular window are sorted and then grouped into four regions. The highest-density region is used for stepwise detection of noisy pixels, allowing a maximum of 75% of noisy pixels to be detected. A matching noise removal algorithm is then proposed. Experiments show that the proposed algorithm not only performs exceptionally well in visual, qualitative judgments of standard images but also outperforms existing algorithms in terms of MSE, PSNR and SSIM, even up to the 70% noise density level.
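As a loose illustration of the region-based idea only (the paper's exact splitting and thresholding rules are not reproduced here; the window values, the four-way split and the spread threshold below are invented for the sketch):

```python
import statistics

def detect_noisy(window, center, spread=20):
    """Hypothetical region-wise check: sort the window's intensities,
    split them into four equal regions, take the densest region
    (smallest intensity range) as the reliable reference, and flag the
    centre pixel when it lies far from that region's median."""
    vals = sorted(window)
    q = len(vals) // 4
    regions = [vals[i * q:(i + 1) * q] for i in range(4)]
    reference = min(regions, key=lambda r: r[-1] - r[0])  # densest region
    return abs(center - statistics.median(reference)) > spread

window = [120, 118, 122, 119, 255, 121, 117, 123, 120]  # one impulse (255)
print(detect_noisy(window, center=255))  # True: impulse detected
print(detect_noisy(window, center=120))  # False: clean pixel kept
```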
Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation
Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.
2004-01-01
Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.
Landslide Susceptibility Assessment Using Frequency Ratio Technique with Iterative Random Sampling
Directory of Open Access Journals (Sweden)
Hyun-Joo Oh
2017-01-01
This paper assesses the performance of landslide susceptibility analysis using the frequency ratio (FR) method with iterative random sampling. A pair of before-and-after digital aerial photographs with 50 cm spatial resolution was used to detect landslide occurrences in the Yongin area, Korea. Iterative random sampling was run ten times, each time producing separate training and validation datasets. Thirteen landslide causative factors were derived from the topographic, soil, forest, and geological maps. The FR scores were calculated from the causative factors and training occurrences in each of the ten iterations, and ten landslide susceptibility maps were obtained by integrating the causative factors weighted by their FR scores. Each susceptibility map was validated against the corresponding validation dataset. The FR method achieved susceptibility accuracies between 89.48% and 93.21%, consistently above 89%. Moreover, the ten-fold iterative FR modeling may contribute to a better understanding of the regularized relationship between the causative factors and landslide susceptibility. This makes it possible to incorporate knowledge-driven considerations of the causative factors into the landslide susceptibility analysis, and the approach can be extended to other areas.
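The frequency ratio underlying the method is the ratio of a class's share of landslide occurrences to its share of the study area. A toy sketch with one invented factor follows; real applications repeat this over the thirteen factors and sum the FR scores per map cell:

```python
# toy factor map: one class label per cell, plus landslide occurrence flags
slope_class = ["low", "low", "mid", "mid", "mid", "high", "high", "high"]
landslide   = [0,     0,     0,     1,     0,     1,      1,      0]

def frequency_ratio(classes, events):
    total_cells = len(classes)
    total_events = sum(events)
    fr = {}
    for c in set(classes):
        cells = sum(1 for x in classes if x == c)
        hits = sum(e for x, e in zip(classes, events) if x == c)
        # FR = (% of landslides in class) / (% of area in class)
        fr[c] = (hits / total_events) / (cells / total_cells)
    return fr

fr = frequency_ratio(slope_class, landslide)
print(fr["high"] > 1 > fr["low"])  # True: the "high" class is landslide-prone
```

An FR above 1 marks a class that contains more than its areal share of landslides; summing FR scores over factors gives the susceptibility index per cell.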
A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling
Directory of Open Access Journals (Sweden)
Ying Yan
2017-01-01
Due to system complexity and limits of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on evidence theory, the various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed from the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and offers good compatibility; it avoids both the difficulty of fusing high-conflict group decision-making information and the large information loss that typically follows fusion. Original expert judgments are retained objectively throughout the procedure. Constructing the cumulative probability function and random sampling require no human intervention or judgment and can easily be implemented in computer programs, giving the method a clear advantage in evaluation practice for large index systems.
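The final Monte Carlo step can be sketched as follows. This is a simplified illustration that samples index importances uniformly from fused intervals; the paper's interval evidence structures and cumulative probability functions are richer than this, and the index names and intervals are invented:

```python
import random

random.seed(0)

# fused interval evidence on index importance (names and values invented)
intervals = {"cost": (0.2, 0.4), "safety": (0.5, 0.9), "speed": (0.1, 0.3)}

def monte_carlo_weights(intervals, n=10000):
    """Sample an importance score from each interval, normalise every draw
    to weights summing to one, and average the weights over all draws."""
    acc = {k: 0.0 for k in intervals}
    for _ in range(n):
        draw = {k: random.uniform(a, b) for k, (a, b) in intervals.items()}
        total = sum(draw.values())
        for k in draw:
            acc[k] += draw[k] / total / n
    return acc

weights = monte_carlo_weights(intervals)
print(weights["safety"] > weights["cost"] > weights["speed"])  # expected order
```

Because every draw is normalised before averaging, the resulting weights sum to one, and the spread of the intervals propagates directly into the weighting.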
International Nuclear Information System (INIS)
Eperon, Isabelle; Vassilakos, Pierre; Navarria, Isabelle; Menoud, Pierre-Alain; Gauthier, Aude; Pache, Jean-Claude; Boulvain, Michel; Untiet, Sarah; Petignat, Patrick
2013-01-01
To evaluate if human papillomavirus (HPV) self-sampling (Self-HPV) using a dry vaginal swab is a valid alternative for HPV testing. Women attending colposcopy clinic were recruited to collect two consecutive Self-HPV samples: a Self-HPV using a dry swab (S-DRY) and a Self-HPV using a standard wet transport medium (S-WET). These samples were analyzed for HPV using real time PCR (Roche Cobas). Participants were randomized to determine the order of the tests. Questionnaires assessing preferences and acceptability for both tests were conducted. Subsequently, women were invited for colposcopic examination; a physician collected a cervical sample (physician-sampling) with a broom-type device and placed it into a liquid-based cytology medium. Specimens were then processed for the production of cytology slides and a Hybrid Capture HPV DNA test (Qiagen) was performed from the residual liquid. Biopsies were performed if indicated. Unweighted kappa statistics (κ) and McNemar tests were used to measure the agreement among the sampling methods. A total of 120 women were randomized. Overall HPV prevalence was 68.7% (95% Confidence Interval (CI) 59.3–77.2) by S-WET, 54.4% (95% CI 44.8–63.9) by S-DRY and 53.8% (95% CI 43.8–63.7) by HC. Among paired samples (S-WET and S-DRY), the overall agreement was good (85.7%; 95% CI 77.8–91.6) and the κ was substantial (0.70; 95% CI 0.57-0.70). The proportion of positive type-specific HPV agreement was also good (77.3%; 95% CI 68.2-84.9). No differences in sensitivity for cervical intraepithelial neoplasia grade one (CIN1) or worse between the two Self-HPV tests were observed. Women reported the two Self-HPV tests as highly acceptable. Self-HPV using dry swab transfer does not appear to compromise specimen integrity. Further study in a large screening population is needed. ClinicalTrials.gov: http://clinicaltrials.gov/show/NCT01316120
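The unweighted kappa used above to measure agreement between sampling methods corrects the observed agreement for the agreement expected by chance. A self-contained sketch with invented paired binary results (not the study's data):

```python
def cohen_kappa(a, b):
    """Unweighted Cohen's kappa for two paired binary test results."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    p_a, p_b = sum(a) / n, sum(b) / n
    pe = p_a * p_b + (1 - p_a) * (1 - p_b)                # chance agreement
    return (po - pe) / (1 - pe)

# toy paired results (1 = HPV positive): wet-medium vs dry-swab self-samples
wet = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
dry = [1, 1, 0, 1, 0, 0, 0, 1, 0, 1]
print(round(cohen_kappa(wet, dry), 3))  # 0.8: 90% raw agreement, 50% by chance
```

Here the raw agreement is 9/10 but half of it would be expected by chance alone, which is why kappa (0.8) reads lower than the percentage agreement.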
Short Note An integrated remote sampling approach for aquatic ...
African Journals Online (AJOL)
A sampling method and apparatus for collecting meaningful and quantifiable samples of aquatic macroinvertebrates, and the macrophytes they are associated with, are presented. Where physical danger from wildlife is a significant factor, especially in Africa, this apparatus offers some safety in that it can be operated from a ...
Wang, Shao-Jiang; Guo, Qi; Cai, Rong-Gen
2017-12-01
We investigate the impact of different redshift distributions of random samples on the baryon acoustic oscillation (BAO) measurements of D_V(z)r_d^fid/r_d from the two-point correlation functions of galaxies in Data Release 12 of the Baryon Oscillation Spectroscopic Survey (BOSS). Big surveys such as BOSS usually assign redshifts to the random samples by randomly drawing values from the measured redshift distributions of the data; when the cosmic variance cannot be ignored, this necessarily introduces fiducial fluctuation signals into the random samples, weakening the BAO signals. We propose a smooth function of the redshift distribution that fits the data well to populate the random galaxy samples. The resulting cosmological parameters match the input parameters of the mock catalogue very well. The significance of the BAO signals is improved by 0.33σ for a low-redshift sample and by 0.03σ for a constant-stellar-mass sample, though the absolute values do not change significantly. Given the precision of current measurements of cosmological parameters, such improvements will be valuable for future measurements of galaxy clustering.
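Populating a random catalogue from a smooth n(z) rather than from the data's own histogram can be done by inverse-CDF sampling. The sketch below uses an invented smooth form, not the fit used for BOSS:

```python
import bisect
import math
import random

random.seed(2)

# smooth model for the galaxy redshift distribution (invented form,
# standing in for the fitted function; not the BOSS fit itself)
def n_of_z(z):
    return z * z * math.exp(-((z / 0.35) ** 1.5))

# tabulate the CDF on a grid, then invert it to draw random-catalogue redshifts
zs = [i * 0.001 for i in range(1, 1001)]           # z in (0, 1]
density = [n_of_z(z) for z in zs]
cdf, total = [], 0.0
for d in density:
    total += d
    cdf.append(total)
cdf = [c / total for c in cdf]

def draw_redshift():
    return zs[bisect.bisect_left(cdf, random.random())]

sample = [draw_redshift() for _ in range(20000)]
mean_z = sum(sample) / len(sample)
print(0.3 < mean_z < 0.6)  # sample tracks the smooth n(z), not the data noise
```

Drawing from the smooth curve removes the data's own sampling fluctuations from the random catalogue, which is the point of the proposal.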
Evaluation of Approaches to Analyzing Continuous Correlated Eye Data When Sample Size Is Small.
Huang, Jing; Huang, Jiayan; Chen, Yong; Ying, Gui-Shuang
2018-02-01
To evaluate the performance of commonly used statistical methods for analyzing continuous correlated eye data when the sample size is small. We simulated correlated continuous data from two designs: (1) two eyes of a subject in two comparison groups; (2) two eyes of a subject in the same comparison group, under various sample sizes (5-50), inter-eye correlations (0-0.75) and effect sizes (0-0.8). Simulated data were analyzed using the paired t-test, the two-sample t-test, the Wald test and score test using generalized estimating equations (GEE), and the F-test using a linear mixed effects model (LMM). We compared type I error rates and statistical power, and demonstrated the analysis approaches by analyzing two real datasets. In design 1, the paired t-test and LMM perform better than GEE, with nominal type I error rates and higher statistical power. In design 2, no test performs uniformly well: the two-sample t-test (on the average of two eyes or a random eye) achieves better control of type I error but yields lower statistical power. In both designs, the GEE Wald test inflates the type I error rate and the GEE score test has lower power. When the sample size is small, some commonly used statistical methods do not perform well. The paired t-test and LMM perform best when the two eyes of a subject are in two different comparison groups, and the t-test using the average of two eyes performs best when the two eyes are in the same comparison group. The study design should be considered when selecting the appropriate analysis approach.
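The design-1 contrast (two eyes of a subject assigned to two comparison groups) can be reproduced in a small simulation. A sketch under assumed parameters (inter-eye correlation 0.6, 10 subjects, no true effect), using the textbook two-sided 5% t critical values for df = 9 and df = 18:

```python
import math
import random

random.seed(3)

RHO, N_PAIRS, N_SIM = 0.6, 10, 4000
T_CRIT_PAIRED = 2.262    # t(0.975, df = 9)
T_CRIT_TWOSAMP = 2.101   # t(0.975, df = 18)

def simulate_subject():
    """Two eyes of one subject with inter-eye correlation RHO, no true effect."""
    shared = random.gauss(0, 1)
    e1 = math.sqrt(RHO) * shared + math.sqrt(1 - RHO) * random.gauss(0, 1)
    e2 = math.sqrt(RHO) * shared + math.sqrt(1 - RHO) * random.gauss(0, 1)
    return e1, e2

def paired_t(x, y):
    d = [a - b for a, b in zip(x, y)]
    m = sum(d) / len(d)
    s2 = sum((v - m) ** 2 for v in d) / (len(d) - 1)
    return m / math.sqrt(s2 / len(d))

def two_sample_t(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sx = sum((v - mx) ** 2 for v in x) / (len(x) - 1)
    sy = sum((v - my) ** 2 for v in y) / (len(y) - 1)
    return (mx - my) / math.sqrt(sx / len(x) + sy / len(y))

rej_paired = rej_two = 0
for _ in range(N_SIM):
    eyes = [simulate_subject() for _ in range(N_PAIRS)]
    g1 = [e[0] for e in eyes]   # design 1: eye 1 -> group 1, eye 2 -> group 2
    g2 = [e[1] for e in eyes]
    rej_paired += abs(paired_t(g1, g2)) > T_CRIT_PAIRED
    rej_two += abs(two_sample_t(g1, g2)) > T_CRIT_TWOSAMP

print(rej_paired / N_SIM)   # near the nominal 0.05
print(rej_two / N_SIM)      # well below nominal: the correlation is ignored
```

Ignoring the positive inter-eye correlation overstates the variance of the group difference, so the two-sample test becomes conservative in this design, which is one face of the power loss reported above.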
Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A
2001-01-01
Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
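The T-score itself is just the individual's BMD expressed in standard deviations of the chosen reference range, which is why the reference sample matters. A sketch with invented reference values; here a 0.06 g/cm² shift in the reference mean moves the T-score by 0.5 SD:

```python
def t_score(bmd, ref_mean, ref_sd):
    """Position of an individual's BMD within a reference range, in SDs."""
    return (bmd - ref_mean) / ref_sd

# the same patient scored against two hypothetical reference ranges
population_ref = (1.00, 0.12)   # mean, SD in g/cm^2 (invented values)
volunteer_ref = (0.94, 0.12)

bmd = 0.70
print(round(t_score(bmd, *population_ref), 2))  # -2.5: osteoporosis (T <= -2.5)
print(round(t_score(bmd, *volunteer_ref), 2))   # -2.0: classified differently
```

With the WHO threshold at T = -2.5, this single patient crosses the osteoporosis boundary under one reference range but not the other, the same mechanism driving the 26% versus 14% classification rates in the abstract.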
Lyapunov exponent of the random frequency oscillator: cumulant expansion approach
International Nuclear Information System (INIS)
Anteneodo, C; Vallejos, R O
2010-01-01
We consider a one-dimensional harmonic oscillator with a random frequency, focusing on both the standard and the generalized Lyapunov exponents, λ and λ*, respectively. We discuss the difficulties that arise in the numerical calculation of λ* in the case of strong intermittency. When the frequency corresponds to an Ornstein-Uhlenbeck process, we compute λ* analytically by using a cumulant expansion up to fourth order. Connections with the problem of finding an analytical estimate for the largest Lyapunov exponent of a many-body system with smooth interactions are discussed.
Matrix product approach for the asymmetric random average process
International Nuclear Information System (INIS)
Zielen, F; Schadschneider, A
2003-01-01
We consider the asymmetric random average process, a one-dimensional stochastic lattice model with nearest-neighbour interaction but continuous and unbounded state variables. First, the explicit functional representations, so-called beta densities, of all local interactions leading to steady states of product measure form are rigorously derived. This also completes an outstanding proof given in a previous publication. Then we present an alternative solution for the processes with factorized stationary states by using a matrix product ansatz. Due to the continuous state variables we obtain a matrix algebra in the form of a functional equation which can be solved exactly.
Evaluation of random temperature fluctuation problems with frequency response approach
International Nuclear Information System (INIS)
Lejeail, Yves; Kasahara, Naoto
2000-01-01
Since thermal striping is a coupled thermohydraulic and thermomechanical phenomenon, sodium mock-up tests were usually required to confirm structural integrity. The authors have developed the frequency response function to establish a design-by-analysis methodology for this phenomenon. The applicability of this method to sinusoidal fluctuation was validated through two benchmark problems with the FAENA and TIFFSS facilities under an EJCC contract. This report describes the extension of the frequency response method to random fluctuations. As an example of application, the fatigue strength of a tee junction of the PHENIX secondary piping system was investigated. (author)
Dhruba Das; Hemanta K. Baruah
2015-01-01
In this article, based on Zadeh's extension principle, we apply the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, following Baruah's Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. Two fuzzy queues FM...
Directory of Open Access Journals (Sweden)
Jennifer L Smith
Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to depend on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a bias-dependent shift as prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around choice of trachoma survey methodologies, this study illustrates
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-01-01
random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various
A New Approach on Sampling Microorganisms from the Lower Stratosphere
Gunawan, B.; Lehnen, J. N.; Prince, J.; Bering, E., III; Rodrigues, D.
2017-12-01
University of Houston's Undergraduate Student Instrumentation Project (USIP) astrobiology group will attempt to provide a cross-sectional analysis of microorganisms in the lower stratosphere by collecting living microbial samples using a sterile and lightweight balloon-borne payload. Refer to the poster by Dr. Edgar Bering in session ED032. The purpose of this research is two-fold: first, to design a new system capable of greater mass air intake, unlike previous iterations where heavy and power-intensive pumps were used; and second, to provide proof of concept that live samples accumulate in the upper atmosphere and are viable for extensive study and subsequent examination of their potential weather-altering characteristics. Multiple balloon deployments will be conducted to increase accuracy and to provide a larger set of data. This paper will also discuss the visual presentation of the payload along with analysis of the captured samples. Design details will be presented to NASA investigators for professional studies.
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and material informatics is the use of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC can be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
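RANSAC's core loop (fit on random minimal subsets, keep the model with the largest consensus set) is compact enough to sketch on a toy linear dataset. This illustrates the algorithm itself, not the paper's QSAR workflow; data and thresholds are invented:

```python
import random

random.seed(4)

# toy data: y = 2x + 1 with small noise, plus three gross outliers
xs = [i / 10 for i in range(30)]
ys = [2 * x + 1 + random.gauss(0, 0.05) for x in xs]
for i in (5, 17, 26):
    ys[i] += 8.0

def ransac_line(xs, ys, n_iter=200, tol=0.3):
    """Fit lines through random point pairs; keep the largest consensus set."""
    best_model, best_inliers = None, []
    for _ in range(n_iter):
        i, j = random.sample(range(len(xs)), 2)
        if xs[i] == xs[j]:
            continue
        a = (ys[j] - ys[i]) / (xs[j] - xs[i])
        b = ys[i] - a * xs[i]
        inliers = [k for k in range(len(xs))
                   if abs(ys[k] - (a * xs[k] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers

(slope, intercept), inliers = ransac_line(xs, ys)
print(all(i not in inliers for i in (5, 17, 26)))  # outliers left out
```

In the QSAR setting the same consensus idea drives outlier removal, and the points outside the final consensus set are the ones flagged as unreliable.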
Gray bootstrap method for estimating frequency-varying random vibration signals with small samples
Directory of Open Access Journals (Sweden)
Wang Yanqing
2014-04-01
During environmental testing, the estimation of random vibration signals (RVS) is an important technique for airborne platform safety and reliability. However, the available methods, including the extreme value envelope method (EVEM), the statistical tolerance method (STM) and the improved statistical tolerance method (ISTM), require large samples and a typical probability distribution. Moreover, the frequency-varying character of RVS is usually not taken into account. The gray bootstrap method (GBM) is proposed to solve the problem of estimating frequency-varying RVS with small samples. First, the estimation indexes are obtained, including the estimated interval, the estimated uncertainty, the estimated value, the estimated error and the estimated reliability. GBM is then applied to estimating a single test flight of a certain aircraft. Finally, to evaluate estimation performance, GBM is compared with the bootstrap method (BM) and the gray method (GM) in the test analysis. The results show that GBM is superior for estimating dynamic signals with small samples, and the estimated reliability is proved to be 100% at the given confidence level.
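The bootstrap half of GBM can be sketched with a plain percentile bootstrap on a small sample; the gray GM(1,1) modeling part is omitted here, and the data values and confidence level are invented:

```python
import random

random.seed(5)

data = [2.31, 2.58, 2.44, 2.72, 2.50]   # small sample of a vibration metric

def bootstrap_interval(data, n_boot=5000, alpha=0.10):
    """Plain percentile bootstrap: resample with replacement, collect the
    statistic, and read the estimated value and interval from the
    empirical quantiles of the bootstrap distribution."""
    means = []
    for _ in range(n_boot):
        resample = [random.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    means.sort()
    lo = means[int(n_boot * alpha / 2)]
    hi = means[int(n_boot * (1 - alpha / 2)) - 1]
    estimate = sum(means) / n_boot
    return lo, estimate, hi

lo, est, hi = bootstrap_interval(data)
print(lo < est < hi)  # estimated value sits inside the estimated interval
```

This yields exactly the kind of indexes listed above (estimated interval, value, and an uncertainty read from the interval width) without assuming a distributional form for the small sample.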
Directory of Open Access Journals (Sweden)
P. M. A. Diaz
2016-06-01
This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is cast as an optimization problem whose goal is to find the transition matrix that maximizes the CRF performance on a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrated that the proposed method was able to substantially outperform estimates based on joint or conditional class transition probabilities, which rely on training samples.
Non-response weighting adjustment approach in survey sampling ...
African Journals Online (AJOL)
Hence the discussion is illustrated with real examples from surveys (in particular 2003 KDHS) conducted by Central Bureau of Statistics (CBS) - Kenya. Some suggestions are made for improving the quality of non-response weighting. Keywords: Survey non-response; non-response adjustment factors; weighting; sampling ...
Constrained optimisation of spatial sampling : a geostatistical approach
Groenigen, van J.W.
1999-01-01
This thesis aims at the development of optimal sampling strategies for geostatistical studies. Special emphasis is on the optimal use of ancillary data, such as co-related imagery, preliminary observations and historic knowledge. Although the object of all studies
Clerkin, Elise M; Magee, Joshua C; Wells, Tony T; Beard, Courtney; Barnett, Nancy P
2016-12-01
Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first ABM trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomena of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Adult participants (N = 86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were not significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. Copyright © 2016 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
R Drew Carleton
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
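The pre-sampling simulation idea (draw synthetic samples from a fitted count distribution and watch the sample mean converge) can be sketched as follows. The density and clumping parameter are invented, and negative binomial counts are generated as a gamma-mixed Poisson:

```python
import math
import random

random.seed(6)

def poisson(lam):
    """Knuth's multiplication method (fine for modest lambda)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

def neg_binomial(mean, k):
    """Gamma-mixed Poisson: clumped counts with clumping parameter k."""
    return poisson(random.gammavariate(k, mean / k))

TRUE_MEAN, K = 5.0, 0.8   # assumed gall-midge density and clumping parameter

def mean_of_sample(n):
    return sum(neg_binomial(TRUE_MEAN, K) for _ in range(n)) / n

# pre-sampling simulation: how fast does the sample mean converge?
err10 = sum(abs(mean_of_sample(10) - TRUE_MEAN) for _ in range(300)) / 300
err40 = sum(abs(mean_of_sample(40) - TRUE_MEAN) for _ in range(300)) / 300
print(err40 < err10)  # larger samples: means converge on the true density
```

Repeating this over a grid of sample sizes gives the convergence curves from which a preset sample size (like the n ∼ 25-40 trees above) can be chosen before any field work.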
Green approaches in sample preparation of bioanalytical samples prior to chromatographic analysis.
Filippou, Olga; Bitas, Dimitrios; Samanidou, Victoria
2017-02-01
Sample preparation is considered the most challenging step of the analytical procedure, since it affects the whole analytical methodology and therefore contributes significantly to the greenness, or lack of it, of the entire process. Eliminating sample treatment steps, reducing the amount of sample required, strongly cutting the consumption of hazardous reagents and energy, maximizing safety for operators and the environment, and avoiding large amounts of organic solvents form the basis for greening sample preparation and analytical methods. In the last decade, the development and use of greener and more sustainable microextraction techniques has emerged as an alternative to classical sample preparation procedures. In this review, the main green microextraction techniques (solid-phase microextraction, stir bar sorptive extraction, hollow-fiber liquid-phase microextraction, dispersive liquid-liquid microextraction, etc.) are presented, with special attention to bioanalytical applications of these environment-friendly sample preparation techniques, which comply with green analytical chemistry principles. Copyright © 2016 Elsevier B.V. All rights reserved.
Depletion benchmarks calculation of random media using explicit modeling approach of RMC
International Nuclear Information System (INIS)
Liu, Shichang; She, Ding; Liang, Jin-gang; Wang, Kan
2016-01-01
Highlights: • Explicit modeling of RMC is applied to a depletion benchmark for the HTGR fuel element. • Explicit modeling can provide detailed burnup distribution and burnup heterogeneity. • The results would serve as a supplement for the HTGR fuel depletion benchmark. • The method of combining adjacent burnup regions is proposed for full-core problems. • The combination method can reduce the memory footprint while keeping computational accuracy. - Abstract: The Monte Carlo method plays an important role in accurate simulation of random media, owing to its flexible geometry modeling and its use of continuous-energy nuclear cross sections. Three stochastic geometry modeling methods, including the Random Lattice Method, Chord Length Sampling and an explicit modeling approach with a mesh acceleration technique, have been implemented in RMC to simulate particle transport in dispersed fuels, of which the explicit modeling method is regarded as the best choice. In this paper, the explicit modeling method is applied to the depletion benchmark for the HTGR fuel element, and a method of combining adjacent burnup regions has been proposed and investigated. The results show that explicit modeling can provide detailed burnup distributions of individual TRISO particles, and this work would serve as a supplement for the HTGR fuel depletion benchmark calculations. The combination of adjacent burnup regions can effectively reduce the memory footprint while keeping the computational accuracy.
Diversification Strategies and Firm Performance: A Sample Selection Approach
Santarelli, Enrico; Tran, Hien Thu
2013-01-01
This paper is based upon the assumption that firm profitability is determined by its degree of diversification which in turn is strongly related to the antecedent decision to carry out diversification activities. This calls for an empirical approach that permits the joint analysis of the three interrelated and consecutive stages of the overall diversification process: diversification decision, degree of diversification, and outcome of diversification. We apply parametric and semiparametric ap...
A Geostatistical Approach to Indoor Surface Sampling Strategies
DEFF Research Database (Denmark)
Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg
1990-01-01
Particulate surface contamination is of concern in production industries such as food processing, aerospace, electronics and semiconductor manufacturing. There is also an increased awareness that surface contamination should be monitored in industrial hygiene surveys. A conceptual and theoretical framework for designing sampling strategies is thus developed. The distribution and spatial correlation of surface contamination can be characterized using concepts from geostatistical science, where spatial applications of statistics are most developed. The theory is summarized, and particulate surface contamination sampled from small areas on a table has been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method...
Song, Zhuoyi; Zhou, Yu; Juusola, Mikko
2016-01-01
Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A four-module biophysically realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the first module of the four-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity to light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity to light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
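The central quantity above, the likelihood of simultaneous multi-photon hits on a single microvillus, can be sketched with a simple binomial calculation (a toy model with illustrative numbers, not RandPAM itself):

```python
def multi_hit_fraction(n_photons, n_microvilli):
    """Expected fraction of microvilli receiving two or more photon hits
    when n_photons land uniformly at random on n_microvilli within one
    integration window (binomial toy model; numbers are illustrative)."""
    p = 1.0 / n_microvilli
    p_zero = (1 - p) ** n_photons                        # no hit
    p_one = n_photons * p * (1 - p) ** (n_photons - 1)   # exactly one hit
    return 1.0 - p_zero - p_one

# With ~30,000 microvilli, multi-hits stay rare at moderate photon counts,
# but become common when photon numbers approach the microvillus count:
print(multi_hit_fraction(1000, 30000))
print(multi_hit_fraction(30000, 30000))
```

This reproduces the qualitative point of the abstract: the multi-hit fraction, and hence this source of quantum-gain-nonlinearity, is negligible for cells with many sampling units but grows quickly as light brightens relative to the number of microvilli.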
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
Energy Technology Data Exchange (ETDEWEB)
Calyam, Prasad
2014-09-15
The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model based on random matrix theory results, and shows its potential for characterising a gene cluster's correlation matrix. The model projects a one dimensional signal into many dimensions and builds on the spiked covariance model, but characterises the behaviour of the corresponding correlation matrix instead. The eigenspectrum of the correlation matrix is examined empirically by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a procedure for estimating the dimension of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with 'low' correlation may simply reflect the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
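The spiked setup described above can be sketched in a few lines of NumPy: a one-dimensional signal projected into many dimensions plus noise produces a single eigenvalue well above the bulk of a pure-noise correlation spectrum (dimensions and noise level are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_genes = 200, 50

# A one-dimensional latent signal projected into many dimensions, plus
# i.i.d. noise (a sketch of the spiked setup; all parameters illustrative).
signal = rng.normal(size=(n_samples, 1))
loadings = rng.normal(size=(1, n_genes))
data = signal @ loadings + rng.normal(size=(n_samples, n_genes))

corr = np.corrcoef(data, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]

# For pure noise, correlation-matrix eigenvalues concentrate below the
# Marchenko-Pastur edge (1 + sqrt(p/n))^2; the embedded signal should
# push exactly one eigenvalue far above it.
bulk_edge = (1 + np.sqrt(n_genes / n_samples)) ** 2
print(f"top eigenvalue {eigvals[0]:.1f}, bulk edge {bulk_edge:.2f}")
```

Comparing the leading eigenvalue against the noise-only edge is the same style of diagnostic the paper's dimension-estimation procedure builds on.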
The contribution of simple random sampling to observed variations in faecal egg counts.
Torgerson, Paul R; Paul, Michaela; Lewis, Fraser I
2012-09-10
It has been over 100 years since the classical paper published by Gosset in 1907, under the pseudonym "Student", demonstrated that yeast cells suspended in a fluid and measured by a haemocytometer conform to a Poisson process. Similarly, parasite eggs in a faecal suspension also conform to a Poisson process. Despite this, there are common misconceptions about how to analyse or interpret observations from the McMaster or similar quantitative parasitic diagnostic techniques, widely used for evaluating parasite eggs in faeces. The McMaster technique can easily be shown, from a theoretical perspective, to give variable results that inevitably arise from the random distribution of parasite eggs in a well mixed faecal sample. The Poisson processes that lead to this variability are described, together with illustrative examples of the potentially large confidence intervals that can arise from faecal egg counts calculated from the observations on a McMaster slide. Attempts to modify the McMaster technique, or indeed other quantitative techniques, to ensure uniform egg counts are doomed to failure and belie ignorance of Poisson processes. A simple method to immediately identify excess variation/poor sampling from replicate counts is provided. Copyright © 2012 Elsevier B.V. All rights reserved.
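A Poisson-based uncertainty statement of the kind described can be sketched as follows; the multiplication factor of 50 and the normal approximation to the Poisson interval are illustrative assumptions, not prescriptions from the paper:

```python
import math

def mcmaster_ci(eggs_counted, multiplication_factor=50, z=1.96):
    """Approximate 95% confidence interval for eggs per gram from a
    McMaster slide count. Eggs in a well mixed suspension are Poisson
    distributed, so the count's variance equals its mean; the normal
    approximation k +/- z*sqrt(k) is used here, and the multiplication
    factor of 50 is a common but not universal choice."""
    k = eggs_counted
    half_width = z * math.sqrt(k)
    lo = max(0.0, k - half_width) * multiplication_factor
    hi = (k + half_width) * multiplication_factor
    return lo, hi

# Counting 4 eggs reports 200 epg, but the plausible range is wide:
print(mcmaster_ci(4))  # roughly (4, 396) eggs per gram
```

Even this crude interval makes the paper's point: a single small slide count carries far more uncertainty than the reported point value suggests.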
Random matrix approach to the dynamics of stock inventory variations
International Nuclear Information System (INIS)
Zhou Weixing; Mu Guohua; Kertész, János
2012-01-01
It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors, which contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficients C_ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ_1 and λ_2) of the correlation matrix cannot be explained by random matrix theory, and the projections of investors' inventory variations on the first eigenvector u(λ_1) are linearly correlated with stock returns, where individual investors play a dominating role. The investors are classified into three categories based on the cross-correlation coefficients C_VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets. (paper)
Blind Measurement Selection: A Random Matrix Theory Approach
Elkhalil, Khalil
2016-12-14
This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$ dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution to the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in reaching the performance of channel-aware algorithms.
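A greedy selection of this flavor can be sketched as follows; maximizing the log-determinant of the selected Gram matrix is a common surrogate error measure and stands in here for the paper's exact criteria (the `ridge` term and dimensions are illustrative):

```python
import numpy as np

def greedy_select(H, k, ridge=1e-9):
    """Greedily pick k rows of H (sensor observations) maximizing
    log det(H_S^T H_S + ridge*I), a common surrogate for low estimation
    error (a sketch of the channel-aware greedy idea, not the paper's
    exact algorithm)."""
    n, m = H.shape
    selected = []
    M = ridge * np.eye(m)  # Gram matrix of the rows chosen so far
    for _ in range(k):
        best_i, best_logdet = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            _, logdet = np.linalg.slogdet(M + np.outer(H[i], H[i]))
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        selected.append(best_i)
        M += np.outer(H[best_i], H[best_i])
    return selected

rng = np.random.default_rng(1)
H = rng.normal(size=(10, 3))
print(greedy_select(H, 4))  # indices of the 4 chosen measurements
```

The blind variant of the paper would evaluate an asymptotic (channel-independent) error measure in place of the exact log-determinant used in this sketch.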
Chaudhuri, Arijit
2014-01-01
Exposure to Sampling: Introduction; Concepts of Population, Sample, and Sampling. Initial Ramifications: Introduction; Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem. More Intricacies: Introduction; Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Introduction; Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Introduction; Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
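For instance, the estimation of a mean and total under simple random sampling without replacement, one of the book's early topics, follows the textbook formulas (this helper and its numbers are illustrative, not taken from the book):

```python
import random
import statistics

def srswor_estimate(population, n, seed=0):
    """Simple random sampling without replacement (SRSWOR): unbiased
    estimates of the population mean and total, with the textbook
    variance estimator (1 - n/N) * s^2 / n for the sample mean.
    Illustrative helper, not code from the book."""
    rng = random.Random(seed)
    N = len(population)
    sample = rng.sample(population, n)
    ybar = statistics.fmean(sample)
    s2 = statistics.variance(sample)
    var_ybar = (1 - n / N) * s2 / n  # finite-population correction applied
    return ybar, N * ybar, var_ybar

pop = list(range(1, 101))  # population mean 50.5, total 5050
mean_hat, total_hat, var_hat = srswor_estimate(pop, 20)
print(mean_hat, total_hat, var_hat)
```

The factor (1 - n/N) is the finite-population correction: as the sample approaches the whole population, the estimated variance of the mean shrinks to zero.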
Reducing approach bias to achieve smoking cessation: A pilot randomized placebo-controlled trial
Baird, S.O.; Rinck, M.; Rosenfield, D.; Davis, M.L.; Fisher, J.R.; Becker, E.S.; Powers, M.B.; Smits, J.A.J.
2017-01-01
This study aimed to provide a preliminary test of the efficacy of a brief cognitive bias modification program for reducing approach bias in adult smokers motivated to quit. Participants were 52 smokers who were randomly assigned to four sessions of approach bias modification training (AAT) or sham
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
Suliman, Mohamed Abdalla Elhag
2016-10-06
In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.
Walvoort, D.J.J.; Brus, D.J.; Gruijter, de J.J.
2010-01-01
Both for mapping and for estimating spatial means of an environmental variable, the accuracy of the result will usually be increased by dispersing the sample locations so that they cover the study area as uniformly as possible. We developed a new R package for designing spatial coverage samples for
Complete super-sample lensing covariance in the response approach
Barreira, Alexandre; Krause, Elisabeth; Schmidt, Fabian
2018-06-01
We derive the complete super-sample covariance (SSC) of the matter and weak lensing convergence power spectra using the power spectrum response formalism to accurately describe the coupling of super- to sub-survey modes. The SSC term is completely characterized by the survey window function, the nonlinear matter power spectrum and the full first-order nonlinear power spectrum response function, which describes the response to super-survey density and tidal field perturbations. Generalized separate universe simulations can efficiently measure these responses in the nonlinear regime of structure formation, which is necessary for lensing applications. We derive the lensing SSC formulae for two cases: one under the Limber and flat-sky approximations, and a more general one that goes beyond the Limber approximation in the super-survey mode and is valid for curved sky applications. Quantitatively, we find that for sky fractions f_sky ≈ 0.3 and a single source redshift at z_S = 1, the use of the flat-sky and Limber approximation underestimates the total SSC contribution by ≈ 10%. The contribution from super-survey tidal fields to the lensing SSC, which has not been included in cosmological analyses so far, is shown to represent about 5% of the total lensing covariance on multipoles l_1, l_2 ≳ 300. The SSC is the dominant off-diagonal contribution to the total lensing covariance, making it appropriate to include these tidal terms and beyond flat-sky/Limber corrections in cosmic shear analyses.
Stefan Landgraeber; Henning Quitmann; Sebastian Güth; Marcel Haversath; Wojciech Kowalczyk; Andrés Kecskeméthy; Hansjörg Heep; Marcus Jäger
2013-01-01
There is still controversy as to whether minimally invasive total hip arthroplasty enhances the postoperative outcome. The aim of this study was to compare the outcome of patients who underwent total hip replacement through an anterolateral minimally invasive (MIS) or a conventional lateral approach (CON). We performed a randomized, prospective study of 75 patients with primary hip arthritis, who underwent hip replacement through the MIS (n=36) or CON (n=39) approach. The Western Ontario and ...
WANG, P. T.
2015-12-01
Groundwater modeling requires assigning hydrogeological properties to every numerical grid cell. Due to the lack of detailed information and the inherent spatial heterogeneity, geological properties can be treated as random variables. The hydrogeological property is assumed to follow a multivariate distribution with spatial correlations. By sampling random numbers from a given statistical distribution and assigning a value to each grid cell, a random field for modeling can be completed. Statistical sampling therefore plays an important role in the efficiency of the modeling procedure. Latin Hypercube Sampling (LHS) is a stratified random sampling procedure that provides an efficient way to sample variables from their multivariate distributions. This study combines the stratified random procedure of LHS with simulation by LU decomposition to form LULHS. Both conditional and unconditional LULHS simulations were developed. The simulation efficiency and spatial correlation of LULHS are compared to three other simulation methods. The results show that, for both conditional and unconditional simulation, the LULHS method is more efficient in terms of computational effort: fewer realizations are required to achieve the required statistical accuracy and spatial correlation.
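The LULHS construction described, stratified Latin Hypercube draws combined with a triangular factorization to impose spatial correlation, can be sketched roughly as follows (the covariance matrix and sample sizes are illustrative, and this is not the authors' exact algorithm):

```python
import numpy as np
from statistics import NormalDist

def latin_hypercube(n, d, rng):
    """One draw per equal-probability stratum in each dimension,
    with independent random permutations to decouple the dimensions."""
    u = (rng.random((n, d)) + np.arange(n)[:, None]) / n
    for j in range(d):
        u[:, j] = u[rng.permutation(n), j]
    return np.clip(u, 1e-12, 1 - 1e-12)

def correlated_gaussian_lhs(n, cov, seed=0):
    """Sketch of the LULHS idea: LHS uniforms are mapped to standard
    normals through the inverse CDF, then a triangular (Cholesky) factor
    of the target covariance imposes the spatial correlation.
    Illustrative only; not the authors' exact algorithm."""
    rng = np.random.default_rng(seed)
    u = latin_hypercube(n, cov.shape[0], rng)
    z = np.vectorize(NormalDist().inv_cdf)(u)  # uncorrelated N(0, 1)
    L = np.linalg.cholesky(cov)
    return z @ L.T

cov = np.array([[1.0, 0.8], [0.8, 1.0]])
x = correlated_gaussian_lhs(2000, cov)
print(np.corrcoef(x, rowvar=False)[0, 1])  # empirical correlation near 0.8
```

Because every stratum is sampled exactly once, the marginals converge faster than with plain Monte Carlo draws, which is the efficiency gain the abstract reports.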
Discriminative motif discovery via simulated evolution and random under-sampling.
Directory of Open Access Journals (Sweden)
Tao Song
Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the data preprocessing stage, and at the Hidden Markov Model (HMM) training stage, a random under-sampling method is introduced for the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
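The random under-sampling step for the positive/negative imbalance can be sketched as a generic balancing routine (a minimal illustration, not the authors' implementation):

```python
import random
from collections import Counter

def random_undersample(X, y, seed=0):
    """Balance a dataset by randomly discarding examples from the larger
    classes until every class matches the smallest one (a minimal sketch
    of the under-sampling step, not the authors' implementation)."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(v) for v in by_class.values())
    Xb, yb = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, n_min):
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

X = [[i] for i in range(100)]
y = [0] * 90 + [1] * 10  # 9:1 majority/minority imbalance
Xb, yb = random_undersample(X, y)
print(Counter(yb))  # both classes reduced to 10 examples
```

Training on the balanced subset prevents the majority class from dominating the model, at the cost of discarding some majority-class data, which is why the paper pairs it with the simulated evolution step.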
A symbolic dynamics approach for the complexity analysis of chaotic pseudo-random sequences
International Nuclear Information System (INIS)
Xiao Fanghong
2004-01-01
By considering a chaotic pseudo-random sequence as a symbolic sequence, the authors present a symbolic dynamics approach for the complexity analysis of chaotic pseudo-random sequences. The method is applied to the Logistic map and a one-way coupled map lattice to demonstrate how it works, and a comparison is made with the approximate entropy method. The results show that this method can distinguish the complexities of different chaotic pseudo-random sequences, and that it is superior to the approximate entropy method.
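The basic idea, binarizing a chaotic orbit and measuring the diversity of symbol words, can be sketched for the Logistic map at r = 4 (block entropy is used here as one simple complexity measure; the paper's specific measure may differ):

```python
from collections import Counter
from math import log2

def logistic_symbols(x0, n, r=4.0):
    """Binarize a Logistic-map orbit with the partition x < 0.5 -> '0',
    x >= 0.5 -> '1' (the standard generating partition at r = 4)."""
    x, symbols = x0, []
    for _ in range(n):
        x = r * x * (1 - x)
        symbols.append('0' if x < 0.5 else '1')
    return ''.join(symbols)

def block_entropy(sym, k):
    """Shannon entropy (in bits) of the length-k words of a symbolic
    sequence, one crude complexity measure in the symbolic-dynamics
    spirit (the paper's specific measure may differ)."""
    words = [sym[i:i + k] for i in range(len(sym) - k + 1)]
    n = len(words)
    return -sum(c / n * log2(c / n) for c in Counter(words).values())

seq = logistic_symbols(0.123456, 5000)
print(block_entropy(seq, 3))  # near 3 bits: close to maximal word diversity
```

At r = 4 the symbolic sequence is statistically like fair coin flips, so the length-3 word entropy approaches its 3-bit maximum; a less complex generator would show a measurably lower value.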
Li, Tiandong
2012-01-01
In large-scale assessments, such as the National Assessment of Educational Progress (NAEP), plausible values based on Multiple Imputations (MI) have been used to estimate population characteristics for latent constructs under complex sample designs. Mislevy (1991) derived a closed-form analytic solution for a fixed-effect model in creating…
Jannink, I; Bennen, J N; Blaauw, J; van Diest, P J; Baak, J P
1995-01-01
This study compares the influence of two different nuclear sampling methods on the prognostic value of assessments of the mean and standard deviation of nuclear area (MNA, SDNA) in 191 consecutive invasive breast cancer patients with long-term follow-up. The first sampling method used was 'at convenience' sampling (ACS); the second, systematic random sampling (SRS). Both sampling methods were tested with a sample size of 50 nuclei (ACS-50 and SRS-50). To determine whether, besides the sampling method, sample size had an impact on prognostic value as well, the SRS method was also tested using a sample size of 100 nuclei (SRS-100). SDNA values were systematically lower for ACS, obviously due to (unconsciously) not including small and large nuclei. Testing the prognostic value of a series of cut-off points, MNA and SDNA values assessed by the SRS method were prognostically significantly stronger than the values obtained by the ACS method. This was confirmed in Cox regression analysis. For the MNA, the Mantel-Cox p-values from SRS-50 and SRS-100 measurements were not significantly different. However, for the SDNA, SRS-100 yielded significantly lower p-values than SRS-50. In conclusion, compared with the 'at convenience' nuclear sampling method, systematic random sampling of nuclei is not only superior with respect to reproducibility of results, but also provides better prognostic value in patients with invasive breast cancer.
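Systematic random sampling of the kind compared here, a random start followed by a fixed stride through the ordered items, can be sketched generically (the values and sample sizes are placeholders, not the study's data):

```python
import random

def systematic_sample(items, n, seed=0):
    """Systematic random sampling: choose a random start, then take
    every k-th item (k = len(items) // n). A generic sketch of the SRS
    scheme applied to nuclei; microscopy details are omitted."""
    rng = random.Random(seed)
    k = len(items) // n
    start = rng.randrange(k)
    return items[start::k][:n]

nuclei_areas = list(range(191))  # placeholder "nuclear area" values
sample = systematic_sample(nuclei_areas, 50)
print(len(sample), sample[:3])
```

Unlike convenience selection, the only subjective input is the ordering of the items; every item has an equal chance of inclusion, which is why the study finds SRS both more reproducible and prognostically stronger.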
Directory of Open Access Journals (Sweden)
Gunter Spöck
2015-05-01
Recently, Spock and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spock and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data is transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
Sumner, Anne E; Luercio, Marcella F; Frempong, Barbara A; Ricks, Madia; Sen, Sabyasachi; Kushner, Harvey; Tulloch-Reid, Marshall K
2009-02-01
The disposition index, the product of the insulin sensitivity index (S(I)) and the acute insulin response to glucose, is linked in African Americans to chromosome 11q. This link was determined with S(I) calculated with the nonlinear regression approach to the minimal model and data from the reduced-sample insulin-modified frequently-sampled intravenous glucose tolerance test (Reduced-Sample-IM-FSIGT). However, the application of the nonlinear regression approach to calculate S(I) using data from the Reduced-Sample-IM-FSIGT has been challenged as being not only inaccurate but also having a high failure rate in insulin-resistant subjects. Our goal was to determine the accuracy and failure rate of the Reduced-Sample-IM-FSIGT using the nonlinear regression approach to the minimal model. With S(I) from the Full-Sample-IM-FSIGT considered the standard and using the nonlinear regression approach to the minimal model, we compared the agreement between S(I) from the Full- and Reduced-Sample-IM-FSIGT protocols. One hundred African Americans (body mass index, 31.3 +/- 7.6 kg/m(2) [mean +/- SD]; range, 19.0-56.9 kg/m(2)) had FSIGTs. Glucose (0.3 g/kg) was given at baseline. Insulin was infused from 20 to 25 minutes (total insulin dose, 0.02 U/kg). For the Full-Sample-IM-FSIGT, S(I) was calculated based on the glucose and insulin samples taken at -1, 1, 2, 3, 4, 5, 6, 7, 8, 10, 12, 14, 16, 19, 22, 23, 24, 25, 27, 30, 40, 50, 60, 70, 80, 90, 100, 120, 150, and 180 minutes. For the Reduced-Sample-IM-FSIGT, S(I) was calculated based on the time points that appear in bold. Agreement was determined by Spearman correlation, concordance, and the Bland-Altman method. In addition, for both protocols, the population was divided into tertiles of S(I). Insulin resistance was defined by the lowest tertile of S(I) from the Full-Sample-IM-FSIGT. The distribution of subjects across tertiles was compared by rank order and kappa statistic. We found that the rate of failure of resolution of S(I) by
Shen, Lujun; Yang, Lei; Zhang, Jing; Zhang, Meng
2018-01-01
To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data was obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants' writing manuscripts. Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P 0.05). Students' writing manuscripts were mainly encoded on five code categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between positive emotion, insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days' manuscripts and the last 10 days' ones. Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study.
Zhang, Jing; Zhang, Meng
2018-01-01
Purpose To explore the effect of expressive writing of positive emotions on test anxiety among senior-high-school students. Methods The Test Anxiety Scale (TAS) was used to assess the anxiety level of 200 senior-high-school students. Seventy-five students with high anxiety were recruited and divided randomly into experimental and control groups. Each day for 30 days, the experimental group engaged in 20 minutes of expressive writing of positive emotions, while the control group was asked to merely write down their daily events. A second test was given after the month-long experiment to analyze whether there had been a reduction in anxiety among the sample. Quantitative data were obtained from TAS scores. The NVivo10.0 software program was used to examine the frequency of particular word categories used in participants’ writing manuscripts. Results Senior-high-school students indicated moderate to high test anxiety. There was a significant difference in post-test results (P < 0.05). Students’ writing manuscripts were mainly coded into five categories: cause, anxiety manifestation, positive emotion, insight and evaluation. There was a negative relation between the positive-emotion and insight codes and test anxiety. There were significant differences in the positive emotion, anxiety manifestation, and insight code categories between the first 10 days’ manuscripts and the last 10 days’ ones. Conclusions Long-term expressive writing of positive emotions appears to help reduce test anxiety by using insight and positive emotion words for Chinese students. Efficient and effective intervention programs to ease test anxiety can be designed based on this study. PMID:29401473
International Nuclear Information System (INIS)
Plevnik, Lucijan; Žerovnik, Gašper
2016-01-01
Highlights: • Methods for random sampling of correlated parameters. • Link to open-source code for sampling of resonance parameters in ENDF-6 format. • Validation of the code on realistic and artificial data. • Validation of covariances in three major contemporary nuclear data libraries. - Abstract: Methods for random sampling of correlated parameters are presented. The methods are implemented for sampling of resonance parameters in ENDF-6 format and a link to the open-source code ENDSAM is given. The code has been validated on realistic data. Additionally, consistency of covariances of resonance parameters of three major contemporary nuclear data libraries (JEFF-3.2, ENDF/B-VII.1 and JENDL-4.0u2) has been checked.
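Random sampling of correlated parameters of the kind described typically relies on a factorization of the covariance matrix. A minimal sketch using a Cholesky factor is shown below; the 2×2 mean vector and covariance matrix are hypothetical illustration values, not data from ENDSAM or the cited nuclear data libraries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mean vector and covariance matrix for two correlated
# resonance-like parameters (correlation coefficient = 0.3).
mean = np.array([1.0, 2.0])
cov = np.array([[0.040, 0.018],
                [0.018, 0.090]])

# Cholesky factorization: cov = L @ L.T, so correlated samples are
# obtained as mean + L @ z with z standard normal.
L = np.linalg.cholesky(cov)
z = rng.standard_normal((2, 100_000))
samples = mean[:, None] + L @ z          # shape (2, n_samples)

# The empirical covariance of the samples reproduces the input matrix.
sample_cov = np.cov(samples)
```

The same construction extends to any positive-definite covariance matrix, which is how consistency of library covariances can be exercised by sampling.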
Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick
2015-01-01
Background Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. Methods A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed f...
Prediction of soil CO2 flux in sugarcane management systems using the Random Forest approach
Directory of Open Access Journals (Sweden)
Rose Luiza Moraes Tavares
Full Text Available ABSTRACT: The Random Forest algorithm is a data-mining technique used for ranking attributes in order of importance in explaining the variation in a target attribute, such as soil CO2 flux. This study aimed to identify the variables that predict soil CO2 flux in sugarcane management systems using the machine-learning algorithm Random Forest. Two different sugarcane management areas in the state of São Paulo, Brazil, were selected: burned and green. In each area, we assembled a sampling grid of 81 georeferenced points to assess soil CO2 flux using an automated portable soil gas chamber with infrared spectroscopy measurement during the dry season of 2011 and the rainy season of 2012. In addition, we sampled the soil to evaluate physical, chemical, and microbiological attributes. For data interpretation, we used the Random Forest algorithm, based on a combination of decision trees (machine-learning models in which every tree depends on the values of a random vector sampled independently, with the same distribution for all trees in the forest). The results indicated that clay content in the soil was the most important attribute for explaining the CO2 flux in the areas studied during the evaluated period. The Random Forest algorithm produced a model with a good fit (R2 = 0.80) for predicted and observed values.
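The attribute-ranking idea can be illustrated without the full Random Forest machinery. The sketch below substitutes a plain least-squares surrogate and permutation importance ("how much does shuffling one predictor hurt the fit?"), which is the same logic that underlies the forest's importance ranking; the synthetic clay/pH/flux data and coefficients are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for the field data: clay content drives CO2 flux,
# while the second attribute (pH) is irrelevant noise.
n = 500
clay = rng.uniform(10, 60, n)
ph = rng.uniform(4, 7, n)
flux = 0.1 * clay + rng.normal(0, 0.5, n)
X = np.column_stack([clay, ph])

def r2(X, y):
    """R^2 of an ordinary least-squares fit (simple surrogate model)."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

base = r2(X, flux)
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])   # break column j's link to y
    importance.append(base - r2(Xp, flux)) # drop in R^2 = importance

# importance[0] (clay) should dominate importance[1] (pH)
```

With a real Random Forest (e.g. scikit-learn's `RandomForestRegressor`), the per-attribute importances play the role of `importance` here.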
A sub-sampled approach to extremely low-dose STEM
Energy Technology Data Exchange (ETDEWEB)
Stevens, A. [OptimalSensing, Southlake, Texas 76092, USA; Duke University, ECE, Durham, North Carolina 27708, USA; Luzi, L. [Rice University, ECE, Houston, Texas 77005, USA; Yang, H. [Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; Kovarik, L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Mehdi, B. L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom; Liyu, A. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Gehm, M. E. [Duke University, ECE, Durham, North Carolina 27708, USA; Browning, N. D. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom
2018-01-22
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤1 e⁻ Å⁻²) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent to, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam-sensitive materials and in situ dynamic processes at the resolution limit of the aberration-corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
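The random sub-sampling idea can be sketched with a toy image: acquire a random 20% of pixels (roughly 5× lower dose) and inpaint the rest. The windowed-mean fill below is a crude stand-in for the sparse-reconstruction algorithms used in practice, and the image, sampling fraction and window size are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "ground truth" image: a smooth intensity pattern.
ny, nx = 64, 64
yy, xx = np.mgrid[0:ny, 0:nx]
truth = np.sin(xx / 10.0) + np.cos(yy / 12.0)

# Random (non-adaptive) sub-sampling: acquire only 20% of the pixels,
# i.e. the beam visits ~5x fewer probe positions.
frac = 0.2
mask = rng.random((ny, nx)) < frac

# Naive inpainting: fill each missing pixel with the mean of the
# acquired pixels inside a local window.
filled = np.where(mask, truth, np.nan)
recon = filled.copy()
k = 5
for i in range(ny):
    for j in range(nx):
        if not mask[i, j]:
            win = filled[max(0, i - k):i + k + 1, max(0, j - k):j + k + 1]
            if np.isnan(win).all():
                recon[i, j] = truth[mask].mean()  # fallback: global mean
            else:
                recon[i, j] = np.nanmean(win)

rmse = np.sqrt(np.mean((recon - truth) ** 2))
```

Even this naive fill recovers a smooth image well; the point of the paper's dictionary-learning inpainting is to do the same for images with fine structure.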
International Nuclear Information System (INIS)
Makepeace, C.E.; Horvath, F.J.; Stocker, H.
1981-11-01
The aim of a stratified random sampling plan is to provide the best estimate (in the absence of full-shift personal gravimetric sampling) of personal exposure to respirable quartz among underground miners. One also gains information on the exposure distribution of all the miners at the same time. Three variables (or strata) are considered in the present scheme: locations, occupations and times of sampling. Random sampling within each stratum ensures that each location, occupation and time of sampling has an equal opportunity of being selected without bias. Following implementation of the plan and analysis of collected data, one can determine the individual exposures and the mean. This information can then be used to identify those groups whose exposure contributes significantly to the collective exposure. In turn, this identification, along with other considerations, allows the mine operator to carry out a cost-benefit optimization and eventual implementation of engineering controls for these groups. This optimization and engineering control procedure, together with the random sampling plan, can then be used in an iterative manner to minimize the mean value of the distribution and the collective exposures.
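The three-stratum scheme can be sketched as follows; the location, occupation and shift names, the number of available sampling slots and the per-stratum allocation are all invented for illustration.

```python
import random

random.seed(7)

# Hypothetical strata: every combination of location, occupation and
# sampling time forms one stratum.
locations = ["stope A", "stope B", "haulage", "crusher"]
occupations = ["driller", "mucker", "maintenance"]
shifts = ["day", "night"]

strata = [(l, o, s) for l in locations for o in occupations for s in shifts]

# Within each stratum, pick measurement slots uniformly at random so
# that every slot has an equal, unbiased chance of selection.
samples_per_stratum = 2
slots_per_stratum = 10   # hypothetical sampling opportunities per stratum

plan = {
    stratum: sorted(random.sample(range(slots_per_stratum),
                                  samples_per_stratum))
    for stratum in strata
}
total_samples = sum(len(v) for v in plan.values())
```

The fixed allocation per stratum guarantees coverage of every location/occupation/time combination, while the randomization within strata removes selection bias.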
Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia
Directory of Open Access Journals (Sweden)
Fady S. Moiety
2014-06-01
Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.
Analogies between colored Lévy noise and random channel approach to disordered kinetics
Vlad, Marcel O.; Velarde, Manuel G.; Ross, John
2004-02-01
We point out some interesting analogies between colored Lévy noise and the random channel approach to disordered kinetics. These analogies are due to the fact that the probability density of the Lévy noise source plays a similar role as the probability density of rate coefficients in disordered kinetics. Although the equations for the two approaches are not identical, the analogies can be used for deriving new, useful results for both problems. The random channel approach makes it possible to generalize the fractional Uhlenbeck-Ornstein processes (FUO) for space- and time-dependent colored noise. We describe the properties of colored noise in terms of characteristic functionals, which are evaluated by using a generalization of Huber's approach to complex relaxation [Phys. Rev. B 31, 6070 (1985)]. We start out by investigating the properties of symmetrical white noise and then define the Lévy colored noise in terms of a Langevin equation with a Lévy white noise source. We derive exact analytical expressions for the various characteristic functionals, which characterize the noise, and a functional fractional Fokker-Planck equation for the probability density functional of the noise at a given moment in time. Second, by making an analogy between the theory of colored noise and the random channel approach to disordered kinetics, we derive fractional equations for the evolution of the probability densities of the random rate coefficients in disordered kinetics. These equations serve as a basis for developing methods for the evaluation of the statistical properties of the random rate coefficients from experimental data. Special attention is paid to the analysis of systems for which the observed kinetic curves can be described by linear or nonlinear stretched exponential kinetics.
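The Langevin definition of the Lévy colored noise can be sketched numerically. The symmetric α-stable white-noise source below is generated with the Chambers-Mallows-Stuck formula; the values of α, γ, the time step and the trajectory length are illustrative choices, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck generator for symmetric alpha-stable noise."""
    u = rng.uniform(-np.pi / 2, np.pi / 2, size)
    w = rng.exponential(1.0, size)
    return (np.sin(alpha * u) / np.cos(u) ** (1 / alpha)
            * (np.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

# Langevin equation x' = -gamma * x + eta(t) with Levy white noise eta;
# the resulting O-U-type process x(t) is a colored Levy noise.
alpha, gamma, dt, n = 1.5, 1.0, 1e-3, 100_000
eta = symmetric_stable(alpha, n, rng)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    # Euler step; stable increments scale as dt**(1/alpha), not sqrt(dt)
    x[i] = x[i - 1] - gamma * x[i - 1] * dt + dt ** (1 / alpha) * eta[i]
```

The heavy tails of the stable source show up as rare, very large excursions of `eta`, which is exactly the feature that distinguishes Lévy from Gaussian colored noise.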
A novel approach to generate random surface thermal loads in piping
Energy Technology Data Exchange (ETDEWEB)
Costa Garrido, Oriol, E-mail: oriol.costa@ijs.si; El Shawish, Samir; Cizelj, Leon
2014-07-01
Highlights: • Approach for generating continuous and time-dependent random thermal fields. • Temperature fields simulate fluid mixing thermal loads at the fluid–wall interface. • Through plane-wave decomposition, experimental temperature statistics are reproduced. • Validation of the approach with a case study from the literature. • Random surface thermal load generation for future thermal fatigue analyses of piping. - Abstract: There is a need to perform three-dimensional mechanical analyses of pipes subjected to complex thermo-mechanical loadings such as the ones evolving from turbulent fluid mixing in a T-junction. A novel approach is proposed in this paper for fast and reliable generation of random thermal loads at the pipe surface. The resultant continuous and time-dependent temperature fields simulate the fluid mixing thermal loads at the fluid–wall interface. The approach is based on reproducing discrete fluid temperature statistics, from experimental readings or computational fluid dynamics (CFD) simulation results, at interface locations through plane-wave decomposition of temperature fluctuations. The obtained random thermal fields contain large-scale instabilities such as cold and hot spots traveling at flow velocities. These low-frequency instabilities are believed to be among the major causes of thermal fatigue in T-junction configurations. A case study from the literature has been used to demonstrate the generation of random surface thermal loads. The thermal fields generated with the proposed approach are statistically equivalent (within the first two moments) to those from CFD simulation results of similar characteristics. The fields maintain the input data at field locations for a large set of parameters used to generate the thermal loads. This feature will be of great advantage in future sensitivity fatigue analyses of three-dimensional pipe structures.
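A plane-wave superposition of the kind described can be sketched as follows. The target mean, standard deviation and the spatial/temporal scales are hypothetical values, not those of the cited case study, and equal wave amplitudes are chosen simply so that the pointwise variance matches the target.

```python
import numpy as np

rng = np.random.default_rng(5)

# Target interface statistics (hypothetical values).
mean_T, std_T = 150.0, 20.0      # degrees C
n_waves = 200

# Each plane wave gets a random wavenumber, angular frequency and phase.
kx = rng.normal(0, 2 * np.pi / 0.05, n_waves)     # ~5 cm spatial scale
omega = rng.normal(0, 2 * np.pi * 2.0, n_waves)   # ~2 Hz fluctuations
phase = rng.uniform(0, 2 * np.pi, n_waves)

def temperature(x, t):
    """Continuous, time-dependent field as a sum of random plane waves.

    With amplitude amp for every wave, each cosine contributes variance
    amp**2 / 2, so amp = std_T * sqrt(2 / n_waves) gives variance std_T**2.
    """
    amp = std_T * np.sqrt(2.0 / n_waves)
    waves = amp * np.cos(kx[:, None] * x + omega[:, None] * t
                         + phase[:, None])
    return mean_T + waves.sum(axis=0)

# Evaluate the field on a space-time grid along the pipe surface.
x = np.linspace(0.0, 0.5, 400)
T = np.stack([temperature(x, t) for t in np.linspace(0.0, 10.0, 100)])
```

The field is continuous in both arguments, so it can be evaluated at any finite-element node and time step; hot and cold spots travel because each wave carries its own phase velocity omega/kx.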
A novel approach to generate random surface thermal loads in piping
International Nuclear Information System (INIS)
Costa Garrido, Oriol; El Shawish, Samir; Cizelj, Leon
2014-01-01
Highlights: • Approach for generating continuous and time-dependent random thermal fields. • Temperature fields simulate fluid mixing thermal loads at the fluid–wall interface. • Through plane-wave decomposition, experimental temperature statistics are reproduced. • Validation of the approach with a case study from the literature. • Random surface thermal load generation for future thermal fatigue analyses of piping. - Abstract: There is a need to perform three-dimensional mechanical analyses of pipes subjected to complex thermo-mechanical loadings such as the ones evolving from turbulent fluid mixing in a T-junction. A novel approach is proposed in this paper for fast and reliable generation of random thermal loads at the pipe surface. The resultant continuous and time-dependent temperature fields simulate the fluid mixing thermal loads at the fluid–wall interface. The approach is based on reproducing discrete fluid temperature statistics, from experimental readings or computational fluid dynamics (CFD) simulation results, at interface locations through plane-wave decomposition of temperature fluctuations. The obtained random thermal fields contain large-scale instabilities such as cold and hot spots traveling at flow velocities. These low-frequency instabilities are believed to be among the major causes of thermal fatigue in T-junction configurations. A case study from the literature has been used to demonstrate the generation of random surface thermal loads. The thermal fields generated with the proposed approach are statistically equivalent (within the first two moments) to those from CFD simulation results of similar characteristics. The fields maintain the input data at field locations for a large set of parameters used to generate the thermal loads. This feature will be of great advantage in future sensitivity fatigue analyses of three-dimensional pipe structures.
Edgington, Eugene
2007-01-01
Contents: Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani
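The core randomization-test logic the book builds on fits in a few lines: random assignment (not random sampling) justifies comparing the observed statistic against its distribution over re-randomizations. The two groups and their scores below are invented for illustration.

```python
import random
import statistics

random.seed(11)

# Hypothetical experiment: outcome scores under two treatments, with
# subjects randomly ASSIGNED to groups of six.
group_a = [12.1, 9.8, 11.4, 13.0, 10.5, 12.7]
group_b = [9.2, 8.7, 10.1, 9.9, 8.4, 9.5]

observed = statistics.mean(group_a) - statistics.mean(group_b)
pooled = group_a + group_b

# Randomization test: re-shuffle the assignment many times and count how
# often the mean difference is at least as extreme as the observed one.
n_perm = 10_000
count = 0
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:6]) - statistics.mean(pooled[6:])
    if abs(diff) >= abs(observed):
        count += 1
p_value = count / n_perm   # two-sided randomization p-value
```

Because the reference distribution comes from the re-randomizations themselves, no assumption of random sampling from a population is needed, which is exactly the book's point.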
Application of the random vibration approach in the seismic analysis of LMFBR structures
International Nuclear Information System (INIS)
Preumont, A.
1988-01-01
The first part discusses the general topic of the spectral analysis of linear multi-degree-of-freedom structures subjected to a stationary random field. Particular attention is given to structures with non-classical damping and hereditary characteristics. The method is implemented in the computer programme RANDOM. Next, the same concepts are applied to multi-supported structures subjected to a stationary seismic excitation. The method is implemented in the computer programme SEISME. Two related problems are dealt with in the next two chapters: (i) the relation between the input of the random vibration analysis and the traditional ground motion specification for seismic analysis (the Design Response Spectra) and (ii) the application of random vibration techniques to the direct generation of floor response spectra. Finally, the problem of extracting information from costly time-history analyses is addressed. This study has mainly been concerned with the methodology and the development of appropriate software. Some qualitative conclusions have been drawn regarding the expected benefit of the approach. They have been judged promising enough to motivate a benchmark exercise. Specifically, the random vibration approach will be compared to the current approximate methods (response spectrum) and time-history analyses (considered as representative of the true response) for a set of typical structures. The hope is that some of the flaws of the current approximate methods can be removed.
Billong, Serge Clotaire; Fokam, Joseph; Penda, Calixte Ida; Amadou, Salmon; Kob, David Same; Billong, Edson-Joan; Colizzi, Vittorio; Ndjolo, Alexis; Bisseck, Anne-Cecile Zoung-Kani; Elat, Jean-Bosco Nfetam
2016-11-15
Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, mastering patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we thus aimed to ascertain the national performance of, and determinants in, retention on ART at 12 months. Using a systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon, within the "reporting period" of October 2013-November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (85-75%), poor (<75%). The sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention perspectives.
International Nuclear Information System (INIS)
Matsuda, Hideharu; Minato, Susumu
2002-01-01
The accuracy of statistical quantities such as the mean value and contour map obtained by measurement of the environmental gamma-ray dose rate was evaluated by random sampling of 5 different model distribution maps made with the mean slope, -1.3, of power spectra calculated from the actually measured values. The values were derived from 58 natural gamma dose rate data sets reported worldwide, ranging in mean over 10-100 nGy/h and in area over 10⁻³-10⁷ km². The accuracy of the mean value was found to be around ±7% even for 60 or 80 samplings (the most frequent numbers), and the standard deviation had an accuracy of less than 1/4-1/3 of the means. The correlation coefficient of the frequency distribution was found to be 0.860 or more for 200-400 samplings (the most frequent numbers), but that of the contour map was 0.502-0.770. (K.H.)
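The dependence of mean-value accuracy on the number of random samplings can be sketched like this; a lognormal field is used as a stand-in for the model distribution maps, and all sizes and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

# Hypothetical dose-rate "map": lognormal values mimic the skewed
# distributions seen in environmental gamma surveys.
field = rng.lognormal(mean=4.0, sigma=0.4, size=10_000)
true_mean = field.mean()

def rel_error_of_mean(n_samples, n_repeats=2000):
    """Typical relative error of the mean from n random samplings."""
    errs = []
    for _ in range(n_repeats):
        sample = rng.choice(field, size=n_samples, replace=False)
        errs.append(abs(sample.mean() - true_mean) / true_mean)
    return float(np.mean(errs))

err_60 = rel_error_of_mean(60)     # few-percent error at ~60 samplings
err_240 = rel_error_of_mean(240)   # shrinks roughly as 1/sqrt(n)
```

The ~1/sqrt(n) decay is why a few tens of samplings already pin down the mean to within several percent, while reproducing the full contour map (a much higher-dimensional object) needs many more points.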
Random-walk approach to the d -dimensional disordered Lorentz gas
Adib, Artur B.
2008-02-01
A correlated random walk approach to diffusion is applied to the disordered nonoverlapping Lorentz gas. By invoking the Lu-Torquato theory for chord-length distributions in random media [J. Chem. Phys. 98, 6472 (1993)], an analytic expression for the diffusion constant in arbitrary number of dimensions d is obtained. The result corresponds to an Enskog-like correction to the Boltzmann prediction, being exact in the dilute limit, and better or nearly exact in comparison to renormalized kinetic theory predictions for all allowed densities in d=2,3 . Extensive numerical simulations were also performed to elucidate the role of the approximations involved.
Instanton Approach to the Langevin Motion of a Particle in a Random Potential
International Nuclear Information System (INIS)
Lopatin, A. V.; Vinokur, V. M.
2001-01-01
We develop an instanton approach to the nonequilibrium dynamics in one-dimensional random environments. The long time behavior is controlled by rare fluctuations of the disorder potential and, accordingly, by the tail of the distribution function for the time a particle needs to propagate along the system (the delay time). The proposed method allows us to find the tail of the delay time distribution function and delay time moments, providing thus an exact description of the long time dynamics. We analyze arbitrary environments covering different types of glassy dynamics: dynamics in a short-range random field, creep, and Sinai's motion
Directory of Open Access Journals (Sweden)
Timothy C. Guetterman
2015-05-01
Full Text Available Although recommendations exist for determining qualitative sample sizes, the literature appears to contain few instances of research on the topic. Practical guidance is needed for determining sample sizes to conduct rigorous qualitative research, to develop proposals, and to budget resources. The purpose of this article is to describe qualitative sample size and sampling practices within published studies in education and the health sciences by research design: case study, ethnography, grounded theory methodology, narrative inquiry, and phenomenology. I analyzed the 51 most highly cited studies using predetermined content categories and noteworthy sampling characteristics that emerged. In brief, the findings revealed a mean sample size of 87. Less than half of the studies identified a sampling strategy. I include a description of findings by approach and recommendations for sampling to assist methodologists, reviewers, program officers, graduate students, and other qualitative researchers in understanding qualitative sampling practices in recent studies. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1502256
Hoerning, Sebastian; Bardossy, Andras; du Plessis, Jaco
2017-04-01
Most geostatistical inverse groundwater flow and transport modelling approaches utilize a numerical solver to minimize the discrepancy between observed and simulated hydraulic heads and/or hydraulic concentration values. The optimization procedure often requires many model runs, which for complex models lead to long run times. Random Mixing is a promising new geostatistical technique for inverse modelling. The method is an extension of the gradual deformation approach. It works by finding a field which preserves the covariance structure and maintains observed hydraulic conductivities. This field is perturbed by mixing it with new fields that fulfill the homogeneous conditions. This mixing is expressed as an optimization problem which aims to minimize the difference between the observed and simulated hydraulic heads and/or concentration values. To preserve the spatial structure, the mixing weights must lie on the unit hyper-sphere. We present a modification to the Random Mixing algorithm which significantly reduces the number of model runs required. The approach involves taking n equally spaced points on the unit circle as weights for mixing conditional random fields. Each of these mixtures provides a solution to the forward model at the conditioning locations. For each of the locations the solutions are then interpolated around the circle to provide solutions for additional mixing weights at very low computational cost. The interpolated solutions are used to search for a mixture which maximally reduces the objective function. This is in contrast to other approaches which evaluate the objective function for the n mixtures and then interpolate the obtained values. Keeping the mixture on the unit circle makes it easy to generate equidistant sampling points in the space; however, this means that only two fields are mixed at a time. Once the optimal mixture for two fields has been found, they are combined to form the input to the next iteration of the algorithm. This
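The unit-circle constraint at the heart of Random Mixing can be sketched as follows. The two fields, the single observation and the objective function are toy stand-ins (i.i.d. Gaussian fields instead of spatially correlated ones), chosen only to show that weights cos(t), sin(t) preserve the variance for every angle while the objective varies.

```python
import numpy as np

rng = np.random.default_rng(13)

# Two Gaussian random fields sharing the same covariance model (here
# simply i.i.d. unit-variance for brevity; any common covariance works
# identically, since the mixture is linear).
n_cells = 1000
f1 = rng.standard_normal(n_cells)
f2 = rng.standard_normal(n_cells)

def mix(theta):
    """Weights on the unit circle: cos^2 + sin^2 = 1, so the mixed
    field keeps the original (unit) variance for any angle theta."""
    return np.cos(theta) * f1 + np.sin(theta) * f2

# Toy objective standing in for the head/concentration misfit:
# squared distance from an observed value at one "observation cell".
obs_cell, obs_value = 42, 1.3
angles = np.linspace(0.0, 2 * np.pi, 16, endpoint=False)
misfits = [(mix(t)[obs_cell] - obs_value) ** 2 for t in angles]
best = angles[int(np.argmin(misfits))]
```

In the actual method the forward model is run for the n equally spaced angles and the solutions (not the misfits) are interpolated around the circle, so many more candidate weights can be screened at almost no extra cost.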
Lin, Zhixiang; Sanders, Stephan J; Li, Mingfeng; Sestan, Nenad; State, Matthew W; Zhao, Hongyu
2015-03-01
Human neurodevelopment is a highly regulated biological process. In this article, we study the dynamic changes of neurodevelopment through the analysis of human brain microarray data, sampled from 16 brain regions in 15 time periods of neurodevelopment. We develop a two-step inferential procedure to identify expressed and unexpressed genes and to detect differentially expressed genes between adjacent time periods. Markov Random Field (MRF) models are used to efficiently utilize the information embedded in brain region similarity and temporal dependency in our approach. We develop and implement a Monte Carlo expectation-maximization (MCEM) algorithm to estimate the model parameters. Simulation studies suggest that our approach achieves lower misclassification error and potential gain in power compared with models not incorporating spatial similarity and temporal dependency.
Dynamic flow-through approaches for metal fractionation in environmentally relevant solid samples
DEFF Research Database (Denmark)
Miró, Manuel; Hansen, Elo Harald; Chomchoei, Roongrat
2005-01-01
generations of flow-injection analysis. Special attention is also paid to a novel, robust, non-invasive approach for on-site continuous sampling of soil solutions, capitalizing on flow-through microdialysis, which presents itself as an appealing complementary approach to the conventional lysimeter experiments...
Directory of Open Access Journals (Sweden)
Dhruba Das
2015-04-01
Full Text Available In this article, based on Zadeh's extension principle, we apply the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, following Baruah's Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. In this article, two fuzzy queues, FM/M/1 and M/FM/1, have been studied and the membership functions of their system characteristics constructed based on the aforesaid principle. The former represents a queue with fuzzy exponential arrivals and exponential service rate, while the latter represents a queue with exponential arrival rate and fuzzy exponential service rate.
A sampling approach to constructing Lyapunov functions for nonlinear continuous–time systems
Bobiti, R.V.; Lazar, M.
2016-01-01
The problem of constructing a Lyapunov function for continuous-time nonlinear dynamical systems is tackled in this paper via a sampling-based approach. The main idea of the sampling-based method is to verify a Lyapunov-type inequality for a finite number of points (known state vectors) in the
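The sampling-based verification idea can be sketched on a scalar system; the dynamics, the candidate Lyapunov function and the sampling domain below are invented for illustration. (In the actual method, checking finitely many points is combined with bounds between samples so that the finite check becomes a certificate.)

```python
import random

random.seed(17)

# Nonlinear continuous-time system: x' = f(x), stable near the origin.
def f(x):
    return -x + 0.25 * x ** 3

# Candidate Lyapunov function V(x) = x^2.  The decrease condition is
# dV/dt = V'(x) * f(x) = 2 * x * f(x) < 0 away from the origin.
def vdot(x):
    return 2.0 * x * f(x)

# Sampling-based check: verify the inequality on a finite set of
# randomly sampled states in the region of interest.
samples = [random.uniform(-1.5, 1.5) for _ in range(1000)]
lyapunov_ok = all(vdot(x) < 0 for x in samples if abs(x) > 1e-6)
```

Here vdot(x) = x²(0.5·x² − 2), which is negative for |x| < 2, so every sample in [−1.5, 1.5] passes; a sample outside that basin would correctly flag the candidate as invalid there.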
Directory of Open Access Journals (Sweden)
Elsa Tavernier
Full Text Available We aimed to examine the extent to which inaccurate assumptions for nuisance parameters used to calculate sample size can affect the power of a randomized controlled trial (RCT). In a simulation study, we separately considered an RCT with continuous, dichotomous or time-to-event outcomes, with associated nuisance parameters of standard deviation, success rate in the control group and survival rate in the control group at some time point, respectively. For each type of outcome, we calculated a required sample size N for a hypothesized treatment effect, an assumed nuisance parameter and a nominal power of 80%. We then assumed a nuisance parameter associated with a relative error at the design stage. For each type of outcome, we randomly drew 10,000 relative errors of the associated nuisance parameter (from empirical distributions derived from a previously published review). Then, retro-fitting the sample size formula, we derived, for the pre-calculated sample size N, the real power of the RCT, taking into account the relative error for the nuisance parameter. In total, 23%, 0% and 18% of RCTs with continuous, binary and time-to-event outcomes, respectively, were underpowered (i.e., the real power fell below the nominal 80%), while others were overpowered (real power above 90%). Even with proper calculation of sample size, a substantial number of trials are underpowered or overpowered because of imprecise knowledge of nuisance parameters. Such findings raise questions about how sample size for RCTs should be determined.
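The retro-fitting step can be made concrete for the continuous-outcome case with the standard two-sample normal-approximation formula; the effect size and standard deviations below are hypothetical design values.

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

z_alpha = 1.959964   # two-sided alpha = 5%
z_beta = 0.841621    # nominal power = 80%

delta = 5.0          # hypothesized treatment effect (continuous outcome)
sd_assumed = 12.0    # nuisance parameter guessed at the design stage

# Per-group sample size from the two-sample formula:
# n = 2 * ((z_alpha + z_beta) * sd / delta)^2
n = math.ceil(2 * ((z_alpha + z_beta) * sd_assumed / delta) ** 2)

def real_power(sd_true):
    """Power actually achieved with n per group if the true SD differs
    from the one assumed when n was calculated."""
    return phi(delta / (sd_true * math.sqrt(2.0 / n)) - z_alpha)

planned = real_power(12.0)   # ~0.80 by construction
under = real_power(15.0)     # SD underestimated -> underpowered trial
```

A 25% underestimate of the standard deviation is enough to pull the real power from the nominal 80% down towards 60%, which is the mechanism behind the underpowered fraction reported above.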
Medvedovici, Andrei; Udrescu, Stefan; Albu, Florin; Tache, Florentin; David, Victor
2011-09-01
Liquid-liquid extraction of target compounds from biological matrices followed by the injection of a large volume from the organic layer into the chromatographic column operated under reversed-phase (RP) conditions would successfully combine the selectivity and the straightforward character of the procedure in order to enhance sensitivity, compared with the usual approach of involving solvent evaporation and residue re-dissolution. Large-volume injection of samples in diluents that are not miscible with the mobile phase was recently introduced in chromatographic practice. The risk of random errors produced during the manipulation of samples is also substantially reduced. A bioanalytical method designed for the bioequivalence of fenspiride containing pharmaceutical formulations was based on a sample preparation procedure involving extraction of the target analyte and the internal standard (trimetazidine) from alkalinized plasma samples in 1-octanol. A volume of 75 µl from the octanol layer was directly injected on a Zorbax SB C18 Rapid Resolution, 50 mm length × 4.6 mm internal diameter × 1.8 µm particle size column, with the RP separation being carried out under gradient elution conditions. Detection was made through positive ESI and MS/MS. Aspects related to method development and validation are discussed. The bioanalytical method was successfully applied to assess bioequivalence of a modified release pharmaceutical formulation containing 80 mg fenspiride hydrochloride during two different studies carried out as single-dose administration under fasting and fed conditions (four arms), and multiple doses administration, respectively. The quality attributes assigned to the bioanalytical method, as resulting from its application to the bioequivalence studies, are highlighted and fully demonstrate that sample preparation based on large-volume injection of immiscible diluents has an increased potential for application in bioanalysis.
Reynolds, Maureen D; Tarter, Ralph E; Kirisci, Levent
2004-09-06
Men qualifying for substance use disorder (SUD) consequent to consumption of an illicit drug were compared according to recruitment method. It was hypothesized that volunteers would be more self-disclosing and exhibit more severe disturbances compared to randomly recruited subjects. Personal, demographic, family, social, substance use, psychiatric, and SUD characteristics of volunteers (N = 146) were compared to randomly recruited (N = 102) subjects. Volunteers had lower socioeconomic status, were more likely to be African American, and had lower IQ than randomly recruited subjects. Volunteers also evidenced greater social and family maladjustment and more frequently had received treatment for substance abuse. In addition, lower social desirability response bias was observed in the volunteers. SUD was not more severe in the volunteers; however, they reported a higher lifetime rate of opiate, diet, depressant, and analgesic drug use. Volunteers and randomly recruited subjects qualifying for SUD consequent to illicit drug use are similar in SUD severity but differ in terms of severity of psychosocial disturbance and history of drug involvement. The factors discriminating volunteers and randomly recruited subjects are well known to impact on outcome, hence they need to be considered in research design, especially when selecting a sampling strategy in treatment research.
Borak, T B
1986-04-01
Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground U miners to Rn daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sample frequency of 50 measurements per year is sufficient to satisfy the criteria that the annual exposure be determined in working level months to within +/- 50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
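The ±50%-at-95%-confidence criterion can be sketched by simulation; the lognormal shift-to-shift variation and the number of shifts are hypothetical stand-ins for the temporal variation of potential alpha energy in a mine.

```python
import numpy as np

rng = np.random.default_rng(21)

# Hypothetical time series of potential alpha energy concentration over
# one year of working shifts, with lognormal temporal variation.
shifts = rng.lognormal(mean=0.0, sigma=0.8, size=250)
annual = shifts.sum()

def covered(n_samples):
    """Does the annual exposure estimated from n randomly chosen grab
    samples fall within +/-50% of the true value?"""
    sample = rng.choice(shifts, size=n_samples, replace=False)
    estimate = sample.mean() * len(shifts)
    return abs(estimate - annual) / annual <= 0.5

# Fraction of repeated campaigns of 50 random samples that meet the
# +/-50% criterion; the target is at least 95%.
coverage = float(np.mean([covered(50) for _ in range(2000)]))
```

Randomizing which shifts are sampled is what makes this simple scaling unbiased; systematic (e.g. always-Monday) sampling would not satisfy the same coverage argument.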
Directory of Open Access Journals (Sweden)
Hongqiang Liu
2016-06-01
A Bayesian random effects modeling approach was used to examine the influence of neighborhood characteristics on burglary risks in Jianghan District, Wuhan, China. This random effects model is essentially spatial; a spatially structured random effects term and an unstructured random effects term are added to the traditional non-spatial Poisson regression model. Based on social disorganization and routine activity theories, five covariates extracted from the available data at the neighborhood level were used in the modeling. Three regression models were fitted and compared by the deviance information criterion to identify which model best fit our data. A comparison of the results from the three models indicates that the Bayesian random effects model is superior to the non-spatial models in fitting the data and estimating regression coefficients. Our results also show that neighborhoods with above average bar density and department store density have higher burglary risks. Neighborhood-specific burglary risks and posterior probabilities of neighborhoods having a burglary risk greater than 1.0 were mapped, indicating the neighborhoods that should warrant more attention and be prioritized for crime intervention and reduction. Implications and limitations of the study are discussed in our concluding section.
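The structure of such a spatial random effects Poisson model can be illustrated by simulating from it. Everything below (grid layout, covariate values, coefficients, effect variances) is an illustrative assumption, and the crude neighbour-averaging is only a stand-in for a proper CAR prior, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 20 neighborhoods on a 4x5 grid (all values are assumptions).
n_rows, n_cols = 4, 5
n = n_rows * n_cols
bar_density = rng.uniform(0, 5, size=n)   # bars per km^2 (illustrative covariate)
expected = rng.uniform(20, 80, size=n)    # expected burglary counts (offset)

b0, b1 = -0.1, 0.08                       # assumed regression coefficients
u = rng.normal(0, 0.2, size=n)            # unstructured random effect

# Spatially structured effect: each cell's effect mixes its own noise with
# the mean of its rook neighbours (a crude stand-in for a CAR prior).
s_raw = rng.normal(0, 0.3, size=(n_rows, n_cols))
s = np.zeros_like(s_raw)
for i in range(n_rows):
    for j in range(n_cols):
        nb = [s_raw[x, y] for x, y in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
              if 0 <= x < n_rows and 0 <= y < n_cols]
        s[i, j] = 0.5 * s_raw[i, j] + 0.5 * np.mean(nb)

# Log-linear Poisson model with offset, covariate and both random effects.
log_rate = np.log(expected) + b0 + b1 * bar_density + u + s.ravel()
counts = rng.poisson(np.exp(log_rate))
relative_risk = counts / expected         # >1.0 marks elevated-risk neighborhoods
print(relative_risk.round(2))
```

Fitting this model (rather than simulating from it) would be done with MCMC, comparing spatial and non-spatial variants by DIC as the abstract describes.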
Khlyupin, Aleksey; Aslyamov, Timur
2017-06-01
Realistic fluid-solid interaction potentials are essential in the description of confined fluids, especially in the case of geometrically heterogeneous surfaces. A correlated random field is considered as a model of a random surface with high geometric roughness. We provide a general theory of the effective coarse-grained fluid-solid potential, obtained by proper averaging of the free energy of fluid molecules interacting with the solid medium. This procedure is largely based on the theory of random processes. We apply the first-passage-time probability problem and assume local Markov properties of the random surfaces. A general expression for the effective fluid-solid potential is obtained. In the case of small surface irregularities, an analytical approximation for the effective potential is proposed. Both amorphous materials with large surface roughness and crystalline solids with several types of fcc lattices are considered. It is shown that the wider the lattice spacing in terms of the molecular diameter of the fluid, the more the obtained potentials differ from the classical ones. A comparison with published Monte Carlo simulations is discussed. The work provides a promising approach to exploring how random geometric heterogeneity affects the thermodynamic properties of fluids.
International Nuclear Information System (INIS)
Preumont, A.; Shilab, S.; Cornaggia, L.; Reale, M.; Labbe, P.; Noe, H.
1992-01-01
This benchmark exercise is the continuation of the state-of-the-art review (EUR 11369 EN), which concluded that the random vibration approach could be an effective tool in seismic analysis of nuclear power plants, with potential advantages over time history and response spectrum techniques. As compared to the latter, the random vibration method provides an accurate treatment of multisupport excitations and non-classical damping, as well as the combination of high-frequency modal components. With respect to the former, the random vibration method offers direct information on statistical variability (probability distribution) and cheaper computations. The disadvantages of the random vibration method are that it is based on stationary results and requires a power spectral density input instead of a response spectrum. A benchmark exercise to compare the three methods on the aspects mentioned above, using one or several simple structures, has been made. The following aspects have been covered with the simplest possible models: (i) statistical variability, (ii) multisupport excitation, (iii) non-classical damping. The random vibration method is therefore concluded to be a reliable method of analysis. Its use is recommended, particularly for preliminary design, owing to its computational advantage over multiple time history analysis.
Directory of Open Access Journals (Sweden)
Andreas Steimer
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies deal with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing
Steimer, Andreas; Schindler, Kaspar
2015-01-01
Oscillations between high and low values of the membrane potential (UP and DOWN states, respectively) are a ubiquitous feature of cortical neurons during slow wave sleep and anesthesia. Nevertheless, only a surprisingly small number of quantitative studies deal with this phenomenon's implications for computation. Here we present a novel theory that explains, on a detailed mathematical level, the computational benefits of UP states. The theory is based on random sampling by means of interspike intervals (ISIs) of the exponential integrate-and-fire (EIF) model neuron, such that each spike is considered a sample whose analog value corresponds to the spike's preceding ISI. As we show, the EIF's exponential sodium current, which kicks in when a noisy membrane potential is balanced around values close to the firing threshold, leads to a particularly simple, approximate relationship between the neuron's ISI distribution and input current. Approximation quality depends on the frequency spectrum of the current and is improved upon increasing the voltage baseline towards threshold. Thus, the conceptually simpler leaky integrate-and-fire neuron, which lacks such an additional current boost, performs consistently worse than the EIF and does not improve when the voltage baseline is increased. For the EIF, in contrast, the presented mechanism is particularly effective in the high-conductance regime, which is a hallmark feature of UP states. Our theoretical results are confirmed by accompanying simulations, which were conducted for input currents of varying spectral composition. Moreover, we provide analytical estimations of the range of ISI distributions the EIF neuron can sample from at a given approximation level. Such samples may be considered by any algorithmic procedure that is based on random sampling, such as Markov Chain Monte Carlo or message-passing methods. Finally, we explain how spike-based random sampling relates to existing computational
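A minimal sketch of collecting ISI "samples" from an EIF neuron, using plain Euler integration. Parameter values are generic textbook numbers, and the per-step noise term ignores the proper sqrt(dt) scaling of white noise; this is an assumption-laden illustration, not the paper's model setup:

```python
import math
import random

random.seed(2)

# Exponential integrate-and-fire (EIF) neuron parameters (typical textbook
# values, not those of the paper).
C, gL, EL = 1.0, 0.1, -65.0      # capacitance, leak conductance, leak reversal
VT, DT = -50.0, 2.0              # soft threshold and slope factor (mV)
V_CUT, V_RESET = -30.0, -60.0    # numerical spike cutoff and reset (mV)
dt = 0.05                        # time step (ms)

def simulate_isis(mean_I, noise_sd, t_max=10000.0):
    """Euler-integrate the EIF with a noisy input; return the ISIs (ms).
    Note: the noise is applied per step without sqrt(dt) scaling (a
    simplification acceptable for this sketch)."""
    V, t, last_spike, isis = EL, 0.0, None, []
    while t < t_max:
        I = mean_I + noise_sd * random.gauss(0, 1)
        dV = (-gL * (V - EL) + gL * DT * math.exp((V - VT) / DT) + I) / C
        V += dV * dt
        t += dt
        if V >= V_CUT:                   # spike: record ISI, then reset
            if last_spike is not None:
                isis.append(t - last_spike)
            last_spike, V = t, V_RESET
    return isis

isis = simulate_isis(mean_I=1.6, noise_sd=1.0)
print(f"{len(isis)} spikes, mean ISI {sum(isis) / len(isis):.1f} ms")
```

The resulting ISI histogram is the empirical distribution the theory relates (approximately) to the input current; stronger or noisier input reshapes it accordingly.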
International Nuclear Information System (INIS)
Martens, B.R.
1989-01-01
In the context of random sampling tests, parameters are checked on the waste barrels and criteria are given on which these tests are based. Also, it is shown how faulty data on the properties of the waste or faulty waste barrels should be treated. To decide the extent of testing, the properties of the waste relevant to final storage are determined based on the conditioning process used. (DG)
Random or systematic sampling to detect a localised microbial contamination within a batch of food
Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.
2011-01-01
Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, like for instance peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed
Conditional estimation of exponential random graph models from snowball sampling designs
Pattison, Philippa E.; Robins, Garry L.; Snijders, Tom A. B.; Wang, Peng
2013-01-01
A complete survey of a network in a large population may be prohibitively difficult and costly. So it is important to estimate models for networks using data from various network sampling designs, such as link-tracing designs. We focus here on snowball sampling designs, designs in which the members
Kane, Michael
2002-01-01
Reviews the criticisms of sampling assumptions in generalizability theory (and in reliability theory) and examines the feasibility of using representative sampling, stratification, homogeneity assumptions, and replications to address these criticisms. Suggests some general outlines for the conduct of generalizability theory studies. (SLD)
Sampling maternal care behaviour in domestic dogs: What's the best approach?
Czerwinski, Veronika H; Smith, Bradley P; Hynd, Philip I; Hazel, Susan J
2017-07-01
Our understanding of the frequency and duration of maternal care behaviours in the domestic dog during the first two postnatal weeks is limited, largely due to the inconsistencies in the sampling methodologies that have been employed. In order to develop a more concise picture of maternal care behaviour during this period, and to help establish the sampling method that represents these behaviours best, we compared a variety of time sampling methods. Six litters were continuously observed for a total of 96 h over postnatal days 3, 6, 9 and 12 (24 h per day). Frequent (dam presence, nursing duration, contact duration) and infrequent maternal behaviours (anogenital licking duration and frequency) were coded using five different time sampling methods: 12-h night (1800-0600 h), 12-h day (0600-1800 h), a one-hour period during the night (1800-0600 h), a one-hour period during the day (0600-1800 h) and a one-hour period at any time. Each of the one-hour time sampling methods consisted of four randomly chosen 15-min periods. Two random sets of four 15-min periods were also analysed to ensure reliability. We then determined which of the time sampling methods, averaged over the three 24-h periods, best represented the frequency and duration of behaviours. As might be expected, frequently occurring behaviours were adequately represented by short (one-hour) sampling periods; however, this was not the case with the infrequent behaviours. Thus, we argue that the time sampling methodology employed must match the behaviour of interest. This caution applies to maternal behaviour in altricial species, such as canids, as well as to all systematic behavioural observations utilising time sampling methodology. Copyright © 2017. Published by Elsevier B.V.
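The abstract's core point, that four random 15-min windows capture frequent behaviours adequately but not infrequent ones, can be sketched with simulated bout times. Bout counts and the uniform-timing assumption below are illustrative, and windows are drawn independently (so they may overlap):

```python
import random

random.seed(3)

DAY_MIN = 24 * 60  # minutes in a 24-h observation day

def simulate_bouts(n_bouts):
    """Uniformly random start minutes of behavioural bouts over one day."""
    return [random.uniform(0, DAY_MIN) for _ in range(n_bouts)]

def sampled_estimate(bouts, n_windows=4, window=15):
    """Estimate the daily bout count from n random 15-min windows,
    scaling the observed count by the fraction of the day observed."""
    starts = [random.uniform(0, DAY_MIN - window) for _ in range(n_windows)]
    seen = sum(1 for b in bouts for s in starts if s <= b < s + window)
    observed_fraction = n_windows * window / DAY_MIN
    return seen / observed_fraction

def mean_relative_error(n_bouts, trials=500):
    errs = []
    for _ in range(trials):
        bouts = simulate_bouts(n_bouts)
        errs.append(abs(sampled_estimate(bouts) - n_bouts) / n_bouts)
    return sum(errs) / trials

err_freq = mean_relative_error(200)  # frequent behaviour: ~200 bouts/day
err_rare = mean_relative_error(5)    # infrequent behaviour: ~5 bouts/day
print(f"frequent: {err_freq:.2f}  infrequent: {err_rare:.2f}")
```

The relative error for the rare behaviour is several times larger, matching the abstract's conclusion that short sampling windows misrepresent infrequent behaviours.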
An effective approach to attenuate random noise based on compressive sensing and curvelet transform
International Nuclear Information System (INIS)
Liu, Wei; Cao, Siyuan; Zu, Shaohuan; Chen, Yangkang
2016-01-01
Random noise attenuation is an important step in seismic data processing. In this paper, we propose a novel denoising approach based on compressive sensing and the curvelet transform. We formulate the random noise attenuation problem as an L1-norm regularized optimization problem. We propose to use the curvelet transform as the sparse transform in the optimization problem to regularize the sparse coefficients in order to separate signal and noise, and to use the gradient projection for sparse reconstruction (GPSR) algorithm to solve the formulated optimization problem with an easy implementation and a fast convergence. We tested the performance of our proposed approach on both synthetic and field seismic data. Numerical results show that the proposed approach can effectively suppress the distortion near the edge of seismic events during the noise attenuation process and has high computational efficiency compared with the traditional curvelet thresholding and iterative soft thresholding based denoising methods. Besides, compared with f-x deconvolution, the proposed denoising method is capable of eliminating the random noise more effectively while preserving more useful signals.
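The L1-regularized denoising idea can be sketched compactly. To stay self-contained, the sketch below substitutes an orthonormal DCT for the curvelet transform and plain iterative soft thresholding for GPSR; the trace, noise level and regularization weight are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def dct_matrix(n):
    """Orthonormal DCT-II matrix (stand-in for the curvelet transform)."""
    k = np.arange(n)
    D = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    D[0] *= 1 / np.sqrt(2)
    return D * np.sqrt(2 / n)

n = 256
t = np.linspace(0, 1, n)
clean = np.cos(2 * np.pi * 8 * t) + 0.5 * np.cos(2 * np.pi * 21 * t)  # toy trace
noisy = clean + 0.4 * rng.standard_normal(n)

D = dct_matrix(n)  # forward transform: coefficients c = D @ x

def ista_denoise(y, lam=0.8, n_iter=20, step=1.0):
    """Solve min_x 0.5*||x - y||^2 + lam*||D x||_1 by iterative soft
    thresholding (a simple substitute for GPSR)."""
    x = y.copy()
    for _ in range(n_iter):
        grad = x - y                                  # gradient of the data term
        c = D @ (x - step * grad)
        c = np.sign(c) * np.maximum(np.abs(c) - step * lam, 0.0)  # soft threshold
        x = D.T @ c
    return x

denoised = ista_denoise(noisy)
print("noisy error:   ", np.linalg.norm(noisy - clean))
print("denoised error:", np.linalg.norm(denoised - clean))
```

Because the transform here is orthonormal, ISTA with unit step reaches the closed-form solution immediately; the curvelet transform in the paper is redundant, which is why an iterative solver such as GPSR is needed there.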
Shemtov-Yona, K; Rittel, D
2016-09-01
The fatigue performance of dental implants is usually assessed on the basis of cyclic S/N curves. This neither provides information on the anticipated service performance of the implant, nor does it allow for detailed comparisons between implants unless a thorough statistical analysis is performed, of the kind not currently required by certification standards. The notion of an endurance limit is deemed to be of limited applicability, given the unavoidable stress concentrations and random load excursions that characterize dental implants and their service conditions. We propose a completely different approach, based on random spectrum loading, as long used in aeronautical design. The implant is randomly loaded by a sequence of loads encompassing all load levels it would endure during its service life. This approach provides a quantitative and comparable estimate of its performance in terms of lifetime, based on the very fact that the implant will fracture sooner or later, instead of defining a fatigue endurance limit of limited practical application. Five commercial monolithic Ti-6Al-4V implants were tested under cyclic, and another 5 under spectrum loading conditions, at room temperature in dry air. The failure modes and fracture planes were identical for all implants. The approach is discussed, including its potential applications, for systematic, straightforward and reliable comparisons of various implant designs and environments, without the need for cumbersome statistical analyses. It is believed that spectrum loading can be considered for the generation of new standardization procedures and design applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Savage, Gordon J.; Zhang, Xufang; Son, Young Kap; Pandey, Mahesh D.
2016-01-01
Resonance in a dynamic system is to be avoided since it often leads to impaired performance, overstressing, fatigue fracture and adverse human reactions. Thus, it is necessary to know the modal frequencies and ensure they do not coincide with any applied periodic loadings. For a rotating planar mechanism, the coefficients in the mass and stiffness matrices are periodically varying, and if the underlying geometry and material properties are treated as random variables then the modal frequencies are both position-dependent and probabilistic. The avoidance of resonance is now a complex problem. Herein, free vibration analysis helps determine ranges of modal frequencies that in turn, identify the running speeds of the mechanism to be avoided. This paper presents an efficient and accurate sample-based approach to determine probabilistic minimum and maximum extremes of the fundamental frequencies and the angular positions of their occurrence. Then, given critical lower and upper frequency constraints it is straightforward to determine reliability in terms of probability of exceedance. The novelty of the proposed approach is that the original expensive and implicit mechanistic model is replaced by an explicit meta-model that captures the tolerances of the design variables over the entire range of angular positions: position-dependent eigenvalues can be found easily and quickly. Extreme-value statistics of the modal frequencies and extreme-value statistics of the angular positions are readily computed through MCS. Limit-state surfaces that connect the frequencies to the design variables may be easily constructed. Error analysis identifies three errors and the paper presents ways to control them so the methodology can be sufficiently accurate. A numerical example of a flexible four-bar linkage shows the proposed methodology has engineering applications. The impact of the proposed methodology is two-fold: it presents a safe-side analysis based on free vibration methods to
Pi sampling: a methodical and flexible approach to initial macromolecular crystallization screening
International Nuclear Information System (INIS)
Gorrec, Fabrice; Palmer, Colin M.; Lebon, Guillaume; Warne, Tony
2011-01-01
Pi sampling, derived from the incomplete factorial approach, is an effort to maximize the diversity of macromolecular crystallization conditions and to facilitate the preparation of 96-condition initial screens. The Pi sampling method is derived from the incomplete factorial approach to macromolecular crystallization screen design. The resulting ‘Pi screens’ have a modular distribution of a given set of up to 36 stock solutions. Maximally diverse conditions can be produced by taking into account the properties of the chemicals used in the formulation and the concentrations of the corresponding solutions. The Pi sampling method has been implemented in a web-based application that generates screen formulations and recipes. It is particularly adapted to screens consisting of 96 different conditions. The flexibility and efficiency of Pi sampling is demonstrated by the crystallization of soluble proteins and of an integral membrane-protein sample
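The idea of composing a 96-condition screen from a limited set of stocks while keeping usage balanced can be sketched as follows. The stock names and counts are hypothetical, and the greedy balancing rule is only a crude stand-in for the incomplete factorial / Pi sampling logic, which additionally weighs chemical properties and concentrations:

```python
import itertools
import random

random.seed(8)

# Hypothetical stock solutions (illustrative; real Pi screens draw on up to
# 36 stocks with chemistry-aware rules).
salts = [f"salt{i}" for i in range(6)]
buffers = [f"buffer{i}" for i in range(8)]
precipitants = [f"precip{i}" for i in range(6)]

pool = list(itertools.product(salts, buffers, precipitants))  # 288 combinations
random.shuffle(pool)

# Greedy balanced pick of 96 conditions: always take the unused condition
# whose components are currently least used, so every stock appears about
# equally often across the screen.
counts = {x: 0 for x in salts + buffers + precipitants}
screen = []
for _ in range(96):
    best = min(pool, key=lambda cond: sum(counts[x] for x in cond))
    pool.remove(best)
    screen.append(best)
    for x in best:
        counts[x] += 1

print(len(screen), min(counts.values()), max(counts.values()))
```

Each salt or precipitant lands near its ideal count of 16 appearances (12 for each buffer), which is the balance property that makes such screens diverse without being a full factorial.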
A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data.
Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming
2018-01-01
The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information becomes an increasingly pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks.
A Dictionary Learning Approach for Signal Sampling in Task-Based fMRI for Reduction of Big Data
Ge, Bao; Li, Xiang; Jiang, Xi; Sun, Yifei; Liu, Tianming
2018-01-01
The exponential growth of fMRI big data offers researchers an unprecedented opportunity to explore functional brain networks. However, this opportunity has not been fully explored yet due to the lack of effective and efficient tools for handling such fMRI big data. One major challenge is that computing capabilities still lag behind the growth of large-scale fMRI databases, e.g., it takes many days to perform dictionary learning and sparse coding of whole-brain fMRI data for an fMRI database of average size. Therefore, how to reduce the data size without losing important information becomes an increasingly pressing issue. To address this problem, we propose a signal sampling approach for significant fMRI data reduction before performing structurally-guided dictionary learning and sparse coding of whole brain's fMRI data. We compared the proposed structurally guided sampling method with no sampling, random sampling and uniform sampling schemes, and experiments on the Human Connectome Project (HCP) task fMRI data demonstrated that the proposed method can achieve more than 15 times speed-up without sacrificing the accuracy in identifying task-evoked functional brain networks. PMID:29706880
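Why sampling voxel time series can preserve network information can be seen on a toy low-rank "fMRI" matrix. Here an SVD stands in for dictionary learning, and random sampling stands in for the paper's structurally guided scheme; all sizes and noise levels are illustrative:

```python
import numpy as np

rng = np.random.default_rng(9)

# Toy "fMRI" matrix: n_voxels x n_timepoints, built from a few latent
# temporal networks plus noise (all sizes illustrative).
n_vox, n_t, k = 5000, 200, 5
spatial = rng.standard_normal((n_vox, k))      # spatial loadings
temporal = rng.standard_normal((k, n_t))       # network time courses
X = spatial @ temporal + 0.5 * rng.standard_normal((n_vox, n_t))

# Signal sampling: keep only 10% of the voxel time series.
idx = rng.choice(n_vox, size=n_vox // 10, replace=False)
Xs = X[idx]

# Compare the dominant temporal subspaces (right singular vectors) learned
# from the full vs the sampled data.
_, _, Vt_full = np.linalg.svd(X, full_matrices=False)
_, _, Vt_samp = np.linalg.svd(Xs, full_matrices=False)
overlap = np.linalg.norm(Vt_full[:k] @ Vt_samp[:k].T) / np.sqrt(k)
print(f"temporal subspace overlap (1 = identical): {overlap:.3f}")
```

The sampled data recover essentially the same temporal subspace at a tenth of the rows, which is the intuition behind sampling before the much more expensive dictionary learning step.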
Boyacı, Ezel; Rodríguez-Lafuente, Ángel; Gorynski, Krzysztof; Mirnaghi, Fatemeh; Souza-Silva, Érica A; Hein, Dietmar; Pawliszyn, Janusz
2015-05-11
In chemical analysis, sample preparation is frequently considered the bottleneck of the entire analytical method. The success of the final method strongly depends on understanding the entire process of analysis of a particular type of analyte in a sample, namely: the physicochemical properties of the analytes (solubility, volatility, polarity etc.), the environmental conditions, and the matrix components of the sample. Various sample preparation strategies have been developed based on exhaustive or non-exhaustive extraction of analytes from matrices. Undoubtedly, amongst all sample preparation approaches, liquid extraction, including liquid-liquid (LLE) and solid phase extraction (SPE), are the most well-known, widely used, and commonly accepted methods by many international organizations and accredited laboratories. Both methods are well documented and there are many well defined procedures, which make them, at first sight, the methods of choice. However, many challenging tasks, such as complex matrix applications, on-site and in vivo applications, and determination of matrix-bound and free concentrations of analytes, are not easily attainable with these classical approaches for sample preparation. In the last two decades, the introduction of solid phase microextraction (SPME) has brought significant progress in the sample preparation area by facilitating on-site and in vivo applications, time weighted average (TWA) and instantaneous concentration determinations. Recently introduced matrix compatible coatings for SPME facilitate direct extraction from complex matrices and fill the gap in direct sampling from challenging matrices. Following introduction of SPME, numerous other microextraction approaches evolved to address limitations of the above mentioned techniques. There is not a single method that can be considered as a universal solution for sample preparation. This review aims to show the main advantages and limitations of the above mentioned sample
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain for many species up to several billions of lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high-accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (∼3.5 × 10^5 lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
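The key property, that equal-weight samples preserve the integrated line opacity regardless of how many samples a line gets, can be sketched with a Lorentzian line (a stand-in for the full Voigt profile; grid, strengths and widths are illustrative):

```python
import math
import random

random.seed(5)

def sample_line(grid_edges, nu0, strength, gamma, n_samples):
    """Deposit a line's opacity onto a grid by drawing frequencies from the
    normalized Lorentzian via inverse-CDF (Cauchy) sampling. Each sample
    carries equal weight strength/n_samples, so the integrated opacity is
    preserved no matter how few samples a weak line receives."""
    nbins = len(grid_edges) - 1
    lo, hi = grid_edges[0], grid_edges[-1]
    dnu = (hi - lo) / nbins
    opac = [0.0] * nbins
    w = strength / n_samples
    for _ in range(n_samples):
        nu = nu0 + gamma * math.tan(math.pi * (random.random() - 0.5))
        if lo <= nu < hi:
            idx = min(int((nu - lo) / dnu), nbins - 1)
            opac[idx] += w / dnu          # opacity per unit wavenumber
    return opac

edges = [i * 0.01 for i in range(1001)]   # grid from 0 to 10 (arbitrary units)
# A strong line gets many samples, a weak line only a few.
strong = sample_line(edges, nu0=5.0, strength=1.0, gamma=0.02, n_samples=5000)
weak = sample_line(edges, nu0=3.0, strength=1e-3, gamma=0.02, n_samples=10)

total = sum((a + b) * 0.01 for a, b in zip(strong, weak))
print("integrated opacity:", round(total, 3))  # ~ total strength, minus far-wing leakage
```

The strong line's shape is resolved in detail while the weak line contributes only its (correct) integrated opacity to the continuum, which is exactly the trade-off the abstract describes.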
Random Walks on Directed Networks: Inference and Respondent-Driven Sampling
Directory of Open Access Journals (Sweden)
Malmros Jens
2016-06-01
Respondent-driven sampling (RDS) is often used to estimate population properties (e.g., sexual risk behavior) in hard-to-reach populations. In RDS, already sampled individuals recruit population members to the sample from their social contacts in an efficient snowball-like sampling procedure. By assuming a Markov model for the recruitment of individuals, asymptotically unbiased estimates of population characteristics can be obtained. Current RDS estimation methodology assumes that the social network is undirected, that is, all edges are reciprocal. However, empirical social networks in general also include a substantial number of nonreciprocal edges. In this article, we develop an estimation method for RDS in populations connected by social networks that include reciprocal and nonreciprocal edges. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing edges of sampled individuals. The proposed estimators are evaluated on artificial and empirical networks and are shown to generally perform better than existing estimators. This is the case in particular when the fraction of directed edges in the network is large.
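The baseline this article generalizes, the classic undirected RDS estimator that weights each sampled individual by the inverse of their degree, can be sketched on a toy network. The graph, trait and walk length below are all illustrative assumptions:

```python
import random

random.seed(6)

# Toy undirected social network with heterogeneous degrees; the trait is
# deliberately correlated with degree so the naive sample mean is biased.
n = 400
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in random.sample(range(n), random.randint(1, 8)):
        if j != i:
            adj[i].add(j)
            adj[j].add(i)

deg = {i: len(adj[i]) for i in range(n)}
median_deg = sorted(deg.values())[n // 2]
trait = {i: 1 if deg[i] > median_deg else 0 for i in range(n)}

# Long random walk over the network (a stand-in for RDS recruitment, whose
# stationary distribution is proportional to degree).
node = next(i for i in range(n) if adj[i])
walk = []
for _ in range(50000):
    node = random.choice(sorted(adj[node]))
    walk.append(node)

naive = sum(trait[v] for v in walk) / len(walk)          # biased towards hubs
wsum = sum(1 / deg[v] for v in walk)
weighted = sum(trait[v] / deg[v] for v in walk) / wsum   # inverse-degree weighting
true_mean = sum(trait.values()) / n
print(f"naive {naive:.3f}  weighted {weighted:.3f}  true {true_mean:.3f}")
```

On a directed network the walk's stationary distribution is no longer proportional to degree, which is why the article derives selection probabilities from the out-degrees instead.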
Characterization of electron microscopes with binary pseudo-random multilayer test samples
International Nuclear Information System (INIS)
Yashchuk, Valeriy V.; Conley, Raymond; Anderson, Erik H.; Barber, Samuel K.; Bouet, Nathalie; McKinney, Wayne R.; Takacs, Peter Z.; Voronov, Dmitriy L.
2010-01-01
We discuss the results of SEM and TEM measurements with the BPRML test samples fabricated from a BPRML (WSi2/Si with fundamental layer thickness of 3 nm) with a Dual Beam FIB (focused ion beam)/SEM technique. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize x-ray microscopes. Corresponding work with x-ray microscopes is in progress.
Vidovszky, Márton; Kohl, Claudia; Boldogh, Sándor; Görföl, Tamás; Wibbelt, Gudrun; Kurth, Andreas; Harrach, Balázs
2015-12-01
From over 1250 extant species of the order Chiroptera, 25 and 28 are known to occur in Germany and Hungary, respectively. Close to 350 samples originating from 28 bat species (17 from Germany, 27 from Hungary) were screened for the presence of adenoviruses (AdVs) using a nested PCR that targets the DNA polymerase gene of AdVs. An additional PCR was designed and applied to amplify a fragment from the gene encoding the IVa2 protein of mastadenoviruses. All German samples originated from organs of bats found moribund or dead. The Hungarian samples were excrements collected from colonies of known bat species, throat or rectal swab samples, taken from live individuals that had been captured for faunistic surveys and migration studies, as well as internal organs of dead specimens. Overall, 51 samples (14.73%) were found positive. We detected 28 seemingly novel and six previously described bat AdVs by sequencing the PCR products. The positivity rate was the highest among the guano samples of bat colonies. In phylogeny reconstructions, the AdVs detected in bats clustered roughly, but not perfectly, according to the hosts' families (Vespertilionidae, Rhinolophidae, Hipposideridae, Phyllostomidae and Pteropodidae). In a few cases, identical sequences were derived from animals of closely related species. On the other hand, some bat species proved to harbour more than one type of AdV. The high prevalence of infection and the large number of chiropteran species worldwide make us hypothesise that hundreds of different yet unknown AdV types might circulate in bats.
Energy Technology Data Exchange (ETDEWEB)
Berkolaiko, G., E-mail: berko@math.tamu.edu [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J., E-mail: Jack.Kuipers@physik.uni-regensburg.de [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)
2013-11-15
To study electronic transport through chaotic quantum dots, there are two main theoretical approaches. One involves substituting the quantum system with a random scattering matrix and performing appropriate ensemble averaging. The other treats the transport in the semiclassical approximation and studies correlations among sets of classical trajectories. There are established evaluation procedures within the semiclassical evaluation that, for several linear and nonlinear transport moments to which they were applied, have always resulted in the agreement with random matrix predictions. We prove that this agreement is universal: any semiclassical evaluation within the accepted procedures is equivalent to the evaluation within random matrix theory. The equivalence is shown by developing a combinatorial interpretation of the trajectory sets as ribbon graphs (maps) with certain properties and exhibiting systematic cancellations among their contributions. Remaining trajectory sets can be identified with primitive (palindromic) factorisations whose number gives the coefficients in the corresponding expansion of the moments of random matrices. The equivalence is proved for systems with and without time reversal symmetry.
Haggard, Megan C; Kang, Linda L; Rowatt, Wade C; Shen, Megan Johnson
2015-01-01
The connection between religiousness and volunteering for the community can be explained through two distinct features of religion. First, religious organizations are social groups that encourage members to help others through planned opportunities. Second, helping others is regarded as an important value for members in religious organizations to uphold. We examined the relationship between religiousness and self-reported community volunteering in two independent national random surveys of American adults (i.e., the 2005 and 2007 waves of the Baylor Religion Survey). In both waves, frequency of religious service attendance was associated with an increase in likelihood that individuals would volunteer, whether through their religious organization or not, whereas frequency of reading sacred texts outside of religious services was associated with an increase in likelihood of volunteering only for or through their religious organization. The role of religion in community volunteering is discussed in light of these findings.
Re-estimating sample size in cluster randomized trials with active recruitment within clusters
van Schie, Sander; Moerbeek, Mirjam
2014-01-01
Often only a limited number of clusters can be obtained in cluster randomised trials, although many potential participants can be recruited within each cluster. Thus, active recruitment is feasible within the clusters. To obtain an efficient sample size in a cluster randomised trial, the cluster
da Costa, Nuno Maçarico; Hepp, Klaus; Martin, Kevan A C
2009-05-30
Synapses can only be morphologically identified by electron microscopy and this is often a very labor-intensive and time-consuming task. When quantitative estimates are required for pathways that contribute a small proportion of synapses to the neuropil, the problems of accurate sampling are particularly severe and the total time required may become prohibitive. Here we present a sampling method devised to count the percentage of rarely occurring synapses in the neuropil using a large sample (approximately 1000 sampling sites), with the strong constraint of doing it in reasonable time. The strategy, which uses the unbiased physical disector technique, resembles that used in particle physics to detect rare events. We validated our method in the primary visual cortex of the cat, where we used biotinylated dextran amine to label thalamic afferents and measured the density of their synapses using the physical disector method. Our results show that we could obtain accurate counts of the labeled synapses, even when they represented only 0.2% of all the synapses in the neuropil.
A dimensional approach to personality disorders in a sample of juvenile offenders
Directory of Open Access Journals (Sweden)
Daniela Cantone
2012-03-01
In a sample of 60 male Italian subjects imprisoned at a juvenile detention institute (JDI), psychopathological aspects of Axis II were described and the validity of a psychopathological dimensional approach for describing criminological issues was examined. The data show that the sample has psychopathological characteristics which revolve around ego weakness and poor management of relations and aggression. Statistically, these psychopathological characteristics explain 85% of criminal behavior.
Paul, C L; Redman, S; Sanson-Fisher, R W
2004-12-01
Printed materials have been a primary mode of communication in public health education. Three major approaches to the development of these materials--the application of characteristics identified in the literature, behavioral strategies and marketing strategies--have major implications for both the effectiveness and cost of materials. However, little attention has been directed towards the cost-effectiveness of such approaches. In the present study, three pamphlets were developed using successive addition of each approach: first literature characteristics only ('C' pamphlet), then behavioral strategies ('C + B' pamphlet) and then marketing strategies ('C + B + M' pamphlet). Each pamphlet encouraged women to join a Pap Test Reminder Service (PTRS). Each pamphlet was mailed to a randomly selected sample of 2700 women aged 50-69 years. Registrations with the PTRS were monitored and 420 women in each pamphlet group were surveyed by telephone. It was reported that the 'C + B' and 'C + B + M' pamphlets were significantly more effective than the 'C' pamphlet. The 'C + B' pamphlet was the most cost-effective of the three pamphlets. There were no significant differences between any of the pamphlet groups on acceptability, knowledge or attitudes. It was suggested that the inclusion of behavioral strategies is likely to be a cost-effective approach to the development of printed health education materials.
Van Broeck, Bianca; Timmers, Maarten; Ramael, Steven; Bogert, Jennifer; Shaw, Leslie M; Mercken, Marc; Slemmon, John; Van Nueten, Luc; Engelborghs, Sebastiaan; Streffer, Johannes Rolf
2016-05-19
Cerebrospinal fluid (CSF) amyloid-beta (Aβ) peptides are predictive biomarkers for Alzheimer's disease and are proposed as pharmacodynamic markers for amyloid-lowering therapies. However, frequent sampling results in fluctuating CSF Aβ levels that have a tendency to increase compared with baseline. The impact of sampling frequency, volume, catheterization procedure, and ibuprofen pretreatment on CSF Aβ levels using continuous sampling over 36 h was assessed. In this open-label biomarker study, healthy participants (n = 18; either sex, age 55-85 years) were randomized into one of three cohorts (n = 6/cohort; high-frequency sampling). In all cohorts except cohort 2 (sampling started 6 h post catheterization), sampling through lumbar catheterization started immediately post catheterization. Cohort 3 received ibuprofen (800 mg) before catheterization. Following interim data review, an additional cohort 4 (n = 6) with an optimized sampling scheme (low-frequency and lower volume) was included. CSF Aβ(1-37), Aβ(1-38), Aβ(1-40), and Aβ(1-42) levels were analyzed. Increases and fluctuations in mean CSF Aβ levels occurred in cohorts 1-3 at times of high-frequency sampling. Some outliers, in which this effect was extremely pronounced, were observed (cohorts 2 and 3). Cohort 4 demonstrated minimal fluctuation of CSF Aβ both on a group and an individual level. Intersubject variability in CSF Aβ profiles over time was observed in all cohorts. CSF Aβ level fluctuation upon catheterization primarily depends on the sampling frequency and volume, but not on the catheterization procedure or inflammatory reaction. An optimized low-frequency sampling protocol minimizes or eliminates fluctuation of CSF Aβ levels, which will improve the capability of accurately measuring the pharmacodynamic read-out for amyloid-lowering therapies. ClinicalTrials.gov NCT01436188. Registered 15 September 2011.
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house-of-cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation from neutrality derives from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We first treat the ESF in the mixed mutation/selection potential case and then restrict ourselves to the simpler situation with house-of-cards mutations only. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
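The generalized sampling formulae derived in the paper reduce, in the neutral case, to the classical Ewens sampling formula. As a minimal self-contained baseline (not the paper's generalized versions), the classical ESF can be evaluated directly, with the allelic partition supplied as a list of allele counts:

```python
from math import factorial
from collections import Counter

def ewens_probability(partition, theta):
    """Probability of an allelic partition under the classical Ewens
    sampling formula with scaled mutation rate theta.

    partition: list of allele counts, e.g. [3, 1, 1] means one allele
    observed 3 times plus two singletons (n = 5 genes sampled).
    """
    n = sum(partition)
    # rising factorial (theta)_n = theta * (theta+1) * ... * (theta+n-1)
    rising = 1.0
    for k in range(n):
        rising *= theta + k
    a = Counter(partition)            # a[j] = number of alleles seen j times
    p = factorial(n) / rising
    for j, aj in a.items():
        p *= (theta / j) ** aj / factorial(aj)
    return p
```

For n = 2 the two possible partitions, [2] and [1, 1], have probabilities 1/(1+θ) and θ/(1+θ), which sum to one as required.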
Seroincidence of non-typhoid Salmonella infections: convenience vs. random community-based sampling.
Emborg, H-D; Simonsen, J; Jørgensen, C S; Harritshøj, L H; Krogfelt, K A; Linneberg, A; Mølbak, K
2016-01-01
The incidence of reported infections of non-typhoid Salmonella is affected by biases inherent to passive laboratory surveillance, whereas analysis of blood sera may provide a less biased alternative to estimate the force of Salmonella transmission in humans. We developed a mathematical model that enabled a back-calculation of the annual seroincidence of Salmonella based on measurements of specific antibodies. The aim of the present study was to determine the seroincidence in two convenience samples from 2012 (Danish blood donors, n = 500, and pregnant women, n = 637) and a community-based sample of healthy individuals from 2006 to 2007 (n = 1780). The lowest antibody levels were measured in the samples from the community cohort and the highest in pregnant women. The annual Salmonella seroincidences were 319 infections/1000 pregnant women [90% credibility interval (CrI) 210-441], 182/1000 in blood donors (90% CrI 85-298) and 77/1000 in the community cohort (90% CrI 45-114). Although the differences between study populations decreased when accounting for different age distributions, the estimates depend on the study population. It is important to be aware of this issue and define a certain population under surveillance in order to obtain consistent results in an application of serological measures for public health purposes.
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of information essential for replication of the calculation as well as the accuracy of the sample size calculation. We examined the current quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed and the variation in reporting across study design, study characteristics, and journal impact factor. We also reviewed the targeted sample size reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 Impact Factors for the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median (interquartile range, IQR) percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and journals with an impact factor. A total of 98 papers had provided a targeted sample size on trial registries and about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registries. The reporting of the sample size calculation in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
A double-loop adaptive sampling approach for sensitivity-free dynamic reliability analysis
International Nuclear Information System (INIS)
Wang, Zequn; Wang, Pingfeng
2015-01-01
Dynamic reliability measures the reliability of an engineered system considering time-variant operating conditions and component deterioration. Due to high computational costs, conducting dynamic reliability analysis at an early system design stage remains challenging. This paper presents a confidence-based meta-modeling approach, referred to as double-loop adaptive sampling (DLAS), for efficient sensitivity-free dynamic reliability analysis. The DLAS builds a Gaussian process (GP) model sequentially to approximate extreme system responses over time, so that Monte Carlo simulation (MCS) can be employed directly to estimate dynamic reliability. A generic confidence measure is developed to evaluate the accuracy of dynamic reliability estimation while using the MCS approach based on developed GP models. A double-loop adaptive sampling scheme is developed to efficiently update the GP model in a sequential manner, by considering system input variables and time concurrently in two sampling loops. The model updating process using the developed sampling scheme can be terminated once the user-defined confidence target is satisfied. The developed DLAS approach eliminates the computationally expensive sensitivity analysis process, thus substantially improving the efficiency of dynamic reliability analysis. Three case studies are used to demonstrate the efficacy of DLAS for dynamic reliability analysis. - Highlights: • Developed a novel adaptive sampling approach for dynamic reliability analysis. • Developed a new metric to quantify the accuracy of dynamic reliability estimation. • Developed a new sequential sampling scheme to efficiently update surrogate models. • Three case studies were used to demonstrate the efficacy of the new approach. • Case study results showed substantially enhanced efficiency with high accuracy.
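The core idea of replacing the expensive response function with a GP surrogate and then running direct Monte Carlo on the surrogate can be sketched in a few lines of NumPy. This is a minimal illustration only: the double-loop adaptive refinement and the confidence measure are omitted, and the two-dimensional limit-state function `g` below is a hypothetical toy, not one of the paper's case studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(A, B, ell=1.0):
    """Squared-exponential kernel matrix between point sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell ** 2)

def gp_mean(X_tr, y_tr, X_te, ell=1.0, jitter=1e-8):
    """Posterior mean of a noiseless zero-mean GP regression."""
    K = rbf(X_tr, X_tr, ell) + jitter * np.eye(len(X_tr))
    return rbf(X_te, X_tr, ell) @ np.linalg.solve(K, y_tr)

# hypothetical limit-state function: the system fails where g(x) < 0
def g(X):
    return 3.0 - X[:, 0] ** 2 - X[:, 1]

X_tr = rng.uniform(-2.0, 2.0, size=(80, 2))    # training design (the adaptive
y_tr = g(X_tr)                                 # sampling loop is omitted here)

X_mc = rng.uniform(-2.0, 2.0, size=(20000, 2)) # Monte Carlo population
pf_gp = float(np.mean(gp_mean(X_tr, y_tr, X_mc) < 0.0))  # surrogate-based MCS
pf_ref = float(np.mean(g(X_mc) < 0.0))         # reference using the true g
```

Once the surrogate is trained, the 20,000-sample failure-probability estimate costs only kernel evaluations rather than 20,000 runs of the true model, which is the efficiency argument behind surrogate-based MCS.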
The Randomized CRM: An Approach to Overcoming the Long-Memory Property of the CRM.
Koopmeiners, Joseph S; Wey, Andrew
2017-01-01
The primary objective of a Phase I clinical trial is to determine the maximum tolerated dose (MTD). Typically, the MTD is identified using a dose-escalation study, where initial subjects are treated at the lowest dose level and subsequent subjects are treated at progressively higher dose levels until the MTD is identified. The continual reassessment method (CRM) is a popular model-based dose-escalation design, which utilizes a formal model for the relationship between dose and toxicity to guide dose finding. Recently, it was shown that the CRM has a tendency to get "stuck" on a dose level, with little escalation or de-escalation in the late stages of the trial, due to the long-memory property of the CRM. We propose the randomized CRM (rCRM), which introduces random escalation and de-escalation into the standard CRM dose-finding algorithm, as well as a hybrid approach that incorporates escalation and de-escalation only when certain criteria are met. Our simulation results show that both the rCRM and the hybrid approach reduce the trial-to-trial variability in the number of cohorts treated at the MTD but that the hybrid approach has a more favorable tradeoff with respect to the average number treated at the MTD.
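A compact sketch of an rCRM-style dose-finding step under the usual one-parameter power model is given below. The skeleton probabilities, normal prior, grid integration, and the random-step probability `eps` are illustrative choices of ours, not the authors' exact specification.

```python
import math, random

random.seed(3)
skeleton = [0.05, 0.12, 0.25, 0.40, 0.55]   # prior toxicity guesses per dose
target = 0.25                               # target toxicity probability

def posterior_tox(data, grid_n=200):
    """Posterior mean toxicity at each dose under the one-parameter power
    model p_i(a) = skeleton_i ** exp(a), a ~ N(0, sd=2), integrated on a grid.
    data: list of (dose_index, toxicity_flag) observations."""
    num = [0.0] * len(skeleton)
    den = 0.0
    for k in range(grid_n):
        a = -6.0 + 12.0 * (k + 0.5) / grid_n
        w = math.exp(-a * a / 8.0)          # N(0, 2) prior, up to a constant
        for dose, tox in data:
            p = skeleton[dose] ** math.exp(a)
            w *= p if tox else (1.0 - p)
        den += w
        for i, s in enumerate(skeleton):
            num[i] += w * s ** math.exp(a)
    return [x / den for x in num]

def next_dose(data, eps=0.2):
    """rCRM-style step (sketch): usually the model-recommended dose, but with
    probability eps take a one-level random step to counteract the CRM's
    long-memory stickiness."""
    probs = posterior_tox(data)
    rec = min(range(len(probs)), key=lambda i: abs(probs[i] - target))
    if random.random() < eps:
        rec += random.choice([-1, 1])
    return max(0, min(len(skeleton) - 1, rec))
```

With `eps=0` this reduces to a plain CRM step; for example, five toxicities at the lowest dose pin the recommendation at dose 0, while five non-toxicities at the highest dose push it to dose 4.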
Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses
Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.
2011-01-01
Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-12-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r, and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data and avoid data inversion, (2) estimate the total water mass recovery of electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct the influence of subsurface temperature fluctuations during the infiltration experiment on electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
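The Latin hypercube sampling step described above can be sketched as follows: each parameter range is divided into equal-probability strata and each stratum is sampled exactly once, then shuffled across dimensions. The parameter ranges below are hypothetical placeholders, not the distributions used in the study.

```python
import random

random.seed(42)

def latin_hypercube(n_samples, bounds):
    """One LHS draw over uniform ranges: each parameter range is split into
    n_samples equal strata and each stratum is sampled exactly once, with the
    stratum order shuffled independently per dimension."""
    dims = []
    for lo, hi in bounds:
        strata = [lo + (hi - lo) * (i + random.random()) / n_samples
                  for i in range(n_samples)]
        random.shuffle(strata)
        dims.append(strata)
    return list(zip(*dims))          # n_samples tuples of parameter values

# hypothetical plausible ranges for the van Genuchten-Mualem parameters
bounds = [(1e-6, 1e-4),  # K_s     [m/s]
          (1.1, 3.0),    # n       [-]
          (0.0, 0.1),    # theta_r [-]
          (0.5, 5.0)]    # alpha   [1/m]
samples = latin_hypercube(50, bounds)
```

Compared with plain random sampling, the stratification guarantees that even 50 samples cover the full marginal range of every parameter.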
Species richness in soil bacterial communities: a proposed approach to overcome sample size bias.
Youssef, Noha H; Elshahed, Mostafa S
2008-09-01
Estimates of species richness based on 16S rRNA gene clone libraries are increasingly utilized to gauge the level of bacterial diversity within various ecosystems. However, previous studies have indicated that regardless of the utilized approach, species richness estimates obtained are dependent on the size of the analyzed clone libraries. We here propose an approach to overcome sample size bias in species richness estimates in complex microbial communities. Parametric (Maximum likelihood-based and rarefaction curve-based) and non-parametric approaches were used to estimate species richness in a library of 13,001 near full-length 16S rRNA clones derived from soil, as well as in multiple subsets of the original library. Species richness estimates obtained increased with the increase in library size. To obtain a sample size-unbiased estimate of species richness, we calculated the theoretical clone library sizes required to encounter the estimated species richness at various clone library sizes, used curve fitting to determine the theoretical clone library size required to encounter the "true" species richness, and subsequently determined the corresponding sample size-unbiased species richness value. Using this approach, sample size-unbiased estimates of 17,230, 15,571, and 33,912 were obtained for the ML-based, rarefaction curve-based, and ACE-1 estimators, respectively, compared to bias-uncorrected values of 15,009, 11,913, and 20,909.
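The abstract's correction targets ML-based, rarefaction-based, and ACE-1 estimators. As a simpler, self-contained illustration of nonparametric richness estimation (not the authors' exact procedure), the classic Chao1 estimator also extrapolates unseen species, using singleton and doubleton counts:

```python
from collections import Counter

def chao1(counts):
    """Chao1 nonparametric species richness estimate from a list of
    per-species abundance counts (observed species only, counts >= 1)."""
    s_obs = len(counts)                 # observed species richness
    freq = Counter(counts)
    f1 = freq.get(1, 0)                 # number of singletons
    f2 = freq.get(2, 0)                 # number of doubletons
    if f2 == 0:
        # bias-corrected form when no doubletons are observed
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)
```

For example, five observed species with abundances [1, 1, 2, 3, 5] give S_obs = 5, f1 = 2, f2 = 1, hence an estimate of 5 + 4/2 = 7 species. Like the estimators discussed in the abstract, Chao1 is also sensitive to sample size, which is exactly the bias the proposed curve-fitting approach aims to remove.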
Randomized trial of two swallowing assessment approaches in patients with acquired brain injury
DEFF Research Database (Denmark)
Kjaersgaard, Annette; Nielsen, Lars Hedemann; Sjölund, Bengt H.
2014-01-01
trial. SETTING: Specialized, national neurorehabilitation centre. SUBJECTS: Adult patients with acquired brain injury. Six hundred and seventy-nine patients were assessed for eligibility and 138 were randomly allocated between June 2009 and April 2011. INTERVENTIONS: Assessment by Facial-Oral Tract....... Seven patients were left for analysis, 4 of whom developed aspiration pneumonia within 10 days after initiating oral intake (1 control/3 interventions). CONCLUSION: In the presence of a structured clinical assessment with the Facial-Oral Tract Therapy approach, it is unnecessary to undertake...
Vaeth, Michael; Skovlund, Eva
2004-06-15
For a given regression problem it is possible to identify a suitably defined equivalent two-sample problem such that the power or sample size obtained for the two-sample problem also applies to the regression problem. For a standard linear regression model the equivalent two-sample problem is easily identified, but for generalized linear models and for Cox regression models the situation is more complicated. An approximately equivalent two-sample problem may, however, also be identified here. In particular, we show that for logistic regression and Cox regression models the equivalent two-sample problem is obtained by selecting two equally sized samples for which the parameters differ by a value equal to the slope times twice the standard deviation of the independent variable and further requiring that the overall expected number of events is unchanged. In a simulation study we examine the validity of this approach to power calculations in logistic regression and Cox regression models. Several different covariate distributions are considered for selected values of the overall response probability and a range of alternatives. For the Cox regression model we consider both constant and non-constant hazard rates. The results show that in general the approach is remarkably accurate even in relatively small samples. Some discrepancies are, however, found in small samples with few events and a highly skewed covariate distribution. Comparison with results based on alternative methods for logistic regression models with a single continuous covariate indicates that the proposed method is at least as good as its competitors. The method is easy to implement and therefore provides a simple way to extend the range of problems that can be covered by the usual formulas for power and sample size determination. Copyright 2004 John Wiley & Sons, Ltd.
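The recipe above (two equally sized groups whose parameters differ by the slope times twice the covariate's standard deviation, with the overall expected number of events held fixed) can be sketched for the logistic case as follows. The two-proportion normal-approximation power formula is a standard choice for the resulting two-sample problem, not necessarily the authors' exact computation.

```python
import math
from statistics import NormalDist

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def power_logistic_slope(beta, sd_x, p_event, n_total, alpha=0.05):
    """Approximate power for testing slope beta in logistic regression via the
    equivalent two-sample problem: two groups of n_total/2 whose logits differ
    by beta * 2 * sd_x, centred so the overall event probability is p_event."""
    d = beta * sd_x
    lo, hi = -30.0, 30.0                 # bisection for the centre logit l0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * (expit(mid - d) + expit(mid + d)) < p_event:
            lo = mid
        else:
            hi = mid
    l0 = 0.5 * (lo + hi)
    p1, p2 = expit(l0 - d), expit(l0 + d)
    m = n_total / 2.0
    se = math.sqrt(p1 * (1 - p1) / m + p2 * (1 - p2) / m)
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(abs(p2 - p1) / se - z)
```

As a sanity check, power grows with the total sample size, and at beta = 0 the formula returns roughly alpha/2 (the one-sided rejection probability under the null).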
Oh, Paul; Lee, Sukho; Kang, Moon Gi
2017-06-28
Recently, several RGB-White (RGBW) color filter arrays (CFAs) have been proposed, which have extra white (W) pixels in the filter array that are highly sensitive. Due to the high sensitivity, the W pixels have better SNR (Signal to Noise Ratio) characteristics than other color pixels in the filter array, especially in low light conditions. However, most of the RGBW CFAs are designed so that the acquired RGBW pattern image can be converted into the conventional Bayer pattern image, which is then again converted into the final color image by using conventional demosaicing methods, i.e., color interpolation techniques. In this paper, we propose a new RGBW color filter array based on a totally different color interpolation technique, the colorization algorithm. The colorization algorithm was initially proposed for colorizing a gray image into a color image using a small number of color seeds. Here, we adopt this algorithm as a color interpolation technique, so that the RGBW color filter array can be designed with a very large number of W pixels to make the most of the highly sensitive characteristics of the W channel. The resulting RGBW color filter array has a pattern with a large proportion of W pixels, while the small number of RGB pixels is randomly distributed over the array. The colorization algorithm makes it possible to reconstruct the colors from such a small number of RGB values. Due to the large proportion of W pixels, the reconstructed color image has a high SNR value, especially higher than those of conventional CFAs in low light conditions. Experimental results show that much important information that is not perceptible in color images reconstructed with conventional CFAs becomes perceptible in the images reconstructed with the proposed method.
International Nuclear Information System (INIS)
Majaron, B; Milanic, M
2010-01-01
Pulsed photothermal profiling involves reconstruction of temperature depth profile induced in a layered sample by single-pulse laser exposure, based on transient change in mid-infrared (IR) emission from its surface. Earlier studies have indicated that in watery tissues, featuring a pronounced spectral variation of mid-IR absorption coefficient, analysis of broadband radiometric signals within the customary monochromatic approximation adversely affects profiling accuracy. We present here an experimental comparison of pulsed photothermal profiling in layered agar gel samples utilizing a spectrally composite kernel matrix vs. the customary approach. By utilizing a custom reconstruction code, the augmented approach reduces broadening of individual temperature peaks to 14% of the absorber depth, in contrast to 21% obtained with the customary approach.
Ayam, Rufus Tekoh
2011-01-01
PURPOSE: The two approaches to audit sampling, statistical and nonstatistical, have been examined in this study. The overall purpose of the study is to explore the extent to which statistical and nonstatistical sampling approaches are currently utilized by independent auditors during auditing practices. Moreover, the study also seeks to achieve two additional purposes; the first is to find out whether auditors utilize different sampling techniques when auditing SMEs (Small and Medium-Sized Ente...
A normative inference approach for optimal sample sizes in decisions from experience
Ostwald, Dirk; Starke, Ludger; Hertwig, Ralph
2015-01-01
“Decisions from experience” (DFE) refers to a body of work that emerged in research on behavioral decision making over the last decade. One of the major experimental paradigms employed to study experience-based choice is the “sampling paradigm,” which serves as a model of decision making under limited knowledge about the statistical structure of the world. In this paradigm respondents are presented with two payoff distributions, which, in contrast to standard approaches in behavioral economics, are specified not in terms of explicit outcome-probability information, but by the opportunity to sample outcomes from each distribution without economic consequences. Participants are encouraged to explore the distributions until they feel confident enough to decide from which distribution they would prefer to draw in a final trial involving real monetary payoffs. One commonly employed measure to characterize the behavior of participants in the sampling paradigm is the sample size, that is, the number of outcome draws which participants choose to obtain from each distribution prior to terminating sampling. A natural question that arises in this context concerns the “optimal” sample size, which could be used as a normative benchmark to evaluate human sampling behavior in DFE. In this theoretical study, we relate the DFE sampling paradigm to the classical statistical decision theoretic literature and, under a probabilistic inference assumption, evaluate optimal sample sizes for DFE. In our treatment we go beyond analytically established results by showing how the classical statistical decision theoretic framework can be used to derive optimal sample sizes under arbitrary, but numerically evaluable, constraints. Finally, we critically evaluate the value of deriving optimal sample sizes under this framework as testable predictions for the experimental study of sampling behavior in DFE. PMID:26441720
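The optimal sample size question can be made concrete with a minimal Monte Carlo sketch for two Bernoulli payoff distributions, assuming a hypothetical linear cost per draw; the paper's treatment is decision-theoretic and considerably more general.

```python
import random

random.seed(7)

def p_correct(p_a, p_b, n, trials=2000):
    """Monte Carlo probability that drawing n free samples from each of two
    Bernoulli payoff options and choosing the higher observed mean selects
    the truly better option (ties broken at random)."""
    correct = 0
    for _ in range(trials):
        ma = sum(random.random() < p_a for _ in range(n))
        mb = sum(random.random() < p_b for _ in range(n))
        if ma == mb:
            correct += random.random() < 0.5          # coin flip on ties
        else:
            correct += (ma > mb) == (p_a > p_b)
    return correct / trials

def optimal_sample_size(p_a, p_b, cost, n_max=20):
    """Maximise expected final payoff minus a linear cost on total draws."""
    def value(n):
        pc = p_correct(p_a, p_b, n)
        return pc * max(p_a, p_b) + (1 - pc) * min(p_a, p_b) - cost * 2 * n
    return max(range(1, n_max + 1), key=value)
```

The accuracy of the final choice rises with the per-option sample size, but with diminishing returns, so any positive per-draw cost makes the value function peak at a finite, often surprisingly small, sample size.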
Landgraeber, Stefan; Quitmann, Henning; Güth, Sebastian; Haversath, Marcel; Kowalczyk, Wojciech; Kecskeméthy, Andrés; Heep, Hansjörg; Jäger, Marcus
2013-01-01
There is still controversy as to whether minimally invasive total hip arthroplasty enhances the postoperative outcome. The aim of this study was to compare the outcome of patients who underwent total hip replacement through an anterolateral minimally invasive (MIS) or a conventional lateral approach (CON). We performed a randomized, prospective study of 75 patients with primary hip arthritis, who underwent hip replacement through the MIS (n=36) or CON (n=39) approach. The Western Ontario and McMaster Universities Osteoarthritis Index and Harris Hip score (HHS) were evaluated at frequent intervals during the early postoperative follow-up period and then after 3.5 years. Pain sensations were recorded. Serological and radiological analyses were performed. In the MIS group the patients had smaller skin incisions and there was a significantly lower rate of patients with a positive Trendelenburg sign after six weeks postoperatively. After six weeks the HHS was 6.85 points higher in the MIS group (P=0.045). But calculating the mean difference between the baseline and the six weeks HHS we evaluated no significant differences. Blood loss was greater and the duration of surgery was longer in the MIS group. The other parameters, especially after the twelfth week, did not differ significantly. Radiographs showed the inclination of the acetabular component to be significantly higher in the MIS group, but on average it was within the same permitted tolerance range as in the CON group. Both approaches are adequate for hip replacement. Given the data, there appears to be no significant long term advantage to the MIS approach, as described in this study. PMID:24191179
Wallace, Byron C; Noel-Storr, Anna; Marshall, Iain J; Cohen, Aaron M; Smalheiser, Neil R; Thomas, James
2017-11-01
Identifying all published reports of randomized controlled trials (RCTs) is an important aim, but it requires extensive manual effort to separate RCTs from non-RCTs, even using current machine learning (ML) approaches. We aimed to make this process more efficient via a hybrid approach using both crowdsourcing and ML. We trained a classifier to discriminate between citations that describe RCTs and those that do not. We then adopted a simple strategy of automatically excluding citations deemed very unlikely to be RCTs by the classifier and deferring to crowdworkers otherwise. Combining ML and crowdsourcing provides a highly sensitive RCT identification strategy (our estimates suggest 95%-99% recall) with substantially less effort (we observed a reduction of around 60%-80%) than relying on manual screening alone. Hybrid crowd-ML strategies warrant further exploration for biomedical curation/annotation tasks. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association.
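The triage rule described (automatically exclude citations the classifier deems very unlikely to be RCTs, defer everything else to crowdworkers) can be sketched as follows. The threshold, labels, and classifier probabilities below are toy values for illustration, not figures from the study.

```python
def triage(prob_rct, threshold=0.05):
    """Split citation indices into auto-excluded (classifier says very
    unlikely RCT) and deferred-to-crowd sets."""
    excluded = [i for i, p in enumerate(prob_rct) if p < threshold]
    deferred = [i for i, p in enumerate(prob_rct) if p >= threshold]
    return excluded, deferred

def evaluate(labels, prob_rct, threshold=0.05):
    """Recall over true RCTs and the fraction of citations still needing
    manual (crowd) screening after auto-exclusion."""
    _, deferred = triage(prob_rct, threshold)
    n_rct = sum(labels)
    recall = sum(labels[i] for i in deferred) / n_rct if n_rct else 1.0
    workload = len(deferred) / len(prob_rct)
    return recall, workload

labels = [1, 1, 0, 0, 0, 0, 0, 0]                       # 1 = true RCT
probs  = [0.92, 0.41, 0.20, 0.04, 0.01, 0.30, 0.02, 0.03]
recall, workload = evaluate(labels, probs)
```

Lowering the threshold trades screening workload against recall, which is the tradeoff behind the reported 95%-99% recall at a 60%-80% workload reduction.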
On Generating Optimal Signal Probabilities for Random Tests: A Genetic Approach
Directory of Open Access Journals (Sweden)
M. Srinivas
1996-01-01
Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in the paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger.
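A minimal sketch of the GA loop over input signal probabilities follows. The real cost function is based on the COP testability measure over a benchmark circuit; here it is replaced by a simple stand-in for a single 4-input AND gate, and the selection/mutation settings are illustrative choices.

```python
import random

random.seed(1)
N_INPUTS, POP, GENS = 4, 30, 60

def fitness(p):
    """Toy stand-in for a COP-style testability cost: the signal
    probabilities p drive the output probability q of a 4-input AND gate,
    and observing a stuck-at fault at that output is easiest when q is
    balanced (q = 0.5), so we reward q * (1 - q)."""
    q = 1.0
    for x in p:
        q *= x
    return q * (1.0 - q)

def mutate(p, sigma=0.1):
    """Gaussian perturbation, clipped to valid probabilities."""
    return [min(1.0, max(0.0, x + random.gauss(0.0, sigma))) for x in p]

def crossover(a, b):
    cut = random.randrange(1, N_INPUTS)
    return a[:cut] + b[cut:]

pop = [[random.random() for _ in range(N_INPUTS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                     # truncation selection
    pop = elite + [mutate(crossover(random.choice(elite), random.choice(elite)))
                   for _ in range(POP - len(elite))]
best = max(pop, key=fitness)
```

Because the elite half is carried over unchanged each generation, the best fitness is non-decreasing; for this toy objective it climbs toward the theoretical maximum of 0.25 at q = 0.5.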
Directory of Open Access Journals (Sweden)
Elena Hilario
Genotyping by sequencing (GBS) is a restriction enzyme based targeted approach developed to reduce genome complexity and discover genetic markers when a priori sequence information is unavailable. Sufficient coverage at each locus is essential to distinguish heterozygous from homozygous sites accurately. The number of GBS samples that can be pooled in one sequencing lane is limited by the number of restriction sites present in the genome and the read depth required at each site per sample for accurate calling of single-nucleotide polymorphisms. Locus bias was observed using a slight modification of the Elshire et al. method: some restriction enzyme sites were represented in higher proportions while others were poorly represented or absent. This bias could be due to the quality of genomic DNA, the endonuclease and ligase reaction efficiency, the distance between restriction sites, the preferential amplification of small library restriction fragments, or bias towards cluster formation of small amplicons during the sequencing process. To overcome these issues, we have developed a GBS method based on randomly tagging genomic DNA (rtGBS). By randomly landing on the genome, we can, with less bias, find restriction sites that are far apart and undetected by the standard GBS (stdGBS) method. The study comprises two types of biological replicates (six different kiwifruit plants, with two independent DNA extractions per plant) and three types of technical replicates (four samples of each DNA extraction, stdGBS vs. rtGBS methods, and two independent library amplifications, each sequenced in separate lanes). A statistically significant unbiased distribution of restriction fragment sizes by rtGBS showed that this method targeted 49% (39,145) of the BamHI sites shared with the reference genome, compared to only 14% (11,513) by stdGBS.
Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas
Basilevsky, Alexander T.; Head, James W., III
1995-01-01
The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10(exp 6) sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation
Gradl-Dietsch, Gertraud; Lübke, Cavan; Horst, Klemens; Simon, Melanie; Modabber, Ali; Sönmez, Tolga T; Münker, Ralf; Nebelung, Sven; Knobe, Matthias
2016-11-03
The objectives of this prospective randomized trial were to assess the impact of Peyton's four-step approach on the acquisition of complex psychomotor skills and to examine the influence of gender on learning outcomes. We randomly assigned 95 third to fifth year medical students to an intervention group which received instructions according to Peyton (PG) or a control group, which received conventional teaching (CG). Both groups attended four sessions on the principles of manual therapy and specific manipulative and diagnostic techniques for the spine. We assessed differences in theoretical knowledge (multiple choice (MC) exam) and practical skills (Objective Structured Practical Examination (OSPE)) with respect to type of intervention and gender. Participants took a second OSPE 6 months after completion of the course. There were no differences between groups with respect to the MC exam. Students in the PG group scored significantly higher in the OSPE. Gender had no additional impact. Results of the second OSPE showed a significant decline in competency regardless of gender and type of intervention. Peyton's approach is superior to standard instruction for teaching complex spinal manipulation skills regardless of gender. Skills retention was equally low for both techniques.
Novikov, I; Fund, N; Freedman, L S
2010-01-15
Different methods for the calculation of sample size for simple logistic regression (LR) with one normally distributed continuous covariate give different results. Sometimes the difference can be large. Furthermore, some methods require the user to specify the prevalence of cases when the covariate equals its population mean, rather than the more natural population prevalence. We focus on two commonly used methods and show through simulations that the power for a given sample size may differ substantially from the nominal value for one method, especially when the covariate effect is large, while the other method performs poorly if the user provides the population prevalence instead of the required parameter. We propose a modification of the method of Hsieh et al. that requires specification of the population prevalence and that employs Schouten's sample size formula for a t-test with unequal variances and group sizes. This approach appears to increase the accuracy of the sample size estimates for LR with one continuous covariate.
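For context, the widely used closed-form sample-size formula of Hsieh et al. for simple logistic regression with one standard-normal covariate can be sketched as below. This shows the basic formula the abstract starts from, not the proposed Schouten-based modification; the inverse-normal helper and the parameter names are our illustrative choices.

```python
from math import erf, log, sqrt

def z(p):
    # Inverse standard-normal CDF by bisection (stdlib-only helper).
    lo, hi = -10.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + erf(mid / sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def hsieh_n(p1, odds_ratio, alpha=0.05, power=0.80):
    """Hsieh-style sample size for simple logistic regression with one
    standard-normal covariate: p1 is the event rate at the covariate
    mean (NOT the population prevalence), odds_ratio is per one SD."""
    beta_star = log(odds_ratio)
    n = (z(1 - alpha / 2) + z(power)) ** 2 / (p1 * (1 - p1) * beta_star ** 2)
    return int(n) + 1  # round up
```

`hsieh_n(0.5, 1.5)` gives roughly 191 subjects for 80% power at a two-sided 5% level. The abstract's point is precisely that supplying the population prevalence for `p1`, instead of the event rate at the covariate mean, degrades the estimate.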
Flexible automated approach for quantitative liquid handling of complex biological samples.
Palandra, Joe; Weller, David; Hudson, Gary; Li, Jeff; Osgood, Sarah; Hudson, Emily; Zhong, Min; Buchholz, Lisa; Cohen, Lucinda H
2007-11-01
A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.
Alves, Gilberto; Rodrigues, Márcio; Fortuna, Ana; Falcão, Amílcar; Queiroz, João
2013-06-01
Sample preparation is widely accepted as the most labor-intensive and error-prone part of the bioanalytical process. The recent advances in this field have been focused on the miniaturization and integration of sample preparation online with analytical instrumentation, in order to reduce laboratory workload and increase analytical performance. From this perspective, microextraction by packed sorbent (MEPS) has emerged in the last few years as a powerful sample preparation approach suitable to be easily automated with liquid and gas chromatographic systems applied in a variety of bioanalytical areas (pharmaceutical, clinical, toxicological, environmental and food research). This paper aims to provide an overview and a critical discussion of recent bioanalytical methods reported in literature based on MEPS, with special emphasis on those developed for the quantification of therapeutic drugs and/or metabolites in biological samples. The advantages and some limitations of MEPS, as well as its comparison with other extraction techniques, are also addressed herein.
Free Energy Calculations using a Swarm-Enhanced Sampling Molecular Dynamics Approach.
Burusco, Kepa K; Bruce, Neil J; Alibay, Irfan; Bryce, Richard A
2015-10-26
Free energy simulations are an established computational tool in modelling chemical change in the condensed phase. However, sampling of kinetically distinct substates remains a challenge to these approaches. As a route to addressing this, we link the methods of thermodynamic integration (TI) and swarm-enhanced sampling molecular dynamics (sesMD), where simulation replicas interact cooperatively to aid transitions over energy barriers. We illustrate the approach by using alchemical alkane transformations in solution, comparing them with the multiple independent trajectory TI (IT-TI) method. Free energy changes for transitions computed by using IT-TI grew increasingly inaccurate as the intramolecular barrier was heightened. By contrast, swarm-enhanced sampling TI (sesTI) calculations showed clear improvements in sampling efficiency, leading to more accurate computed free energy differences, even in the case of the highest barrier height. The sesTI approach, therefore, has potential in addressing chemical change in systems where conformations exist in slow exchange. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
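Whichever sampling scheme supplies the ensemble averages, the TI step itself reduces to a quadrature of the averaged dU/dlambda over the coupling parameter. A minimal sketch using trapezoidal quadrature follows; the lambda grid and the linear dU/dlambda profile in the usage note are synthetic, not values from the sesTI simulations.

```python
def ti_free_energy(lambdas, dudl_means):
    """Thermodynamic integration: trapezoidal quadrature of the
    ensemble-averaged dU/dlambda over the coupling parameter lambda."""
    dF = 0.0
    for i in range(len(lambdas) - 1):
        width = lambdas[i + 1] - lambdas[i]
        dF += 0.5 * (dudl_means[i] + dudl_means[i + 1]) * width
    return dF
```

With a synthetic linear profile dU/dlambda = 2*lambda over an 11-point grid on [0, 1], the integral is exactly 1; real sesTI runs would instead supply per-window averages from the coupled replica simulations.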
Jia, Jianhua; Liu, Zi; Xiao, Xuan; Liu, Bingxiang; Chou, Kuo-Chen
2016-04-07
Being one type of post-translational modifications (PTMs), protein lysine succinylation is important in regulating varieties of biological processes. It is also involved with some diseases, however. Consequently, from the angles of both basic research and drug development, we are facing a challenging problem: for an uncharacterized protein sequence having many Lys residues therein, which ones can be succinylated, and which ones cannot? To address this problem, we have developed a predictor called pSuc-Lys through (1) incorporating the sequence-coupled information into the general pseudo amino acid composition, (2) balancing out skewed training dataset by random sampling, and (3) constructing an ensemble predictor by fusing a series of individual random forest classifiers. Rigorous cross-validations indicated that it remarkably outperformed the existing methods. A user-friendly web-server for pSuc-Lys has been established at http://www.jci-bioinfo.cn/pSuc-Lys, by which users can easily obtain their desired results without the need to go through the complicated mathematical equations involved. It has not escaped our notice that the formulation and approach presented here can also be used to analyze many other problems in computational proteomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
Active learning for clinical text classification: is it better than random sampling?
Figueroa, Rosa L; Ngo, Long H; Goryachev, Sergey; Wiechmann, Eduardo P
2012-01-01
Objective This study explores active learning algorithms as a way to reduce the requirements for large training sets in medical text classification tasks. Design Three existing active learning algorithms (distance-based (DIST), diversity-based (DIV), and a combination of both (CMB)) were used to classify text from five datasets. The performance of these algorithms was compared to that of passive learning on the five datasets. We then conducted a novel investigation of the interaction between dataset characteristics and the performance results. Measurements Classification accuracy and area under receiver operating characteristics (ROC) curves for each algorithm at different sample sizes were generated. The performance of active learning algorithms was compared with that of passive learning using a weighted mean of paired differences. To determine why the performance varies on different datasets, we measured the diversity and uncertainty of each dataset using relative entropy and correlated the results with the performance differences. Results The DIST and CMB algorithms performed better than passive learning. With a statistical significance level set at 0.05, DIST outperformed passive learning in all five datasets, while CMB was found to be better than passive learning in four datasets. We found strong correlations between the dataset diversity and the DIV performance, as well as the dataset uncertainty and the performance of the DIST algorithm. Conclusion For medical text classification, appropriate active learning algorithms can yield performance comparable to that of passive learning with considerably smaller training sets. In particular, our results suggest that DIV performs better on data with higher diversity and DIST on data with lower uncertainty. PMID:22707743
DEFF Research Database (Denmark)
Vega, Mabel V Martínez; Sharifzadeh, Sara; Wulfsohn, Dvoralai
2013-01-01
BACKGROUND: Visible–near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) on the soluble solids content (SSC) and acidity prediction, in the wavelength range 400–1100 nm. RESULTS: A total of 196 middle–early season and 219 late season apple (Malus domestica Borkh.) cvs 'Aroma' and 'Holsteiner Cox' samples were used to construct spectral models for SSC and acidity, comparing three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest, and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CVSSC = 13%.
Curtis, S; Gesler, W; Smith, G; Washburn, S
2000-04-01
This paper focuses on the question of sampling (or selection of cases) in qualitative research. Although the literature includes some very useful discussions of qualitative sampling strategies, the question of sampling often seems to receive less attention in methodological discussion than questions of how data are collected or analysed. Decisions about sampling are likely to be important in many qualitative studies (although it may not be an issue in some research). There are varying accounts of the principles applicable to sampling or case selection. Those who espouse 'theoretical sampling', based on a 'grounded theory' approach, are in some ways opposed to those who promote forms of 'purposive sampling' suitable for research informed by an existing body of social theory. Diversity also results from the many different methods for drawing purposive samples which are applicable to qualitative research. We explore the value of a framework suggested by Miles and Huberman [Miles, M., Huberman, A., 1994. Qualitative Data Analysis, Sage, London.], to evaluate the sampling strategies employed in three examples of research by the authors. Our examples comprise three studies which respectively involve selection of: 'healing places'; rural places which incorporated national anti-malarial policies; young male interviewees, identified as either chronically ill or disabled. The examples are used to show how in these three studies the (sometimes conflicting) requirements of the different criteria were resolved, as well as the potential and constraints placed on the research by the selection decisions which were made. We also consider how far the criteria Miles and Huberman suggest seem helpful for planning 'sample' selection in qualitative research.
An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians
International Nuclear Information System (INIS)
Hughes, Ciaran; Mehta, Dhagash; Wales, David J.
2014-01-01
Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems.
Martínez Vega, Mabel V; Sharifzadeh, Sara; Wulfsohn, Dvoralai; Skov, Thomas; Clemmensen, Line Harder; Toldam-Andersen, Torben B
2013-12-01
Visible-near infrared spectroscopy remains a method of increasing interest as a fast alternative for the evaluation of fruit quality. The success of the method is assumed to be achieved by using large sets of samples to produce robust calibration models. In this study we used representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) on the soluble solids content (SSC) and acidity prediction, in the wavelength range 400-1100 nm. A total of 196 middle-early season and 219 late season apples (Malus domestica Borkh.) cvs 'Aroma' and 'Holsteiner Cox' samples were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming training and test sets ('smooth fractionator', by date of measurement after harvest and random). Using the 'smooth fractionator' sampling method, fewer spectral bands (26) and elastic net resulted in improved performance for SSC models of 'Aroma' apples, with a coefficient of variation CVSSC = 13%. The model showed consistently low errors and bias (PLS/EN: R(2) cal = 0.60/0.60; SEC = 0.88/0.88°Brix; Biascal = 0.00/0.00; R(2) val = 0.33/0.44; SEP = 1.14/1.03; Biasval = 0.04/0.03). However, the prediction of acidity and of SSC (CV = 5%) for the late cultivar 'Holsteiner Cox' produced inferior results compared with 'Aroma'. It was possible to construct local SSC and acidity calibration models for early season apple cultivars with CVs of SSC and acidity around 10%. The overall model performance of these data sets also depends on the proper selection of training and test sets. The 'smooth fractionator' protocol provided an objective method for obtaining training and test sets that capture the existing variability of the fruit samples for construction of visible-NIR prediction models. The implication
Sampled-Data Control of Spacecraft Rendezvous with Discontinuous Lyapunov Approach
Directory of Open Access Journals (Sweden)
Zhuoshi Li
2013-01-01
This paper investigates the sampled-data stabilization problem of spacecraft relative positional holding using an improved Lyapunov function approach. The classical Clohessy-Wiltshire equation is adopted to describe the relative dynamic model. The relative position holding problem is converted into an output tracking control problem using sampled signals. A time-dependent discontinuous Lyapunov functional approach is developed, which leads to essentially less conservative results for the stability analysis and controller design of the corresponding closed-loop system. Sufficient conditions for exponential stability and for the existence of the proposed controller are provided. Finally, a simulation is presented to illustrate the effectiveness of the proposed control scheme.
Directory of Open Access Journals (Sweden)
Eric S Walsh
Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol; TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC; a transport and fate proxy) was a strong predictor of TCS contamination, causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated with independent test samples. This decision-support tool performed well at the sub-estuary extent and provided the means to identify areas of concern and prioritize bay-wide sampling.
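The discretization step described above, binning continuous RF predictions into low/medium/high classes at quantile thresholds, can be sketched as follows. The tercile thresholds and the linear-interpolation quantile rule are illustrative assumptions, not the study's exact procedure.

```python
def quantile(sorted_vals, q):
    # Linear-interpolation quantile; expects a sorted list.
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    return sorted_vals[lo] + (idx - lo) * (sorted_vals[hi] - sorted_vals[lo])

def discretize(preds, q_low=1.0 / 3, q_high=2.0 / 3):
    """Bin continuous predictions into 'low'/'medium'/'high'
    contamination classes at two quantile thresholds."""
    s = sorted(preds)
    t_low, t_high = quantile(s, q_low), quantile(s, q_high)
    return ['low' if p <= t_low else 'high' if p > t_high else 'medium'
            for p in preds]
```

Varying `q_low` and `q_high` reproduces the idea of evaluating the maps at several different quantile thresholds.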
Lü, Hui; Shangguan, Wen-Bin; Yu, Dejie
2017-09-01
Automotive brake systems are always subjected to various types of uncertainties and two types of random-fuzzy uncertainties may exist in the brakes. In this paper, a unified approach is proposed for squeal instability analysis of disc brakes with two types of random-fuzzy uncertainties. In the proposed approach, two uncertainty analysis models with mixed variables are introduced to model the random-fuzzy uncertainties. The first one is the random and fuzzy model, in which random variables and fuzzy variables exist simultaneously and independently. The second one is the fuzzy random model, in which uncertain parameters are all treated as random variables while their distribution parameters are expressed as fuzzy numbers. Firstly, the fuzziness is discretized by using α-cut technique and the two uncertainty analysis models are simplified into random-interval models. Afterwards, by temporarily neglecting interval uncertainties, the random-interval models are degraded into random models, in which the expectations, variances, reliability indexes and reliability probabilities of system stability functions are calculated. And then, by reconsidering the interval uncertainties, the bounds of the expectations, variances, reliability indexes and reliability probabilities are computed based on Taylor series expansion. Finally, by recomposing the analysis results at each α-cut level, the fuzzy reliability indexes and probabilities can be obtained, by which the brake squeal instability can be evaluated. The proposed approach gives a general framework to deal with both types of random-fuzzy uncertainties that may exist in the brakes and its effectiveness is demonstrated by numerical examples. It will be a valuable supplement to the systematic study of brake squeal considering uncertainty.
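The alpha-cut step that underpins the proposed approach, reducing a fuzzy quantity to an interval at each membership level, can be illustrated for a triangular fuzzy number (a, m, b); the triangular membership shape is an assumption made for illustration.

```python
def alpha_cut(a, m, b, alpha):
    """Alpha-cut of a triangular fuzzy number (a, m, b): the interval
    of values whose membership degree is at least alpha."""
    assert 0.0 <= alpha <= 1.0 and a <= m <= b
    return (a + alpha * (m - a), b - alpha * (b - m))
```

At alpha = 1 the interval collapses to the core {m}; at alpha = 0 it is the full support [a, b]. Sweeping alpha and propagating these intervals is what reduces the random-fuzzy models of the abstract to random-interval models, whose bounds are then recomposed level by level.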
An Efficient Approach for Mars Sample Return Using Emerging Commercial Capabilities.
Gonzales, Andrew A; Stoker, Carol R
2016-06-01
Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science [1]. This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirable simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed with either direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth Orbit samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the Earth Return
Xiao, Ning-Cong; Li, Yan-Feng; Wang, Zhonglai; Peng, Weiwen; Huang, Hong-Zhong
2013-01-01
In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to cal...
Catarino, Rosa; Vassilakos, Pierre; Bilancioni, Aline; Vanden Eynde, Mathieu; Meyer-Hamme, Ulrike; Menoud, Pierre-Alain; Guerry, Frédéric; Petignat, Patrick
2015-01-01
Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
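Agreement between collection methods was measured with the kappa statistic. For reference, Cohen's kappa for two paired label sequences can be computed as follows; this is a generic sketch, not the study's statistical code.

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two equal-length label sequences:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(a) == len(b) and a
    n = float(len(a))
    labels = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    p_exp = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_obs - p_exp) / (1.0 - p_exp)
```

Perfect agreement gives kappa = 1, chance-level agreement gives kappa = 0; the reported kappas of 0.34 (s-FTA vs. s-DRY) and 0.56 (self-HPV vs. Dr-WET) fall between these extremes.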
Directory of Open Access Journals (Sweden)
Marzieh Kafeshani
2015-01-01
Background: Insulin receptor substrate 1 (IRS1) is a main substrate for the insulin receptor and controls insulin signaling in skeletal muscle, adipose tissue, and the vasculature, so it is an important candidate gene for insulin resistance (IR). We aimed to compare the effects of the Dietary Approaches to Stop Hypertension (DASH) diet and Usual Dietary Advice (UDA) on IRS1 gene expression in women at risk for cardiovascular disease. Materials and Methods: A randomized controlled clinical trial was performed in 44 women at risk for cardiovascular disease. Participants were randomly assigned to the UDA diet or the DASH diet. The DASH diet was rich in fruits, vegetables, whole grains, and low-fat dairy products and low in saturated fat, total fat, cholesterol, refined grains, and sweets, with a total of 2400 mg/day sodium. The UDA diet was a regular diet with healthy dietary advice. Gene expression was assessed by real-time polymerase chain reaction at the start of the study and after 12 weeks. The independent-samples t-test and paired-samples t-test were used to compare means of all variables between and within the two groups, respectively. Results: IRS1 gene expression increased in the DASH group compared with the UDA group (P = 0.00). Weight and waist circumference decreased significantly in the DASH group (P < 0.05), but the difference between the two groups was not significant. Conclusion: The DASH diet increased IRS1 gene expression and probably has beneficial effects on IR risk.
Zhou, Fuqun; Zhang, Aining
2016-10-25
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
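The variable-importance idea above can be illustrated with permutation importance: shuffle one feature and measure how much held-out accuracy drops. This toy sketch uses a 1-nearest-neighbour classifier on synthetic two-"band" data as a stand-in for Random Forests on MODIS bands; all data and names are invented, and it is not the study's pipeline.

```python
import random

def one_nn_accuracy(train, test):
    """Accuracy of a 1-nearest-neighbour classifier (squared Euclidean distance)."""
    correct = 0
    for features, label in test:
        nearest = min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], features)))
        correct += (nearest[1] == label)
    return correct / len(test)

rng = random.Random(0)

def sample(label):
    # "Band" 0 separates the two land-cover classes; "band" 1 is pure noise
    return ([label + rng.gauss(0, 0.2), rng.gauss(0, 0.3)], label)

train = [sample(i % 2) for i in range(100)]
test = [sample(i % 2) for i in range(100)]
baseline = one_nn_accuracy(train, test)

importances = []
for band in range(2):
    rows = [list(features) for features, _ in test]
    column = [row[band] for row in rows]
    rng.shuffle(column)                      # break the band/label link
    for row, value in zip(rows, column):
        row[band] = value
    permuted = [(row, label) for row, (_, label) in zip(rows, test)]
    importances.append(baseline - one_nn_accuracy(train, permuted))

print(importances)  # the informative band loses far more accuracy when permuted
```

Ranking bands this way is one route to the "optimal subset" selection the abstract describes: low-importance bands can be dropped with little accuracy cost.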
A novel approach for small sample size family-based association studies: sequential tests.
Ilk, Ozlem; Rajabli, Farid; Dungul, Dilay Ciglidag; Ozdag, Hilal; Ilk, Hakki Gokhan
2011-08-01
In this paper, we propose a sequential probability ratio test (SPRT) to overcome the problem of limited samples in studies related to complex genetic diseases. The results of this novel approach are compared with those obtained from the traditional transmission disequilibrium test (TDT) on simulated data. Whereas TDT classifies single-nucleotide polymorphisms (SNPs) into only two groups (SNPs associated with the disease and the others), SPRT has the flexibility of assigning SNPs to a third group, that is, those for which we do not have enough evidence and should keep sampling. It is shown that SPRT results in lower rates of false positives and false negatives, as well as better accuracy and sensitivity values for classifying SNPs, when compared with TDT. By using SPRT, data with small sample sizes become usable for accurate association analysis.
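Wald's SPRT with its characteristic three outcomes (accept, reject, keep sampling) can be sketched for a simple Bernoulli setting. The hypotheses and error rates below are illustrative only, not those of the study.

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli data: H0 p=p0 vs H1 p=p1 (p1 > p0).
    Returns ('H0' | 'H1' | 'continue', number of samples used)."""
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        llr += math.log(p1 / p0) if x else math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return 'H1', n
        if llr <= lower:
            return 'H0', n
    return 'continue', len(observations)   # the third group: keep sampling

print(sprt([1] * 20, p0=0.5, p1=0.7))    # decides for H1 early
print(sprt([0] * 20, p0=0.5, p1=0.7))    # decides for H0 early
print(sprt([1, 0] * 3, p0=0.5, p1=0.7))  # not enough evidence yet
```

The 'continue' outcome is exactly the third SNP group the abstract describes: markers for which the evidence is still insufficient either way.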
Liu, Xiao
2017-03-21
Privacy risks of recommender systems have attracted increasing attention. Users' private data are often collected by a possibly untrusted recommender system in order to provide high-quality recommendations. Meanwhile, malicious attackers may utilize recommendation results to make inferences about other users' private data. Existing approaches focus either on keeping users' private data protected during recommendation computation or on preventing the inference of any single user's data from the recommendation result. However, none is designed both to hide users' private data and to prevent privacy inference. To achieve this goal, we propose in this paper a hybrid approach for privacy-preserving recommender systems that combines differential privacy (DP) with randomized perturbation (RP). We theoretically show that the noise added by RP has a limited effect on recommendation accuracy and that the noise added by DP can be well controlled based on the sensitivity analysis of functions on the perturbed data. Extensive experiments on three large-scale real-world datasets show that the hybrid approach generally provides more privacy protection with acceptable recommendation accuracy loss, and surprisingly sometimes achieves better privacy without sacrificing accuracy, thus validating its feasibility in practice.
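The DP side of such a hybrid can be illustrated with the standard Laplace mechanism: noise scaled to a query's sensitivity divided by epsilon. This is a generic sketch, not the paper's algorithm; the rating data and bounds are invented.

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw Laplace(0, scale) noise by inverse-CDF sampling from a uniform."""
    u = rng.random() - 0.5
    return -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))

def private_sum(ratings, epsilon, max_rating, rng):
    """epsilon-DP sum of bounded ratings: one user changes the sum by at most
    max_rating (the sensitivity), so Laplace(max_rating / epsilon) suffices."""
    scale = max_rating / epsilon
    return sum(ratings) + laplace_noise(scale, rng)

rng = random.Random(42)
ratings = [4, 5, 3, 4, 2]  # hypothetical item ratings on a 1-5 scale
print(private_sum(ratings, epsilon=1.0, max_rating=5, rng=rng))
```

Smaller epsilon means a larger noise scale and stronger privacy; the sensitivity analysis mentioned in the abstract is what determines the `max_rating` bound for each released function.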
Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey
Directory of Open Access Journals (Sweden)
Steven R. Corman
2013-12-01
Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media are still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.
Fowkes, F G; Lowe, G D; Rumley, A; Lennie, S E; Smith, F B; Donnan, P T
1993-05-01
Blood viscosity is elevated in hypertensive subjects, but the association of viscosity with arterial blood pressure in the general population, and the influence of social, lifestyle and disease characteristics on this association, are not established. In the Edinburgh Artery Study, 1592 men and women aged 55-74 years selected randomly from the general population attended a university clinic. A fasting blood sample was taken for the measurement of blood viscosity and its major determinants (haematocrit, plasma viscosity and fibrinogen). Systolic pressure was related univariately to blood viscosity, plasma viscosity and body mass index. Diastolic pressure was related univariately to blood viscosity and plasma viscosity, and the association between blood viscosity and systolic pressure was confined to males. Blood viscosity was associated equally with systolic and diastolic pressures in males, and remained independently related on multivariate analysis adjusting for age, sex, body mass index, social class, smoking, alcohol intake, exercise, angina, HDL and non-HDL cholesterol, diabetes mellitus, plasma viscosity, fibrinogen, and haematocrit.
International Nuclear Information System (INIS)
Ito, Motohiro; Endo, Tomohiro; Yamamoto, Akio; Kuroda, Yusuke; Yoshii, Takashi
2017-01-01
The bias factor method based on the random sampling technique is applied to the benchmark problem of Peach Bottom Unit 2. Validity and availability of the present method, i.e. correction of calculation results and reduction of uncertainty, are confirmed in addition to features and performance of the present method. In the present study, core characteristics in cycle 3 are corrected with the proposed method using predicted and 'measured' critical eigenvalues in cycles 1 and 2. As the source of uncertainty, variance-covariance of cross sections is considered. The calculation results indicate that bias between predicted and measured results, and uncertainty owing to cross section can be reduced. Extension to other uncertainties such as thermal hydraulics properties will be a future task. (author)
A New Approach To Soil Sampling For Risk Assessment Of Nutrient Mobilisation.
Jonczyk, J. C.; Owen, G. J.; Snell, M. A.; Barber, N.; Benskin, C.; Reaney, S. M.; Haygarth, P.; Quinn, P. F.; Barker, P. A.; Aftab, A.; Burke, S.; Cleasby, W.; Surridge, B.; Perks, M. T.
2016-12-01
Traditionally, risks of nutrient and sediment losses from soils are assessed through a combination of field soil nutrient values from soil samples taken over the whole field and the proximity of the field to water courses. The field-average nutrient concentration of the soil is used by farmers to determine fertiliser needs. These data are often used by scientists to assess the risk of nutrient losses to water courses, though they are not really fit for this purpose. The Eden Demonstration Test Catchment (http://www.edendtc.org.uk/) is a research project based in the River Eden catchment, NW UK, with the aim of cost-effectively mitigating diffuse pollution from agriculture whilst maintaining agricultural productivity. Three instrumented focus catchments have been monitored since 2011, providing high-resolution in-stream chemistry and ecological data, alongside some spatial data on soils, land use and nutrient inputs. An approach to mitigation was demonstrated in a small sub-catchment, where surface runoff was identified as the key driver of nutrient losses, using a suite of runoff attenuation features. Other issues identified were management of hard-standings and soil compaction. A new approach for evaluating nutrient losses from soils is assessed in the Eden DTC project. The Sensitive Catchment Integrated Modelling and Prediction (SCIMAP) model is a risk-mapping framework designed to identify where in the landscape diffuse pollution is most likely to be originating (http://www.scimap.org.uk), and it was used to look at the spatial pattern of erosion potential. The aim of this work was to assess whether erosion potential identified through the model could be used to inform a new soil sampling strategy, to better assess the risk of erosion and the risk of transport of sediment-bound phosphorus. Soil samples were taken from areas with different erosion potential. The chemical analyses of these targeted samples are compared to those obtained using more traditional sampling approaches.
Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M
2018-07-01
Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine and cannabis use. Two-sample MR was employed to estimate bidirectional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week) and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these were not supported by the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine and cannabis use. © 2018 Society for the Study of Addiction.
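The two-sample MR point estimate behind such analyses is, in its simplest inverse-variance-weighted (IVW) form, a weighted average of per-SNP Wald ratios. The sketch below is generic, and the GWAS summary statistics are made up for illustration.

```python
def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted causal estimate: per-SNP Wald ratios
    (beta_outcome / beta_exposure) weighted by (beta_exposure / se_outcome)^2."""
    ratios = [bo / bx for bx, bo in zip(beta_exposure, beta_outcome)]
    weights = [(bx / se) ** 2 for bx, se in zip(beta_exposure, se_outcome)]
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# Hypothetical summary statistics for three instruments (SNPs)
bx = [0.10, 0.20, 0.15]   # SNP -> exposure effects
by = [0.05, 0.10, 0.075]  # SNP -> outcome effects (exactly 0.5 * bx here)
se = [0.01, 0.02, 0.015]  # standard errors of the outcome effects
print(ivw_estimate(bx, by, se))  # recovers the causal slope of 0.5
```

The MR-Egger and weighted-median sensitivity analyses mentioned in the abstract reweight or generalize this same set of per-SNP ratios to be more robust to pleiotropic instruments.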
Chen, Chia-Hsiu; Tanaka, Kenichi; Funatsu, Kimito
2018-04-22
The Quantitative Structure-Property Relationship (QSPR) approach was applied to study the fluorescence absorption wavelengths and emission wavelengths of 413 fluorescent dyes in different solvent conditions. The dyes included chromophore derivatives of cyanine, xanthene, coumarin, pyrene, naphthalene, anthracene, etc., with wavelengths ranging from 250 nm to 800 nm. An ensemble method, random forest (RF), was employed to construct nonlinear prediction models, compared with linear partial least squares and nonlinear support vector machine regression models. Quantum chemical descriptors derived from the density functional theory method and solvent information were also used in constructing the models. The best prediction results were obtained from the RF model, with squared correlation coefficients (r²) of 0.940 and 0.905 for λabs and λem, respectively. The descriptors used in the models are discussed in detail in this report by comparing their feature importance in the RF model.
Directory of Open Access Journals (Sweden)
K. Mohaideen Pitchai
2017-07-01
Wireless Sensor Networks (WSNs) consist of a large number of small sensors with restricted energy. Prolonged network lifespan, scalability, node mobility and load balancing are important needs for several WSN applications. Clustering the sensor nodes is an efficient technique to reach these goals. WSNs have the characteristic of topology dynamics because of factors like energy conservation and node movement, which leads to the Dynamic Load Balanced Clustering Problem (DLBCP). In this paper, an Elitism-based Random Immigrant Genetic Approach (ERIGA) is proposed to solve the DLBCP, which adapts to topology dynamics. ERIGA uses dynamic Genetic Algorithm (GA) components for solving the DLBCP. The performance of the load-balanced clustering process is enhanced with the help of this dynamic GA. As a result, ERIGA succeeds in electing suitable cluster heads, which balances the network load and increases the lifespan of the network.
Elearning approaches to prevent weight gain in young adults: A randomized controlled study.
Nikolaou, Charoula Konstantia; Hankey, Catherine Ruth; Lean, Michael Ernest John
2015-12-01
Preventing obesity among young adults should be a preferred public health approach, given the limited efficacy of treatment interventions. This study examined whether weight gain can be prevented by online approaches using two different behavioral models, one overtly directed at obesity and the other covertly. A three-group parallel randomized controlled intervention was conducted in 2012-2013; 20,975 young adults were allocated a priori to one control and two "treatment" groups. The two treatment groups were offered online courses over 19 weeks on (1) personal weight control ("Not the Ice Cream Van," NTICV) and (2) political, environmental, and social issues around food ("Goddess Demetra," GD). The control group received no contact. The primary outcome was weight change over 40 weeks. Within-group 40-week weight changes differed between groups (P < 0.001): control (n = 2,134): +2.0 kg (95% CI = 1.5, 2.3 kg); NTICV (n = 1,810): -1.0 kg (95% CI = -1.3, -0.5); and GD (n = 2,057): -1.35 kg (95% CI = -1.4, -0.7). Relative risks for weight gain were: NTICV = 0.13 (95% CI = 0.10, 0.15), P < 0.0001; GD = 0.07 (95% CI = 0.05, 0.10), P < 0.0001. Both interventions were associated with prevention of the weight gain observed among control subjects. This low-cost intervention could be widely transferable as one tool against the obesity epidemic. Outside the randomized controlled trial setting, it could be enhanced using supporting advertising and social media. © 2015 The Obesity Society.
Zhang, Hua; Yang, Hui; Li, Hongxing; Huang, Guangnan; Ding, Zheyi
2018-04-01
The attenuation of random noise is important for improving the signal-to-noise ratio (SNR). However, the precondition for most conventional denoising methods is that the noisy data must be sampled on a uniform grid, making these methods unsuitable for non-uniformly sampled data. In this paper, a denoising method capable of regularizing the noisy data from a non-uniform grid to a specified uniform grid is proposed. First, the denoising is performed for every time slice extracted from the 3D noisy data along the source and receiver directions; then the 2D non-equispaced fast Fourier transform (NFFT) is introduced into the conventional fast discrete curvelet transform (FDCT). The non-equispaced fast discrete curvelet transform (NFDCT) can be achieved based on the regularized inversion of an operator that links the uniformly sampled curvelet coefficients to the non-uniformly sampled noisy data. The uniform curvelet coefficients can be calculated using the spectral projected-gradient inversion algorithm for ℓ1-norm problems. Local threshold factors are then chosen for the uniform curvelet coefficients at each decomposition scale, yielding effective curvelet coefficients for each scale. Finally, the conventional inverse FDCT is applied to the effective curvelet coefficients. This completes the proposed 3D denoising method using the non-equispaced curvelet transform in the source-receiver domain. Examples with synthetic and real data reveal the effectiveness of the proposed approach for noise attenuation of non-uniformly sampled data compared with the conventional FDCT method and wavelet transformation.
A Novel Method of Adrenal Venous Sampling via an Antecubital Approach
Energy Technology Data Exchange (ETDEWEB)
Jiang, Xiongjing, E-mail: jxj103@hotmail.com; Dong, Hui; Peng, Meng; Che, Wuqiang; Zou, Yubao; Song, Lei; Zhang, Huimin; Wu, Haiying [Chinese Academy of Medical Sciences and Peking Union Medical College, Department of Cardiology, Fuwai Hospital, National Center for Cardiovascular Disease (China)
2017-03-15
Purpose: Currently, almost all adrenal venous sampling (AVS) procedures are performed by femoral vein access. The purpose of this study was to establish the technique of AVS via an antecubital approach and evaluate its safety and feasibility. Materials and Methods: From January 2012 to June 2015, 194 consecutive patients diagnosed with primary aldosteronism underwent AVS via an antecubital approach without ACTH stimulation. Catheters used for bilateral adrenal cannulations were recorded. The success rate of bilateral adrenal sampling, operation time, fluoroscopy time, dosage of contrast, and incidence of complications were calculated. Results: A 5F MPA1 catheter was first used to attempt right adrenal cannulation in all patients. Cannulation of the right adrenal vein was successfully performed in 164 (84.5%) patients. The 5F JR5, Cobra2, and TIG catheters were the ultimate catheters for right adrenal cannulation in 16 (8.2%), 5 (2.6%), and 9 (4.6%) patients, respectively. For left adrenal cannulation, JR5 and Cobra2 catheters were used in 19 (9.8%) and 10 (5.2%) patients, respectively, while TIG catheters were used in the remaining 165 (85.1%) patients. The rate of successful adrenal sampling on the right, left, and bilateral sides was 91.8%, 93.3%, and 87.6%, respectively. The mean operation time was 16.3 ± 4.3 minutes, the mean fluoroscopy time was 4.7 ± 1.3 minutes, and the mean use of contrast was 14.3 ± 4.7 ml. The incidence of adrenal hematoma was 1.0%. Conclusions: This study showed that AVS via an antecubital approach was safe and feasible, with a high rate of successful sampling.
Biomarker discovery in heterogeneous tissue samples -taking the in-silico deconfounding approach
Directory of Open Access Journals (Sweden)
Parida Shreemanta K
2010-01-01
Background: For heterogeneous tissues, such as blood, measurements of gene expression are confounded by the relative proportions of the cell types involved. Conclusions have to rely on estimation of gene expression signals for homogeneous cell populations, e.g. by applying micro-dissection, fluorescence-activated cell sorting, or in-silico deconfounding. We studied the feasibility and validity of a non-negative matrix decomposition algorithm using experimental gene expression data for blood and sorted cells from the same donor samples. Our objective was to optimize the algorithm regarding detection of differentially expressed genes and to enable its use for classification in the difficult scenario of reversely regulated genes. This would be of importance for the identification of candidate biomarkers in heterogeneous tissues. Results: Experimental data and simulation studies involving noise parameters estimated from these data revealed that for valid detection of differential gene expression, quantile normalization and use of non-log data are optimal. We demonstrate the feasibility of predicting proportions of constituting cell types from gene expression data of single samples, as a prerequisite for a deconfounding-based classification approach. Classification cross-validation errors with and without using deconfounding results are reported, as well as sample-size dependencies. Implementation of the algorithm, simulation and analysis scripts are available. Conclusions: The deconfounding algorithm without decorrelation, using quantile normalization on non-log data, is proposed for biomarkers that are difficult to detect, and for cases where confounding by varying proportions of cell types is the suspected reason. In this case, a deconfounding ranking approach can be used as a powerful alternative to, or complement of, other statistical learning approaches to define candidate biomarkers for molecular diagnosis and prediction in biomedicine.
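In the simplest two-cell-type case, the deconfounding idea (a mixed sample's expression as a proportion-weighted combination of pure profiles) has a closed-form least-squares solution for the mixing proportion. The profiles below are invented, and real algorithms such as the non-negative matrix decomposition discussed above handle many cell types with unknown profiles.

```python
def estimate_proportion(mixture, pure_a, pure_b):
    """Least-squares estimate of p in: mixture ≈ p*pure_a + (1-p)*pure_b,
    gene by gene. Closed form: project (mixture - b) onto (a - b)."""
    num = sum((x - b) * (a - b) for x, a, b in zip(mixture, pure_a, pure_b))
    den = sum((a - b) ** 2 for a, b in zip(pure_a, pure_b))
    return num / den

# Hypothetical expression profiles over five genes for two sorted cell types
pure_a = [10.0, 2.0, 8.0, 1.0, 5.0]
pure_b = [1.0, 9.0, 2.0, 6.0, 5.0]
p_true = 0.3
blood = [p_true * a + (1 - p_true) * b for a, b in zip(pure_a, pure_b)]

print(estimate_proportion(blood, pure_a, pure_b))  # recovers ~0.3
```

Predicting such proportions from a single sample is the prerequisite step the Results section describes; with the proportions in hand, cell-type-specific expression can be unmixed.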
A novel approach to process carbonate samples for radiocarbon measurements with helium carrier gas
Energy Technology Data Exchange (ETDEWEB)
Wacker, L., E-mail: wacker@phys.ethz.ch [Laboratory of Ion Beam Physics, ETH Zurich, 8093 Zurich (Switzerland); Fueloep, R.-H. [Institute of Geology and Mineralogy, University of Cologne, 50674 Cologne (Germany); Hajdas, I. [Laboratory of Ion Beam Physics, ETH Zurich, 8093 Zurich (Switzerland); Molnar, M. [Laboratory of Ion Beam Physics, ETH Zurich, 8093 Zurich (Switzerland); Institute of Nuclear Research, Hungarian Academy of Sciences, 4026 Debrecen (Hungary); Rethemeyer, J. [Institute of Geology and Mineralogy, University of Cologne, 50674 Cologne (Germany)
2013-01-15
Most laboratories prepare carbonate samples for radiocarbon analysis by acid decomposition in evacuated glass tubes and subsequent reduction of the evolved CO2 to graphite in self-made reduction manifolds. This process is time consuming and labor intensive. In this work, we have tested a new approach for the preparation of carbonate samples, in which any high-vacuum system is avoided and helium is used as a carrier gas. The liberation of CO2 from carbonates with phosphoric acid is performed in a similar way as is often done in stable isotope ratio mass spectrometry, where CO2 is released with acid in a septum-sealed tube under a helium atmosphere. The formed CO2 is later flushed in a helium flow by means of a double-walled needle mounted from the tubes to the zeolite trap of the automated graphitization equipment (AGE), which essentially replaces the elemental analyzer normally used for the combustion of organic samples. The process can be fully automated, from sampling the released CO2 in the septum-sealed tubes with a commercially available auto-sampler to graphitization with the AGE. The new method yields low sample blanks of about 50,000 years. Results for processed reference materials (IAEA-C2, FIRI-C) are in agreement with their consensus values.
Henry, David; Dymnicki, Allison B; Mohatt, Nathaniel; Allen, James; Kelly, James G
2015-10-01
Qualitative methods potentially add depth to prevention research but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed-methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed-methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-means clustering, and latent class analysis produced similar levels of accuracy with binary data and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a "real-world" example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities.
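A minimal two-cluster k-means on binary code profiles shows the core idea; a real analysis would use library implementations and compare against hierarchical clustering and latent class analysis, as the article does, and the coded interview data here are fabricated.

```python
def kmeans_binary(profiles, iters=10):
    """Two-cluster k-means on 0/1 code vectors; centroids are code frequencies.
    Deterministic init: the first profile, plus the profile farthest from it."""
    def sqdist(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c))
    centroids = [list(map(float, profiles[0])),
                 list(map(float, max(profiles, key=lambda p: sqdist(p, profiles[0]))))]
    assign = [0] * len(profiles)
    for _ in range(iters):
        # Assign each participant's code profile to the nearest centroid
        assign = [min((0, 1), key=lambda c: sqdist(p, centroids[c])) for p in profiles]
        # Recompute centroids as per-code frequencies of their members
        for c in (0, 1):
            members = [p for p, a in zip(profiles, assign) if a == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

# Hypothetical interview codes: participants 0-2 share one motive profile, 3-5 another
profiles = [
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1],
    [0, 0, 0, 1, 1, 0],
    [0, 0, 0, 0, 1, 1],
]
print(kmeans_binary(profiles))  # the two motive profiles separate cleanly
```

Grouping participants with similar code profiles like this is the quantitative half of the mixed-methods integration the article proposes.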
International Nuclear Information System (INIS)
Harper, W.V.; Gupta, S.K.
1983-10-01
A computer code was used to study steady-state flow for a hypothetical borehole scenario. The model consists of three coupled equations with only eight parameters and three dependent variables. This study focused on steady-state flow as the performance measure of interest. Two different approaches to sensitivity/uncertainty analysis were used on this code. One approach, based on Latin Hypercube Sampling (LHS), is a statistical sampling method, whereas, the second approach is based on the deterministic evaluation of sensitivities. The LHS technique is easy to apply and should work well for codes with a moderate number of parameters. Of deterministic techniques, the direct method is preferred when there are many performance measures of interest and a moderate number of parameters. The adjoint method is recommended when there are a limited number of performance measures and an unlimited number of parameters. This unlimited number of parameters capability can be extremely useful for finite element or finite difference codes with a large number of grid blocks. The Office of Nuclear Waste Isolation will use the technique most appropriate for an individual situation. For example, the adjoint method may be used to reduce the scope to a size that can be readily handled by a technique such as LHS. Other techniques for sensitivity/uncertainty analysis, e.g., kriging followed by conditional simulation, will be used also. 15 references, 4 figures, 9 tables
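Latin Hypercube Sampling itself is compact: each parameter's range is split into N equal-probability strata, each stratum is sampled exactly once, and strata are paired across parameters by random permutation. The sketch below is a generic illustration, not the referenced code.

```python
import random

def latin_hypercube(n_samples, n_params, rng):
    """n_samples points in [0,1)^n_params with one point per stratum per axis."""
    columns = []
    for _ in range(n_params):
        strata = list(range(n_samples))
        rng.shuffle(strata)  # pair strata randomly across parameters
        columns.append([(s + rng.random()) / n_samples for s in strata])
    return [list(point) for point in zip(*columns)]

rng = random.Random(0)
design = latin_hypercube(8, 3, rng)  # e.g. 8 model runs over 3 flow parameters
for axis in range(3):
    # Each axis hits every stratum exactly once -- the defining LHS property
    print(sorted(int(p[axis] * 8) for p in design))  # prints [0, 1, 2, 3, 4, 5, 6, 7]
```

The stratification is why LHS covers a moderate-dimensional parameter space with far fewer runs than simple random sampling, matching the abstract's recommendation of LHS for codes with a moderate number of parameters.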
A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications
Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco
2015-01-01
This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion-planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision-avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in low Earth orbit.
Champault, G G; Rizk, N; Catheline, J M; Turner, R; Boutelier, P
1997-12-01
In a prospective randomized trial comparing the totally preperitoneal (TPP) laparoscopic approach and the Stoppa procedure (open), 100 patients with inguinal hernias (Nyhus IIIA, IIIB, IV) were followed over a 3-year period. Both groups were epidemiologically comparable. In the laparoscopic group, operating time was significantly longer (p = 0.01), but hospital stay (3.2 vs. 7.3 days) and delay in return to work (17 vs. 35 days) were significantly reduced (p = 0.01). Postoperative comfort (less pain) was better (p = 0.001) after laparoscopy. In this group, morbidity was also reduced (4 vs. 20%; p = 0.02). The mean follow-up was 605 days, and 93% of the patients were reviewed at 3 years. There were three (6%) recurrences after TPP, especially at the beginning of the surgeon's learning curve, versus one for the Stoppa procedure (NS). For bilateral hernias, the authors suggest the use of a large prosthesis rather than two small ones to minimize the likelihood of recurrence. In the conditions described, the laparoscopic (TPP) approach to inguinal hernia treatment appears to have the same long-term recurrence rate as the open (Stoppa) procedure but a real advantage in the early postoperative period.
Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E
2017-09-10
Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear-mixed effects model that incorporates a baseline-time interaction term that can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc.
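The classical conditional model that the CRS reproduces with two measures per child is simply the residual from regressing attained size on baseline size, which removes the regression-to-the-mean artifact. The sketch below is a generic illustration with invented z-scores, not the cohort analysis.

```python
def conditional_growth(baseline, attained):
    """Residuals from the OLS regression of attained size on baseline size:
    positive = faster-than-expected growth given where the child started."""
    n = len(baseline)
    mx = sum(baseline) / n
    my = sum(attained) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(baseline, attained)) / \
            sum((x - mx) ** 2 for x in baseline)
    intercept = my - slope * mx
    return [y - (intercept + slope * x) for x, y in zip(baseline, attained)]

# Hypothetical length-for-age z-scores at birth and 12 months for six infants
birth = [-1.2, -0.5, 0.0, 0.4, 1.0, 1.5]
month12 = [-1.5, -0.9, -0.1, 0.1, 0.9, 1.6]
residuals = conditional_growth(birth, month12)
print([round(r, 3) for r in residuals])
```

By construction these residuals are uncorrelated with baseline size, unlike unconditional velocities (attained minus baseline), which is why the two families of metrics classify "slow growers" so differently in the abstract.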
Burger, Rulof P; McLaren, Zoë M
2017-09-01
The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.
Arrow, Peter; Klobas, Elizabeth
2015-12-01
A pragmatic randomized control trial was undertaken to compare the minimum intervention dentistry (MID) approach, based on the atraumatic restorative treatment procedures (MID-ART: Test), against the standard care approach (Control) to treat early childhood caries in a primary care setting. Consenting parent/child dyads were allocated to the Test or Control group using stratified block randomization. Inclusion and exclusion criteria were applied. Participants were examined at baseline and at follow-up by two calibrated examiners blind to group allocation status (κ = 0.77), and parents completed a questionnaire at baseline and follow-up. Dental therapists trained in MID-ART provided treatment to the Test group and dentists treated the Control group using standard approaches. The primary outcome of interest was the number of children who were referred for specialist pediatric care. Secondary outcomes were the number of teeth treated, changes in child oral health-related quality of life and dental anxiety and parental perceptions of care received. Data were analyzed on an intention to treat basis; risk ratio for referral for specialist care, test of proportions, Wilcoxon rank test and logistic regression were used. Three hundred and seventy parents/carers were initially screened; 273 children were examined at baseline and 254 were randomized (Test = 127; Control = 127): mean age = 3.8 years, SD 0.90; 59% male, mean dmft = 4.9, SD 4.0. There was no statistically significant difference in age, sex, baseline caries experience or child oral health-related quality of life between the Test and Control group. At follow-up (mean interval 11.4 months, SD 3.1 months), 220 children were examined: Test = 115, Control = 105. Case-notes review of 231 children showed Test = 6 (5%) and Control = 53 (49%) were referred for specialist care, P < 0.0001. More teeth were filled in the Test group (mean = 2.93, SD 2.48) than in the Control group (mean = 1.54, SD
Susukida, Ryoko; Crum, Rosa M; Stuart, Elizabeth A; Ebnesajjad, Cyrus; Mojtabai, Ramin
2016-07-01
To compare the characteristics of individuals participating in randomized controlled trials (RCTs) of treatments of substance use disorder (SUD) with individuals receiving treatment in usual care settings, and to provide a summary quantitative measure of differences between the characteristics of these two groups using propensity score methods. Design: Analyses using data from RCT samples from the National Institute on Drug Abuse Clinical Trials Network (CTN) and target populations of patients drawn from the Treatment Episodes Data Set-Admissions (TEDS-A). Settings: Multiple clinical trial sites and nation-wide usual SUD treatment settings in the United States. A total of 3592 individuals from 10 CTN samples and 1 602 226 individuals selected from TEDS-A between 2001 and 2009. Measurements: The propensity scores for enrolling in the RCTs were computed based on the following nine observable characteristics: sex, race/ethnicity, age, education, employment status, marital status, admission to treatment through criminal justice, intravenous drug use and the number of prior treatments. Findings: The proportion of those with ≥ 12 years of education and the proportion of those who had full-time jobs were significantly higher among RCT samples than among target populations (in seven and nine trials, respectively). The difference in the mean propensity scores between the RCTs and the target population was 1.54 standard deviations and was statistically significant, indicating that RCT participants differ systematically from individuals receiving treatment in usual care settings. Notably, RCT participants tend to have more years of education and a greater likelihood of full-time work compared with people receiving care in usual care settings. © 2016 Society for the Study of Addiction.
Mohd Fo'ad Rohani; Mohd Aizaini Maarof; Ali Selamat; Houssain Kettani
2010-01-01
This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second-Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate than other estimation methods known in the literature. The probability of LoSS detection is introduced to measure continuous LoSS detection performance...
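Detection schemes of this kind rest on estimating the self-similarity (Hurst) parameter H within each window. A minimal sketch using the aggregated-variance estimator, a common stand-in here since the paper's own Optimization Method is not specified in the abstract:

```python
import numpy as np

def hurst_aggvar(x, scales):
    """Aggregated-variance estimate of the Hurst parameter H.
    For a second-order self-similar process, Var(X^(m)) ~ m^(2H-2),
    so the slope of log Var(X^(m)) vs. log m gives H."""
    logs_m, logs_v = [], []
    for m in scales:
        n = len(x) // m
        agg = x[:n * m].reshape(n, m).mean(axis=1)   # block means at scale m
        logs_m.append(np.log(m))
        logs_v.append(np.log(agg.var()))
    slope = np.polyfit(logs_m, logs_v, 1)[0]
    return 1 + slope / 2

rng = np.random.default_rng(3)
x = rng.normal(size=2**14)                  # white noise: H should be near 0.5
H = hurst_aggvar(x, [2, 4, 8, 16, 32, 64])
print(H)
```

A sliding-window version of this estimator, with a threshold on H, is one way to flag loss of self-similarity in traffic traces.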
Perfluoroalkyl substances in aquatic environment-comparison of fish and passive sampling approaches.
Cerveny, Daniel; Grabic, Roman; Fedorova, Ganna; Grabicova, Katerina; Turek, Jan; Kodes, Vit; Golovko, Oksana; Zlabek, Vladimir; Randak, Tomas
2016-01-01
The concentrations of seven perfluoroalkyl substances (PFASs) were investigated in 36 European chub (Squalius cephalus) individuals from six localities in the Czech Republic. Chub muscle and liver tissue were analysed at all sampling sites. In addition, analyses of 16 target PFASs were performed in Polar Organic Chemical Integrative Samplers (POCISs) deployed in the water at the same sampling sites. We evaluated the possibility of using passive samplers as a standardized method for monitoring PFAS contamination in aquatic environments and the mutual relationships between determined concentrations. Only perfluorooctane sulphonate was above the LOQ in fish muscle samples and 52% of the analysed fish individuals exceeded the Environmental Quality Standard for water biota. Fish muscle concentration is also particularly important for risk assessment of fish consumers. The comparison of fish tissue results with published data showed the similarity of the Czech results with those found in Germany and France. However, fish liver analysis and the passive sampling approach resulted in different fish exposure scenarios. The total concentration of PFASs in fish liver tissue was strongly correlated with POCIS data, but pollutant patterns differed between these two matrices. The differences could be attributed to the metabolic activity of the living organism. In addition to providing a different view regarding the real PFAS cocktail to which the fish are exposed, POCISs fulfil the Three Rs strategy (replacement, reduction, and refinement) in animal testing. Copyright © 2015 Elsevier Inc. All rights reserved.
Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach
Ferri, Gabriele; Cococcioni, Marco; Alvarez, Alberto
2015-01-01
This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists of planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are common operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing-based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download ocean field data from the MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided and show that So
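The sampling on-demand idea can be caricatured in a few lines: a user-supplied objective map, a cost that penalizes only uncertainty remaining above that map (an assumed, simplified form of the Aη criterion), and simulated annealing over glider waypoints. The grid size, the uncertainty-reduction rule and the cooling schedule below are all illustrative, not taken from SoDDS:

```python
import numpy as np

rng = np.random.default_rng(5)
u = np.full((20, 20), 1.0)            # prior uncertainty over the area
target = np.full((20, 20), 0.4)       # user-supplied objective map
target[5:10, 5:10] = 0.1              # sub-region needing low uncertainty

def cost(waypoints):
    # penalize only uncertainty still above the objective map after sampling
    red = u.copy()
    for i, j in waypoints:
        # toy assimilation: sampling a cell halves uncertainty in a 3x3 patch
        red[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2] *= 0.5
    return float(np.maximum(red - target, 0.0).sum())

cur = [tuple(rng.integers(0, 20, 2)) for _ in range(8)]   # initial waypoints
init_cost = cost(cur)
best, best_cost, cur_cost, T = cur, init_cost, init_cost, 1.0
for _ in range(2000):
    cand = list(cur)
    k = int(rng.integers(len(cand)))
    # perturb one waypoint by a small random step, clipped to the grid
    cand[k] = tuple(np.clip(np.array(cand[k]) + rng.integers(-2, 3, 2), 0, 19))
    cc = cost(cand)
    if cc < cur_cost or rng.random() < np.exp((cur_cost - cc) / T):
        cur, cur_cost = cand, cc
        if cc < best_cost:
            best, best_cost = cand, cc
    T *= 0.998                         # geometric cooling
print(init_cost, best_cost)
```

A real planner would add glider kinematics, path-geometry constraints and ocean currents to the move generator, but the objective-map-driven cost is the essential difference from plain A-optimal design.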
Li, Ningzhi; Li, Shizhe; Shen, Jun
2017-06-01
In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radio frequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectral range of 10-65 ppm. The amplitude of the decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH = 125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under an inhomogeneous B1 field. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields. In contrast, carboxylic and amide carbons are coupled to protons only via weak long-range 1H-13C scalar couplings, which can be decoupled using low RF power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low power random RF heteronuclear decoupling was safely applied to human brain studies at 7 T. Here, we review the two major decoupling methods and the carboxylic/amide 13C-MRS with low power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. Low RF power decoupling opens the possibility of performing in vivo 13C experiments of the human brain at very high magnetic fields (such as 11.7 T), where the signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than at lower fields.
Directory of Open Access Journals (Sweden)
Nguyen Phuong H
2012-10-01
Background: Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned, given that many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The Preconcept trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of study design, methodology and sample characteristics from baseline survey data and key lessons learned. Methods/design: We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either: (1) 2800 μg folic acid; (2) 60 mg iron and 2800 μg folic acid; or (3) MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion: The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and
Iterative random vs. Kennard-Stone sampling for IR spectrum-based classification task using PLS2-DA
Lee, Loong Chuen; Liong, Choong-Yeun; Jemain, Abdul Aziz
2018-04-01
External testing (ET) is preferred over auto-prediction (AP) or k-fold cross-validation for estimating a more realistic predictive ability of a statistical model. With IR spectra, the Kennard-Stone (KS) sampling algorithm is often used to split the data into training and test sets, i.e. respectively for model construction and for model testing. On the other hand, iterative random sampling (IRS) has not been the favored choice, though it is theoretically more likely to produce a reliable estimate. The aim of this preliminary work is to compare the performance of KS and IRS in sampling a representative training set from an attenuated total reflectance - Fourier transform infrared spectral dataset (of four varieties of blue gel pen inks) for PLS2-DA modeling. The 'best' performance achievable from the dataset is estimated with AP on the full dataset (APF, error). Both IRS (n = 200) and KS were used to split the dataset in the ratio of 7:3. The classic decision rule (i.e. maximum value-based) is employed for new sample prediction via partial least squares - discriminant analysis (PLS2-DA). The error rate of each model was estimated repeatedly via: (a) AP on the full data (APF, error); (b) AP on the training set (APS, error); and (c) ET on the respective test set (ETS, error). A good PLS2-DA model is expected to produce APS, error and ETS, error values similar to the APF, error. Bearing that in mind, the similarities between (a) APS, error vs. APF, error; (b) ETS, error vs. APF, error; and (c) APS, error vs. ETS, error were evaluated using correlation tests (i.e. Pearson and Spearman's rank tests), using series of PLS2-DA models computed from the KS-set and IRS-set, respectively. Overall, models constructed from the IRS-set exhibit more similarity between the internal and external error rates than the respective KS-set, i.e. less risk of overfitting. In conclusion, IRS is more reliable than KS in sampling a representative training set.
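The KS-vs-IRS comparison above hinges on how Kennard-Stone selects its training set. A minimal sketch of the algorithm on synthetic data (random vectors standing in for IR spectra, not the authors' ink dataset):

```python
import numpy as np

def kennard_stone(X, n_train):
    """Kennard-Stone selection: start from the two most distant samples,
    then repeatedly add the sample whose minimum distance to the
    already-selected set is largest (maximin coverage of the data)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    i, j = np.unravel_index(np.argmax(D), D.shape)
    selected = [i, j]
    remaining = [k for k in range(len(X)) if k not in selected]
    while len(selected) < n_train:
        # distance of each remaining sample to its nearest selected sample
        d_min = D[np.ix_(remaining, selected)].min(axis=1)
        nxt = remaining[int(np.argmax(d_min))]
        selected.append(nxt)
        remaining.remove(nxt)
    return selected

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # 20 hypothetical "spectra"
train_idx = kennard_stone(X, 14)       # roughly the 7:3 split used in the study
test_idx = [k for k in range(20) if k not in train_idx]
print(len(train_idx), len(test_idx))
```

Because KS deterministically grabs the extreme points for training, the held-out set tends to be "easy" interior samples, which is one intuition for the optimistic external error rates the study reports for KS relative to repeated random splits.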
Bowden, Jack; Del Greco M, Fabiola; Minelli, Cosetta; Davey Smith, George; Sheehan, Nuala A; Thompson, John R
2016-12-01
MR-Egger regression has recently been proposed as a method for Mendelian randomization (MR) analyses incorporating summary data estimates of causal effect from multiple individual variants, which is robust to invalid instruments. It can be used to test for directional pleiotropy and provides an estimate of the causal effect adjusted for its presence. MR-Egger regression provides a useful additional sensitivity analysis to the standard inverse variance weighted (IVW) approach, which assumes that all variants are valid instruments. Both methods use weights that consider the single nucleotide polymorphism (SNP)-exposure associations to be known, rather than estimated. We call this the 'NO Measurement Error' (NOME) assumption. Causal effect estimates from the IVW approach exhibit weak instrument bias whenever the genetic variants utilized violate the NOME assumption, which can be reliably measured using the F-statistic. The effect of NOME violation on MR-Egger regression has yet to be studied. An adaptation of the I2 statistic from the field of meta-analysis is proposed to quantify the strength of NOME violation for MR-Egger. It lies between 0 and 1, and indicates the expected relative bias (or dilution) of the MR-Egger causal estimate in the two-sample MR context. We call it IGX2. The method of simulation extrapolation is also explored to counteract the dilution. Their joint utility is evaluated using simulated data and applied to a real MR example. In simulated two-sample MR analyses we show that, when a causal effect exists, the MR-Egger estimate of causal effect is biased towards the null when NOME is violated, and the stronger the violation (as indicated by lower values of IGX2), the stronger the dilution. When additionally all genetic variants are valid instruments, the type I error rate of the MR-Egger test for pleiotropy is inflated and the causal effect underestimated. Simulation extrapolation is shown to substantially mitigate these adverse effects. We
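The contrast between IVW (regression through the origin) and MR-Egger (free intercept capturing directional pleiotropy) can be sketched with simulated summary statistics. The effect sizes, standard errors and number of SNPs below are invented for illustration and carry none of the dilution effects the paper analyzes:

```python
import numpy as np

def ivw_and_egger(bx, by, se_by):
    """Weighted regressions of SNP-outcome effects (by) on SNP-exposure
    effects (bx), weighting by 1/se_by^2.
    IVW: no intercept, all instruments assumed valid.
    MR-Egger: intercept term absorbs directional pleiotropy."""
    w = 1.0 / se_by**2
    # IVW slope: weighted regression through the origin
    beta_ivw = np.sum(w * bx * by) / np.sum(w * bx**2)
    # MR-Egger: weighted least squares with intercept
    Xmat = np.column_stack([np.ones_like(bx), bx])
    W = np.diag(w)
    intercept, beta_egger = np.linalg.solve(Xmat.T @ W @ Xmat, Xmat.T @ W @ by)
    return beta_ivw, intercept, beta_egger

rng = np.random.default_rng(1)
bx = rng.uniform(0.1, 0.5, 30)                    # 30 SNP-exposure effects
by = 0.4 * bx + rng.normal(0, 0.02, 30)           # true causal effect 0.4, no pleiotropy
b_ivw, a0, b_egger = ivw_and_egger(bx, by, np.full(30, 0.02))
print(b_ivw, a0, b_egger)
```

With no pleiotropy both slopes recover the causal effect and the Egger intercept sits near zero; the paper's point is that measurement error in bx (a NOME violation) dilutes the Egger slope towards the null, which the IGX2 statistic quantifies.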
Online Problem Solving for Adolescent Brain Injury: A Randomized Trial of 2 Approaches.
Wade, Shari L; Taylor, Hudson Gerry; Yeates, Keith Owen; Kirkwood, Michael; Zang, Huaiyu; McNally, Kelly; Stacin, Terry; Zhang, Nanhua
Adolescent traumatic brain injury (TBI) contributes to deficits in executive functioning and behavior, but few evidence-based treatments exist. We conducted a randomized clinical trial comparing Teen Online Problem Solving with Family (TOPS-Family) with Teen Online Problem Solving with Teen Only (TOPS-TO) or an Internet Resources Comparison (IRC) group. Children aged 11 to 18 years who sustained a complicated mild-to-severe TBI in the previous 18 months were randomly assigned to the TOPS-Family (n = 49), TOPS-TO (n = 51), or IRC (n = 52) group. Parent and self-report measures of externalizing behaviors and executive functioning were completed before treatment and 6 months later. Treatment effects were examined using linear regression models, adjusting for baseline symptom levels. Age, maternal education, and family stresses were examined as moderators. The TOPS-Family group had lower levels of parent-reported executive dysfunction at follow-up than the TOPS-TO group, and differences between the TOPS-Family and IRC groups approached significance. Maternal education moderated improvements in parent-reported externalizing behaviors, with less educated parents in the TOPS-Family group reporting fewer symptoms. On the self-report Behavior Rating Inventory of Executive Functions, treatment efficacy varied with the level of parental stresses: the TOPS-Family group reported greater improvements at low stress levels, whereas the TOPS-TO group reported greater improvement at high stress levels. The TOPS-TO group did not have significantly lower symptoms than the IRC group on any comparison. Findings support the efficacy of online family problem solving to address executive dysfunction and improve externalizing behaviors among youth with TBI from less advantaged households. Treatment with the teen alone may be indicated in high-stress families.
A randomized clinical study of two interceptive approaches to palatally displaced canines.
Baccetti, Tiziano; Leonardi, Maria; Armi, Pamela
2008-08-01
This study evaluated the effectiveness of two interceptive approaches to palatally displaced canines (PDC), i.e. extraction of the primary canines alone or in association with the use of a cervical-pull headgear. The randomized prospective design comprised 75 subjects with PDC (92 maxillary canines) who were randomly assigned to three groups: extraction of the primary canine only (EG), extraction of the primary canine plus cervical-pull headgear (EHG), and an untreated control group (CG). Panoramic radiographs were evaluated at the time of initial observation (T1) and after an average period of 18 months (T2). At T2, the success of canine eruption was evaluated. Between-group statistical comparisons (Kruskal-Wallis tests with Bonferroni correction) were performed on the T1-T2 changes in the diagnostic parameters on panoramic radiographs and on the prevalence rates of successful canine eruption. A superimposition study on lateral cephalograms at T1 and T2 was carried out to evaluate the changes in the sagittal position of the upper molars in the three groups. The removal of the primary canine as an isolated measure to intercept palatal displacement of maxillary canines showed a success rate of 65.2 per cent, significantly greater than that in the untreated controls (36 per cent). The additional use of a headgear resulted in successful eruption in 87.5 per cent of the subjects, with a significant improvement in the measurements of intraosseous canine position. The cephalometric superimposition study showed a significant mesial movement of the upper first molars in the CG and EG when compared with the EHG.
A nuclear reload optimization approach using a real coded genetic algorithm with random keys
International Nuclear Information System (INIS)
Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C.
2009-01-01
The fuel reload of a Pressurized Water Reactor is performed whenever the burnup of the fuel assemblies in the reactor core reaches a value such that it is no longer possible to maintain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the reactor core in a way that minimizes the cost-benefit ratio of fuel assembly cost per maximum burnup, while also satisfying symmetry and safety restrictions. The difficulty of the fuel reload optimization problem grows exponentially with the number of fuel assemblies in the core. For decades the problem was solved manually by experts who used their knowledge and experience to build configurations of the reactor core and tested them to verify whether the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys that transforms the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload. Four different recombination methods were tested: discrete recombination, intermediate recombination, linear recombination and extended linear recombination. For each of the four recombination methods, 10 tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of applying the real-coded genetic algorithm to the nuclear reload problem of the Angra 1 PWR plant are shown. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison.
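The random-keys mechanism described above can be sketched in a few lines: each gene is a continuous value, and sorting the genes turns any real-valued chromosome into a valid permutation of discrete items, so standard real-coded recombination never produces an infeasible loading. The assembly identifiers are hypothetical:

```python
import random

def decode_random_keys(keys, assemblies):
    """Random-keys decoding: sort the continuous genes and place the
    discrete items (fuel assemblies) in the resulting order."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    return [assemblies[i] for i in order]

random.seed(42)
assemblies = ["FA1", "FA2", "FA3", "FA4", "FA5"]    # hypothetical assembly IDs
chromosome = [random.random() for _ in assemblies]  # real-coded genes in [0, 1)
loading = decode_random_keys(chromosome, assemblies)
print(loading)  # always a valid permutation of the five assemblies
```

Any crossover that blends the real genes (discrete, intermediate, linear, extended linear) still decodes to a permutation, which is exactly why random keys pairs well with real-coded recombination on combinatorial problems like core reloading.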
Anomalous dispersion in correlated porous media: a coupled continuous time random walk approach
Comolli, Alessandro; Dentz, Marco
2017-09-01
We study the causes of anomalous dispersion in Darcy-scale porous media characterized by spatially heterogeneous hydraulic properties. Spatial variability in hydraulic conductivity leads to spatial variability in the flow properties through Darcy's law and thus impacts solute and particle transport. We consider purely advective transport in heterogeneity scenarios characterized by broad distributions of heterogeneity length scales and point values. Particle transport is characterized in terms of the stochastic properties of equidistantly sampled Lagrangian velocities, which are determined by the flow and conductivity statistics. The persistence length scales of flow and transport velocities are imprinted in the spatial disorder and reflect the distribution of heterogeneity length scales. Particle transitions over the velocity length scales are kinematically coupled with the transition time through velocity. We show that the average particle motion follows a coupled continuous time random walk (CTRW), which is fully parameterized by the distribution of flow velocities and the medium geometry in terms of the heterogeneity length scales. The coupled CTRW provides a systematic framework for the investigation of the origins of anomalous dispersion in terms of heterogeneity correlation and the distribution of conductivity point values. We derive analytical expressions for the asymptotic scaling of the moments of the spatial particle distribution and the first arrival time distribution (FATD), and perform numerical particle tracking simulations of the coupled CTRW to capture the full average transport behavior. Broad distributions of heterogeneity point values and length scales may lead to very similar dispersion behaviors in terms of the spatial variance. Their mechanisms, however, are very different, which manifests in the distributions of particle positions and arrival times, which play a central role for the prediction of the fate of dissolved substances in
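A toy particle-tracking version of the coupled CTRW described above: each step covers a heterogeneity length scale l drawn from a discrete set, and the transition time t = l / v is kinematically coupled to a random Lagrangian velocity v. The discrete length scales and the lognormal velocity distribution are stand-ins for illustration, not the paper's fitted statistics:

```python
import numpy as np

def coupled_ctrw(n_particles, n_steps, lengths, rng):
    """Coupled CTRW: spatial transitions over random length scales,
    with waiting times coupled to the step length through velocity
    (slow velocities produce long waiting times over long steps)."""
    x = np.zeros(n_particles)  # particle positions
    t = np.zeros(n_particles)  # particle clocks
    for _ in range(n_steps):
        l = rng.choice(lengths, size=n_particles)                  # length scales
        v = rng.lognormal(mean=0.0, sigma=1.0, size=n_particles)   # broad velocity pdf
        x += l
        t += l / v                                                 # the coupling
    return x, t

rng = np.random.default_rng(7)
x, t = coupled_ctrw(5000, 50, np.array([0.5, 1.0, 2.0]), rng)
print(x.mean(), t.mean())
```

Histogramming t at fixed step count (or x at fixed time) from such an ensemble is the numerical route to the spatial moments and first arrival time distribution analyzed in the paper.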
Directory of Open Access Journals (Sweden)
Alanis Kelly L
2006-02-01
Background: Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance that has long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for the licit substances alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular 'crack baby' stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and the risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results: Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test showed that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure
Directory of Open Access Journals (Sweden)
Dan Tulpan
2013-01-01
Full Text Available This paper presents a novel hybrid DNA encryption (HyDEn approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
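The key-driven randomized assignment of DNA codewords plus cyclic permutation described above can be sketched as a toy version in Python. This sketch uses plain 4-base quaternary words (4^4 = 256, one per extended-ASCII character) and omits HyDEn's Hamming error-correction layer; all names and the shift value are illustrative:

```python
import random

BASES = "ACGT"

def make_codebook(key):
    """One unique 4-base DNA word per extended-ASCII character."""
    words = [a + b + c + d for a in BASES for b in BASES
             for c in BASES for d in BASES]
    random.Random(key).shuffle(words)   # the private key drives the assignment
    return {chr(i): w for i, w in enumerate(words)}

def encrypt(message, key, shift=3):
    book = make_codebook(key)
    dna = "".join(book[ch] for ch in message)
    return dna[shift:] + dna[:shift]    # cyclic permutation of the strand

def decrypt(dna, key, shift=3):
    book = {w: ch for ch, w in make_codebook(key).items()}
    dna = dna[-shift:] + dna[:-shift]   # undo the cyclic permutation
    return "".join(book[dna[i:i + 4]] for i in range(0, len(dna), 4))

assert decrypt(encrypt("Hello", 42), 42) == "Hello"
```

In the actual scheme each codeword would additionally be a quaternary error-correcting Hamming codeword, so single base substitutions could be detected and corrected before lookup.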
Mamhidir, Anna-Greta; Sjölund, Britt-Marie; Fläckman, Birgitta; Wimo, Anders; Sköldunger, Anders; Engström, Maria
2017-02-28
Chronic pain affects nursing home residents' daily life. Pain assessment is central to adequate pain management. The overall aim was to investigate the effects of a pain management intervention on nursing home residents and to describe staff's experiences of the intervention. A cluster-randomized trial and a mixed-methods approach were used, with nursing homes randomly assigned to an intervention or comparison group. The intervention group, after theoretical and practical training sessions, performed systematic pain assessments using predominantly observational scales, with external and internal facilitators supporting the implementation. No measures were taken in the comparison group; pain management continued as before, but corresponding training was provided after the study. Resident data were collected at baseline and at two follow-ups using validated scales and record reviews. Nurse group interviews were carried out twice. Primary outcome measures were wellbeing and proxy-measured pain. Secondary outcome measures were ADL-dependency and pain documentation. Using both non-parametric statistics on the residential level and generalized estimating equation (GEE) models to take clustering effects into account, the results revealed non-significant interaction effects for the primary outcome measures, while for ADL-dependency using the Katz-ADL there was a significant interaction effect. Comparison group (n = 66 residents) Katz-ADL values showed increased dependency over time, while the intervention group (n = 98) demonstrated no significant change over time. In the intervention group, 13/44 residents showed decreased pain scores over the period and 14/44 had no pain score changes ≥ 30% in either direction, measured with Doloplus-2. Furthermore, 17/44 residents showed increased pain scores ≥ 30% over time, indicating pain/risk for pain; 8 were identified at the first assessment and 9 were new, i.e. developed pain over time. No significant changes in the use of drugs were found in any of
Directory of Open Access Journals (Sweden)
Preksedis M. Ndomba
2008-01-01
Full Text Available This paper presents preliminary findings on the adequacy of one hydrological year of sampling programme data for developing an excellent sediment rating curve. The study area is the 1DD1 subcatchment in the upstream part of the Pangani River Basin (PRB), located in the north-eastern part of Tanzania. 1DD1 is the major runoff-sediment contributing tributary to the downstream hydropower reservoir, the Nyumba Ya Mungu (NYM). In the literature, the sediment rating curve method is known to underestimate the actual sediment load. In the case of developing countries, long-term sediment sampling monitoring or conservation campaigns have been reported as unworkable options. Besides, to the best knowledge of the authors, to date there is no consensus on how to develop an excellent rating curve. Daily-midway and intermittent-cross-section sediment samples from a depth-integrating sampler (D-74) were used to calibrate the subdaily automatic sediment pumping sampler (ISCO 6712) near-bank point samples for developing the rating curve. Sediment load correction factors were derived from both statistical bias estimators and actual sediment load approaches. It should be noted that the ongoing study is guided by findings of other studies in the same catchment. For instance, the long-term sediment yield rate estimated from a reservoir survey validated the performance of the developed rating curve. The result suggests that an excellent rating curve can be developed from one hydrological year of sediment sampling programme data. This study has also found that an uncorrected rating curve underestimates sediment load. The degree of underestimation depends on the type of rating curve developed and the data used.
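A sediment rating curve of the form C = aQ^b is typically fitted by least squares in log-log space, and the back-transform then needs a bias correction factor of the kind the paper derives. A minimal sketch (the parametric exp(s²/2) correction shown here is one common choice, not necessarily the estimator used in the study; data are illustrative):

```python
import math

def fit_rating_curve(Q, C):
    """Fit C = a * Q**b by least squares on log-transformed data."""
    x = [math.log(q) for q in Q]
    y = [math.log(c) for c in C]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    ln_a = my - b * mx
    # residual variance of the log residuals -> parametric bias correction factor
    s2 = sum((yi - (ln_a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    cf = math.exp(s2 / 2.0)
    return math.exp(ln_a), b, cf

a, b, cf = fit_rating_curve([10, 20, 40, 80], [5, 12, 30, 70])
load = cf * a * 50 ** b   # bias-corrected concentration estimate at Q = 50
```

Without the correction factor cf, the back-transformed curve systematically underestimates the load, which is the underestimation the abstract refers to.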
Pinto, Miguel; Antelo, Minia; Ferreira, Rita; Azevedo, Jacinta; Santo, Irene; Borrego, Maria José; Gomes, João Paulo
2017-03-01
Syphilis is the sexually transmitted disease caused by Treponema pallidum, a pathogen highly adapted to the human host. As a multistage disease, syphilis presents distinct clinical manifestations that pose different implications for diagnosis. Nevertheless, the inherent factors leading to diverse disease progressions are still unknown. We aimed to assess the association between treponemal loads and dissimilar disease outcomes, to better understand syphilis. We retrospectively analyzed 309 DNA samples from distinct anatomic sites associated with particular syphilis manifestations. All samples had previously tested positive by a PCR-based diagnostic kit. An absolute quantitative real-time PCR procedure was used to precisely quantify the numbers of treponemal and human cells and thus determine the T. pallidum load in each sample. In general, lesion exudates presented the highest T. pallidum loads, in contrast with blood-derived samples. Within the latter, a higher dispersion of T. pallidum quantities was observed for secondary syphilis. T. pallidum was detected in substantial amounts in 37 samples from seronegative individuals and in 13 cases considered syphilis-treated. No association was found between treponemal loads and serological results or HIV status. This study suggests a scenario where syphilis may be characterized by: i) heterogeneous and high treponemal loads in primary syphilis, regardless of the anatomic site, reflecting dissimilar durations of chancre development and resolution; ii) high dispersion of bacterial concentrations in secondary syphilis, potentially suggesting replication capability of T. pallidum while in the bloodstream; and iii) bacterial evasiveness, either from the host immune system or from antibiotic treatment, while remaining hidden in privileged niches. This work highlights the importance of using molecular approaches to study uncultivable human pathogens, such as T. pallidum, in the infection process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Pennaforte, Thomas; Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude
2016-02-17
Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear whether clinical reasoning is better acquired if the instructor's input occurs entirely after the scenario or is integrated during it. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions, without providing feedback. The aim of this study is to evaluate the effectiveness of simulation with iterative discussions versus the classical approach of simulation in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. This will be a prospective exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, QC, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete one 30-minute, audio/video-recorded, complex high-fidelity simulation (SID or classical) covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semi-structured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. This study is in its preliminary stages and the results are expected to be made available by April, 2016. This will be the first study to explore a new
Liu, Fang
2016-01-01
In both clinical development and post-marketing of a new therapy or a new treatment, incidence of an adverse event (AE) is always a concern. When sample sizes are small, large sample-based inferential approaches on an AE incidence proportion in a certain time period no longer apply. In this brief discussion, we introduce a simple Bayesian framework to quantify, in small sample studies and the rare AE case, (1) the confidence level that the incidence proportion of a particular AE p is over or below a threshold, (2) the lower or upper bounds on p with a certain level of confidence, and (3) the minimum required number of patients with an AE before we can be certain that p surpasses a specific threshold, or the maximum allowable number of patients with an AE after which we can no longer be certain that p is below a certain threshold, given a certain confidence level. The method is easy to understand and implement; the interpretation of the results is intuitive. This article also demonstrates the usefulness of simple Bayesian concepts when it comes to answering practical questions.
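With a Beta prior, the posterior for the incidence proportion p after observing x adverse events in n patients is Beta(a+x, b+n-x), and the three quantities listed above can all be read off that posterior. A minimal Monte Carlo sketch of quantity (1), with illustrative prior and threshold values (the paper's exact prior choice is not specified here):

```python
import random

def prob_p_exceeds(x, n, threshold, a=1.0, b=1.0, draws=200_000, seed=1):
    """P(p > threshold | x AEs in n patients), under a Beta(a, b) prior."""
    rng = random.Random(seed)
    post_a, post_b = a + x, b + n - x       # conjugate Beta posterior
    hits = sum(rng.betavariate(post_a, post_b) > threshold
               for _ in range(draws))
    return hits / draws

# 0 adverse events in 30 patients: little evidence that p exceeds 10%
print(prob_p_exceeds(0, 30, 0.10))          # small, roughly 0.9**31 ~ 0.04
```

The bounds in quantity (2) correspond to posterior quantiles, and quantity (3) follows from increasing x until the probability above crosses the chosen confidence level.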
Benz-de Bretagne, I; Le Guellec, C; Halimi, J M; Gatault, P; Barbet, C; Alnajjar, A; Büchler, M; Lebranchu, Y; Andres, Christian Robert; Vourc'h, P; Blasco, H
2012-06-01
%, and about 6% if the bounds of acceptance were set at ± 15%. This Bayesian approach can help to reduce the number of samples required to calculate GFR using Bröchner-Mortensen formula with good accuracy.
Tran, Kathy V; Azhar, Gulrez S; Nair, Rajesh; Knowlton, Kim; Jaiswal, Anjali; Sheffield, Perry; Mavalankar, Dileep; Hess, Jeremy
2013-06-18
Extreme heat is a significant public health concern in India; extreme heat hazards are projected to increase in frequency and severity with climate change. Few of the factors driving population heat vulnerability are documented, though poverty is a presumed risk factor. To facilitate public health preparedness, an assessment of factors affecting vulnerability among slum dwellers was conducted in summer 2011 in Ahmedabad, Gujarat, India. Indicators of heat exposure, susceptibility to heat illness, and adaptive capacity, all of which feed into heat vulnerability, were assessed through a cross-sectional household survey using randomized multistage cluster sampling. Associations between heat-related morbidity and vulnerability factors were identified using multivariate logistic regression with generalized estimating equations to account for clustering effects. Age, preexisting medical conditions, work location, and access to health information and resources were associated with self-reported heat illness. Several of these variables were unique to this study. As sociodemographics, occupational heat exposure, and access to resources were shown to increase vulnerability, future interventions (e.g., health education) might target specific populations among Ahmedabad urban slum dwellers to reduce vulnerability to extreme heat. Surveillance and evaluations of future interventions may also be worthwhile.
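Randomized multistage cluster sampling of the kind described above first samples clusters (here, slum areas), then households within each selected cluster. A generic sketch, with illustrative structure and names rather than the survey's actual design:

```python
import random

def multistage_sample(clusters, n_clusters, n_households, seed=0):
    """Stage 1: sample clusters; stage 2: sample households within each."""
    rng = random.Random(seed)
    chosen = rng.sample(sorted(clusters), n_clusters)
    return {c: rng.sample(clusters[c], n_households) for c in chosen}

# hypothetical frame: 12 slum clusters of 50 households each
slums = {f"slum_{i}": [f"hh_{i}_{j}" for j in range(50)] for i in range(12)}
sample = multistage_sample(slums, n_clusters=4, n_households=10)
# 4 clusters x 10 households = 40 surveyed households
```

Because households within a cluster are more alike than households across clusters, downstream regression models need cluster-robust machinery such as the generalized estimating equations the study applied.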
Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2015-04-10
A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed. A synthetic encoded complex amplitude is first fabricated: its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while its phase component is constructed by iterative phase information encoding and multiplexing for the high-level certification images. The synthetic encoded complex amplitude is then iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and the Fresnel transform is carried out, a meaningful image of good quality, with a high correlation coefficient with the original certification image, is recovered in the output plane. In low-level authentication with the aid of a low-level decryption key, by contrast, no significant or meaningful information is retrieved, but a remarkable peak appears in the nonlinear correlation between the output image and the corresponding original certification image. The method therefore realizes different levels of accessibility to the original certification image for different authority levels within the same cascaded multilevel architecture.
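The low-level verification step relies on a nonlinear correlation between the retrieved output and the certification image. A common kth-law formulation of this map (the strength parameter k = 0.3 and the image sizes are illustrative, not values from the paper) can be sketched with NumPy:

```python
import numpy as np

def nonlinear_correlation(img, ref, k=0.3):
    """kth-law nonlinear correlation map between two images."""
    F = np.fft.fft2(img) * np.conj(np.fft.fft2(ref))
    G = np.abs(F) ** k * np.exp(1j * np.angle(F))   # nonlinear spectrum shaping
    return np.abs(np.fft.ifft2(G)) ** 2

rng = np.random.default_rng(0)
cert = rng.random((64, 64))
nc = nonlinear_correlation(cert, cert)
# an authentic (matching) input yields a sharp peak at the zero-shift position
assert nc.argmax() == 0
```

For a non-matching input the map shows no dominant peak, which is how a noise-like retrieved image can still be authenticated without revealing the certification image itself.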
Messiah, Antoine; Lacoste, Jérôme; Gokalsing, Erick; Shultz, James M; Rodríguez de la Vega, Pura; Castro, Grettel; Acuna, Juan M
2016-08-01
Studies on the mental health of families hosting disaster refugees are lacking. This study compares participants in households that hosted 2010 Haitian earthquake disaster refugees with their nonhost counterparts. A random sample survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants were assessed regarding their 2010 earthquake exposure and its impact on family and friends, and whether they hosted earthquake refugees. Using standardized scores and thresholds, they were evaluated for symptoms of three common mental disorders (CMDs): posttraumatic stress disorder, generalized anxiety disorder, and major depressive disorder (MDD). Participants who hosted refugees (n = 51) had significantly higher percentages of scores beyond thresholds for MDD, and for at least one CMD, than those who did not host refugees (n = 365), after adjusting for participants' earthquake exposures and effects on family and friends. Hosting refugees from a natural disaster appears to elevate the risk for MDD and possibly other CMDs, independent of risks posed by exposure to the disaster itself. Families hosting refugees deserve special attention.
Langston, Anne L; McCallum, Marilyn; Campbell, Marion K; Robertson, Clare; Ralston, Stuart H
2005-01-01
Although consumer involvement in individual studies is often limited, consumer involvement in guiding health research is generally considered to be beneficial. This paper outlines our experiences of an integrated relationship between the organisers of a clinical trial and a consumer organisation. The PRISM trial is a UK multicentre, randomized controlled trial comparing treatment strategies for Paget's disease of the bone. The National Association for the Relief of Paget's Disease (NARPD) is the only UK support group for sufferers of Paget's disease and has worked closely with the PRISM team from the outset. NARPD involvement is integral to the conduct of the trial and specific roles have included: peer review; trial steering committee membership; provision of advice to participants; and promotion of the trial amongst Paget's disease patients. The integrated relationship has yielded benefits to both the trial and the consumer organisation. The benefits for the trial have included: recruitment of participants via NARPD contacts; well-informed participants; unsolicited patient advocacy of the trial; and interested and pro-active collaborators. For the NARPD and Paget's disease sufferers, benefits have included: increased awareness of Paget's disease; increased access to relevant health research; increased awareness of NARPD services; and wider transfer of diagnosis and management knowledge to/from health care professionals. Our experience has shown that an integrated approach between a trial team and a consumer organisation is worthwhile. Adoption of such an approach in other trials may yield significant improvements in recruitment and in the quality of participant information flow. There are, however, resource implications for both parties.
Leahey, Tricia M; Fava, Joseph L; Seiden, Andrew; Fernandes, Denise; Doyle, Caroline; Kent, Kimberly; La Rue, Molly; Mitchell, Marc; Wing, Rena R
2016-11-01
Weight loss maintenance is a significant challenge in obesity treatment. During maintenance the "costs" of adhering to weight management behaviors may outweigh the "benefits." This study examined the efficacy of a novel approach to weight loss maintenance based on modifying the cost-benefit ratio. Individuals who achieved a 5% weight loss (N=75) were randomized to one of three 10-month maintenance interventions. All interventions were delivered primarily via the Internet. The Standard arm received traditional weight maintenance strategies. To increase benefits, or rewards, for maintenance behaviors, the two cost-benefit intervention conditions received weekly monetary rewards for self-monitoring and social reinforcement via e-coaching. To decrease behavioral costs (boredom) and increase novelty, participants in the cost-benefit conditions also monitored different evidence-based behaviors every two weeks (e.g., Weeks 1 & 2: steps; Weeks 3 & 4: red foods). The primary difference between the cost-benefit interventions was the type of e-coach providing social reinforcement: Professional (CB Pro) or Peer (CB Peer). Study procedures took place in Providence, RI, from 2013 to 2014. Retention was 99%. There were significant group differences in weight regain (p=.01). The Standard arm gained 3.5 ± 5.7 kg. In contrast, participants in CB Pro and CB Peer lost an additional 1.8 ± 7.0 kg and 0.5 ± 6.4 kg, respectively. These results suggest that an Internet-delivered cost-benefit approach to weight loss maintenance may be effective for long-term weight control. In addition, using peer coaches to provide reinforcement may be a particularly economic alternative to professionals. These data are promising and provide support for a larger, longer trial. Copyright © 2016 Elsevier Inc. All rights reserved.
Lim, Angelina; Stewart, Kay; Abramson, Michael J; Walker, Susan P; George, Johnson
2012-12-19
Uncontrolled asthma during pregnancy is associated with the maternal hazards of disease exacerbation, and perinatal hazards including intrauterine growth restriction and preterm birth. Interventions directed at achieving better asthma control during pregnancy should be considered a high priority in order to optimise both maternal and perinatal outcomes. Poor compliance with prescribed asthma medications during pregnancy and suboptimal prescribing patterns for pregnant women have both been shown to be contributing factors that jeopardise asthma control. The aim is to design and evaluate an intervention involving multidisciplinary care for women experiencing asthma in pregnancy. A pilot single-blinded parallel-group randomized controlled trial will test a Multidisciplinary Approach to Management of Maternal Asthma (MAMMA©), which involves education and regular monitoring. Pregnant women with asthma will be recruited from antenatal clinics in Victoria, Australia. Recruited participants, stratified by disease severity, will be allocated to the intervention or the usual-care group in a 1:1 ratio. Both groups will be followed prospectively throughout pregnancy and outcomes will be compared between groups at three and six months after recruitment to evaluate the effectiveness of the intervention. Outcome measures include Asthma Control Questionnaire (ACQ) scores, oral corticosteroid use, asthma exacerbations, asthma-related hospital admissions, days off work and preventer-to-reliever ratio, along with pregnancy and neonatal adverse events at delivery. The use of FEV1/FEV6 as a marker of asthma control will also be investigated during this trial. If successful, this model of care could be widely implemented in clinical practice and justify more funding for support services and resources for these women. This intervention will also promote awareness of the risks of poorly controlled asthma and the need for a collaborative, multidisciplinary approach to asthma
A novel approach to assess the treatment response using Gaussian random field in PET
Energy Technology Data Exchange (ETDEWEB)
Wang, Mengdie [Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China and Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Guo, Ning [Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Hu, Guangshu; Zhang, Hui, E-mail: hzhang@mail.tsinghua.edu.cn, E-mail: li.quanzheng@mgh.harvard.edu [Department of Biomedical Engineering, Tsinghua University, Beijing 100084 (China); El Fakhri, Georges; Li, Quanzheng, E-mail: hzhang@mail.tsinghua.edu.cn, E-mail: li.quanzheng@mgh.harvard.edu [Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 and Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115 (United States)
2016-02-15
Purpose: The assessment of early therapeutic response to anticancer therapy is vital for treatment planning and patient management in the clinic. With the development of personalized treatment plans, assessing early treatment response, especially before any anatomically apparent post-treatment changes, has become an urgent clinical need. Positron emission tomography (PET) imaging serves an important role in clinical oncology for tumor detection, staging, and therapy response assessment. Many studies on therapy response involve interpretation of differences between two PET images, usually in terms of standardized uptake values (SUVs). However, the quantitative accuracy of this measurement is limited. This work proposes a statistically robust approach for therapy response assessment based on Gaussian random fields (GRFs), providing a statistically more meaningful scale on which to evaluate therapy effects. Methods: The authors propose a new criterion for therapeutic assessment that incorporates image noise into the traditional SUV method. An analytical method based on approximate expressions of the Fisher information matrix was applied to model the variance of individual pixels in reconstructed images. A zero-mean, unit-variance GRF under the null hypothesis (no response to therapy) was obtained by normalizing each pixel of the post-therapy image with the mean and standard deviation of the pre-therapy image. The performance of the proposed method was evaluated by Monte Carlo simulation, where XCAT phantoms (128² pixels) with lesions of various diameters (2–6 mm), multiple tumor-to-background contrasts (3–10), and different changes in intensity (6.25%–30%) were used. The receiver operating characteristic curves and the corresponding areas under the curve were computed for both the proposed method and the traditional methods, whose figure of merit is the percentage change in SUVs. The formula for the false positive rate (FPR) estimation was developed for the proposed therapy response
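The null-hypothesis normalization described in Methods amounts to a per-pixel z-score of the post-therapy image against the pre-therapy statistics. A minimal NumPy sketch (array shapes, noise levels, and the decision threshold are illustrative, not the paper's values):

```python
import numpy as np

def grf_zmap(post, pre_mean, pre_std, eps=1e-12):
    """Zero-mean, unit-variance field under H0: no response to therapy."""
    return (post - pre_mean) / (pre_std + eps)

def lesion_responded(zmap, roi_mask, z_crit=3.0):
    """Flag a response when |z| inside the lesion ROI exceeds the threshold."""
    return bool(np.abs(zmap[roi_mask]).max() > z_crit)

# toy 128x128 example: uptake in an 8x8 lesion drops well below pre-therapy noise
pre_mean = np.full((128, 128), 10.0)
pre_std = np.full((128, 128), 0.5)
post = pre_mean.copy()
post[60:68, 60:68] -= 5.0                    # 10-sigma decrease in the lesion
roi = np.zeros((128, 128), dtype=bool)
roi[60:68, 60:68] = True
print(lesion_responded(grf_zmap(post, pre_mean, pre_std), roi))  # True
```

In the paper, the per-pixel standard deviation comes from the Fisher-information-based variance model rather than being known a priori, and GRF theory supplies the critical value for a controlled false positive rate.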
Amirabadizadeh, Alireza; Nezami, Hossein; Vaughn, Michael G; Nakhaee, Samaneh; Mehrpour, Omid
2018-05-12
Substance abuse exacts considerable social and health care burdens throughout the world. The aim of this study was to create a prediction model to better identify risk factors for drug use. A prospective cross-sectional study was conducted in South Khorasan Province, Iran. Of the total of 678 eligible subjects, 70% (n: 474) were randomly selected to provide a training set for constructing decision tree and multiple logistic regression (MLR) models. The remaining 30% (n: 204) were employed as a holdout sample to test the performance of the decision tree and MLR models. Predictive performance of the different models was analyzed by the receiver operating characteristic (ROC) curve using the testing set. Independent variables were selected from demographic characteristics and history of drug use. For the decision tree model, the sensitivity and specificity for identifying people at risk of drug abuse were 66% and 75%, respectively, while the MLR model was somewhat less effective at 60% and 73%. Key independent variables in the analyses included first substance experience, age at first drug use, age, place of residence, history of cigarette use, and occupational and marital status. While study findings are exploratory and lack generalizability, they do suggest that the decision tree model holds promise as an effective classification approach for identifying risk factors for drug use. Convergent with prior research in Western contexts is the finding that age of drug use initiation was a critical factor predicting a substance use disorder.
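Comparing classifiers by ROC curves on the 30% holdout set comes down to computing the area under the curve from predicted risk scores; the Mann-Whitney formulation is a compact way to do this. A generic sketch (the scores below are illustrative, not the study's predictions):

```python
def roc_auc(scores_pos, scores_neg):
    """ROC AUC via the Mann-Whitney statistic: P(score+ > score-)."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# holdout risk scores for subjects with / without substance use
users = [0.9, 0.8, 0.65, 0.6]
nonusers = [0.7, 0.4, 0.3, 0.2]
print(roc_auc(users, nonusers))   # 0.875
```

An AUC of 0.5 corresponds to chance-level discrimination; the higher AUC of the decision tree over the MLR model is what supports the study's conclusion.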
Clagett, Bartholt; Nathanson, Katherine L.; Ciosek, Stephanie L.; McDermoth, Monique; Vaughn, David J.; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A.
2013-01-01
Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-...
International Nuclear Information System (INIS)
Spreemann, Dirk; Hoffmann, Daniel; Folkmer, Bernd; Manoli, Yiannos
2008-01-01
This paper presents a design and optimization strategy for resonant electromagnetic vibration energy harvesting devices. An analytic expression for the magnetic field of cylindrical permanent magnets is used to build up an electromagnetic subsystem model. This subsystem is used to find the optimal resting position of the oscillating mass and to optimize the geometrical parameters (shape and size) of the magnet and coil. The objective function to be investigated is thereby the maximum voltage output of the transducer. An additional mechanical subsystem model based on well-known equations describing the dynamics of spring–mass–damper systems is established to simulate both nonlinear spring characteristics and the effect of internal limit stops. The mechanical subsystem enables the identification of optimal spring characteristics for realistic operation conditions such as stochastic vibrations. With the overall transducer model, a combination of both subsystems connected to a simple electrical circuit, a virtual operation of the optimized vibration transducer excited by a measured random acceleration profile can be performed. It is shown that the optimization approach results in an appreciable increase of the converter performance
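The mechanical subsystem above is a base-excited spring-mass-damper; leaving out the paper's nonlinear spring characteristics and internal limit stops, its response to a random acceleration profile can be sketched with a simple time-stepping integration (all parameter values are illustrative, not the optimized design):

```python
import random

def simulate(m, k, c, accel, dt):
    """Semi-implicit Euler for m*z'' + c*z' + k*z = -m*a(t), z = relative motion."""
    z, v, out = 0.0, 0.0, []
    for a in accel:
        force = -m * a - c * v - k * z
        v += (force / m) * dt
        z += v * dt
        out.append(z)
    return out

rng = random.Random(0)
accel = [rng.gauss(0.0, 1.0) for _ in range(10_000)]    # random base acceleration
disp = simulate(m=5e-3, k=50.0, c=0.05, accel=accel, dt=1e-4)
peak = max(abs(z) for z in disp)
```

The induced voltage would then follow from the relative velocity and the electromagnetic coupling of the optimized magnet-coil geometry, which is the quantity the paper's objective function maximizes.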
DEFF Research Database (Denmark)
Husemoen, L. L. N.; Skaaby, T.; Martinussen, Torben
2014-01-01
Background/Objectives: The aim was to examine the causal effect of vitamin D on serum adiponectin using a multiple-instrument Mendelian randomization approach. Subjects/Methods: Serum 25-hydroxyvitamin D (25(OH)D) and serum total or high-molecular-weight (HMW) adiponectin were measured in two...... doubling of 25(OH)D was 4.78 (95% CI: 1.96, 7.68). Using the vitamin D-binding protein gene and the filaggrin gene as instrumental variables, the causal effect in % was estimated at 61.46 (95% CI: 17.51, 120.28, P=0.003) higher adiponectin per doubling of 25(OH)D. In the MONICA10...... effect estimate in % per doubling of 25(OH)D was 37.13 (95% CI: -3.67, 95.20, P=0.080). Conclusions: The results indicate a possible causal association between serum 25(OH)D and total adiponectin. However, the association was not replicated for HMW adiponectin. Thus, further studies are needed to confirm...
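In Mendelian randomization, each genetic instrument yields a Wald-ratio estimate (gene-outcome effect divided by gene-exposure effect), and multiple instruments are commonly pooled by inverse-variance weighting. A generic sketch of this estimator family, not the paper's actual computation (all numbers are illustrative):

```python
def wald_ratio(beta_gx, beta_gy):
    """Causal effect of exposure on outcome from a single instrument."""
    return beta_gy / beta_gx

def ivw_combine(estimates, ses):
    """Inverse-variance-weighted meta-estimate across instruments."""
    weights = [1.0 / s ** 2 for s in ses]
    return sum(w * e for w, e in zip(weights, estimates)) / sum(weights)

# two hypothetical instruments (e.g., variants in two vitamin D-related genes)
ratios = [wald_ratio(0.10, 0.05), wald_ratio(0.08, 0.03)]
print(ivw_combine(ratios, ses=[0.2, 0.3]))   # ~ 0.46
```

Because the genetic variants are assigned at conception, such estimates are less prone to confounding and reverse causation than the observational association between 25(OH)D and adiponectin.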
Kruse, Christine; Rosenlund, Signe; Broeng, Leif; Overgaard, Søren
2018-01-01
The two most common surgical approaches to total hip arthroplasty are the posterior approach and the lateral approach. The surgical approach may influence cup positioning and restoration of offset, which may affect the biomechanical properties of the hip joint. The primary aim was to compare cup position between the posterior approach and the lateral approach. Secondary aims were to compare femoral offset, abductor moment arm and leg length discrepancy between the two approaches. Eighty patients with primary hip osteoarthritis were included in a randomized controlled trial and assigned to total hip arthroplasty using the posterior or lateral approach. Postoperative radiographs from 38 patients in each group were included in this study for measurement of cup anteversion and inclination. Femoral offset, cup offset, total offset, abductor moment arm and leg length discrepancy were measured on preoperative and postoperative radiographs in 28 patients in each group. We found that mean anteversion was 5° larger in the posterior approach group (95% CI, -8.1 to -1.4; p = 0.006), while mean inclination was 5° less steep (95% CI, 2.7 to 7.2): larger cup anteversion but less steep cup inclination in the posterior approach group compared with the lateral approach group. Femoral offset and abductor moment arm were restored after total hip arthroplasty using the lateral approach but significantly increased when using the posterior approach.
Voineskos, Sophocles H; Coroneos, Christopher J; Ziolkowski, Natalia I; Kaur, Manraj N; Banfield, Laura; Meade, Maureen O; Chung, Kevin C; Thoma, Achilleas; Bhandari, Mohit
2016-02-01
The authors examined industry support, conflict of interest, and sample size in plastic surgery randomized controlled trials that compared surgical interventions. They hypothesized that industry-funded trials demonstrate statistically significant outcomes more often, and that randomized controlled trials with small sample sizes report statistically significant results more frequently. An electronic search identified randomized controlled trials published between 2000 and 2013. Independent reviewers assessed manuscripts and performed data extraction. Funding source, conflict of interest, primary outcome direction, and sample size were examined. Chi-squared and independent-samples t tests were used in the analysis. The search identified 173 randomized controlled trials, of which 100 (58 percent) did not acknowledge funding status. A relationship between funding source and trial outcome direction was not observed. Both funding status and conflict of interest reporting improved over time. Only 24 percent (six of 25) of industry-funded randomized controlled trials reported authors to have independent control of data and manuscript contents. The mean number of patients randomized was 73 per trial (median, 43; minimum, 3; maximum, 936). Small trials were not found to be positive more often than large trials (p = 0.87). Randomized controlled trials with small sample size were common; however, this provides great opportunity for the field to engage in further collaboration and produce larger, more definitive trials. Reporting of trial funding and conflict of interest is historically poor, but it greatly improved over the study period. Underreporting at author and journal levels remains a limitation when assessing the relationship between funding source and trial outcomes. Improved reporting and manuscript control should be goals that both authors and journals can actively achieve.
Glasscock, David J; Carstensen, Ole; Dalgaard, Vita Ligaya
2018-05-28
Randomized controlled trials (RCTs) of interventions aimed at reducing work-related stress indicate that cognitive behavioural therapy (CBT) is more effective than other interventions. However, definitions of study populations are often unclear and there is a lack of interventions targeting both the individual and the workplace. The aim of this study was to determine whether a stress management intervention combining individual CBT and a workplace focus is superior to no treatment in the reduction of perceived stress and stress symptoms and time to lasting return to work (RTW) in a clinical sample. Patients with work-related stress reactions or adjustment disorders were randomly assigned to an intervention group (n = 57, 84.2% female) or a control group (n = 80, 83.8% female). Subjects were followed via questionnaires and register data. The intervention contained individual CBT and the offer of a workplace meeting. We examined intervention effects by analysing group differences in score changes on the Perceived Stress Scale (PSS-10) and the General Health Questionnaire (GHQ-30). We also tested if intervention led to faster lasting RTW. Mean baseline values of PSS were 24.79 in the intervention group and 23.26 in the control group, while the corresponding values for GHQ were 21.3 and 20.27, respectively. There was a significant effect of time. Ten months after baseline, both groups reported less perceived stress and improved mental health. Four months after baseline, we found significant treatment effects for both perceived stress and mental health. The difference in mean change in PSS after 4 months was -3.09 (-5.47, -0.72), while for GHQ it was -3.91 (-7.15, -0.68). There were no group differences in RTW. The intervention led to faster reductions in perceived stress and stress symptoms amongst patients with work-related stress reactions and adjustment disorders. Six months after the intervention ended there were no longer differences between
Directory of Open Access Journals (Sweden)
Romain Guignard
Full Text Available OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system based on regular quota sampling surveys for smoking prevalence. DESIGN / OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey on 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% among 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample, and 28.9% and 24.0% in the quota survey). The required number of calls was slightly related to the smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.
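The headline comparison above (33.9% vs 30.2% current smoking) can be checked with a simple two-proportion z-test. The sketch below is a minimal illustration that ignores survey weights and design effects, which the published analysis would have accounted for.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference between two independent proportions,
    using the pooled-proportion standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1.0 - pooled) * (1.0 / n1 + 1.0 / n2))
    return (p1 - p2) / se

# Current-smoking prevalences reported above: random sample (Health
# Barometer, n = 27,653) versus quota sample (n = 8,018).
z = two_proportion_z(0.339, 27653, 0.302, 8018)
print(round(z, 2))  # comfortably beyond the 1.96 critical value
```

Even with this naive treatment, the roughly four-point gap between the two survey designs is far too large to attribute to sampling noise alone.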
Yabusaki, Katsumi; Faits, Tyler; McMullen, Eri; Figueiredo, Jose Luiz; Aikawa, Masanori; Aikawa, Elena
2014-01-01
As computing technology and image analysis techniques have advanced, the practice of histology has grown from a purely qualitative method to one that is highly quantified. Current image analysis software is imprecise and prone to wide variation due to common artifacts and histological limitations. In order to minimize the impact of these artifacts, a more robust method for quantitative image analysis is required. Here we present a novel image analysis software, based on the hue saturation value color space, to be applied to a wide variety of histological stains and tissue types. By using hue, saturation, and value variables instead of the more common red, green, and blue variables, our software offers some distinct advantages over other commercially available programs. We tested the program by analyzing several common histological stains, performed on tissue sections that ranged from 4 µm to 10 µm in thickness, using both a red green blue color space and a hue saturation value color space. We demonstrated that our new software is a simple method for quantitative analysis of histological sections, which is highly robust to variations in section thickness, sectioning artifacts, and stain quality, eliminating sample-to-sample variation.
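As a rough sketch of the hue-saturation-value idea described above: classifying pixels by hue (stain colour) with a saturation floor separates stain from unstained background regardless of intensity. The pixel values, hue range, and threshold below are illustrative assumptions, not the published software's parameters.

```python
import colorsys

def stained_fraction(pixels, hue_lo, hue_hi, min_sat=0.2):
    """Fraction of pixels whose hue lies in the (possibly wrap-around)
    range [hue_lo, hue_hi] with at least min_sat saturation. Working in
    HSV decouples stain colour (hue) from intensity (saturation/value)."""
    hits = 0
    for r, g, b in pixels:  # RGB components in 0..1
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        in_range = (hue_lo <= h <= hue_hi) if hue_lo <= hue_hi \
            else (h >= hue_lo or h <= hue_hi)
        if in_range and s >= min_sat:
            hits += 1
    return hits / len(pixels)

# Synthetic example: two strongly red "stained" pixels, one grey pixel.
pixels = [(0.8, 0.1, 0.1), (0.6, 0.05, 0.1), (0.5, 0.5, 0.5)]
frac = stained_fraction(pixels, 0.9, 0.1)  # red hues wrap around 0
print(frac)
```

Because a thinner or weaker-stained section mostly shifts saturation and value rather than hue, a hue-based count like this stays comparatively stable across sections.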
DEFF Research Database (Denmark)
Porse, B T; Garrett, R A
1995-01-01
Random mutations were generated in the lower half of the peptidyl transferase loop in domain V of 23 S rRNA from Escherichia coli using a polymerase chain reaction (PCR) approach, a rapid procedure for identifying mutants and a plasmid-based expression system. The effects of 21 single-site mutati...
International Nuclear Information System (INIS)
EITTA, M.A.; EL-WAHIDI, G.F.; FOUDA, M.A.; ABO EL-NAGA, E.M.; GAD EL-HAK, N.
2010-01-01
Preoperative radiotherapy in resectable rectal cancer has a number of potential advantages, most importantly reducing local recurrence, increasing survival and providing a down-staging effect. Purpose: This prospective study was designed to compare two different approaches of preoperative radiotherapy, either short-course or long-course radiotherapy. The primary endpoint is to evaluate the local recurrence rate, overall survival (OS) and disease-free survival (DFS). The secondary endpoint is to evaluate down-staging, treatment toxicity and the ability to do a sphincter-sparing procedure (SSP), aiming at helping in the choice of the optimal treatment modality. Patients and Methods: This is a prospective randomized study of patients with resectable rectal cancer who presented to the department of Clinical Oncology and Nuclear Medicine, Mansoura University, during the time period between June 2007 and September 2009. These patients received preoperative radiotherapy and were randomized into two arms: Arm 1, short-course radiotherapy (SCRT), 25 Gy/week/5 fractions followed by surgery within one week; and Arm 2, long-course preoperative radiotherapy (LCRT), 45 Gy/5 weeks/25 fractions followed by surgery after 4-6 weeks. Adjuvant chemotherapy was given 4-6 weeks after surgery according to the postoperative pathology. Results: After a median follow-up of 18 months (range 6 to 28 months), we studied the patterns of recurrence. Three patients experienced local recurrence (LR), two out of 14 (14.2%) in Arm 1 and one out of 15 patients (6.7%) in Arm 2 (p = 0.598). Three patients developed distant metastases [two in Arm 1 (14.2%) and one in Arm 2 (6.7%), p = 0.598]. The two-year OS rate was 64±3% and 66±2% (p = 0.389), and the two-year DFS rate was 61±2% and 83±2% for Arms 1 and 2, respectively (p = 0.83). Tumor (T) down-staging was more often achieved in the LCRT arm with a statistically significant difference, but the difference did not reach statistical significance for node (N) down-staging. SSP was more available in LCRT but with no
Directory of Open Access Journals (Sweden)
Ning-Cong Xiao
2013-12-01
Full Text Available In this paper, a combination of the maximum entropy method and Bayesian inference for reliability assessment of deteriorating systems is proposed. Due to various uncertainties, scarce data and incomplete information, system parameters usually cannot be determined precisely. These uncertain parameters can be modeled by fuzzy set theory and Bayesian inference, which have been proved useful for deteriorating systems under small sample sizes. The maximum entropy approach can be used to calculate the maximum entropy density function of the uncertain parameters more accurately, since it does not need any additional information or assumptions. Finally, two optimization models are presented that can be used to determine the lower and upper bounds of a system's probability of failure under vague environmental conditions. Two numerical examples are investigated to demonstrate the proposed method.
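A minimal, self-contained illustration of the maximum entropy principle invoked above: among all distributions on {1, ..., 6} with a prescribed mean, the entropy maximiser has the exponential-family form p_i ∝ exp(λi), with λ fixed by the mean constraint. This is Jaynes' classic loaded-die example, not the paper's reliability model.

```python
import math

def maxent_die(mean, lo=1, hi=6, tol=1e-10):
    """Maximum-entropy distribution on {lo..hi} subject to a mean
    constraint: p_i proportional to exp(lam * i), with lam found by
    bisection on the (monotone) mean-of-lambda function."""
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(lo, hi + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(lo, hi + 1), w)) / z

    a, b = -10.0, 10.0
    while b - a > tol:
        mid = 0.5 * (a + b)
        if mean_for(mid) < mean:
            a = mid
        else:
            b = mid
    lam = 0.5 * (a + b)
    w = [math.exp(lam * i) for i in range(lo, hi + 1)]
    z = sum(w)
    return [wi / z for wi in w]

p = maxent_die(4.5)  # die constrained to have mean 4.5
print([round(pi, 3) for pi in p])
```

The resulting probabilities increase geometrically toward 6, the least-committal distribution consistent with the single mean constraint.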
Are Flow Injection-based Approaches Suitable for Automated Handling of Solid Samples?
DEFF Research Database (Denmark)
Miró, Manuel; Hansen, Elo Harald; Cerdà, Victor
Flow-based approaches were originally conceived for liquid-phase analysis, implying that constituents in solid samples generally had to be transferred into the liquid state, via appropriate batch pretreatment procedures, prior to analysis. Yet, in recent years, much effort has been focused...... electrolytic or aqueous leaching, on-line dialysis/microdialysis, in-line filtration, and pervaporation-based procedures have been successfully implemented in continuous flow/flow injection systems. In this communication, the new generation of flow analysis, including sequential injection, multicommutated flow.......g., soils, sediments, sludges), and thus, ascertaining the potential mobility, bioavailability and eventual impact of anthropogenic elements on biota [2]. In this context, the principles of sequential injection-microcolumn extraction (SI-MCE) for dynamic fractionation are explained in detail along...
Ethics and law in research with human biological samples: a new approach.
Petrini, Carlo
2014-01-01
During the last century a large number of documents (regulations, ethical codes, treatises, declarations, conventions) were published on the subject of ethics and clinical trials, many of them focusing on the protection of research participants. More recently various proposals have been put forward to relax some of the constraints imposed on research by these documents and regulations. It is important to distinguish between risks deriving from direct interventions on human subjects and other types of risk. In Italy the Data Protection Authority has acted in the question of research using previously collected health data and biological samples to simplify the procedures regarding informed consent. The new approach may be of help to other researchers working outside Italy.
van Leth, Frank; den Heijer, Casper; Beerepoot, Mariëlle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance
2017-04-01
Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates from surveys of community-acquired urinary tract infection in women, by assessing operating curves, sensitivity and specificity. Sensitivity and specificity of any set of LQAS parameters were above 99% and between 79 and 90%, respectively. Operating curves showed high concordance of the LQAS classification with true AMR prevalence estimates. LQAS-based AMR surveillance is a feasible approach that provides timely and locally relevant estimates, and the necessary information to formulate and evaluate guidelines for empirical treatment.
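The LQAS decision rule is simply a threshold on a small binomial sample, and its operating curve follows directly from the binomial tail. The (n, d) values below are hypothetical, not the parameter sets evaluated in the study.

```python
from math import comb

def lqas_classify(resistant, n, d):
    """LQAS decision rule: classify AMR prevalence as 'high' when at
    least d of the n sampled isolates are resistant."""
    return "high" if resistant >= d else "low"

def prob_classified_high(p, n, d):
    """Operating-curve point: probability that a lot with true resistance
    prevalence p is classified 'high' under the (n, d) rule."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

# Hypothetical rule: n = 30 isolates, decision threshold d = 6.
for p in (0.05, 0.10, 0.20, 0.40):
    print(p, round(prob_classified_high(p, 30, 6), 3))
```

Tabulating this probability over a grid of true prevalences is exactly how the operating curves, sensitivity, and specificity mentioned in the abstract are assessed.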
A Bayesian approach to assess data from radionuclide activity analyses in environmental samples
International Nuclear Information System (INIS)
Barrera, Manuel; Lourdes Romero, M.; Nunez-Lagos, Rafael; Bernardo, Jose M.
2007-01-01
A Bayesian statistical approach is introduced to assess experimental data from analyses of radionuclide activity concentration in environmental samples (low activities). A theoretical model has been developed that allows the use of known prior information about the value of the measurand (activity), together with the experimental value determined through the measurement. The model has been applied to data of the Inter-laboratory Proficiency Test organised periodically among the Spanish environmental radioactivity laboratories that produce the radiochemical results for the Spanish radioactive monitoring network. A global improvement in laboratory performance is produced when this prior information is taken into account. The prior information used in this methodology is an interval within which the activity is known to be contained, but the approach could be extended to any other experimental quantity with a different type of prior information available.
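One way to read the approach described above: with a uniform prior on a known interval, the posterior is the Gaussian measurement likelihood truncated to that interval. The sketch below uses made-up numbers and a crude grid integration; the paper's model is more elaborate.

```python
import math

def posterior_mean_interval_prior(y, sigma, a, b, n_grid=10001):
    """Posterior mean of the measurand given a measurement y with
    Gaussian uncertainty sigma and a uniform prior on [a, b]: the
    posterior is the likelihood truncated to [a, b] (grid integration)."""
    xs = [a + (b - a) * i / (n_grid - 1) for i in range(n_grid)]
    w = [math.exp(-0.5 * ((x - y) / sigma) ** 2) for x in xs]
    return sum(x * wi for x, wi in zip(xs, w)) / sum(w)

# A measurement falling below the physically known interval [1, 3] is
# pulled back inside it by the prior (all values here are made up).
m = posterior_mean_interval_prior(y=0.8, sigma=0.5, a=1.0, b=3.0)
print(round(m, 3))
```

This is the qualitative effect the abstract describes: implausible low-activity results are regularised toward the interval the activity is known to lie in.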
Castellano, Sergio; Cermelli, Paolo
2011-04-07
Mate choice depends on mating preferences and on the manner in which mate-quality information is acquired and used to make decisions. We present a model that describes how these two components of mating decision interact with each other during a comparative evaluation of prospective mates. The model, with its well-explored precedents in psychology and neurophysiology, assumes that decisions are made by the integration over time of noisy information until a stopping-rule criterion is reached. Due to this informational approach, the model builds a coherent theoretical framework for developing an integrated view of functions and mechanisms of mating decisions. From a functional point of view, the model allows us to investigate speed-accuracy tradeoffs in mating decision at both population and individual levels. It shows that, under strong time constraints, decision makers are expected to make fast and frugal decisions and to optimally trade off population-sampling accuracy (i.e. the number of sampled males) against individual-assessment accuracy (i.e. the time spent for evaluating each mate). From the proximate-mechanism point of view, the model makes testable predictions on the interactions of mating preferences and choosiness in different contexts and it might be of compelling empirical utility for a context-independent description of mating preference strength. Copyright © 2011 Elsevier Ltd. All rights reserved.
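The integration-to-threshold mechanism assumed by the model can be sketched as a one-dimensional random walk: noisy evidence accumulates until it crosses a ±threshold stopping criterion. The drift, noise, and threshold values below are arbitrary illustrations of the speed-accuracy trade-off, not fitted parameters.

```python
import random

def sequential_decision(drift, threshold, noise=1.0, max_steps=10000, rng=random):
    """Integrate noisy quality evidence over time until a stopping
    criterion (+/- threshold) is reached; returns (choice, decision_time).
    A higher threshold buys accuracy at the cost of a longer evaluation."""
    evidence = 0.0
    for t in range(1, max_steps + 1):
        evidence += drift + rng.gauss(0.0, noise)
        if abs(evidence) >= threshold:
            return (1 if evidence > 0 else -1), t
    return 0, max_steps  # no decision within the time budget

rng = random.Random(42)
choices = [sequential_decision(0.3, 5.0, rng=rng) for _ in range(200)]
accuracy = sum(1 for c, _ in choices if c == 1) / len(choices)
mean_time = sum(t for _, t in choices) / len(choices)
print(accuracy, mean_time)
```

Lowering the threshold here shortens the mean decision time but lets more trials terminate on the wrong boundary, which is the individual-assessment side of the trade-off the model analyses.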
A novel four-dimensional analytical approach for analysis of complex samples.
Stephan, Susanne; Jakob, Cornelia; Hippler, Jörg; Schmitz, Oliver J
2016-05-01
A two-dimensional LC (2D-LC) method, based on the work of Erni and Frei in 1978, was developed and coupled to an ion mobility-high-resolution mass spectrometer (IM-MS), which enabled the separation of complex samples in four dimensions (2D-LC, ion mobility spectrometry (IMS), and mass spectrometry (MS)). This approach works as a continuous multiheart-cutting LC system, using a long modulation time of 4 min, which allows the complete transfer of most of the first-dimension peaks to the second-dimension column without fractionation, in comparison to comprehensive two-dimensional liquid chromatography. Hence, each compound delivers only one peak in the second dimension, which simplifies the data handling even when ion mobility spectrometry as a third and mass spectrometry as a fourth dimension are introduced. The analysis of a plant extract from Ginkgo biloba shows the separation power of this four-dimensional separation method, with a calculated total peak capacity of more than 8700. Furthermore, the advantage of ion mobility for characterizing unknown compounds by their collision cross section (CCS) and accurate mass in a non-target approach is shown for different matrices like plant extracts and coffee. Graphical abstract: Principle of the four-dimensional separation.
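For an ideally orthogonal multidimensional separation, total peak capacity is commonly estimated as the product of the per-dimension capacities. The per-dimension values below are assumptions chosen only to show how a figure "of more than 8700" could arise; the paper's actual values are not given in the abstract.

```python
# Hypothetical per-dimension peak capacities for the 2D-LC x IMS stack;
# the ideal-orthogonality estimate of total capacity is their product.
n_lc1, n_lc2, n_ims = 25, 50, 7
total = n_lc1 * n_lc2 * n_ims
print(total)  # 8750, i.e. "more than 8700" as quoted above
```

In practice, correlation between dimensions lowers the effective capacity below this product, which is why such figures are upper-bound estimates.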
A non-iterative sampling approach using noise subspace projection for EIT
International Nuclear Information System (INIS)
Bellis, Cédric; Constantinescu, Andrei; Coquet, Thomas; Jaravel, Thomas; Lechleiter, Armin
2012-01-01
This study concerns the problem of the reconstruction of inclusions embedded in a conductive medium in the context of electrical impedance tomography (EIT), which is investigated within the framework of a non-iterative sampling approach. This type of identification strategy relies on the construction of a special indicator function that takes, roughly speaking, small values outside the inclusion and large values inside. Such a function is constructed in this paper from the projection of a fundamental singular solution onto the space spanned by the singular vectors associated with some of the smallest singular values of the data-to-measurement operator. The behavior of the novel indicator function is analyzed. For a subsequent implementation in a discrete setting, the quality of classical finite-dimensional approximations of the measurement operator is discussed. The robustness of this approach is also analyzed when only noisy spectral information is available. Finally, this identification method is implemented numerically and experimentally, and its efficiency is discussed on a set of, partly experimental, examples. (paper)
Kazem Alavipanah, Seyed
There are some problems in soil salinity studies based upon remotely sensed data: 1) the spectral domain is full of ambiguity, so soil reflectance cannot be attributed to a single soil property such as salinity; 2) soil surface conditions, varying in time and space, are a complex phenomenon; 3) vegetation, with its dynamic biological nature, may create some problems in the study of soil salinity. Given these problems, the first question that arises is how to overcome or minimise them. In this study we hypothesised that different sources of data, a well-established sampling plan and an optimum approach could be useful. In order to choose representative training sites in the Iranian playa margins, to define the spectral and informational classes and to overcome some problems encountered in the variation within the field, the following attempts were made: 1) principal component analysis (PCA) in order a) to determine the most important variables and b) to understand the Landsat satellite images and the most informative components; 2) photomorphic unit (PMU) consideration and interpretation; 3) study of salt accumulation and salt distribution in the soil profile; 4) use of several forms of field data, such as geologic, geomorphologic and soil information; 5) confirmation of field data and land cover types with farmers and the members of the team. The results led us to suitable approaches with a high and acceptable image classification accuracy and image interpretation. KEY WORDS: Photomorphic Unit, Principal Component Analysis, Soil Salinity, Field Work, Remote Sensing
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
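The weighted binary matrix sampling (WBMS) step and the shrinking rule can be sketched as follows; the scoring function here is a toy stand-in for the cross-validated calibration error VISSA would actually use, and all parameter values are illustrative.

```python
import random

def weighted_binary_matrix(n_models, n_vars, weights, rng):
    """Weighted binary matrix sampling (WBMS): row i is a sub-model in
    which variable j is included with probability weights[j]."""
    return [[1 if rng.random() < weights[j] else 0 for j in range(n_vars)]
            for _ in range(n_models)]

def update_weights(matrix, scores, top_frac=0.2):
    """Shrink the variable space: recompute each variable's inclusion
    weight as its frequency among the best-scoring (lowest-score)
    sub-models."""
    ranked = sorted(range(len(scores)), key=lambda i: scores[i])
    top = ranked[:max(1, int(top_frac * len(ranked)))]
    n_vars = len(matrix[0])
    return [sum(matrix[i][j] for i in top) / len(top) for j in range(n_vars)]

rng = random.Random(0)
weights = [0.5] * 10  # start with every variable included at probability 0.5
models = weighted_binary_matrix(100, 10, weights, rng)
# Toy score: pretend only variables 0-2 are informative (lower is better).
scores = [-sum(row[:3]) + 0.1 * sum(row[3:]) for row in models]
weights = update_weights(models, scores)
print(weights)
```

Iterating these two steps realises the two rules highlighted in the abstract: the variable space shrinks at each step, and inclusion weights drift toward the variables that the better sub-models keep.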
A Quantitative Proteomics Approach to Clinical Research with Non-Traditional Samples.
Licier, Rígel; Miranda, Eric; Serrano, Horacio
2016-10-17
The proper handling of samples to be analyzed by mass spectrometry (MS) can guarantee excellent results and a greater depth of analysis when working in quantitative proteomics. This is critical when trying to assess non-traditional sources such as ear wax, saliva, vitreous humor, aqueous humor, tears, nipple aspirate fluid, breast milk/colostrum, cervical-vaginal fluid, nasal secretions, bronco-alveolar lavage fluid, and stools. We intend to provide the investigator with relevant aspects of quantitative proteomics and to recognize the most recent clinical research work conducted with atypical samples and analyzed by quantitative proteomics. Having as reference the most recent and different approaches used with non-traditional sources allows us to compare new strategies in the development of novel experimental models. On the other hand, these references help us to contribute significantly to the understanding of the proportions of proteins in different proteomes of clinical interest and may lead to potential advances in the emerging field of precision medicine.
Colletes, T C; Garcia, P T; Campanha, R B; Abdelnur, P V; Romão, W; Coltro, W K T; Vaz, B G
2016-03-07
The analytical performance for paper spray (PS) using a new insert sample approach based on paper with paraffin barriers (PS-PB) is presented. The paraffin barrier is made using a simple, fast and cheap method based on the stamping of paraffin onto a paper surface. Typical operating conditions of paper spray, such as the solvent volume applied on the paper surface and the paper substrate type, are evaluated. A paper substrate with paraffin barriers shows better performance on analysis of a range of typical analytes when compared to conventional PS-MS using normal paper (PS-NP) and PS-MS using paper with two rounded corners (PS-RC). PS-PB was applied to detect sugars and their inhibitors in sugarcane bagasse liquors from a second-generation ethanol process. Moreover, PS-PB proved excellent for the quantification of glucose in hydrolysis liquors, with good linearity (R(2) = 0.99) and limits of detection (2.77 mmol L(-1)) and quantification (9.27 mmol L(-1)). The results are better than for PS-NP and PS-RC. PS-PB also performed well when compared with the HPLC-UV method for glucose quantification in hydrolysis liquor samples.
Methodologies for the Extraction of Phenolic Compounds from Environmental Samples: New Approaches
Directory of Open Access Journals (Sweden)
Cristina Mahugo Santana
2009-01-01
Full Text Available Phenolic derivatives are among the most important contaminants present in the environment. These compounds are used in several industrial processes to manufacture chemicals such as pesticides, explosives, drugs and dyes. They also are used in the bleaching process of paper manufacturing. Apart from these sources, phenolic compounds have substantial applications in agriculture as herbicides, insecticides and fungicides. However, phenolic compounds are not only generated by human activity, but they are also formed naturally, e.g., during the decomposition of leaves or wood. As a result of these applications, they are found in soils and sediments and this often leads to wastewater and ground water contamination. Owing to their high toxicity and persistence in the environment, both the US Environmental Protection Agency (EPA) and the European Union have included some of them in their lists of priority pollutants. Current standard methods of phenolic compound analysis in water samples are based on liquid-liquid extraction (LLE), while Soxhlet extraction is the most used technique for isolating phenols from solid matrices. However, these techniques require extensive cleanup procedures that are time-intensive and involve expensive and hazardous organic solvents, which are undesirable for health and disposal reasons. In recent years, the use of new methodologies such as solid-phase extraction (SPE) and solid-phase microextraction (SPME) has increased for the extraction of phenolic compounds from liquid samples. In the case of solid samples, microwave-assisted extraction (MAE) has been demonstrated to be an efficient technique for the extraction of these compounds. In this work we review the methods developed for the extraction and determination of phenolic derivatives in different types of environmental matrices such as water, sediments and soils. Moreover, we present the new approach of using micellar media coupled with the SPME process for the
Saccone, Gabriele; Caissutti, Claudia; Khalifeh, Adeeb; Meltzer, Sara; Scifres, Christina; Simhan, Hyagriv N; Kelekci, Sefa; Sevket, Osman; Berghella, Vincenzo
2017-12-03
To compare both the prevalence of gestational diabetes mellitus (GDM) and maternal and neonatal outcomes by either the one-step or the two-step approach. Electronic databases were searched from their inception until June 2017. We included all randomized controlled trials (RCTs) comparing the one-step with the two-step approach for the screening and diagnosis of GDM. The primary outcome was the incidence of GDM. Three RCTs (n = 2333 participants) were included in the meta-analysis. 910 were randomized to the one-step approach (75 g, 2 hrs), and 1423 to the two-step approach. No significant difference in the incidence of GDM was found comparing the one-step versus the two-step approach (8.4 versus 4.3%; relative risk (RR) 1.64, 95%CI 0.77-3.48). Women screened with the one-step approach had a significantly lower risk of preterm birth (PTB) (3.7 versus 7.6%; RR 0.49, 95%CI 0.27-0.88), cesarean delivery (16.3 versus 22.0%; RR 0.74, 95%CI 0.56-0.99), macrosomia (2.9 versus 6.9%; RR 0.43, 95%CI 0.22-0.82), neonatal hypoglycemia (1.7 versus 4.5%; RR 0.38, 95%CI 0.16-0.90), and admission to the neonatal intensive care unit (NICU) (4.4 versus 9.0%; RR 0.49, 95%CI 0.29-0.84), compared to those randomized to screening with the two-step approach. The one-step and two-step approaches were not associated with a significant difference in the incidence of GDM. However, the one-step approach was associated with better maternal and perinatal outcomes.
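The relative risks quoted above follow the standard log-RR normal approximation. The sketch below shows the computation with hypothetical single-trial counts; the pooled meta-analytic estimates in the abstract additionally require inverse-variance weighting across trials.

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk with a 95% CI via the log-RR normal approximation,
    where a/n1 = events/total in group 1 and b/n2 in group 2."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical counts (not the trial data above): 15/400 vs 30/400 events.
rr, lo, hi = relative_risk(15, 400, 30, 400)
print(round(rr, 2), round(lo, 2), round(hi, 2))
```

A confidence interval excluding 1.0, as here, is what the abstract means by a "significantly lower risk"; the GDM-incidence interval (0.77-3.48) straddles 1.0 and is therefore non-significant.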
Goenka, Ajit H; Remer, Erick M; Veniero, Joseph C; Thupili, Chakradhar R; Klein, Eric A
2015-09-01
The objective of our study was to review our experience with CT-guided transgluteal prostate biopsy in patients without rectal access. Twenty-one CT-guided transgluteal prostate biopsy procedures were performed in 16 men (mean age, 68 years; age range, 60-78 years) who were under conscious sedation. The mean prostate-specific antigen (PSA) value was 11.4 ng/mL (range, 2.3-39.4 ng/mL). Six had seven prior unsuccessful transperineal or transurethral biopsies. Biopsy results, complications, sedation time, and radiation dose were recorded. The mean PSA values and number of core specimens were compared between patients with malignant results and patients with nonmalignant results using the Student t test. The average procedural sedation time was 50.6 minutes (range, 15-90 minutes) (n = 20), and the mean effective radiation dose was 8.2 mSv (median, 6.6 mSv; range 3.6-19.3 mSv) (n = 13). Twenty of the 21 (95%) procedures were technically successful. The only complication was a single episode of gross hematuria and penile pain in one patient, which resolved spontaneously. Of 20 successful biopsies, 8 (40%) yielded adenocarcinoma (Gleason score: mean, 8; range, 7-9). Twelve biopsies yielded nonmalignant results (60%): high-grade prostatic intraepithelial neoplasia (n = 3) or benign prostatic tissue with or without inflammation (n = 9). Three patients had carcinoma diagnosed on subsequent biopsies (second biopsy, n = 2 patients; third biopsy, n = 1 patient). A malignant biopsy result was not significantly associated with the number of core specimens (p = 0.3) or the mean PSA value (p = 0.1). CT-guided transgluteal prostate biopsy is a safe and reliable technique for the systematic random sampling of the prostate in patients without a rectal access. In patients with initial negative biopsy results, repeat biopsy should be considered if there is a persistent rise in the PSA value.
Axley, Page; Kodali, Sudha; Kuo, Yong-Fang; Ravi, Sujan; Seay, Toni; Parikh, Nina M; Singal, Ashwani K
2018-05-01
Nonalcoholic fatty liver disease (NAFLD) is emerging as the most common liver disease. The only effective treatment is 7%-10% weight loss. Mobile technology is increasingly used in weight management. This study was performed to evaluate the effects of a text messaging intervention on weight loss in patients with NAFLD. Thirty well-defined NAFLD patients (mean age 52 years, 67% females, mean BMI 38) were randomized 1:1 to a control group: counselling on healthy diet and exercise, or an intervention group: text messages in addition to healthy lifestyle counselling. The NAFLD text messaging program sent weekly messages for 22 weeks on healthy lifestyle education. The primary outcome was change in weight. Secondary outcomes were changes in liver enzymes and lipid profile. The intervention group lost an average of 6.9 lbs. (P = .03) compared to a gain of 1.8 lbs. in the control group (P = .45). The intervention group also showed a decrease in ALT level (-12.5 IU/L, P = .035) and improvement in serum triglycerides (-28 mg/dL, P = .048). There were no changes in the control group in serum ALT level (-6.1 IU/L, P = .46) or serum triglycerides (-20.3 mg/dL, P = .27). Using one-way analysis of variance, the change in outcomes in the intervention group compared to the control group was significant for weight (P = .02) and BMI (P = .02). Text messaging on healthy lifestyle is associated with reduction in weight in NAFLD patients. Larger studies are suggested to examine benefits on liver histology and to assess the long-term impact of this approach in patients with NAFLD. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Adaptive importance sampling for probabilistic validation of advanced driver assistance systems
Gietelink, O.J.; Schutter, B. de; Verhaegen, M.
2006-01-01
We present an approach for validation of advanced driver assistance systems, based on randomized algorithms. The new method consists of an iterative randomized simulation using adaptive importance sampling. The randomized algorithm is more efficient than conventional simulation techniques. The
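A minimal sketch of adaptive importance sampling for a rare-event probability (here the N(0,1) tail beyond 4, not a driver-assistance scenario): a pilot run adapts the proposal mean toward the failure region, cross-entropy style, before the final weighted estimate.

```python
import math
import random

def normal_tail_is(threshold, shift, n, rng):
    """Importance-sampling estimate of P(X > threshold) for X ~ N(0, 1),
    drawing from the shifted proposal N(shift, 1) and reweighting each
    exceedance by the likelihood ratio phi(x) / phi(x - shift)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            total += math.exp(shift * shift / 2.0 - shift * x)
    return total / n

rng = random.Random(1)

# Stage 1 (pilot): sample a deliberately wide proposal N(3, 1) and use the
# weighted mean of the exceedances to adapt the proposal mean toward the
# failure region (a one-step cross-entropy update).
pilot = [rng.gauss(3.0, 1.0) for _ in range(5000)]
w = [math.exp(4.5 - 3.0 * x) if x > 4.0 else 0.0 for x in pilot]
shift = sum(wi * x for wi, x in zip(w, pilot)) / sum(w)

# Stage 2: final estimate with the adapted proposal.
est = normal_tail_is(4.0, shift, 50000, rng)
print(shift, est)
```

Plain Monte Carlo would need millions of draws to see even a handful of such events; concentrating the proposal on the failure region is what makes the randomized validation runs tractable.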
Berton, Paula; Lana, Nerina B; Ríos, Juan M; García-Reyes, Juan F; Altamirano, Jorgelina C
2016-01-28
Green chemistry principles for developing methodologies have gained attention in analytical chemistry in recent decades. A growing number of analytical techniques have been proposed for determination of persistent organic pollutants in environmental and biological samples. In this light, the current review aims to present state-of-the-art sample preparation approaches based on green analytical principles proposed for the determination of polybrominated diphenyl ethers (PBDEs) and metabolites (OH-PBDEs and MeO-PBDEs) in environmental and biological samples. Approaches to lower the solvent consumption and accelerate the extraction, such as pressurized liquid extraction, microwave-assisted extraction, and ultrasound-assisted extraction, are discussed in this review. Special attention is paid to miniaturized sample preparation methodologies and strategies proposed to reduce organic solvent consumption. Additionally, extraction techniques based on alternative solvents (surfactants, supercritical fluids, or ionic liquids) are also discussed in this work, even though these are scarcely used for determination of PBDEs. In addition to liquid-based extraction techniques, solid-based analytical techniques are also addressed. The development of greener, faster and simpler sample preparation approaches has increased in recent years (2003-2013). Among green extraction techniques, those based on the liquid phase predominate over those based on the solid phase (71% vs. 29%, respectively). For solid samples, solvent assisted extraction techniques are preferred for leaching of PBDEs, and liquid phase microextraction techniques are mostly used for liquid samples. Likewise, green characteristics of the instrumental analysis used after the extraction and clean-up steps are briefly discussed. Copyright © 2015 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Makepeace, C.E.
1981-01-01
Sampling strategies for the monitoring of deleterious agents present in uranium mine air in underground and surface mining areas are described. These methods are designed to prevent overexposure of the lining of the respiratory system of uranium miners to ionizing radiation from radon and radon daughters, and whole body overexposure to external gamma radiation. A detailed description is provided of stratified random sampling monitoring methodology for obtaining baseline data to be used as a reference for subsequent compliance assessment
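The stratified random sampling used to build such baseline data can be sketched as follows. This is a minimal illustration: the two strata, their sizes, and the radon-level numbers are hypothetical, not taken from the report.

```python
import random

def stratified_estimate(strata, n_per_stratum, seed=11):
    """Stratified random sampling: draw n_h units at random from each
    stratum h and combine the stratum sample means weighted by N_h / N."""
    rng = random.Random(seed)
    N = sum(len(units) for units in strata.values())
    estimate = 0.0
    for units in strata.values():
        sample = rng.sample(units, n_per_stratum)
        estimate += (len(units) / N) * (sum(sample) / n_per_stratum)
    return estimate

# Hypothetical radon readings (working levels) by mining area; the numbers
# are invented purely to illustrate the estimator, not taken from the report.
areas = {
    "surface":     [0.10 + 0.001 * i for i in range(200)],  # many low readings
    "underground": [0.80 + 0.010 * i for i in range(50)],   # fewer, higher
}
baseline = stratified_estimate(areas, n_per_stratum=10)
```

Weighting each stratum mean by its share of the population keeps the baseline estimate unbiased even though underground and surface areas are sampled at different rates.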
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
DEFF Research Database (Denmark)
Ruban, Andrei; Simak, S.I.; Shallcross, S.
2003-01-01
We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys as well as the dilute limit of Au-rich CuAu alloys shows that the model yields a quantitatively accurate description of the relaxation energies in these systems. Finally, we discuss the bond length distribution in random alloys.
Simuta-Champo, R.; Herrera-Zamarrón, G. S.
2010-01-01
The Monte Carlo technique provides a natural method for evaluating uncertainties. The uncertainty is represented by a probability distribution or by related quantities such as statistical moments. When the groundwater flow and transport governing equations are solved and the hydraulic conductivity field is treated as a random spatial function, the hydraulic head, velocities and concentrations also become random spatial functions. When that is the case, for the stochastic simulation of groundw...
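The Monte Carlo workflow described, in which a random hydraulic conductivity induces random heads, can be reduced to a one-dimensional sketch. The lognormal parameters and the Darcy-column relation below are illustrative assumptions, not the study's groundwater model.

```python
import math
import random
import statistics

def head_drop(K, q=1e-4, L=100.0):
    """Steady 1-D Darcy flow: head loss (m) across a column of length L (m)
    carrying specific discharge q (m/s) through conductivity K (m/s)."""
    return q * L / K

def monte_carlo_heads(n=5000, seed=7):
    """Propagate a random conductivity (here a single lognormal K, a common
    but illustrative assumption) through the flow relation and summarize the
    resulting head-drop distribution by its first two moments."""
    rng = random.Random(seed)
    drops = [head_drop(math.exp(rng.gauss(math.log(1e-4), 0.5)))
             for _ in range(n)]
    return statistics.mean(drops), statistics.stdev(drops)

mean_dh, sd_dh = monte_carlo_heads()
```

Because the conductivity is a random quantity, the head drop is one too; the sample mean and standard deviation are exactly the "related quantities such as statistical moments" the abstract mentions.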
Nourbakhsh, Mohammad Reza; Fearon, Frank J.
Objective: To investigate the effect of noxious level electrical stimulation on pain, grip strength and functional abilities in subjects with chronic lateral epicondylitis. Design: Randomized, placebo-controlled, double-blinded study. Setting: Physical Therapy Department, North Georgia College and
Directory of Open Access Journals (Sweden)
Ming-Hung Chien
Full Text Available Falls are common in older people and may lead to functional decline, disability, and death. Many risk factors have been identified, but studies evaluating effects of nutritional status are limited. To determine whether nutritional status is a predictor of falls in older people living in the community, we analyzed data collected through the Survey of Health and Living Status of the Elderly in Taiwan (SHLSET). SHLSET includes a series of interview surveys conducted by the government on a random sample of people living in community dwellings in the nation. We included participants who received nutritional status assessment using the Mini Nutritional Assessment Taiwan Version 2 (MNA-T2) in the 1999 survey when they were 53 years or older and followed up on the cumulative incidence of falls in the one-year period before the interview in the 2003 survey. At the beginning of follow-up, the 4440 participants had a mean age of 69.5 (standard deviation = 9.1) years, and 467 participants were "not well-nourished," which was defined as having an MNA-T2 score of 23 or less. In the one-year study period, 659 participants reported having at least one fall. After adjusting for other risk factors, we found the associated odds ratio for falls was 1.73 (95% confidence interval, 1.23, 2.42) for "not well-nourished," 1.57 (1.30, 1.90) for female gender, 1.03 (1.02, 1.04) for each additional year of age, 1.55 (1.22, 1.98) for history of falls, 1.34 (1.05, 1.72) for hospital stay during the past 12 months, 1.66 (1.07, 2.58) for difficulties in activities of daily living, and 1.53 (1.23, 1.91) for difficulties in instrumental activities of daily living. Nutritional status is an independent predictor of falls in older people living in the community. Further studies are warranted to identify nutritional interventions that can help prevent falls in the elderly.
Subtraction of random coincidences in γ-ray spectroscopy: A new approach
International Nuclear Information System (INIS)
Pattabiraman, N.S.; Ghugre, S.S.; Basu, S.K.; Garg, U.; Ray, S.; Sinha, A.K.; Zhu, S.
2006-01-01
A new analytical method for estimation and subsequent subtraction of random coincidences has been developed. It utilizes the knowledge of the counts in the main diagonal of a background-subtracted symmetric data set for the estimation of the events originating from random coincidences. This procedure has been successfully applied to several data sets. It could be a valuable tool for low-fold data sets, especially for low-cross-section events
Chandrasekar, A; Rakkiyappan, R; Cao, Jinde
2015-10-01
This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time-delay. The considered impulsive effects can be synchronized at partly unknown transition probabilities. Besides, a multiple integral approach is also proposed to strengthen the results for Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then made solvable as a set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
Apollo Lunar Sample Integration into Google Moon: A New Approach to Digitization
Dawson, Melissa D.; Todd, Nancy S.; Lofgren, Gary E.
2011-01-01
The Google Moon Apollo Lunar Sample Data Integration project is part of a larger, LASER-funded 4-year lunar rock photo restoration project by NASA's Acquisition and Curation Office [1]. The objective of this project is to enhance the Apollo mission data already available on Google Moon with information about the lunar samples collected during the Apollo missions. To this end, we have combined rock sample data from various sources, including Curation databases, mission documentation and lunar sample catalogs, with newly available digital photography of rock samples to create a user-friendly, interactive tool for learning about the Apollo Moon samples
Hsieh, Yu-Wei; Wu, Ching-Yi; Wang, Wei-En; Lin, Keh-Chung; Chang, Ku-Chou; Chen, Chih-Chi; Liu, Chien-Ting
2017-02-01
To investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke. A randomized controlled trial. Occupational therapy clinics in medical centers. Thirty-one subacute stroke patients were recruited. Participants were randomly assigned to receive bilateral priming combined with the task-oriented approach (i.e., primed group) or the task-oriented approach alone (i.e., unprimed group) for 90 minutes/day, 5 days/week for 4 weeks. The primed group began with the bilateral priming technique by using a bimanual robot-aided device. Motor impairments were assessed by the Fugl-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale. The primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale (p = 0.012) and a trend for greater improvement on the modified Rankin Scale (p = 0.065) than the unprimed group. Bilateral priming combined with the task-oriented approach elicited more improvements in self-reported strength and disability degrees than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.
Directory of Open Access Journals (Sweden)
Serge Clotaire Billong
2016-11-01
Full Text Available Abstract Background Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, tracking patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we thus aimed to ascertain the national performance of, and determinants in, retention on ART at 12 months. Methods Using systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon, within the “reporting period” of October 2013–November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (85–75%), poor (<75%); and factors with p-value < 0.01 were considered statistically significant. Results The majority (74.4%) of patients were in urban settings, and 50.9% were managed in reference treatment centres. Nationwide, retention on ART at 12 months was 60.4% (2023/3349); only six sites and one region achieved acceptable performances. Retention performance varied between reference treatment centres (54.2%) and management units (66.8%), p < 0.0001; male (57.1%) vs. female (62.0%) patients, p = 0.007; and WHO clinical stage I (63.3%) vs. other stages (55.6%), p = 0.007; but neither for age (adults [60.3%] vs. children [58.8%], p = 0.730) nor for immune status (CD4 351–500 [65.9%] vs. other CD4 staging [59.86%], p = 0.077). Conclusions Poor retention in care, within 12 months of ART initiation, urges active search for patients lost to follow-up, targeting preferentially male and symptomatic patients, especially within reference ART clinics. Such a sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention perspectives.
Liu, Xiaofeng
2003-01-01
This article considers optimal sample allocation between the treatment and control condition in multilevel designs when the costs per sampling unit vary due to treatment assignment. Optimal unequal allocation may reduce the cost from that of a balanced design without sacrificing any power. The optimum sample allocation ratio depends only on the…
Meyer, M. Renée Umstattd; Wu, Cindy; Walsh, Shana M.
2016-01-01
Time spent sitting has been associated with an increased risk of diabetes, cancer, obesity, and mental health impairments. However, 75% of Americans spend most of their days sitting, with work-sitting accounting for 63% of total daily sitting time. Little research examining theory-based antecedents of standing or sitting has been conducted. This lack of solid groundwork makes it difficult to design effective intervention strategies to decrease sitting behaviors. Using the Theory of Planned Behavior (TPB) as our theoretical lens to better understand factors related with beneficial standing behaviors already being practiced, we examined relationships between TPB constructs and time spent standing at work among “positive deviants” (those successful in behavior change). Experience sampling methodology (ESM), 4 times a day (midmorning, before lunch, afternoon, and before leaving work) for 5 consecutive workdays (Monday to Friday), was used to assess employees' standing time. TPB scales assessing attitude (α = 0.81–0.84), norms (α = 0.83), perceived behavioral control (α = 0.77), and intention (α = 0.78) were developed using recommended methods and collected once on the Friday before the ESM surveys started. ESM data are hierarchically nested, therefore we tested our hypotheses using multilevel structural equation modeling with Mplus. Hourly full-time university employees (n = 50; 70.6% female, 84.3% white, mean age = 44 (SD = 11), 88.2% in full-time staff positions) with sedentary occupation types (time at desk while working ≥6 hours/day) participated. A total of 871 daily surveys were completed. Only perceived behavioral control (β = 0.45, p < 0.05) was related with work-standing at the event level; mediation through intention was not supported. Findings suggest a positive deviance approach to enhance perceived behavioral control, in addition to implementing environmental changes like installing standing desks. PMID:29546189
Random walks on a fluctuating lattice: A renormalization group approach applied in one dimension
International Nuclear Information System (INIS)
Levermore, C.D.; Nadler, W.; Stein, D.L.
1995-01-01
We study the problem of a random walk on a lattice in which bonds connecting nearest-neighbor sites open and close randomly in time, a situation often encountered in fluctuating media. We present a simple renormalization group technique to solve for the effective diffusive behavior at long times. For one-dimensional lattices we obtain better quantitative agreement with simulation data than earlier effective medium results. Our technique works in principle in any dimension, although the amount of computation required rises with the dimensionality of the lattice
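In the fast-fluctuation (annealed) limit the model can be simulated directly, which gives a useful baseline against which effective-medium and renormalization group predictions can be checked. The bond-opening probability and run lengths below are arbitrary choices for illustration.

```python
import random

def msd_ratio(p_open=0.6, n_walkers=2000, n_steps=200, seed=3):
    """Random walk on a 1-D lattice whose bonds open and close randomly:
    in the annealed limit each attempted +-1 step succeeds with probability
    p_open (the bond happens to be open) and is blocked otherwise."""
    rng = random.Random(seed)
    total_sq = 0
    for _ in range(n_walkers):
        x = 0
        for _ in range(n_steps):
            if rng.random() < p_open:   # bond currently open: step through it
                x += rng.choice((-1, 1))
        total_sq += x * x
    # <x^2>/t, which equals 2*D_eff and should approach p_open here
    return total_sq / (n_walkers * n_steps)

ratio = msd_ratio()
```

With bonds refreshed independently at every step, the mean squared displacement grows as p_open·t, so the simulated ratio should sit near 0.6. Slowly fluctuating or correlated bonds are the regime where simple effective-medium estimates degrade and the renormalization group treatment of the abstract is aimed.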
Randrianalisoa, Jaona; Haussener, Sophia; Baillis, Dominique; Lipiński, Wojciech
2017-11-01
Radiative heat transfer is analyzed in participating media consisting of long cylindrical fibers with a diameter in the limit of geometrical optics. The absorption and scattering coefficients and the scattering phase function of the medium are determined based on the discrete-level medium geometry and optical properties of individual fibers. The fibers are assumed to be randomly oriented and positioned inside the medium. Two approaches are employed: a volume-averaged two-intensity approach referred to as multi-RTE approach and a homogenized single-intensity approach referred to as the single-RTE approach. Both approaches require effective properties, determined using direct Monte Carlo ray tracing techniques. The macroscopic radiative transfer equations (for single intensity or two volume-averaged intensities) with the corresponding effective properties are solved using Monte Carlo techniques and allow for the determination of the radiative flux distribution as well as overall transmittance and reflectance of the medium. The results are compared against predictions by the direct Monte Carlo simulation on the exact morphology. The effects of fiber volume fraction and optical properties on the effective radiative properties and the overall slab radiative characteristics are investigated. The single-RTE approach gives accurate predictions for high porosity fibrous media (porosity about 95%). The multi-RTE approach is recommended for isotropic fibrous media with porosity in the range of 79-95%.
Guetterman, Timothy C.
2015-01-01
Although recommendations exist for determining qualitative sample sizes, the literature appears to contain few instances of research on the topic. Practical guidance is needed for determining sample sizes to conduct rigorous qualitative research, to develop proposals, and to budget resources. The purpose of this article is to describe qualitative sample size and sampling practices within published studies in education and the health sciences by research design: case study, ethnography, ground...
Talamo, Giampaolo; Mir Muhammad, A; Pandey, Manoj K; Zhu, Junjia; Creer, Michael H; Malysz, Jozef
2015-02-11
Measurement of daily proteinuria in patients with amyloidosis is recommended at the time of diagnosis for assessing renal involvement, and for monitoring disease activity. Renal involvement is usually defined by proteinuria >500 mg/day. We evaluated the accuracy of the random urine protein-to-creatinine ratio (Pr/Cr) in predicting 24 hour proteinuria in patients with amyloidosis. We compared results of the random urine Pr/Cr ratio and concomitant 24-hour urine collections in 44 patients with amyloidosis. We found a strong correlation (Spearman's ρ=0.874) between the Pr/Cr ratio and the 24 hour urine protein excretion. For predicting renal involvement, the optimal cut-off point of the Pr/Cr ratio was 715 mg/g. The sensitivity and specificity for this point were 91.8% and 95.5%, respectively, and the area under the curve value was 97.4%. We conclude that the random urine Pr/Cr ratio could be useful in the screening of renal involvement in patients with amyloidosis. If validated in a prospective study, the random urine Pr/Cr ratio could replace the 24 hour urine collection for the assessment of daily proteinuria and presence of nephrotic syndrome in patients with amyloidosis.
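The two statistics reported here, Spearman's ρ and an optimal cut-off balancing sensitivity and specificity, can be computed from scratch. The paired values below are synthetic, and Youden's J is one common cut-off criterion; the abstract does not state which criterion the authors used.

```python
def rank(values):
    """1-based average ranks, with ties sharing their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def youden_cutoff(scores, labels):
    """Cut-off maximizing Youden's J = sensitivity + specificity - 1."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_c, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp = sum(1 for s, lab in zip(scores, labels) if s >= c and lab)
        tn = sum(1 for s, lab in zip(scores, labels) if s < c and not lab)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c

# Synthetic paired measurements (mg/g and mg/day), NOT the study's data:
pr_cr = [90, 150, 300, 400, 520, 800, 950, 1200, 2000, 3000]
day24 = [110, 120, 280, 380, 560, 760, 1000, 1300, 1900, 2800]
rho = spearman(pr_cr, day24)
```

With real data the cut-off search would be run over Pr/Cr scores against the binary "renal involvement" label (24 hour proteinuria >500 mg/day), which is how a threshold like 715 mg/g is obtained.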
Equilibrium sampling of hydrophobic organic chemicals in sediments: challenges and new approaches
DEFF Research Database (Denmark)
Schaefer, S.; Mayer, Philipp; Becker, B.
2015-01-01
) are considered to be the effective concentrations for diffusive uptake and partitioning, and they can be measured by equilibrium sampling. We have thus applied glass jars with multiple coating thicknesses for equilibrium sampling of HOCs in sediment samples from various sites in different German rivers...
Lusiana, Evellin Dewi
2017-12-01
The parameters of a binary probit regression model are commonly estimated using the Maximum Likelihood Estimation (MLE) method. However, MLE has a limitation when the binary data contain separation. Separation is the condition in which one or several independent variables exactly predict the categories of the binary response. As a result, the MLE estimators fail to converge, so they cannot be used in modeling. One way to resolve separation is to use Firth's approach instead. This research has two aims. First, to identify the chance of separation occurring in binary probit regression under the MLE method versus Firth's approach. Second, to compare the performance of the binary probit regression estimators obtained by the MLE method and Firth's approach using the RMSE criterion. Both are assessed by simulation under different sample sizes. The results showed that the chance of separation occurring with the MLE method is higher than with Firth's approach for small sample sizes. For larger sample sizes, the probability decreases and is nearly identical between the two methods. Meanwhile, Firth's estimators have smaller RMSE than the MLEs, especially for smaller sample sizes; for larger sample sizes, the RMSEs are not much different. This means that Firth's estimators outperform the MLE estimators.
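The separation problem can be made concrete in a few lines: with completely separated data, the probit log-likelihood keeps increasing as the slope grows, so the MLE never converges. This is a minimal sketch with the intercept fixed at zero; Firth's penalized-likelihood correction itself is omitted.

```python
import math

def probit_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Completely separated toy data: y = 1 exactly when x > 0.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

def loglik(beta):
    """Probit log-likelihood (intercept fixed at 0 for simplicity)."""
    total = 0.0
    for x, y in zip(xs, ys):
        p = probit_cdf(beta * x)
        total += math.log(p) if y == 1 else math.log(1.0 - p)
    return total

# Doubling the slope keeps improving the fit: the likelihood approaches its
# supremum of 0 only as beta -> infinity, so no finite MLE exists.
lls = [loglik(b) for b in (1.0, 2.0, 4.0, 8.0)]
assert lls[0] < lls[1] < lls[2] < lls[3] < 0.0
```

Firth's approach adds a Jeffreys-prior penalty to this likelihood, which pulls the maximizer back to a finite value even under separation.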
Quantifying the sensitivity of camera traps:an adapted distance sampling approach
Rowcliffe, M.; Carbone, C.; Jansen, P.A.; Kays, R.W.; Kranstauber, B.
2011-01-01
1. Abundance estimation is a pervasive goal in ecology. The rate of detection by motion-sensitive camera traps can, in principle, provide information on the abundance of many species of terrestrial vertebrates that are otherwise difficult to survey. The random encounter model (REM, Rowcliffe et al.
A zero-one programming approach to Gulliksen's matched random subtests method
van der Linden, Willem J.; Boekkooi-Timminga, Ellen
1988-01-01
Gulliksen’s matched random subtests method is a graphical method to split a test into parallel test halves. The method has practical relevance because it maximizes coefficient α as a lower bound to the classical test reliability coefficient. In this paper the same problem is formulated as a zero-one
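The matching idea behind Gulliksen's method can be sketched as follows, under a simplified reading: items are ordered on an item statistic, adjacent items are paired, and one member of each pair is randomly assigned to each half. The item labels and statistic values are invented for illustration.

```python
import random

def matched_random_split(item_stats, seed=5):
    """Order items by an item statistic (e.g. difficulty or item-total
    correlation, in the spirit of Gulliksen's graphical plot), pair adjacent
    items, and randomly assign one member of each pair to each half."""
    rng = random.Random(seed)
    order = sorted(item_stats, key=item_stats.get)
    half_a, half_b = [], []
    for i in range(0, len(order), 2):   # assumes an even number of items
        pair = [order[i], order[i + 1]]
        rng.shuffle(pair)
        half_a.append(pair[0])
        half_b.append(pair[1])
    return half_a, half_b

# Hypothetical item statistics for a 6-item test (labels and values invented):
items = {"q1": 0.21, "q2": 0.24, "q3": 0.55, "q4": 0.58, "q5": 0.80, "q6": 0.82}
half_a, half_b = matched_random_split(items)
```

Because matched items land in opposite halves, the halves have similar statistical profiles, which is what drives coefficient α up toward the reliability bound; the zero-one programming formulation replaces the pairing heuristic with an explicit optimization.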
Sums and Products of Jointly Distributed Random Variables: A Simplified Approach
Stein, Sheldon H.
2005-01-01
Three basic theorems concerning expected values and variances of sums and products of random variables play an important role in mathematical statistics and its applications in education, business, the social sciences, and the natural sciences. A solid understanding of these theorems requires that students be familiar with the proofs of these…
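The three classical identities alluded to can be verified exactly for a small example, two independent fair dice, using rational arithmetic:

```python
from itertools import product
from fractions import Fraction

# Two independent fair dice: 36 equally likely outcome pairs.
faces = [Fraction(v) for v in range(1, 7)]
outcomes = list(product(faces, faces))
n = Fraction(len(outcomes))

def E(f):
    """Exact expectation of f(X, Y) over the uniform joint distribution."""
    return sum(f(x, y) for x, y in outcomes) / n

EX = E(lambda x, y: x)                       # 7/2
EY = E(lambda x, y: y)
EXY = E(lambda x, y: x * y)
ES = E(lambda x, y: x + y)
VarX = E(lambda x, y: x * x) - EX ** 2       # 35/12
VarY = E(lambda x, y: y * y) - EY ** 2
VarS = E(lambda x, y: (x + y) ** 2) - ES ** 2

assert ES == EX + EY        # E[X+Y] = E[X] + E[Y]      (holds always)
assert EXY == EX * EY       # E[XY]  = E[X]E[Y]         (needs independence)
assert VarS == VarX + VarY  # Var[X+Y] = Var X + Var Y  (needs independence)
```

The first identity needs no assumptions; the product and variance identities rely on independence, which holds here because every (x, y) pair is equally likely.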
A message-passing approach to random constraint satisfaction problems with growing domains
International Nuclear Information System (INIS)
Zhao, Chunyan; Zheng, Zhiming; Zhou, Haijun; Xu, Ke
2011-01-01
Message-passing algorithms based on belief propagation (BP) are implemented on a random constraint satisfaction problem (CSP) referred to as model RB, which is a prototype of hard random CSPs with growing domain size. In model RB, the number of candidate discrete values (the domain size) of each variable increases polynomially with the variable number N of the problem formula. Although the satisfiability threshold of model RB is exactly known, finding solutions for a single problem formula is quite challenging and attempts have been limited to cases of N ∼ 10^2. In this paper, we propose two different kinds of message-passing algorithms guided by BP for this problem. Numerical simulations demonstrate that these algorithms allow us to find a solution for random formulas of model RB with constraint tightness slightly less than p_cr, the threshold value for the satisfiability phase transition. To evaluate the performance of these algorithms, we also provide a local search algorithm (random walk) as a comparison. Besides this, the simulated time dependence of the problem size N and the entropy of the variables for growing domain size are discussed
A zero-one programming approach to Gulliksen's matched random subtests method
van der Linden, Willem J.; Boekkooi-Timminga, Ellen
1986-01-01
In order to estimate the classical coefficient of test reliability, parallel measurements are needed. H. Gulliksen's matched random subtests method, which is a graphical method for splitting a test into parallel test halves, has practical relevance because it maximizes the alpha coefficient as a
Connor, Thomas H; Smith, Jerome P
2016-09-01
At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than classical analytical techniques now in use.
International Nuclear Information System (INIS)
Lorenzana, J.; Grynberg, M.D.; Yu, L.; Yonemitsu, K.; Bishop, A.R.
1992-11-01
The ground state energy, and static and dynamic correlation functions are investigated in the inhomogeneous Hartree-Fock (HF) plus random phase approximation (RPA) approach applied to a one-dimensional spinless fermion model showing self-trapped doping states at the mean field level. Results are compared with homogeneous HF and exact diagonalization. RPA fluctuations added to the generally inhomogeneous HF ground state allow the computation of dynamical correlation functions that compare well with exact diagonalization results. The RPA correction to the ground state energy agrees well with the exact results in the strong and weak coupling limits. We also compare it with a related quasi-boson approach. The instability towards self-trapped behaviour is signaled by an RPA mode with frequency approaching zero. (author). 21 refs, 10 figs
Brus, D.J.; Saby, N.P.A.
2016-01-01
In France like in many other countries, the soil is monitored at the locations of a regular, square grid thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based
Directory of Open Access Journals (Sweden)
Richard C. Zangar
2004-01-01
Identifying useful markers of cancer can be problematic due to limited amounts of sample. Some samples such as nipple aspirate fluid (NAF) or early-stage tumors are inherently small. Other samples such as serum are collected in larger volumes, but archives of these samples are very valuable and only small amounts of each sample may be available for a single study. Also, given the diverse nature of cancer and the inherent variability in individual protein levels, it seems likely that the best approach to screen for cancer will be to determine the profile of a battery of proteins. As a result, a major challenge in identifying protein markers of disease is the ability to screen many proteins using very small amounts of sample. In this review, we outline some technological advances in proteomics that greatly advance this capability. Specifically, we propose a strategy for identifying markers of breast cancer in NAF that utilizes mass spectrometry (MS) to simultaneously screen hundreds or thousands of proteins in each sample. The best potential markers identified by the MS analysis can then be extensively characterized using an ELISA microarray assay. Because the microarray analysis is quantitative and large numbers of samples can be efficiently analyzed, this approach offers the ability to rapidly assess a battery of selected proteins in a manner that is directly relevant to traditional clinical assays.
International Nuclear Information System (INIS)
Wandiga, S.O.; Jumba, I.O.
1982-01-01
An intercomparative analysis of the concentration of heavy metals (zinc, cadmium, lead, copper, mercury, iron and calcium) in head hair of a randomly selected sample of Kenyan people, using the techniques of atomic absorption spectrophotometry (AAS) and differential pulse anodic stripping voltammetry (DPASV), has been undertaken. The percent relative standard deviation for each sample analysed using either of the techniques shows good sensitivity and correlation between the techniques. The DPASV was found to be slightly more sensitive than the AAS instrument used. The recalculated body burden ratios of Cd to Zn and Pb to Fe reveal no unusual health impairment symptoms and suggest a relatively clean environment in Kenya. (author)
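The precision measure used in the comparison above, the percent relative standard deviation, can be sketched in a few lines. The element values below are invented for illustration, not taken from the study:

```python
# Percent relative standard deviation (%RSD), the precision measure used to
# compare the two analytical techniques. Replicate values are hypothetical.
import statistics

def percent_rsd(values):
    """%RSD = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate zinc readings (ppm) from the two techniques
aas_zn = [182.0, 185.5, 179.8, 183.1, 181.6]
dpasv_zn = [183.2, 184.0, 182.5, 183.6, 183.0]

print(f"AAS   %RSD: {percent_rsd(aas_zn):.2f}")
print(f"DPASV %RSD: {percent_rsd(dpasv_zn):.2f}")
```

A lower %RSD for one technique on the same sample indicates tighter replicate precision, which is how the two instruments can be compared sample by sample.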
Rasmita Panigrahi; Trilochan Rout
2012-01-01
Classifying nodes in a network is a task with a wide range of applications. It can be particularly useful in epidemic detection. Many resources are invested in the task of epidemic detection, precisely to allow human investigators to work more efficiently. This work creates random and scale-free graphs, and simulations with varying relative infectiousness and graph size are performed. By using computer simulations it should be possible to model such epidemic phenomena and to better understand the role play...
Catallo, Cristina; Jack, Susan M.; Ciliska, Donna; MacMillan, Harriet L.
2013-01-01
Little is known about how to systematically integrate complex qualitative studies within the context of randomized controlled trials. A two-phase sequential explanatory mixed methods study was conducted in Canada to understand how women decide to disclose intimate partner violence in emergency department settings. Mixing a RCT (with a subanalysis of data) with a grounded theory approach required methodological modifications to maintain the overall rigour of this mixed methods study. Modifications were made to the following areas of the grounded theory approach to support the overall integrity of the mixed methods study design: recruitment of participants, maximum variation and negative case sampling, data collection, and analysis methods. Recommendations for future studies include: (1) planning at the outset to incorporate a qualitative approach with a RCT and to determine logical points during the RCT to integrate the qualitative component and (2) consideration for the time needed to carry out a RCT and a grounded theory approach, especially to support recruitment, data collection, and analysis. Data mixing strategies should be considered during early stages of the study, so that appropriate measures can be developed and used in the RCT to support initial coding structures and data analysis needs of the grounded theory phase. PMID:23577245
A Spectral Approach for Quenched Limit Theorems for Random Expanding Dynamical Systems
Dragičević, D.; Froyland, G.; González-Tokman, C.; Vaienti, S.
2018-01-01
We prove quenched versions of (i) a large deviations principle (LDP), (ii) a central limit theorem (CLT), and (iii) a local central limit theorem for non-autonomous dynamical systems. A key advance is the extension of the spectral method, commonly used in limit laws for deterministic maps, to the general random setting. We achieve this via multiplicative ergodic theory and the development of a general framework to control the regularity of Lyapunov exponents of twisted transfer operator cocycles with respect to a twist parameter. While some versions of the LDP and CLT have previously been proved with other techniques, the local central limit theorem is, to our knowledge, a completely new result, and one that demonstrates the strength of our method. Applications include non-autonomous (piecewise) expanding maps, defined by random compositions of the form T_{σ^{n−1}ω} ∘ ⋯ ∘ T_{σω} ∘ T_ω. An important aspect of our results is that we only assume ergodicity and invertibility of the random driving σ: Ω → Ω; in particular no expansivity or mixing properties are required.
Kelly, Heath; Riddell, Michaela A; Gidding, Heather F; Nolan, Terry; Gilbert, Gwendolyn L
2002-08-19
We compared estimates of the age-specific population immunity to measles, mumps, rubella, hepatitis B and varicella zoster viruses in Victorian school children obtained by a national sero-survey, using a convenience sample of residual sera from diagnostic laboratories throughout Australia, with those from a three-stage random cluster survey. When grouped according to school age (primary or secondary school) there was no significant difference in the estimates of immunity to measles, mumps, hepatitis B or varicella. Compared with the convenience sample, the random cluster survey estimated higher immunity to rubella in samples from both primary (98.7% versus 93.6%, P = 0.002) and secondary school students (98.4% versus 93.2%, P = 0.03). Despite some limitations, this study suggests that the collection of a convenience sample of sera from diagnostic laboratories is an appropriate sampling strategy to provide population immunity data that will inform Australia's current and future immunisation policies. Copyright 2002 Elsevier Science Ltd.
Zheng, Lianqing; Chen, Mengen; Yang, Wei
2009-06-21
To overcome the pseudoergodicity problem, conformational sampling can be accelerated via generalized ensemble methods, e.g., through the realization of random walks along prechosen collective variables, such as spatial order parameters, energy scaling parameters, or even system temperatures or pressures. As usually observed in generalized ensemble simulations, hidden barriers are likely to exist in the space perpendicular to the collective variable direction, and these residual free energy barriers can greatly degrade sampling efficiency. This sampling issue is particularly severe when the collective variable is defined in a low-dimensional subset of the target system; the "Hamiltonian lagging" problem, in which necessary structural relaxation falls behind the motion of the collective variable, is then likely to occur. To overcome this problem in equilibrium conformational sampling, we adopted the orthogonal space random walk (OSRW) strategy, which was originally developed in the context of free energy simulation [L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008)]. Thereby, generalized ensemble simulations can simultaneously escape both the explicit barriers along the collective variable direction and the hidden barriers that are strongly coupled with the collective variable move. As demonstrated in our model studies, the present OSRW-based generalized ensemble treatments show improved sampling capability over the corresponding classical generalized ensemble treatments.
Sample Entropy-Based Approach to Evaluate the Stability of Double-Wire Pulsed MIG Welding
Directory of Open Access Journals (Sweden)
Ping Yao
2014-01-01
Based on sample entropy, this paper presents a quantitative method to evaluate the current stability in double-wire pulsed MIG welding. Firstly, the sample entropy of current signals with different stability but the same parameters is calculated. The results show that the more stable the current, the smaller the value and the standard deviation of the sample entropy. Secondly, four parameters, namely pulse width, peak current, base current, and frequency, are selected for a four-level three-factor orthogonal experiment. The calculation and analysis of the desired signals indicate that sample entropy values are affected by the welding current parameters. Then, a quantitative method based on sample entropy is proposed. The experimental results show that the method can effectively quantify welding current stability.
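The sample entropy statistic underlying this stability measure can be sketched as follows. This is a simplified template-counting implementation, and the signals are synthetic stand-ins for welding current, not data from the study:

```python
# Sample entropy SampEn(m, r) = -ln(A/B): A counts template matches of length
# m+1, B of length m, within tolerance r. A steadier signal yields a smaller
# value. Simplified sketch; not an optimized or reference implementation.
import math
import random

def sample_entropy(x, m=2, r=None):
    if r is None:                       # common default: r = 0.2 * std(x)
        mean = sum(x) / len(x)
        r = 0.2 * math.sqrt(sum((v - mean) ** 2 for v in x) / len(x))
    def count_matches(length):
        templates = [x[i:i + length] for i in range(len(x) - length + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) <= r:
                    hits += 1
        return hits
    b, a = count_matches(m), count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")

random.seed(1)
stable = [math.sin(0.3 * i) for i in range(300)]     # regular "current"
noisy = [random.uniform(-1, 1) for _ in range(300)]  # erratic "current"
print(sample_entropy(stable), sample_entropy(noisy))
```

The regular signal produces a markedly smaller sample entropy than the erratic one, mirroring the paper's finding that stability and SampEn move in opposite directions.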
Varekar, Vikas; Karmakar, Subhankar; Jha, Ramakar
2016-02-01
The design of surface water quality sampling locations is a crucial decision-making process for rationalization of a monitoring network. The quantity, quality, and types of available dataset (watershed characteristics and water quality data) may affect the selection of an appropriate design methodology. The modified Sanders approach and multivariate statistical techniques [particularly factor analysis (FA)/principal component analysis (PCA)] are well-accepted and widely used techniques for the design of sampling locations. However, their performance may vary significantly with the quantity, quality, and types of available dataset. In this paper, an attempt has been made to evaluate the performance of these techniques by accounting for the effect of seasonal variation, under a situation of limited water quality data but extensive watershed characteristics information, as continuous and consistent river water quality data are usually difficult to obtain, whereas watershed information may be made available through application of geospatial techniques. A case study of the Kali River, Western Uttar Pradesh, India, is selected for the analysis. The monitoring was carried out at 16 sampling locations. The discrete and diffuse pollution loads at different sampling sites were estimated and accounted for using the modified Sanders approach, whereas the monitored physical and chemical water quality parameters were utilized as inputs for FA/PCA. The designed optimum numbers of sampling locations for monsoon and non-monsoon seasons by the modified Sanders approach are eight and seven, while those for FA/PCA are eleven and nine, respectively. Little variation in the number and locations of designed sampling sites was obtained by both techniques, which shows stability of the results. A geospatial analysis has also been carried out to check the significance of the designed sampling locations with respect to river basin characteristics and land use of the study area. Both methods are equally efficient; however, modified Sanders
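The PCA step of such a design can be sketched as an eigendecomposition of the standardized site-by-parameter matrix; the variance explained by each component guides how many parameters (and hence locations) carry independent information. The 16×5 data matrix here is synthetic, not the Kali River measurements:

```python
# PCA on a (sites x water-quality-parameters) matrix, the FA/PCA building
# block used for sampling-network design. Data are synthetic: 16 sites,
# 5 monitored parameters, two of them deliberately correlated.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(16, 5))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=16)      # correlated parameter pair

Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)   # standardize columns
cov = np.cov(Z, rowvar=False)                      # ~ correlation matrix
eigvals, eigvecs = np.linalg.eigh(cov)             # ascending eigenvalues
order = np.argsort(eigvals)[::-1]
explained = eigvals[order] / eigvals.sum()
print("variance explained by each PC:", np.round(explained, 3))
```

A dominant first component (driven here by the correlated pair) signals redundancy among parameters, which is the property FA/PCA exploits when trimming a monitoring network.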
Kashdan, Todd B; Farmer, Antonina S
2014-06-01
The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and these differences would translate to impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or by comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point of time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning.
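The intraclass correlation used to quantify emotion differentiation can be sketched as a two-way consistency ICC over an occasions-by-emotion-items matrix (a common formulation in this literature; that the authors used exactly this variant is an assumption). A high ICC means emotions move together, i.e. low differentiation:

```python
# ICC(C,1): two-way, consistency, single measures (Shrout & Fleiss ICC(3,1)).
# Rows = assessment occasions, columns = negative-emotion items. Data are
# simulated: one participant whose emotions covary, one whose do not.
import numpy as np

def icc_consistency(data):
    n, k = data.shape
    grand = data.mean()
    ss_rows = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((data - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

rng = np.random.default_rng(42)
base = rng.normal(size=50)                                   # shared affect
undiff = np.column_stack([base + 0.1 * rng.normal(size=50) for _ in range(4)])
diff = rng.normal(size=(50, 4))                              # independent items
print(icc_consistency(undiff), icc_consistency(diff))
```

The simulated "undifferentiated" participant yields an ICC near 1 (items track each other), while the "differentiated" one yields an ICC near 0, matching how the study interprets ICCs.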
Liu, Xiao; Liu, An; Zhang, Xiangliang; Li, Zhixu; Liu, Guanfeng; Zhao, Lei; Zhou, Xiaofang
2017-01-01
result. However, none is designed for both hiding users’ private data and preventing privacy inference. To achieve this goal, we propose in this paper a hybrid approach for privacy-preserving recommender systems by combining differential privacy (DP
Bottiglione, F; Carbone, G
2015-01-14
The apparent contact angle of large 2D drops with randomly rough self-affine profiles is numerically investigated. The numerical approach is based upon the assumption of large separation of length scales, i.e. it is assumed that the roughness length scales are much smaller than the drop size, thus making it possible to treat the problem through a mean-field like approach relying on the large separation of scales. The apparent contact angle at equilibrium is calculated in all wetting regimes from full wetting (Wenzel state) to partial wetting (Cassie state). It was found that for very large values of the Wenzel roughness parameter (r_W > −1/cos θ_Y, where θ_Y is Young's contact angle), the interface approaches the perfect non-wetting condition and the apparent contact angle is almost equal to 180°. The results are compared with the case of roughness on one single scale (sinusoidal surface) and it is found that, given the same value of the Wenzel roughness parameter r_W, the apparent contact angle is much larger for the case of a randomly rough surface, proving that the multi-scale character of randomly rough surfaces is a key factor to enhance superhydrophobicity. Moreover, it is shown that for millimetre-sized drops, the actual drop pressure at static equilibrium weakly affects the wetting regime, which instead seems to be dominated by the roughness parameter. For this reason a methodology to estimate the apparent contact angle is proposed, which relies only upon the micro-scale properties of the rough surface.
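In the Wenzel regime the apparent angle follows cos θ* = r_W · cos θ_Y, which makes the quoted threshold r_W = −1/cos θ_Y explicit: there the cosine saturates at −1 and θ* reaches 180°. A small sketch with illustrative values:

```python
# Wenzel relation cos(theta*) = r_W * cos(theta_Y), clamped to [-1, 1]
# (values beyond that range correspond to full wetting / non-wetting).
# Numbers are illustrative, not from the paper.
import math

def wenzel_apparent_angle(theta_y_deg, r_w):
    c = max(-1.0, min(1.0, r_w * math.cos(math.radians(theta_y_deg))))
    return math.degrees(math.acos(c))

theta_y = 110.0                                      # hydrophobic Young angle
threshold = -1.0 / math.cos(math.radians(theta_y))   # r_W at which theta* -> 180
for r in (1.0, 2.0, threshold):
    print(f"r_W = {r:.3f} -> apparent angle {wenzel_apparent_angle(theta_y, r):.1f} deg")
```

For θ_Y = 110° the threshold is r_W ≈ 2.92; increasing roughness monotonically amplifies the intrinsic hydrophobicity until perfect non-wetting is approached.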
Denny, Lynette; Kuhn, Louise; De Souza, Michelle; Pollack, Amy E; Dupree, William; Wright, Thomas C
2005-11-02
Non-cytology-based screen-and-treat approaches for cervical cancer prevention have been developed for low-resource settings, but few have directly addressed efficacy. To determine the safety and efficacy of 2 screen-and-treat approaches for cervical cancer prevention that were designed to be more resource-appropriate than conventional cytology-based screening programs. Randomized clinical trial of 6555 nonpregnant women, aged 35 to 65 years, recruited through community outreach and conducted between June 2000 and December 2002 at ambulatory women's health clinics in Khayelitsha, South Africa. All patients were screened using human papillomavirus (HPV) DNA testing and visual inspection with acetic acid (VIA). Women were subsequently randomized to 1 of 3 groups: cryotherapy if she had a positive HPV DNA test result; cryotherapy if she had a positive VIA test result; or to delayed evaluation. Biopsy-confirmed high-grade cervical cancer precursor lesions and cancer at 6 and 12 months in the HPV DNA and VIA groups compared with the delayed evaluation (control) group; complications after cryotherapy. The prevalence of high-grade cervical intraepithelial neoplasia and cancer (CIN 2+) was significantly lower in the 2 screen-and-treat groups at 6 months after randomization than in the delayed evaluation group. At 6 months, CIN 2+ was diagnosed in 0.80% (95% confidence interval [CI], 0.40%-1.20%) of the women in the HPV DNA group and 2.23% (95% CI, 1.57%-2.89%) in the VIA group compared with 3.55% (95% CI, 2.71%-4.39%) in the delayed evaluation group. After cryotherapy, major complications were rare. Both screen-and-treat approaches are safe and result in a lower prevalence of high-grade cervical cancer precursor lesions compared with delayed evaluation at both 6 and 12 months. Trial Registration http://clinicaltrials.gov Identifier: NCT00233727.
Dynamic approach to space and habitat use based on biased random bridges.
Directory of Open Access Journals (Sweden)
Simon Benhamou
BACKGROUND: Although habitat use reflects a dynamic process, most studies assess habitat use statically as if an animal's successively recorded locations reflected a point rather than a movement process. By relying on the activity time between successive locations instead of the local density of individual locations, movement-based methods can substantially improve the biological relevance of utilization distribution (UD) estimates (i.e. the relative frequencies with which an animal uses the various areas of its home range, HR). One such method rests on Brownian bridges (BBs). Its theoretical foundation (purely and constantly diffusive movements) is paradoxically inconsistent with both HR settlement and habitat selection. An alternative involves movement-based kernel density estimation (MKDE) through location interpolation, which may be applied to various movement behaviours but lacks a sound theoretical basis. METHODOLOGY/PRINCIPAL FINDINGS: I introduce the concept of a biased random (advective-diffusive) bridge (BRB) and show that the MKDE method is a practical means to estimate UDs based on simplified (isotropically diffusive) BRBs. The equation governing BRBs is constrained by the maximum delay between successive relocations warranting constant within-bridge advection (allowed to vary between bridges) but remains otherwise similar to the BB equation. Despite its theoretical inconsistencies, the BB method can therefore be applied to animals that regularly reorientate within their HRs and adapt their movements to the habitats crossed, provided that they were relocated with a high enough frequency. CONCLUSIONS/SIGNIFICANCE: Biased random walks can approximate various movement types at short times from a given relocation. Their simplified form constitutes an effective trade-off between too simple, unrealistic movement models, such as Brownian motion, and more sophisticated and realistic ones, such as biased correlated random walks (BCRWs), which are too
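The Brownian bridge building block of these methods can be sketched in one dimension: between two relocations, the position density at intermediate time t is Gaussian around the linear interpolation, with variance σ²·t(T−t)/T. Averaging that density over the transit time gives the bridge's contribution to the UD. All parameter values below are invented:

```python
# 1-D Brownian bridge between relocations X(0)=a and X(T)=b: at time t the
# position is Gaussian with mean a + (t/T)(b-a) and variance sigma2*t*(T-t)/T.
# Averaging over t gives a UD contribution. Values are illustrative.
import math

def bb_density(x, t, a, b, T, sigma2):
    mean = a + (t / T) * (b - a)
    var = sigma2 * t * (T - t) / T
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

a, b, T, sigma2 = 0.0, 10.0, 1.0, 4.0
grid = [i * 0.1 for i in range(-50, 151)]            # positions from -5 to 15
times = [T * (k + 0.5) / 50 for k in range(50)]      # midpoints in (0, T)
ud = [sum(bb_density(x, t, a, b, T, sigma2) for t in times) / len(times)
      for x in grid]
total = sum(ud) * 0.1                                # ~1: UD mass on the grid
print(f"UD mass on grid = {total:.3f}")
```

The bridge pinches to zero variance at both relocations and is widest midway, which is why activity time between fixes, rather than fix density, shapes the UD.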
A random matrix approach to the crossover of energy-level statistics from Wigner to Poisson
International Nuclear Information System (INIS)
Datta, Nilanjana; Kunz, Herve
2004-01-01
We analyze a class of parametrized random matrix models, introduced by Rosenzweig and Porter, which is expected to describe the energy level statistics of quantum systems whose classical dynamics varies from regular to chaotic as a function of a parameter. We compute the generating function for the correlations of energy levels, in the limit of infinite matrix size. The crossover between Poisson and Wigner statistics is measured by a renormalized coupling constant. The model is exactly solved in the sense that, in the limit of infinite matrix size, the energy-level correlation functions and their generating function are given in terms of a finite set of integrals.
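The Poisson-to-Wigner crossover can be illustrated numerically with a Rosenzweig–Porter-type ensemble H = D + c·V (D a random diagonal, V a GOE matrix), diagnosed by the mean level-spacing ratio ⟨r⟩. This is a standard numerical diagnostic, not the generating-function calculation of the paper:

```python
# Mean level-spacing ratio <r> for H = D + c*V: <r> ~ 0.386 for Poisson
# (uncorrelated levels, c = 0) and ~ 0.536 for GOE (large c).
import numpy as np

def mean_r(c, n=200, trials=20, seed=0):
    rng = np.random.default_rng(seed)
    rs = []
    for _ in range(trials):
        d = np.diag(rng.normal(size=n))            # uncoupled "regular" levels
        g = rng.normal(size=(n, n)) / np.sqrt(n)
        goe = (g + g.T) / np.sqrt(2)               # GOE perturbation
        e = np.linalg.eigvalsh(d + c * goe)
        e = e[n // 4: 3 * n // 4]                  # keep the spectral bulk
        s = np.diff(e)
        r = np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:])
        rs.append(r.mean())
    return float(np.mean(rs))

print(f"c = 0: <r> = {mean_r(0):.3f}  (Poisson ~ 0.386)")
print(f"c = 5: <r> = {mean_r(5):.3f}  (GOE ~ 0.536)")
```

Sweeping c between these extremes traces the same regular-to-chaotic crossover that the renormalized coupling constant parametrizes analytically.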
Energy Technology Data Exchange (ETDEWEB)
Hsiao, C.; Mountain, D.C.; Chan, M.W.L.; Tsui, K.Y. (University of Southern California, Los Angeles (USA); McMaster Univ., Hamilton, ON (Canada); Chinese Univ. of Hong Kong, Shatin)
1989-12-01
In examining the municipal peak and kilowatt-hour demand for electricity in Ontario, the issue of homogeneity across geographic regions is explored. A common model across municipalities and geographic regions cannot be supported by the data. Considered are various procedures which deal with this heterogeneity and yet reduce the multicollinearity problems associated with regional specific demand formulations. The recommended model controls for regional differences assuming that the coefficients of regional-seasonal specific factors are fixed and different while the coefficients of economic and weather variables are random draws from a common population for any one municipality by combining the information on all municipalities through a Bayes procedure. 8 tabs., 41 refs.
An alternative approach for eliciting willingness-to-pay: A randomized Internet trial
Laura J. Damschroder; Peter A. Ubel; Jason Riis; Dylan M. Smith
2007-01-01
Open-ended methods that elicit willingness-to-pay (WTP) in terms of absolute dollars often result in high rates of questionable and highly skewed responses, insensitivity to changes in health state, and raise an ethical issue related to its association with personal income. We conducted a 2x2 randomized trial over the Internet to test 4 WTP formats: 1) WTP in dollars; 2) WTP as a percentage of financial resources; 3) WTP in terms of monthly payments; and 4) WTP as a single lump-sum amount. WT...
A semi-empirical approach to calculate gamma activities in environmental samples
International Nuclear Information System (INIS)
Palacios, D.; Barros, H.; Alfonso, J.; Perez, K.; Trujillo, M.; Losada, M.
2006-01-01
We propose a semi-empirical method to calculate radionuclide concentrations in environmental samples without the use of reference material and avoiding the typical complexity of Monte-Carlo codes. The calculation of total efficiencies was carried out from a relative efficiency curve (obtained from the gamma spectra data), and the geometric (simulated by Monte-Carlo), absorption, sample and intrinsic efficiencies at energies between 130 and 3000 keV. The absorption and sample efficiencies were determined from the mass absorption coefficients, obtained by the web program XCOM. Deviations between computed results and measured efficiencies for the RGTh-1 reference material are mostly within 10%. Radionuclide activities in marine sediment samples calculated by the proposed method and by the experimental relative method were in satisfactory agreement. The developed method can be used for routine environmental monitoring when efficiency uncertainties of 10% can be sufficient. (Author)
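Once the total efficiency has been assembled from its geometric, absorption, sample and intrinsic factors, the activity calculation itself is a one-line formula. The nuclide, emission probability and counting figures below are illustrative, not from the paper:

```python
# Specific activity from a gamma line: A = N_net / (eps_total * p_gamma * t * m),
# in Bq/kg. All numbers are hypothetical for illustration.
def specific_activity(net_counts, eff_total, p_gamma, live_time_s, mass_kg):
    return net_counts / (eff_total * p_gamma * live_time_s * mass_kg)

# Hypothetical Cs-137 line (661.7 keV, emission probability ~0.851) in sediment
a = specific_activity(net_counts=5200, eff_total=0.035, p_gamma=0.851,
                      live_time_s=72000, mass_kg=0.45)
print(f"{a:.1f} Bq/kg")
```

With a 10% relative uncertainty on the total efficiency, the same 10% propagates directly to the activity, which is the tolerance the authors quote for routine monitoring.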
Simulated Job Samples: A Student-Centered Approach to Vocational Exploration and Evaluation.
Richter-Stein, Caryn; Stodden, Robert A.
1981-01-01
Incorporating simulated job samples into the junior high school curriculum can provide vocational exploration opportunities as well as assessment data on special needs students. Students can participate as active learners and decision makers. (CL)
Directory of Open Access Journals (Sweden)
Anne H Berman
The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11-16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. A random population sample consisting of 600 children aged 11-16, 100 per age group, and one of their parents (N = 1200) were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being, using the PABAK, and total, continuous scores were evaluated using Bland-Altman plots. Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences occurred and parent ratings
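For a dichotomous agreement item, the PABAK statistic reduces to 2·P_o − 1, where P_o is the observed proportion of agreeing pairs. A tiny sketch with invented child–parent ratings:

```python
# PABAK for dichotomous agreement: PABAK = 2 * P_o - 1, with P_o the observed
# agreement proportion. The child/parent pairs below are invented.
def pabak(pairs):
    agree = sum(1 for child, parent in pairs if child == parent)
    return 2 * agree / len(pairs) - 1

# 20 hypothetical dichotomized ratings (1 = lower wellbeing, 0 = otherwise)
pairs = [(1, 1)] * 6 + [(0, 0)] * 10 + [(1, 0)] * 3 + [(0, 1)] * 1
print(round(pabak(pairs), 2))  # 16/20 agree -> PABAK = 0.6
```

Unlike Cohen's kappa, PABAK is unaffected by how prevalent "lower wellbeing" is in the sample, which is why it suits agreement testing on skewed dichotomized items.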
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
Energy Technology Data Exchange (ETDEWEB)
Reer, B
2004-03-01
The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
International Nuclear Information System (INIS)
Reer, B.
2004-01-01
The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is useable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) are outlined. (author)
2011-10-20
..., this 14th day of October 2011. Kevin Shea, Acting Administrator, Animal and Plant Health Inspection... DEPARTMENT OF AGRICULTURE Animal and Plant Health Inspection Service [Docket No. APHIS-2011-0092] Importation of Plants for Planting; Risk-Based Sampling and Inspection Approach and Propagative Monitoring and...
Choi, Michael K.
2017-01-01
An innovative thermal design concept to maintain comet surface samples cold (for example, at 263 K, 243 K or 223 K) from Earth approach through retrieval is presented. It uses paraffin phase change material (PCM), Cryogel insulation and a thermoelectric cooler (TEC), all of which are commercially available.
International Nuclear Information System (INIS)
Wang, Jian-Xun; Sun, Rui; Xiao, Heng
2016-01-01
Highlights: • Compared physics-based and random matrix methods to quantify RANS model uncertainty. • Demonstrated applications of both methods in channel flow over periodic hills. • Examined the amount of information introduced in the physics-based approach. • Discussed implications to modeling turbulence in both near-wall and separated regions. - Abstract: Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, the RANS predictions have large model-form uncertainties for many complex flows, e.g., those with non-parallel shear layers or strong mean flow curvature. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging to the current form of the physics-based approach. Another recently proposed method based on random matrix theory provides the prior distributions with maximum entropy, which is an alternative for model-form uncertainty quantification in RANS simulations. This method has better mathematical rigor and provides the most non-committal prior distributions without introducing artificial constraints. On the other hand, the physics-based approach has the advantage of being more flexible to incorporate available physical insights. In this work, we compare and discuss the advantages and disadvantages of the two approaches on model-form uncertainty quantification. In addition, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in the physics-based approach. The comparison is conducted through a test case using a canonical flow, the flow past
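A minimal sketch of the random-matrix idea described above: drawing symmetric positive-definite matrices from a maximum-entropy (Wishart) distribution centered on a baseline tensor. The baseline matrix and dispersion parameter below are made up for illustration; this is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_spd_wishart(mean_spd, dof, rng):
    """Sample an SPD matrix from a Wishart distribution, the
    maximum-entropy law for SPD matrices given a mean matrix and a
    dispersion (degrees-of-freedom) parameter. A sketch only."""
    d = mean_spd.shape[0]
    L = np.linalg.cholesky(mean_spd / dof)   # scale so E[sample] = mean_spd
    G = rng.standard_normal((dof, d))
    X = G @ L.T                              # rows ~ N(0, mean_spd / dof)
    return X.T @ X                           # Wishart(dof, mean_spd / dof)

# Baseline SPD tensor (e.g., a modeled Reynolds stress; values invented)
R = np.array([[2.0, 0.3, 0.0],
              [0.3, 1.0, 0.1],
              [0.0, 0.1, 0.5]])

samples = [sample_spd_wishart(R, dof=50, rng=rng) for _ in range(2000)]
mean_est = np.mean(samples, axis=0)
# The ensemble mean stays close to R, and every draw remains SPD.
```

Each draw is a physically admissible (positive-definite) perturbation of the baseline, which is the property that makes this construction attractive as a non-committal prior.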
Petty, J.D.; Huckins, J.N.; Alvarez, D.A.; Brumbaugh, W. G.; Cranor, W.L.; Gale, R.W.; Rastall, A.C.; Jones-Lepp, T. L.; Leiker, T.J.; Rostad, C. E.; Furlong, E.T.
2004-01-01
As an integral part of our continuing research in environmental quality assessment approaches, we have developed a variety of passive integrative sampling devices widely applicable for use in defining the presence and potential impacts of a broad array of contaminants. The semipermeable membrane device has gained widespread use for sampling hydrophobic chemicals from water and air, the polar organic chemical integrative sampler is applicable for sequestering waterborne hydrophilic organic chemicals, the stabilized liquid membrane device is used to integratively sample waterborne ionic metals, and the passive integrative mercury sampler is applicable for sampling vapor phase or dissolved neutral mercury species. This suite of integrative samplers forms the basis for a new passive sampling approach for assessing the presence and potential toxicological significance of a broad spectrum of environmental contaminants. In a proof-of-concept study, three of our four passive integrative samplers were used to assess the presence of a wide variety of contaminants in the waters of a constructed wetland, and to determine the effectiveness of the constructed wetland in removing contaminants. The wetland is used for final polishing of secondary-treatment municipal wastewater and the effluent is used as a source of water for a state wildlife area. Numerous contaminants, including organochlorine pesticides, polycyclic aromatic hydrocarbons, organophosphate pesticides, and pharmaceutical chemicals (e.g., ibuprofen, oxindole, etc.) were detected in the wastewater. Herein we summarize the results of the analysis of the field-deployed samplers and demonstrate the utility of this holistic approach.
Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach
Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar
2010-10-01
To reach the investment goal, one has to select a combination of securities among different portfolios containing a large number of securities. The past records of each security alone do not guarantee its future return. As there are many uncertain factors which directly or indirectly influence the stock market, and there are also some newer stock markets which do not have enough historical data, experts' expectations and experience must be combined with the past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio, instead of the variance, as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from BSE is used for illustration.
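The semi-absolute deviation risk measure underlying the λ-MSAD model can be sketched in a few lines: only downside deviations of the portfolio return below its mean enter the risk, and because absolute values of linear expressions split into linear constraints, minimizing it reduces to an LP. The returns and weights below are toy numbers, not BSE data or the authors' fuzzy model.

```python
import numpy as np

def mean_semi_absolute_deviation(returns, weights):
    """Mean semi-absolute deviation of a portfolio: the average
    shortfall of the per-period portfolio return below its own mean.
    `returns` has shape (T periods, N securities)."""
    port = returns @ weights                      # portfolio return per period
    downside = np.maximum(port.mean() - port, 0)  # only below-mean deviations
    return downside.mean()

# Toy data: 6 periods, 3 securities (illustrative numbers only)
rng = np.random.default_rng(1)
R = rng.normal(0.01, 0.05, size=(6, 3))
w = np.array([0.5, 0.3, 0.2])

risk = mean_semi_absolute_deviation(R, w)   # nonnegative by construction
```

In the LP reformulation, each downside term becomes an auxiliary variable d_t ≥ mean − port_t, d_t ≥ 0, so the objective stays linear in the weights.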
Sadhukhan, B.; Nayak, A.; Mookerjee, A.
2017-12-01
In this communication we present together four distinct techniques for the study of the electronic structure of solids: the tight-binding linear muffin-tin orbitals, the real space and augmented space recursions, and the modified exchange-correlation. Using these we investigate the effect of random vacancies on the electronic properties of the carbon hexagonal allotrope, graphene, and the non-hexagonal allotrope, planar T graphene. We have inserted random vacancies at different concentrations to simulate disorder in pristine graphene and planar T graphene sheets. The resulting disorder, both on-site (diagonal disorder) and in the hopping integrals (off-diagonal disorder), introduces sharp peaks in the vicinity of the Dirac point built up from localized states for both hexagonal and non-hexagonal structures. These peaks become resonances with increasing vacancy concentration. We find that in the presence of vacancies, graphene-like linear dispersion appears in planar T graphene and the cross points form a loop in the first Brillouin zone similar to buckled T graphene that originates from π and π* bands without regular hexagonal symmetry. We also calculate the single-particle relaxation time τ(q) of the q-labeled quantum electronic states, which originates from scattering due to the presence of vacancies, causing quantum level broadening.
A two-hypothesis approach to establishing a life detection/biohazard protocol for planetary samples
Conley, Catharine; Steele, Andrew
2016-07-01
The COSPAR policy on performing a biohazard assessment on samples brought from Mars to Earth is framed in the context of a concern for false-positive results. However, as noted during the 2012 Workshop for Life Detection in Samples from Mars (ref. Kminek et al., 2014), a more significant concern for planetary samples brought to Earth is false-negative results, because an undetected biohazard could increase risk to the Earth. This is the reason that stringent contamination control must be a high priority for all Category V Restricted Earth Return missions. A useful conceptual framework for addressing these concerns involves two complementary 'null' hypotheses: testing both of them, together, would allow statistical and community confidence to be developed regarding one or the other conclusion. As noted above, false negatives are of primary concern for the safety of the Earth, so the 'Earth Safety null hypothesis' -- which must be disproved to assure low risk to the Earth from samples introduced by Category V Restricted Earth Return missions -- is 'There is native life in these samples.' False positives are of primary concern for astrobiology, so the 'Astrobiology null hypothesis' -- which must be disproved in order to demonstrate the existence of extraterrestrial life -- is 'There is no life in these samples.' The presence of Earth contamination would render both of these hypotheses more difficult to disprove. Both hypotheses can be tested following a strict science protocol: analyze, interpret, test the hypotheses and repeat. The science measurements are then undertaken in an iterative fashion that responds to discovery, with both hypotheses testable from interpretation of the scientific data. This is a robust, community-involved activity that ensures maximum science return with minimal sample use.
Ghetti, Claire M
2013-01-01
Individuals undergoing cardiac catheterization are likely to experience elevated anxiety periprocedurally, with highest anxiety levels occurring immediately prior to the procedure. Elevated anxiety has the potential to negatively impact these individuals psychologically and physiologically in ways that may influence the subsequent procedure. This study evaluated the use of music therapy, with a specific emphasis on emotional-approach coping, immediately prior to cardiac catheterization to impact periprocedural outcomes. The randomized, pretest/posttest control group design consisted of two experimental groups--the Music Therapy with Emotional-Approach Coping group [MT/EAC] (n = 13), and a talk-based Emotional-Approach Coping group (n = 14), compared with a standard care Control group (n = 10). MT/EAC led to improved positive affective states in adults awaiting elective cardiac catheterization, whereas a talk-based emphasis on emotional-approach coping or standard care did not. All groups demonstrated a significant overall decrease in negative affect. The MT/EAC group demonstrated a statistically significant, but not clinically significant, increase in systolic blood pressure most likely due to active engagement in music making. The MT/EAC group trended toward shortest procedure length and least amount of anxiolytic required during the procedure, while the EAC group trended toward least amount of analgesic required during the procedure, but these differences were not statistically significant. Actively engaging in a session of music therapy with an emphasis on emotional-approach coping can improve the well-being of adults awaiting cardiac catheterization procedures.
Woodall, W Gill; Delaney, Harold D; Kunitz, Stephen J; Westerberg, Verner S; Zhao, Hongwei
2007-06-01
Randomized trial evidence on the effectiveness of incarceration and treatment of first-time driving while intoxicated (DWI) offenders who are primarily American Indian has yet to be reported in the literature on DWI prevention. Further, research has confirmed the association of antisocial personality disorder (ASPD) with problems with alcohol including DWI. A randomized clinical trial was conducted, in conjunction with 28 days of incarceration, of a treatment program incorporating motivational interviewing principles for first-time DWI offenders. The sample of 305 offenders including 52 diagnosed as ASPD by the Diagnostic Interview Schedule were assessed before assignment to conditions and at 6, 12, and 24 months after discharge. Self-reported frequency of drinking and driving as well as various measures of drinking over the preceding 90 days were available at all assessments for 244 participants. Further, DWI rearrest data for 274 participants were available for analysis. Participants randomized to receive the first offender incarceration and treatment program reported greater reductions in alcohol consumption from baseline levels when compared with participants who were only incarcerated. Antisocial personality disorder participants reported heavier and more frequent drinking but showed significantly greater declines in drinking from intake to posttreatment assessments. Further, the treatment resulted in larger effects relative to the control on ASPD than non-ASPD participants. Nonconfrontational treatment may significantly enhance outcomes for DWI offenders with ASPD when delivered in an incarcerated setting, and in the present study, such effects were found in a primarily American-Indian sample.
Burt, Richard D; Thiede, Hanne
2014-11-01
Respondent-driven sampling (RDS) is a form of peer-based study recruitment and analysis that incorporates features designed to limit and adjust for biases in traditional snowball sampling. It is being widely used in studies of hidden populations. We report an empirical evaluation of RDS's consistency and variability, comparing groups recruited contemporaneously, by identical methods and using identical survey instruments. We randomized recruitment chains from the RDS-based 2012 National HIV Behavioral Surveillance survey of injection drug users in the Seattle area into two groups and compared them in terms of sociodemographic characteristics, drug-associated risk behaviors, sexual risk behaviors, human immunodeficiency virus (HIV) status and HIV testing frequency. The two groups differed in five of the 18 variables examined (P ≤ .001): race (e.g., 60% white vs. 47%), gender (52% male vs. 67%), area of residence (32% downtown Seattle vs. 44%), an HIV test in the previous 12 months (51% vs. 38%). The difference in serologic HIV status was particularly pronounced (4% positive vs. 18%). In four further randomizations, differences in one to five variables attained this level of significance, although the specific variables involved differed. We found some material differences between the randomized groups. Although the variability of the present study was less than has been reported in serial RDS surveys, these findings indicate caution in the interpretation of RDS results. Copyright © 2014 Elsevier Inc. All rights reserved.
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.
Energy Technology Data Exchange (ETDEWEB)
Muetzell, S. (Univ. Hospital of Uppsala (Sweden). Dept. of Family Medicine)
1992-01-01
Computed tomography (CT) of the brain was performed in a random sample of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years, drawn from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption with use of hepatotoxic drugs; IIA, alcoholic inpatients with use of alcohol and no drugs; and IIB, alcoholic inpatients with use of alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and they differed only in drug use. Groups IIB and IIA also differed only in drug use, and IIB had a higher incidence of brain damage except for the anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut high-grade cortical changes) and also of subcortical aberrations, expressed as an increased widening of the third ventricle.
International Nuclear Information System (INIS)
Muetzell, S.
1992-01-01
Computed tomography (CT) of the brain was performed in a random sample of 195 men and in 211 male alcoholic patients admitted for the first time during a period of two years, drawn from the same geographically limited area of Greater Stockholm as the sample. Laboratory tests were performed, including liver and pancreatic tests. Toxicological screening was performed and the consumption of hepatotoxic drugs was also investigated. The groups were then subdivided with respect to alcohol consumption and use of hepatotoxic drugs: group IA, men from the random sample with low or moderate alcohol consumption and no use of hepatotoxic drugs; IB, men from the random sample with low or moderate alcohol consumption with use of hepatotoxic drugs; IIA, alcoholic inpatients with use of alcohol and no drugs; and IIB, alcoholic inpatients with use of alcohol and drugs. Group IIB was found to have a higher incidence of cortical and subcortical changes than group IA. Group IB had a higher incidence of subcortical changes than group IA, and they differed only in drug use. Groups IIB and IIA also differed only in drug use, and IIB had a higher incidence of brain damage except for the anterior horn index and wide cerebellar sulci indicating vermian atrophy. Significantly higher serum levels of bilirubin, GGT, ASAT, ALAT, CK, LD and amylase were found in IIB. The results indicate that drug use influences the incidence of cortical and subcortical aberrations, except the anterior horn index. It is concluded that the groups with alcohol abuse who used hepatotoxic drugs showed a picture of cortical changes (wide transport sulci and clear-cut high-grade cortical changes) and also of subcortical aberrations, expressed as an increased widening of the third ventricle.
Jackson, George L.; Weinberger, Morris; Kirshner, Miriam A.; Stechuchak, Karen M.; Melnyk, Stephanie D.; Bosworth, Hayden B.; Coffman, Cynthia J.; Neelon, Brian; Van Houtven, Courtney; Gentry, Pamela W.; Morris, Isis J.; Rose, Cynthia M.; Taylor, Jennifer P.; May, Carrie L.; Han, Byungjoo; Wainwright, Christi; Alkon, Aviel; Powell, Lesa; Edelman, David
2016-01-01
Despite the availability of efficacious treatments, only half of patients with hypertension achieve adequate blood pressure (BP) control. This paper describes the protocol and baseline subject characteristics of a 2-arm, 18-month randomized clinical trial of titrated disease management (TDM) for patients with pharmaceutically-treated hypertension for whom systolic blood pressure (SBP) is not controlled (≥140 mmHg for non-diabetic or ≥130 mmHg for diabetic patients). The trial is being conducted among patients of four clinic locations associated with a Veterans Affairs Medical Center. An intervention arm has a TDM strategy in which patients' hypertension control at baseline, 6, and 12 months determines the resource intensity of disease management. Intensity levels include: a low-intensity strategy utilizing a licensed practical nurse to provide bi-monthly, non-tailored behavioral support calls to patients whose SBP comes under control; a medium-intensity strategy utilizing a registered nurse to provide monthly tailored behavioral support telephone calls plus home BP monitoring; and a high-intensity strategy utilizing a pharmacist to provide monthly tailored behavioral support telephone calls, home BP monitoring, and pharmacist-directed medication management. Control arm patients receive the low-intensity strategy regardless of BP control. The primary outcome is SBP. There are 385 randomized veterans (192 intervention; 193 control), predominantly older (mean age 63.5 years) men (92.5%); 61.8% are African American, and the mean baseline SBP for all subjects is 143.6 mmHg. This trial will determine if a disease management program that is titrated by matching the intensity of resources to patients' BP control leads to superior outcomes compared to a low-intensity management strategy. PMID:27417982
Energy Technology Data Exchange (ETDEWEB)
Shi, Cindy
2015-07-17
The interactions among different microbial populations in a community could play more important roles in determining ecosystem functioning than species numbers and their abundances, but very little is known about such network interactions at a community level. The goal of this project is to develop novel framework approaches and associated software tools to characterize the network interactions in microbial communities based on large-scale, high-throughput metagenomics data, and to apply these approaches to understand the impacts of environmental changes (e.g., climate change, contamination) on network interactions among different nitrifying populations and associated microbial communities.
Garcia-Santiago, C. A.; Del Ser, J.; Upton, C.; Quilligan, F.; Gil-Lopez, S.; Salcedo-Sanz, S.
2015-11-01
When seeking near-optimal solutions for complex scheduling problems, meta-heuristics demonstrate good performance with affordable computational effort. This has resulted in a gravitation towards these approaches when researching industrial use-cases such as energy-efficient production planning. However, much of the previous research makes assumptions about softer constraints that affect planning strategies and about how human planners interact with the algorithm in a live production environment. This article describes a job-shop problem that focuses on minimizing energy consumption across a production facility of shared resources. The application scenario is based on real facilities made available by the Irish Center for Manufacturing Research. The formulated problem is tackled via harmony search heuristics with random keys encoding. Simulation results are compared to a genetic algorithm, a simulated annealing approach and a first-come-first-served scheduling. The superior performance obtained by the proposed scheduler paves the way towards its practical implementation over industrial production chains.
International Nuclear Information System (INIS)
Yonemitsu, K.; Bishop, A.R.
1992-01-01
As a convenient qualitative approach to strongly correlated electronic systems, an inhomogeneous Hartree-Fock plus random-phase approximation is applied to response functions for the two-dimensional multiband Hubbard model for cuprate superconductors. A comparison of the results with those obtained by exact diagonalization by Wagner, Hanke, and Scalapino [Phys. Rev. B 43, 10 517 (1991)] shows that overall structures in optical and magnetic particle-hole excitation spectra are well reproduced by this method. This approach is computationally simple, retains conceptual clarity, and can be calibrated by comparison with exact results on small systems. Most importantly, it is easily extended to larger systems and straightforward to incorporate additional terms in the Hamiltonian, such as electron-phonon interactions, which may play a crucial role in high-temperature superconductivity
DEFF Research Database (Denmark)
Puri, Rajesh; Vilmann, Peter; Saftoiu, Adrian
2009-01-01
). The samples were characterized for cellularity and bloodiness, with a final cytology diagnosis established blindly. The final diagnosis was reached either by EUS-FNA if malignancy was definite, or by surgery and/or clinical follow-up of a minimum of 6 months in the cases of non-specific benign lesions...
A random walk approach to the diffusion of positrons in gaseous media
International Nuclear Information System (INIS)
Girardi-Schappo, M.; Tenfen, W.; Arretche, F.
2013-01-01
In this work, we present a random walk model to study positron diffusion in gaseous media. The positron-atom interaction is described through positron-target cross sections. The main idea is to obtain how much energy a positron transfers to the surrounding atoms, through ionizations and electronic excitations, until its annihilation, taking the ratio of the cross section of each energetically available collision channel to the total one as the probability for each process to occur. As a first application, we studied how positrons diffuse in gases of helium, neon, argon and their mixtures. To characterize the positron dynamics in each system, we calculated the radiation profile generated from the annihilation, the diffusion profiles and the most probable distances for excitation and ionization. (authors)
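The channel-selection step of such a random walk amounts to sampling a collision channel with probability proportional to its cross section. A minimal sketch, where the channel names and cross-section values are hypothetical placeholders rather than real positron-atom data:

```python
import random

def choose_channel(cross_sections, rng):
    """Pick a collision channel with probability sigma_i / sigma_total,
    i.e., the ratio of each channel's cross section to the total."""
    total = sum(cross_sections.values())
    r = rng.uniform(0.0, total)
    acc = 0.0
    for channel, sigma in cross_sections.items():
        acc += sigma
        if r <= acc:
            return channel
    return channel  # guard against floating-point edge cases

# Hypothetical cross sections (arbitrary units) at one positron energy
sigmas = {"elastic": 3.0, "excitation": 0.5, "ionization": 1.0, "annihilation": 0.01}

rng = random.Random(42)
counts = {k: 0 for k in sigmas}
for _ in range(10000):
    counts[choose_channel(sigmas, rng)] += 1
# Relative frequencies approximate sigma_i / sigma_total.
```

In a full walk, each ionization or excitation event would subtract the channel's threshold energy from the positron, and the loop would end at the annihilation event.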
The determinants of cost efficiency of hydroelectric generating plants: A random frontier approach
International Nuclear Information System (INIS)
Barros, Carlos P.; Peypoch, Nicolas
2007-01-01
This paper analyses the technical efficiency in the hydroelectric generating plants of a main Portuguese electricity enterprise EDP (Electricity of Portugal) between 1994 and 2004, investigating the role played by increase in competition and regulation. A random cost frontier method is adopted. A translog frontier model is used and the maximum likelihood estimation technique is employed to estimate the empirical model. We estimate the efficiency scores and decompose the exogenous variables into homogeneous and heterogeneous. It is concluded that production and capacity are heterogeneous, signifying that the hydroelectric generating plants are very distinct and therefore any energy policy should take into account this heterogeneity. It is also concluded that competition, rather than regulation, plays the key role in increasing hydroelectric plant efficiency
Online games: a novel approach to explore how partial information influences human random searches
Martínez-García, Ricardo; Calabrese, Justin M.; López, Cristóbal
2017-01-01
Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game, in which players have to find a target hidden on a board, to investigate how search rounds are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. Finally, strategies used by informed players turn into non-stationary processes in which the length of each displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.
Estimating the demand for drop-off recycling sites: a random utility travel cost approach.
Sidique, Shaufique F; Lupi, Frank; Joshi, Satish V
2013-09-30
Drop-off recycling is one of the most widely adopted recycling programs in the United States. Despite its wide implementation, relatively little literature addresses the demand for drop-off recycling. This study examines the demand for drop-off recycling sites as a function of travel costs and various site characteristics using the random utility model (RUM). The findings of this study indicate that increased travel costs significantly reduce the frequency of visits to drop-off sites implying that the usage pattern of a site is influenced by its location relative to where people live. This study also demonstrates that site specific characteristics such as hours of operation, the number of recyclables accepted, acceptance of commingled recyclables, and acceptance of yard-waste affect the frequency of visits to drop-off sites. Copyright © 2013 Elsevier Ltd. All rights reserved.
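The random utility model of site choice can be sketched as a multinomial logit over travel cost and site characteristics: each site's visit probability falls as its travel cost rises. The coefficients and site data below are invented for illustration and are not the paper's estimates.

```python
import math

def logit_probabilities(utilities):
    """Multinomial logit choice probabilities of a random utility model:
    P(site i) = exp(V_i) / sum_j exp(V_j), computed stably by
    subtracting the maximum utility before exponentiating."""
    m = max(utilities)
    exps = [math.exp(v - m) for v in utilities]
    s = sum(exps)
    return [e / s for e in exps]

# Deterministic utility V = beta_cost * travel_cost + beta_hours * hours_open
beta_cost, beta_hours = -0.8, 0.05   # hypothetical coefficients
sites = [            # (travel cost in $, weekly opening hours), made up
    (2.0, 60),
    (5.0, 80),
    (1.0, 40),
]
V = [beta_cost * cost + beta_hours * hours for cost, hours in sites]
probs = logit_probabilities(V)
# The expensive-to-reach site 2 gets the lowest choice probability,
# mirroring the travel-cost effect the study reports.
```

A fitted model would estimate the betas from observed visit frequencies; here they are fixed to show the mechanism.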
Network trending; leadership, followership and neutrality among companies: A random matrix approach
Mobarhan, N. S. Safavi; Saeedi, A.; Roodposhti, F. Rahnamay; Jafari, G. R.
2016-11-01
In this article, we analyze the cross-correlation between returns of different stocks to answer the following important questions. The first is: if there exists collective behavior in a financial market, how could we detect it? And the second: is there a particular company among the companies of a market that leads the collective behavior, or is there no specified leadership governing the system, as in some complex systems? We use methods of random matrix theory to answer these questions. The cross-correlation matrix of index returns of four different markets is analyzed. The participation ratio associated with each matrix's eigenvectors and the eigenvalue spectrum is calculated. We introduce a shuffled matrix, created from the cross-correlation matrix by randomly displacing its elements. Comparing the participation ratios obtained from a market's correlation matrix and from its shuffled counterpart over the bulk distribution region of the eigenvalues, we detect a meaningful deviation between the two quantities, indicating the collective behavior of the companies forming the market. By calculating the relative deviation of participation ratios, we obtain a measure to compare markets according to their collective behavior. Answering the second question, we show there are three groups of companies: the first group, having higher impact on the market trend, called leaders; the second group, followers; and the third, companies that have no considerable role in the trend. The results can be utilized in portfolio construction.
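The original-versus-shuffled comparison can be sketched with NumPy: build a correlation matrix from factor-driven returns, destroy the cross-correlations by permuting each series independently, and compare eigenvalues and participation ratios. The factor strength and dimensions here are toy assumptions, not the article's market data.

```python
import numpy as np

def participation_ratio(eigvec):
    """PR = 1 / sum(u_i^4) for a normalized eigenvector u: roughly the
    number of components that contribute significantly (N for a fully
    delocalized vector, 1 for a fully localized one)."""
    u = eigvec / np.linalg.norm(eigvec)
    return 1.0 / np.sum(u**4)

rng = np.random.default_rng(0)
T, N = 500, 30                       # 500 return observations, 30 stocks

# Returns with one common ("market") factor => collective behavior
market = rng.standard_normal(T)
returns = 0.5 * market[:, None] + rng.standard_normal((T, N))
C = np.corrcoef(returns, rowvar=False)

# Surrogate: permute each stock's series independently, killing the
# cross-correlations while keeping each marginal distribution
shuffled = np.column_stack([rng.permutation(returns[:, i]) for i in range(N)])
C_shuf = np.corrcoef(shuffled, rowvar=False)

w, v = np.linalg.eigh(C)             # eigenvalues in ascending order
w_s, _ = np.linalg.eigh(C_shuf)
largest_eig, largest_eig_shuf = w[-1], w_s[-1]
pr_market = participation_ratio(v[:, -1])
# The market mode: one large eigenvalue with a delocalized (high-PR)
# eigenvector, absent from the shuffled surrogate.
```

The same comparison over the bulk eigenvalues, rather than the top mode, yields the relative-deviation measure the abstract describes.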
Bieg, Madeleine; Goetz, Thomas; Sticca, Fabio; Brunner, Esther; Becker, Eva; Morger, Vinzenz; Hubbard, Kyle
2017-01-01
Various theoretical approaches propose that emotions in the classroom are elicited by appraisal antecedents, with subjective experiences of control playing a crucial role in this context. Perceptions of control, in turn, are expected to be influenced by the classroom social environment, which can include the teaching methods being employed (e.g.,…
DEFF Research Database (Denmark)
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2012-01-01
In this study a stochastic approach is conducted to obtain the horizontal and rotational stiffness of an offshore monopile foundation. A nonlinear stochastic p-y curve is integrated into a finite element scheme for calculation of the monopile response in over-consolidated clay having spatial...
Sampling Practices and Social Spaces: Exploring a Hip-Hop Approach to Higher Education
Petchauer, Emery
2010-01-01
Much more than a musical genre, hip-hop culture exists as an animating force in the lives of many young adults. This article looks beyond the moral concerns often associated with rap music to explore how hip-hop as a larger set of expressions and practices implicates the educational experiences, activities, and approaches for students. The article…
Functional approximations to posterior densities: a neural network approach to efficient sampling
L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)
2002-01-01
The performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate
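As a minimal illustration of why the candidate density matters, the sketch below compares a self-normalized importance-sampling estimate under a candidate close to the target against a poorly placed one. The Gaussian target and candidates are assumptions for illustration, not the neural-network construction of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def target_logpdf(x):
    # Target density: N(2, 1), chosen only for illustration.
    return -0.5 * (x - 2.0) ** 2 - 0.5 * np.log(2 * np.pi)

def is_estimate(candidate_mean, candidate_std, n=20000):
    """Self-normalized importance-sampling estimate of E[x] under the target."""
    x = rng.normal(candidate_mean, candidate_std, n)
    log_q = (-0.5 * ((x - candidate_mean) / candidate_std) ** 2
             - np.log(candidate_std) - 0.5 * np.log(2 * np.pi))
    w = np.exp(target_logpdf(x) - log_q)      # importance weights
    return np.sum(w * x) / np.sum(w)

close = is_estimate(2.0, 1.2)   # candidate close to the target: accurate
far = is_estimate(-3.0, 1.0)    # poor candidate: few effective samples
print(close, far)
```

The first estimate lands near the true mean of 2; the second is dominated by a handful of large weights, illustrating the numerical accuracy issue the abstract refers to.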
The 4-vessel Sampling Approach to Integrative Studies of Human Placental Physiology In Vivo.
Holme, Ane M; Holm, Maia B; Roland, Marie C P; Horne, Hildegunn; Michelsen, Trond M; Haugen, Guttorm; Henriksen, Tore
2017-08-02
The human placenta is highly inaccessible for research while still in utero. The current understanding of human placental physiology in vivo is therefore largely based on animal studies, despite the high diversity among species in placental anatomy, hemodynamics and duration of the pregnancy. The vast majority of human placenta studies are ex vivo perfusion studies or in vitro trophoblast studies. Although in vitro studies and animal models are essential, extrapolation of the results from such studies to the human placenta in vivo is uncertain. We aimed to study human placenta physiology in vivo at term, and present a detailed protocol of the method. Exploiting the intraabdominal access to the uterine vein just before the uterine incision during planned cesarean section, we collect blood samples from the incoming and outgoing vessels on the maternal and fetal sides of the placenta. When combining concentration measurements from blood samples with volume blood flow measurements, we are able to quantify placental and fetal uptake and release of any compound. Furthermore, placental tissue samples from the same mother-fetus pairs can provide measurements of transporter density and activity and other aspects of placental functions in vivo. Through this integrative use of the 4-vessel sampling method we are able to test some of the current concepts of placental nutrient transfer and metabolism in vivo, both in normal and pathological pregnancies. Furthermore, this method enables the identification of substances secreted by the placenta to the maternal circulation, which could be an important contribution to the search for biomarkers of placenta dysfunction.
An approach for measuring the ¹²⁹I/¹²⁷I ratio in fish samples
Energy Technology Data Exchange (ETDEWEB)
Kusuno, Haruka, E-mail: kusuno@um.u-tokyo.ac.jp [The University Museum, The University of Tokyo, 3-7-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Matsuzaki, Hiroyuki [The University Museum, The University of Tokyo, 3-7-1 Hongo, Bunkyo-ku, Tokyo 113-0033 (Japan); Nagata, Toshi; Miyairi, Yosuke; Yokoyama, Yusuke [Atmosphere and Ocean Research Institute, The University of Tokyo, 5-1-5, Kashiwanoha, Kashiwa-shi, Chiba 277-8564 (Japan); Ohkouchi, Naohiko [Japan Agency for Marine-Earth Science and Technology, 2-15, Natsushima-cho, Yokosuka-city, Kanagawa 237-0061 (Japan)
2015-10-15
The ¹²⁹I/¹²⁷I ratio in marine fish samples was measured employing accelerator mass spectrometry. The measurement was successful because of the low experimental background of ¹²⁹I. Pyrohydrolysis was applied to extract iodine from fish samples. The experimental background of pyrohydrolysis was checked carefully and evaluated as 10⁴–10⁵ atoms ¹²⁹I/combustion. The methodology employed in the present study thus required only 0.05–0.2 g of dried fish samples. The methodology was then applied to obtain the ¹²⁹I/¹²⁷I ratio of marine fish samples collected from the Western Pacific Ocean as (0.63–1.2) × 10⁻¹⁰. These values were similar to the ratio for the surface seawater collected at the same station, 0.4 × 10⁻¹⁰. The ¹²⁹I/¹²⁷I ratio of IAEA-414, which was a mix of fish from the Irish Sea and the North Sea, was also measured and determined as 1.82 × 10⁻⁷. Consequently, fish from the Western Pacific Ocean and the North Sea were distinguished by their ¹²⁹I/¹²⁷I ratios. The ¹²⁹I/¹²⁷I ratio is thus a direct indicator of the area of habitat of fish.
Gender Wage Gap : A Semi-Parametric Approach With Sample Selection Correction
Picchio, M.; Mussida, C.
2010-01-01
Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates
van Leth, Frank; den Heijer, Casper; Beerepoot, Marielle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance
2017-01-01
Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates
Modeling the magnitude and distribution of estuarine sediment contamination by pollutants of historic (e.g. PCB) and emerging concern (e.g., personal care products, PCP) is often limited by incomplete site knowledge and inadequate sediment contamination sampling. We tested a mode...
Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment cont...
Directory of Open Access Journals (Sweden)
Gertraud Gradl-Dietsch
2016-11-01
Background The objectives of this prospective randomized trial were to assess the impact of Peyton's four-step approach on the acquisition of complex psychomotor skills and to examine the influence of gender on learning outcomes. Methods We randomly assigned 95 third- to fifth-year medical students to an intervention group, which received instruction according to Peyton (PG), or a control group, which received conventional teaching (CG). Both groups attended four sessions on the principles of manual therapy and specific manipulative and diagnostic techniques for the spine. We assessed differences in theoretical knowledge (multiple-choice (MC) exam) and practical skills (Objective Structured Practical Examination (OSPE)) with respect to type of intervention and gender. Participants took a second OSPE 6 months after completion of the course. Results There were no differences between groups with respect to the MC exam. Students in the PG scored significantly higher in the OSPE. Gender had no additional impact. Results of the second OSPE showed a significant decline in competency regardless of gender and type of intervention. Conclusions Peyton's approach is superior to standard instruction for teaching complex spinal manipulation skills regardless of gender. Skills retention was equally low for both techniques.
Vroland-Nordstrand, Kristina; Eliasson, Ann-Christin; Jacobsson, Helén; Johansson, Ulla; Krumlinde-Sundholm, Lena
2016-06-01
The efficacy of two different goal-setting approaches (children's self-identified goals and goals identified by parents) was compared within a goal-directed, task-oriented intervention. In this assessor-blinded parallel randomized trial, 34 children with disabilities (13 males, 21 females; mean age 9y, SD 1y 4mo) were randomized using concealed allocation to one of two 8-week, goal-directed, task-oriented intervention groups with different goal-setting approaches: (1) children's self-identified goals (n=18) using the Perceived Efficacy and Goal-Setting System, or (2) goals identified by parents (n=16) using the Canadian Occupational Performance Measure (COPM). Participants were recruited through eight paediatric rehabilitation centres and randomized between October 2011 and May 2013. The primary outcome measure was Goal Attainment Scaling and the secondary measure was the COPM performance scale (COPM-P). Data were collected pre- and post-intervention and at the 5-month follow-up. There was no evidence of a difference in mean characteristics at baseline between groups. There was evidence of an increase in mean goal attainment (mean T score) in both groups after intervention (child-goal group: estimated mean difference [EMD] 27.84, 95% CI 22.93-32.76; parent-goal group: EMD 21.42, 95% CI 16.16-26.67). There was no evidence of a difference in the mean T scores post-intervention between the two groups (EMD 6.42, 95% CI -0.80 to 13.65). These results were sustained at the 5-month follow-up. Children's self-identified goals are achievable to the same extent as parent-identified goals and remain stable over time. Thus children can be trusted to identify their own goals for intervention, thereby influencing their involvement in their intervention programmes. © 2015 Mac Keith Press.
Binder, Alexandra M; Michels, Karin B
2013-12-04
Investigation of the biological mechanism by which folate acts to affect fetal development can inform appraisal of expected benefits and risk management. This research is ethically imperative given the ubiquity of folic acid fortified products in the US. Considering that folate is an essential component in the one-carbon metabolism pathway that provides methyl groups for DNA methylation, epigenetic modifications provide a putative molecular mechanism mediating the effect of folic acid supplementation on neonatal and pediatric outcomes. In this study we use a Mendelian Randomization approach to assess the effect of red blood cell (RBC) folate on genome-wide DNA methylation in cord blood. Site-specific CpG methylation within the proximal promoter regions of approximately 14,500 genes was analyzed using the Illumina Infinium Human Methylation27 Bead Chip for 50 infants from the Epigenetic Birth Cohort at Brigham and Women's Hospital in Boston. Using methylenetetrahydrofolate reductase genotype as the instrument, the Mendelian Randomization approach identified 7 CpG loci with a significant (mostly positive) association between RBC folate and methylation level. Among the genes in closest proximity to this significant subset of CpG loci, several enriched biologic processes were involved in nucleic acid transport and metabolic processing. Compared to the standard ordinary least squares regression method, our estimates were demonstrated to be more robust to unmeasured confounding. To the authors' knowledge, this is the largest genome-wide analysis of the effects of folate on methylation pattern, and the first to employ Mendelian Randomization to assess the effects of an exposure on epigenetic modifications. These results can help guide future analyses of the causal effects of periconceptional folate levels on candidate pathways.
New approach of a transient ICP-MS measurement method for samples with high salinity.
Hein, Christina; Sander, Jonas Michael; Kautenburger, Ralf
2017-03-01
In the near future it will be necessary to establish a disposal site for high-level nuclear waste (HLW) in deep and stable geological formations. In Germany the typical host rocks are salt or claystone. Suitable clay formations exist in the south and in the north of Germany, and their geochemical conditions differ strongly: in the northern formations, ionic strengths of the pore water of up to 5 M are observed. The determination of parameters such as Kd values during sorption experiments of metal ions like uranium or europium (as homologues for trivalent actinides) onto claystones is very important for long-term safety analysis. The measurement of the low-concentration, non-sorbed analytes commonly takes place by inductively coupled plasma mass spectrometry (ICP-MS). A direct measurement of high-saline samples like seawater with more than 1% total dissolved salt content is not possible. Alternatives like sample clean-up, preconcentration or strong dilution have more disadvantages than advantages, for example additional preparation steps or additional and expensive components. With a small modification of the ICP-MS sample introduction system and a home-made reprogramming of the autosampler, a transient analysing method was developed which is suitable for measuring metal ions like europium and uranium in high-saline sample matrices up to 5 M (NaCl). Comparisons at low ionic strength between the default and the transient measurement show the latter performs similarly well to the default measurement. Additionally, no time-consuming sample clean-up or expensive online dilution or matrix removal systems are necessary, and the analysis shows a high sensitivity due to the data processing based on the peak area. Copyright © 2016 Elsevier B.V. All rights reserved.
Directory of Open Access Journals (Sweden)
Karunamuni Nandini
2008-12-01
Background Aerobic physical activity (PA) and resistance training are paramount in the treatment and management of type 2 diabetes (T2D), but few studies have examined the determinants of both types of exercise in the same sample. Objective The primary purpose was to investigate the utility of the Theory of Planned Behavior (TPB) in explaining aerobic PA and resistance training in a population sample of T2D adults. Methods A total of 244 individuals were recruited through a random national sample, created by generating a random list of household phone numbers proportionate to the actual number of household telephone numbers for each Canadian province (with the exception of Quebec). These individuals completed self-report TPB constructs of attitude, subjective norm, perceived behavioral control and intention, and a 3-month follow-up that assessed aerobic PA and resistance training. Results The TPB explained 10% and 8% of the variance for aerobic PA and resistance training, respectively, and accounted for 39% and 45% of the variance for aerobic PA and resistance training intentions, respectively. Conclusion These results may guide the development of appropriate PA interventions for aerobic PA and resistance training based on the TPB.
Howe, Michael
2014-05-01
Much of the digital geological information on the composition, properties and dynamics of the subsurface is based ultimately on physical samples, many of which are archived to provide a basis for the information. Online metadata catalogues of these collections have now been available for many years. Many of these are institutional and tightly focussed, with UK examples including the British Geological Survey's (BGS) palaeontological samples database, PalaeoSaurus (http://www.bgs.ac.uk/palaeosaurus/), and mineralogical and petrological sample database, Britrocks (http://www.bgs.ac.uk/data/britrocks.html). There are now a growing number of international sample metadata databases, including The Palaeobiology Database (http://paleobiodb.org/) and SESAR, the IGSN (International Geo Sample Number) database (http://www.geosamples.org/catalogsearch/). More recently the emphasis has moved beyond metadata (locality, identification, age, citations, etc.) to digital imagery, with the intention of providing the user with at least enough information to determine whether viewing the sample would be worthwhile. Recent BGS examples include high-resolution (e.g. 7216 x 5412 pixel) hydrocarbon well core images (http://www.bgs.ac.uk/data/offshoreWells/wells.cfc?method=searchWells), high-resolution rock thin section images (e.g. http://www.largeimages.bgs.ac.uk/iip/britrocks.html?id=290000/291739) and building stone images (http://geoscenic.bgs.ac.uk/asset-bank/action/browseItems?categoryId=1547&categoryTypeId=1). This has been developed further with high-resolution stereo images. The Jisc-funded GB3D type fossils online project delivers these as red-cyan anaglyphs (http://www.3d-fossils.ac.uk/). More innovatively, the GB3D type fossils project has laser scanned several thousand type fossils and the resulting 3D digital models are now being delivered through the online portal. Importantly, this project also represents collaboration between the BGS, Oxford and Cambridge Universities.
Interactive Fuzzy Goal Programming approach in multi-response stratified sample surveys
Directory of Open Access Journals (Sweden)
Gupta Neha
2016-01-01
In this paper, we apply an Interactive Fuzzy Goal Programming (IFGP) approach with linear, exponential and hyperbolic membership functions, which focuses on maximizing the minimum membership value, to determine the preferred compromise solution for the multi-response stratified survey problem, formulated as a Multi-Objective Non-Linear Programming Problem (MONLPP). By linearizing the nonlinear objective functions at their individual optimum solutions, the problem is approximated by an Integer Linear Programming Problem (ILPP). A numerical example based on real data is given, and a comparison with some existing allocations, viz. Cochran's compromise allocation, Chatterjee's compromise allocation and Khowaja's compromise allocation, is made to demonstrate the utility of the approach.
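The max-min membership idea underlying such fuzzy goal programming can be sketched as a small linear program. The toy objectives, constraint, and payoff-table bounds below are assumptions for illustration, not the survey allocation model of the paper.

```python
from scipy.optimize import linprog

# Two objectives to maximize: f1 = 3*x1 + x2 and f2 = x1 + 2*x2, with x1 + x2 <= 10.
# Payoff-table bounds for the linear memberships mu_k = (f_k - L_k) / (U_k - L_k):
U1, L1 = 30.0, 10.0
U2, L2 = 20.0, 10.0

# Decision vector: [x1, x2, lam]; minimizing -lam maximizes the minimum membership.
c = [0.0, 0.0, -1.0]
A_ub = [
    [1.0, 1.0, 0.0],          # x1 + x2 <= 10
    [-3.0, -1.0, U1 - L1],    # mu1 >= lam  <=>  -f1 + lam*(U1 - L1) <= -L1
    [-1.0, -2.0, U2 - L2],    # mu2 >= lam
]
b_ub = [10.0, -L1, -L2]
bounds = [(0, None), (0, None), (0, 1)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
x1, x2, lam = res.x
print(x1, x2, lam)            # compromise allocation and achieved membership level
```

For this toy payoff table the compromise solution is x = (5, 5) with a minimum membership of 0.5 for both objectives; the paper's interactive approach iterates such solves with different membership shapes.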
A New Approach for Predicting the Variance of Random Decrement Functions
DEFF Research Database (Denmark)
Asmussen, J. C.; Brincker, Rune
1998-01-01
mean Gaussian distributed processes the RD functions are proportional to the correlation functions of the processes. If a linear structure is loaded by Gaussian white noise the modal parameters can be extracted from the correlation functions of the response only. One of the weaknesses of the RD...... technique is that no consistent approach to estimate the variance of the RD functions is known. Only approximate relations are available, which can only be used under special conditions. The variance of the RD functions contains valuable information about the accuracy of the estimates. Furthermore, the variance...... can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...
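A minimal sketch of how an RD function itself is estimated: average segments of the response following a level-crossing triggering condition. The synthetic AR(2) "response" and the triggering level are illustrative assumptions; the paper's variance estimator is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stationary response: a resonant AR(2) process driven by white noise,
# standing in for the response of a lightly damped linear structure.
n = 20000
r, theta = 0.98, 0.1
a1, a2 = 2 * r * np.cos(theta), -r**2
e = rng.standard_normal(n)
x = np.zeros(n)
for i in range(2, n):
    x[i] = a1 * x[i - 1] + a2 * x[i - 2] + e[i]

def random_decrement(x, level, n_lags):
    """RD function: mean of x[t : t + n_lags] over all t with x[t] >= level."""
    triggers = np.where(x[: len(x) - n_lags] >= level)[0]
    segments = np.stack([x[i : i + n_lags] for i in triggers])
    return segments.mean(axis=0), len(triggers)

level = x.std()                       # a common choice of triggering level
rd, n_trig = random_decrement(x, level, n_lags=200)
print(rd[0], n_trig)
```

For a zero-mean Gaussian process this averaged segment is proportional to the correlation function, which is the property the abstract builds on; the variance of `rd` across triggers is the quantity the new method estimates consistently.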
Lee, Stella Juhyun; Brennan, Emily; Gibson, Laura Anne; Tan, Andy S. L.; Kybert-Momjian, Ani; Liu, Jiaying; Hornik, Robert
2016-01-01
Several message topic selection approaches propose that messages based on beliefs pretested and found to be more strongly associated with intentions will be more effective in changing population intentions and behaviors when used in a campaign. This study aimed to validate the underlying causal assumption of these approaches which rely on cross-sectional belief–intention associations. We experimentally tested whether messages addressing promising themes as identified by the above criterion were more persuasive than messages addressing less promising themes. Contrary to expectations, all messages increased intentions. Interestingly, mediation analyses showed that while messages deemed promising affected intentions through changes in targeted promising beliefs, messages deemed less promising also achieved persuasion by influencing nontargeted promising beliefs. Implications for message topic selection are discussed. PMID:27867218
Han, L. F; Plummer, Niel
2016-01-01
Numerous methods have been proposed to estimate the pre-nuclear-detonation 14C content of dissolved inorganic carbon (DIC) recharged to groundwater that has been corrected/adjusted for geochemical processes in the absence of radioactive decay (14C0) - a quantity that is essential for estimation of the radiocarbon age of DIC in groundwater. The models/approaches most commonly used are grouped as follows: (1) single-sample-based models, (2) a statistical approach based on the observed (curved) relationship between 14C and δ13C data for the aquifer, and (3) the geochemical mass-balance approach that constructs adjustment models accounting for all the geochemical reactions known to occur along a groundwater flow path. This review discusses first the geochemical processes behind each of the single-sample-based models, followed by discussions of the statistical approach and the geochemical mass-balance approach. Finally, the applications, advantages and limitations of the three groups of models/approaches are discussed. The single-sample-based models constitute the prevailing use of 14C data in hydrogeology and hydrological studies. This is in part because the models are applied to an individual water sample to estimate the 14C age, so the measurement data are easily available. These models have been shown to provide realistic radiocarbon ages in many studies. However, they usually are limited to simple carbonate aquifers, and the selection of model may have significant effects on 14C0, often resulting in a wide range of estimates of 14C ages. Of the single-sample-based models, four are recommended for the estimation of 14C0 of DIC in groundwater: Pearson's model (Ingerson and Pearson, 1964; Pearson and White, 1967), Han & Plummer's model (Han and Plummer, 2013), the IAEA model (Gonfiantini, 1972; Salem et al., 1980), and Oeschger's model (Geyh, 2000). These four models include all processes considered in single-sample-based models, and can be used in different ranges of
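As an example of a single-sample-based adjustment, Pearson's δ13C mixing model scales the initial 14C content of DIC by the mixing proportion between soil-gas CO2 and carbonate minerals. The endmember values below (100 pmc soil gas, δ13C of -25‰ for soil gas and 0‰ for carbonate) are typical textbook assumptions, not prescriptions from this review.

```python
import math

def pearson_c14_initial(d13c_dic, a14_soil_gas=100.0,
                        d13c_soil_gas=-25.0, d13c_carbonate=0.0):
    """Initial 14C content (pmc) of DIC from Pearson's delta-13C mixing model."""
    frac_soil = (d13c_dic - d13c_carbonate) / (d13c_soil_gas - d13c_carbonate)
    return a14_soil_gas * frac_soil

def radiocarbon_age(a14_measured, a14_initial):
    """Conventional radiocarbon age in years (Libby mean life, 8033 yr)."""
    return -8033.0 * math.log(a14_measured / a14_initial)

a0 = pearson_c14_initial(d13c_dic=-12.5)   # half soil-gas carbon -> a0 = 50 pmc
age = radiocarbon_age(25.0, a0)            # one Libby half-life, ~5568 yr
print(a0, age)
```

A geochemical mass-balance approach would replace this single mixing correction with explicit reaction accounting along the flow path, as the review describes.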
On the foundations of the random lattice approach to quantum gravity
International Nuclear Information System (INIS)
Levin, A.; Morozov, A.
1990-01-01
We discuss the problem which can arise in the identification of conventional 2D quantum gravity, involving the sum over Riemann surfaces, with the results of the lattice approach, based on the enumeration of the Feynman graphs of matrix models. A potential difficulty is related to the (hypothetical) fact that the arithmetic curves are badly distributed in the moduli spaces for high enough genera (at least for g≥17). (orig.)
Inference for Local Distributions at High Sampling Frequencies: A Bootstrap Approach
DEFF Research Database (Denmark)
Hounyo, Ulrich; Varneskov, Rasmus T.
of "large" jumps. Our locally dependent wild bootstrap (LDWB) accommodates issues related to the stochastic scale and jumps as well as accounting for a special block-wise dependence structure induced by sampling errors. We show that the LDWB replicates first- and second-order limit theory from the usual...... empirical process and the stochastic scale estimate, respectively, as well as an asymptotic bias. Moreover, we design the LDWB sufficiently generally to establish asymptotic equivalence between it and a nonparametric local block bootstrap, also introduced here, up to second-order distribution theory....... Finally, we introduce LDWB-aided Kolmogorov-Smirnov tests for local Gaussianity as well as local von Mises statistics, with and without bootstrap inference, and establish their asymptotic validity using the second-order distribution theory. The finite sample performance of CLT and LDWB-aided local...
Ravenell, Joseph; Leighton-Herrmann, Ellyn; Abel-Bey, Amparo; DeSorbo, Alexandra; Teresi, Jeanne; Valdez, Lenfis; Gordillo, Madeleine; Gerin, William; Hecht, Michael; Ramirez, Mildred; Noble, James; Cohn, Elizabeth; Jean-Louis, Giardin; Spruill, Tanya; Waddy, Salina; Ogedegbe, Gbenga; Williams, Olajide
2015-04-19
Stroke is a leading cause of adult disability and mortality. Intravenous thrombolysis can minimize disability when patients present to the emergency department for treatment within the 3 - 4½ h of symptom onset. Blacks and Hispanics are more likely to die and suffer disability from stroke than whites, due in part to delayed hospital arrival and ineligibility for intravenous thrombolysis for acute stroke. Low stroke literacy (poor knowledge of stroke symptoms and when to call 911) among Blacks and Hispanics compared to whites may contribute to disparities in acute stroke treatment and outcomes. Improving stroke literacy may be a critical step along the pathway to reducing stroke disparities. The aim of the current study is to test a novel intervention to increase stroke literacy in minority populations in New York City. In a two-arm cluster randomized trial, we will evaluate the effectiveness of two culturally tailored stroke education films - one in English and one in Spanish - on changing behavioral intent to call 911 for suspected stroke, compared to usual care. These films will target knowledge of stroke symptoms, the range of severity of symptoms and the therapeutic benefit of calling 911, as well as address barriers to timely presentation to the hospital. Given the success of previous church-based programs targeting behavior change in minority populations, this trial will be conducted with 250 congregants across 14 churches (125 intervention; 125 control). Our proposed outcomes are (1) recognition of stroke symptoms and (2) behavioral intent to call 911 for suspected stroke, measured using the Stroke Action Test at the 6-month and 1-year follow-up. This is the first randomized trial of a church-placed narrative intervention to improve stroke outcomes in urban Black and Hispanic populations. A film intervention has the potential to make a significant public health impact, as film is a highly scalable and disseminable medium. Since there is at least one
Energy Technology Data Exchange (ETDEWEB)
Chandonia, John-Marc; Brenner, Steven E.
2004-07-14
The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy which is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 of eukaryotes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage which would allow accurate modeling of these proteins would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as