A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models
Shieh, Gwowen
2007-01-01
The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…
Bergh, Daniel
2015-01-01
Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches for handling large samples in test-of-fit analysis have been developed. One strategy for handling the sample size problem is to adjust the sample size in the analysis of fit; an alternative is to adopt a random-sample approach. The purpose of this study was to analyze and compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample size down to the order of 5,000 the adjusted sample size function works as well as the random-sample approach. In contrast, when adjustments are applied to sample sizes of lower order, the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, fit is exaggerated and misfit underestimated when the adjusted sample size function is used. Although there are large differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.
Sample size calculations for pilot randomized trials: a confidence interval approach.
Cocks, Kim; Torgerson, David J
2013-02-01
To describe a method using confidence intervals (CIs) to estimate the sample size for a pilot randomized trial. Using one-sided CIs and the estimated effect size that would be sought in a large trial, we calculated the sample size needed for pilot trials. Using an 80% one-sided CI, we estimated that a pilot trial should have at least 9% of the sample size of the main planned trial. Using the estimated effect size difference for the main trial and using a one-sided CI, this allows us to calculate a sample size for a pilot trial, which will make its results more useful than at present. Copyright © 2013 Elsevier Inc. All rights reserved.
Multivariate Multi-Objective Allocation in Stratified Random Sampling: A Game Theoretic Approach.
Muhammad, Yousaf Shad; Hussain, Ijaz; Shoukry, Alaa Mohamd
2016-01-01
We consider the problem of multivariate multi-objective allocation where no or only limited information on the stratum variances is available. Results show that a game-theoretic approach (based on weighted goal programming) can be applied to sample size allocation problems. We use a simulation technique to determine the payoff matrix and to solve a minimax game.
Sulaiman, Nabil; Albadawi, Salah; Abusnana, Salah; Fikri, Mahmoud; Madani, Abdulrazzag; Mairghani, Maisoon; Alawadi, Fatheya; Zimmet, Paul; Shaw, Jonathan
2015-09-01
The prevalence of diabetes has risen rapidly in the Middle East, particularly in the Gulf Region. However, some prevalence estimates have not fully accounted for large migrant worker populations and have focused on minority indigenous populations. The objectives of the UAE National Diabetes and Lifestyle Study are to: (i) define the prevalence of, and risk factors for, T2DM; (ii) describe the distribution and determinants of T2DM risk factors; (iii) study health knowledge and attitudes; (iv) identify gene-environment interactions; and (v) develop baseline data for evaluation of future intervention programs. Given the high burden of diabetes in the region and the absence of accurate data on non-UAE nationals in the UAE, a representative sample of the non-UAE nationals was essential. We used an innovative methodology in which non-UAE nationals were sampled when attending the mandatory biannual health check that is required for visa renewal. Such an approach could also be used in other countries in the region. Complete data were available for 2719 eligible non-UAE nationals (25.9% Arabs, 70.7% Asian non-Arabs, 1.1% African non-Arabs, and 2.3% Westerners). Most were men in service, sales, and unskilled occupations. The largest group (37.4%) had completed high school, and 4.1% had a postgraduate degree. This novel methodology could provide insights for epidemiological studies in the UAE and other Gulf States, particularly for expatriates. © 2015 Ruijin Hospital, Shanghai Jiaotong University School of Medicine and Wiley Publishing Asia Pty Ltd.
k-Means: Random Sampling Procedure
Indian Academy of Sciences (India)
k-Means: Random Sampling Procedure. The optimal 1-mean is approximated by the centroid of a random sample (Inaba et al.): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
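The Inaba et al. result quoted above is easy to demonstrate empirically. Below is a minimal Python sketch (not from the record itself; the data set and sample size are illustrative): the centroid of a small random sample approximates the centroid of the full point set, and the required sample size is independent of |P|.

```python
import random

def centroid(points):
    """Coordinate-wise mean of a list of equal-length tuples."""
    n = len(points)
    dim = len(points[0])
    return tuple(sum(p[i] for p in points) / n for i in range(dim))

rng = random.Random(42)
# Synthetic 2-D point set; the sample-centroid guarantee is distribution-free.
P = [(rng.gauss(3.0, 1.0), rng.gauss(-1.0, 2.0)) for _ in range(10_000)]

c_full = centroid(P)
c_sample = centroid(rng.sample(P, 100))   # |S| grows like O(1/eps), not with |P|
err = sum((a - b) ** 2 for a, b in zip(c_full, c_sample)) ** 0.5
```

With 100 sampled points the Euclidean error between the two centroids is typically a small fraction of the data's spread.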
Nicklas, Jacinda M; Skurnik, Geraldine; Zera, Chloe A; Reforma, Liberty G; Levkoff, Sue E; Seely, Ellen W
2016-02-01
The postpartum period is a window of opportunity for diabetes prevention in women with recent gestational diabetes (GDM), but recruitment for clinical trials during this period of life is a major challenge. We adapted a social-ecologic model to develop a multi-level recruitment strategy at the macro (high or institutional level), meso (mid or provider level), and micro (individual) levels. Our goal was to recruit 100 women with recent GDM into the Balance after Baby randomized controlled trial over a 17-month period. Participants were asked to attend three in-person study visits at 6 weeks, 6, and 12 months postpartum. They were randomized into a control arm or a web-based intervention arm at the end of the baseline visit at six weeks postpartum. At the end of the recruitment period, we compared population characteristics of our enrolled subjects to the entire population of women with GDM delivering at Brigham and Women's Hospital (BWH). We successfully recruited 107 of 156 (69 %) women assessed for eligibility, with the majority (92) recruited during pregnancy at a mean 30 (SD ± 5) weeks of gestation, and 15 recruited postpartum, at a mean 2 (SD ± 3) weeks postpartum. 78 subjects attended the initial baseline visit, and 75 subjects were randomized into the trial at a mean 7 (SD ± 2) weeks postpartum. The recruited subjects were similar in age and race/ethnicity to the total population of 538 GDM deliveries at BWH over the 17-month recruitment period. Our multilevel approach allowed us to successfully meet our recruitment goal and recruit a representative sample of women with recent GDM. We believe that our most successful strategies included using a dedicated in-person recruiter, integrating recruitment into clinical flow, allowing for flexibility in recruitment, minimizing barriers to participation, and using an opt-out strategy with providers. Although the majority of women were recruited while pregnant, women recruited in the early postpartum period were
K-Median: Random Sampling Procedure
Indian Academy of Sciences (India)
K-Median: Random Sampling Procedure. Sample a set of 1/ε + 1 points from P. Let Q = the first 1/ε points and p = the last point. Let T = the average 1-median cost of P and c = the 1-median. Let B1 = B(c, T/ε²) and B2 = B(p, T). Let P′ = the points in B1.
Optimum allocation in multivariate stratified random sampling: Stochastic matrix optimisation
Diaz-Garcia, Jose A.; Ramos-Quiroga, Rogelio
2011-01-01
The allocation problem for multivariate stratified random sampling is considered as a problem of stochastic matrix integer mathematical programming. With this aim, the asymptotic normality of sample covariance matrices for each stratum is established. Some alternative approaches are suggested for its solution, and an example is solved by applying the proposed techniques.
Methods for sample size determination in cluster randomized trials.
Rutterford, Clare; Copas, Andrew; Eldridge, Sandra
2015-06-01
The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
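The "simple design effect" route that the review describes as the simplest approach can be sketched as follows. This assumes the textbook form DE = 1 + (m − 1) × ICC for equal cluster sizes m; the numbers are illustrative and not taken from the review.

```python
import math

def cluster_trial_size(n_individual, cluster_size, icc):
    """Inflate an individually randomized sample size by the design effect
    DE = 1 + (m - 1) * ICC, then count whole clusters needed."""
    deff = 1 + (cluster_size - 1) * icc
    # round() guards against float noise before taking the ceiling
    n_total = math.ceil(round(n_individual * deff, 8))
    clusters = math.ceil(n_total / cluster_size)
    return n_total, deff, clusters

# e.g. 200 participants needed under individual randomization,
# clusters of 20, ICC = 0.05  ->  DE = 1.95, so 390 participants
n, deff, k = cluster_trial_size(200, 20, 0.05)
```

As the review stresses, this simple inflation assumes equal cluster sizes and a single ICC; variable cluster sizes, attrition, or repeated measures call for the more elaborate methods it catalogues.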
Sequential time interleaved random equivalent sampling for repetitive signal
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve efficiency. However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype of the proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
Spatial Random Sampling: A Structure-Preserving Data Sketching Tool
Rahmani, Mostafa; Atia, George K.
2017-09-01
Random column sampling is not guaranteed to yield data sketches that preserve the underlying structures of the data, and it may not sample sufficiently from less-populated data clusters. Adaptive sampling can often provide accurate low-rank approximations, yet it may fall short of producing descriptive data sketches, especially when the cluster centers are linearly dependent. Motivated by this, the paper introduces a novel randomized column sampling tool dubbed Spatial Random Sampling (SRS), in which data points are sampled based on their proximity to randomly sampled points on the unit sphere. The most compelling feature of SRS is that the probability of sampling from a given data cluster is proportional to the surface area the cluster occupies on the unit sphere, independently of the size of the cluster population. Although fully randomized, SRS is shown to provide descriptive and balanced data representations. The proposed idea addresses a pressing need in data science and holds potential to inspire many novel approaches for the analysis of big data.
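A rough sketch of the SRS idea as described in the abstract (an illustrative reconstruction, not the authors' code): project points onto the unit sphere and, for each randomly drawn direction, keep the nearest data point. A small cluster that occupies its own region of the sphere is then sampled far more often than its population share would suggest.

```python
import math
import random

def unit(v):
    norm = math.sqrt(sum(x * x for x in v))
    return tuple(x / norm for x in v)

def spatial_random_sample(data, k, rng):
    """For each of k random directions on the unit sphere, keep the data
    point whose normalized direction is closest (largest inner product)."""
    dim = len(data[0])
    normed = [unit(p) for p in data]
    picked = set()
    for _ in range(k):
        d = unit(tuple(rng.gauss(0, 1) for _ in range(dim)))  # uniform direction
        i = max(range(len(data)),
                key=lambda j: sum(a * b for a, b in zip(normed[j], d)))
        picked.add(i)
    return sorted(picked)

rng = random.Random(7)
# Two clusters of very different sizes: 900 points vs 100 points.
big = [(rng.gauss(5, 0.3), rng.gauss(0, 0.3)) for _ in range(900)]
small = [(rng.gauss(-5, 0.3), rng.gauss(5, 0.3)) for _ in range(100)]
data = big + small
idx = spatial_random_sample(data, 20, rng)
from_small = sum(1 for i in idx if i >= 900)  # hits in the 10% cluster
```

Plain random sampling would pick from the small cluster roughly 10% of the time; here the small cluster's share is governed by the arc it occupies on the circle, so it is hit far more often.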
Acceptance sampling using judgmental and randomly selected samples
Energy Technology Data Exchange (ETDEWEB)
Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl
2010-09-01
We present a Bayesian model for acceptance sampling where the population consists of two groups, each with a different level of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling, where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
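The traditional (frequentist) special case mentioned in the abstract, where all n sampled items must be acceptable, reduces to a one-line calculation for a large population. This is the standard zero-failure bound, not the paper's Bayesian model:

```python
import math

def zero_failure_sample_size(q, confidence):
    """Smallest n such that observing n acceptable items out of n gives
    `confidence` that at least fraction q of a large population is
    acceptable, i.e. the smallest n with q**n <= 1 - confidence."""
    return math.ceil(math.log(1 - confidence) / math.log(q))

# 95% confidence that at least 99% of items are acceptable
n = zero_failure_sample_size(0.99, 0.95)   # n = 299
```

The Bayesian two-group model in the paper generalizes this by letting judgmentally sampled high-risk items carry information about the low-risk remainder, typically reducing the required random sample.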
GSAMPLE: Stata module to draw a random sample
Jann, Ben
2006-01-01
gsample draws a random sample from the data in memory. Simple random sampling (SRS) is supported, as well as unequal probability sampling (UPS), of which sampling with probabilities proportional to size (PPS) is a special case. Both methods, SRS and UPS/PPS, provide sampling with replacement and sampling without replacement. Furthermore, stratified sampling and cluster sampling are supported.
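The sampling schemes gsample supports are standard; here is a minimal Python illustration of SRS and PPS, with and without replacement (illustrative only; it does not reproduce the Stata module's syntax, and the without-replacement variant shown is simple successive sampling, one of several PPS-without-replacement designs):

```python
import random

rng = random.Random(1)
frame = [f"unit{i}" for i in range(8)]
size = [1, 1, 2, 2, 4, 4, 8, 10]                 # measure of size for PPS

srs_wor = rng.sample(frame, 3)                   # SRS without replacement
pps_wr = rng.choices(frame, weights=size, k=3)   # PPS with replacement

def pps_wor(units, sizes, k, rng):
    """PPS without replacement by successive sampling: draw one unit
    proportional to size, remove it, repeat."""
    units, sizes = list(units), list(sizes)
    out = []
    for _ in range(k):
        pick = rng.choices(range(len(units)), weights=sizes, k=1)[0]
        out.append(units.pop(pick))
        sizes.pop(pick)
    return out

sample = pps_wor(frame, size, 3, rng)
```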
Generation and Analysis of Constrained Random Sampling Patterns
DEFF Research Database (Denmark)
Pierzchlewski, Jacek; Arildsen, Thomas
2016-01-01
Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, whi...
Sampling Polymorphs of Ionic Solids using Random Superlattices.
Stevanović, Vladan
2016-02-19
Polymorphism offers a rich and virtually unexplored space for discovering novel functional materials. To harness this potential, approaches capable of both exploring the space of polymorphs and assessing their realizability are needed. One such approach, devised for partially ionic solids, is presented. The structure-prediction part is carried out by performing local density functional theory relaxations on a large set of random superlattices (RSLs) with atoms distributed randomly over different planes in a way that favors cation-anion coordination. Applying RSL sampling to MgO, ZnO, and SnO2 reveals that the resulting probability of occurrence of a given structure offers a measure of its realizability, fully explaining the experimentally observed metastable polymorphs in these three systems.
Optimizing sampling approaches along ecological gradients
DEFF Research Database (Denmark)
Schweiger, Andreas; Irl, Severin D. H.; Steinbauer, Manuel
2016-01-01
1. Natural scientists and especially ecologists use manipulative experiments or field observations along gradients to differentiate patterns driven by processes from those caused by random noise. A well-conceived sampling design is essential for identifying, analysing and reporting underlying...
Decompounding random sums: A nonparametric approach
DEFF Research Database (Denmark)
Hansen, Martin Bøgsted; Pitts, Susan M.
Observations from sums of random variables with a random number of summands, known as random, compound, or stopped sums, arise within many areas of engineering and science. Quite often it is desirable to infer properties of the distribution of the terms in the random sum. In the present paper we… review a number of applications and consider the nonlinear inverse problem of inferring the cumulative distribution function of the components in the random sum. We review the existing literature on non-parametric approaches to the problem. The models amenable to the analysis are generalized considerably…
Ucar Zennure; Pete Bettinger; Krista Merry; Jacek Siry; J.M. Bowker
2016-01-01
Two different sampling approaches for estimating urban tree canopy cover were applied to two medium-sized cities in the United States, in conjunction with two freely available remotely sensed imagery products. A random point-based sampling approach, which involved 1000 sample points, was compared against a plot/grid sampling (cluster sampling) approach that involved a...
Analysis of a global random stratified sample of nurse legislation.
Benton, D C; Fernández-Fernández, M P; González-Jurado, M A; Beneit-Montesinos, J V
2015-06-01
To identify, compare and contrast the major component parts of a heterogeneous stratified sample of nursing legislation. Nursing legislation varies from one jurisdiction to another, but until now no research existed on whether such variations are random or related to a set of key attributes. This mixed-method study used a random stratified sample of legislation to map, through documentary analysis, the content of 14 nursing acts, and then explored, using quantitative techniques, whether the material contained relates to a number of key attributes. These attributes include: the legal tradition of the jurisdiction; the model of regulation; the administrative approach; the area of the world; and the economic status of the jurisdiction. Twelve component parts of nursing legislation were identified. These were remarkably similar irrespective of the attributes of interest; however, not all component parts were specified at the same level of detail, and the manner in which the elements were addressed did vary. A number of potential relationships between the structure of the legislation and the key attributes of interest were identified. This study generated a comprehensive and integrated map of a global sample of nursing legislation. It provides a set of descriptors for further quantitative work and an important policy tool to facilitate dialogue between regulatory bodies. At the individual nurse level, it offers insights that can help nurses pursue recognition of credentials across jurisdictions. © 2015 International Council of Nurses.
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, Clara M.; Buchhave, Preben; K. George, William
2014-01-01
…of alternative methods attempting to produce correct power spectra have been invented and tested. The objective of the current study is to create a simple computer-generated signal for baseline testing of residence time weighting and some of the most commonly proposed algorithms (or algorithms which most modern algorithms ultimately are based on), sample-and-hold and the direct spectral estimator without residence time weighting, and to compare how they perform in relation to power spectra based on the equidistantly sampled reference signal. The computer-generated signal is a Poisson process with a sample rate… Sample-and-hold and the free-running processor perform well only under particular circumstances, with high data rate and low inherent bias, respectively, while residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from…
Random constraint sampling and duality for convex optimization
Haskell, William B.; Pengqian, Yu
2016-01-01
We are interested in solving convex optimization problems with large numbers of constraints. Randomized algorithms, such as random constraint sampling, have been very successful in giving nearly optimal solutions to such problems. In this paper, we combine random constraint sampling with the classical primal-dual algorithm for convex optimization problems with large numbers of constraints, and we give a convergence rate analysis. We then report numerical experiments that verify the effectiven...
Random number datasets generated from statistical analysis of randomly sampled GSM recharge cards.
Okagbue, Hilary I; Opanuga, Abiodun A; Oguntunde, Pelumi E; Ugwoke, Paulinus O
2017-02-01
In this article, random number datasets were generated from random samples of used GSM (Global System for Mobile Communications) recharge cards. Statistical analyses were performed to refine the raw data into random number datasets arranged in tables. A detailed description of the method and relevant tests of randomness are also discussed.
Power Spectrum Estimation of Randomly Sampled Signals
DEFF Research Database (Denmark)
Velte, C. M.; Buchhave, P.; K. George, W.
…sine waves. The primary signal and the corresponding power spectrum are shown in Figure 1. The conventional spectrum shows multiple erroneous mixing frequencies and the peak values are too low. The residence time weighted spectrum is correct. The sample-and-hold spectrum has lower power than… the correct spectrum, and the f^(-2) filtering effect appearing for low data densities is evident (Adrian and Yao 1987). The remaining tests also show that sample-and-hold and the free-running processor perform well only under very particular circumstances, with high data rate and low inherent bias, respectively… Residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from measurements on the jet centerline and the outer part of the jet…
Biro, Peter A
2013-02-01
Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two-times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), and affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.
SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS
Directory of Open Access Journals (Sweden)
Sampath Sundaram
2010-09-01
In this paper an attempt is made to extend linear systematic sampling using multiple random starts, due to Gautschi (1957), to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic sampling (LSS) with two random starts, using appropriate superpopulation models with the help of the R package for statistical computing.
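One common reading of linear systematic sampling with m random starts can be sketched as follows (assumptions: m divides n and n divides N; this illustrates the general idea only, not the BSS/MSS extensions studied in the paper):

```python
import random

def lss_multiple_starts(N, n, m, rng):
    """Linear systematic sample of size n from population 0..N-1 using
    m distinct random starts: stretch the interval k = N // n to m*k,
    draw m starts from it, and take every (m*k)-th unit after each."""
    assert n % m == 0 and N % n == 0
    k = N // n
    interval = m * k
    starts = rng.sample(range(interval), m)
    return sorted(s + j * interval for s in starts for j in range(n // m))

rng = random.Random(3)
s = lss_multiple_starts(N=100, n=10, m=2, rng=rng)
```

Each start contributes n/m equally spaced units, so the combined sample keeps the systematic spread while the multiple starts allow an unbiased variance estimate, which a single start does not.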
Efficient sampling of complex network with modified random walk strategies
Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei
2018-02-01
We present two novel random walk strategies: choosing-seed-node (CSN) random walk and no-retracing (NR) random walk. Unlike classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks, and the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree, and average clustering coefficient, are studied. Similar conclusions can be reached with all three random walk strategies. First, networks with small scales and simple structures are conducive to sampling. Second, the average degree and average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. Third, all the degree distributions of the subnets are slightly biased toward the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some obvious characteristics, such as the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
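The no-retracing (NR) strategy is straightforward to sketch: at each step the walker picks a uniform random neighbor, excluding the node it just came from whenever an alternative exists. The toy graph below is an illustrative stand-in for the ER/BA/WS networks used in the paper.

```python
import random

def nr_random_walk(adj, seed, steps, rng):
    """No-retracing random walk: choose a uniform random neighbor,
    excluding the previous node unless the walker is at a dead end."""
    walk = [seed]
    prev = None
    for _ in range(steps):
        cur = walk[-1]
        choices = [v for v in adj[cur] if v != prev] or adj[cur]
        prev = cur
        walk.append(rng.choice(choices))
    return walk

# small ring with chords; every node has degree >= 2
adj = {0: [1, 5], 1: [0, 2], 2: [1, 3, 5], 3: [2, 4], 4: [3, 5], 5: [4, 0, 2]}
rng = random.Random(11)
walk = nr_random_walk(adj, seed=0, steps=50, rng=rng)
# count immediate backtracks (returning to the node two steps back)
retraces = sum(1 for i in range(2, len(walk)) if walk[i] == walk[i - 2])
```

Because every node here has at least two neighbors, the NR walker never immediately backtracks, which is exactly the path-overlap reduction the abstract describes.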
A Table-Based Random Sampling Simulation for Bioluminescence Tomography
Directory of Open Access Journals (Sweden)
Xiaomeng Zhang
2006-01-01
As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple steps of scattering into a single-step process through random table querying, thus greatly reducing the computing complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast alternative to the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source, based on trial-and-error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
Sampling large random knots in a confined space
Energy Technology Data Exchange (ETDEWEB)
Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)
2007-09-28
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
A comparison of methods for representing sparsely sampled random quantities.
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua
2013-09-01
This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
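The report's specific tolerance-interval variant is not given in the abstract; the classic distribution-free (Wilks) version conveys the flavor. The probability that the sample minimum and maximum bracket at least a fraction p of the underlying distribution depends only on n, so one can solve for the smallest n that meets a confidence target.

```python
def two_sided_coverage_conf(n, p):
    """P{ [x_min, x_max] of n i.i.d. samples covers >= fraction p }
    for any continuous distribution (Wilks, via order statistics)."""
    return 1 - n * p ** (n - 1) + (n - 1) * p ** n

def min_n(p, conf):
    """Smallest n whose min/max interval meets the confidence target."""
    n = 2
    while two_sided_coverage_conf(n, p) < conf:
        n += 1
    return n

n_9595 = min_n(0.95, 0.95)   # classic two-sided 95/95 requirement: n = 93
```

This matches the report's framing: the interval is conservative by construction (it bounds the percentile range with stated reliability), and the sample size controls how much over-coverage one must accept.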
Performance of Random Effects Model Estimators under Complex Sampling Designs
Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan
2011-01-01
In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…
A random spatial sampling method in a rural developing nation.
Kondo, Michelle C; Bream, Kent D W; Barg, Frances K; Branas, Charles C
2014-04-10
Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available.
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Random sampling and validation of covariance matrices of resonance parameters
Plevnik, Lucijan; Zerovnik, Gašper
2017-09-01
Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given, on the one hand, to possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, to optimizing the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file of a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files in which the original values of the resonance parameters are replaced by random samples (drawn in accordance with the corresponding covariance matrices). The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies observed in the covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to the sampling and validation of any nuclear data.
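A minimal sketch of the core sampling step (not ENDSAM's code, and assuming multivariate-normal perturbations, which may differ from its actual distributional choices): check the covariance matrix for positive semi-definiteness, factor it, and map independent standard normals through the factor.

```python
import numpy as np

def sample_correlated(mean, cov, n_samples, rng):
    """Draw n_samples vectors with the given mean and covariance."""
    # Symmetrize, then check positive semi-definiteness via eigenvalues.
    cov = 0.5 * (cov + cov.T)
    eigvals = np.linalg.eigvalsh(cov)
    if eigvals.min() < -1e-10 * max(eigvals.max(), 1.0):
        raise ValueError("covariance matrix is not positive semi-definite")
    # Eigen-decomposition so exactly singular (PSD) matrices also work.
    w, V = np.linalg.eigh(cov)
    L = V * np.sqrt(np.clip(w, 0.0, None))   # L @ L.T == cov
    z = rng.standard_normal((n_samples, len(mean)))
    return mean + z @ L.T
```

An inconsistent matrix such as [[1, 2], [2, 1]] (eigenvalues -1 and 3) is rejected by the check, which is the kind of covariance-data inconsistency the abstract reports finding in evaluated libraries.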
Generalized and synthetic regression estimators for randomized branch sampling
David L. R. Affleck; Timothy G. Gregoire
2015-01-01
In felled-tree studies, ratio and regression estimators are commonly used to convert more readily measured branch characteristics to dry crown mass estimates. In some cases, data from multiple trees are pooled to form these estimates. This research evaluates the utility of both tactics in the estimation of crown biomass following randomized branch sampling (...
Effective sampling of random surfaces by baby universe surgery
Ambjørn, J.; Białas, P.; Jurkiewicz, J.; Burda, Z.; Petersson, B.
1994-01-01
We propose a new, very efficient algorithm for the sampling of random surfaces in Monte Carlo simulations, based on so-called baby universe surgery, i.e., cutting and pasting of baby universes. It drastically reduces slowing down as compared to the standard local flip algorithm, thereby allowing
Small sample approach, and statistical and epidemiological aspects.
Offringa, Martin; van der Lee, Hanneke
2011-01-01
In this chapter, the design of pharmacokinetic studies and phase III trials in children is discussed. Classical approaches and relatively novel approaches, which may be more useful in the context of drug research in children, are discussed. The burden of repeated blood sampling in pediatric pharmacokinetic studies may be overcome by the population pharmacokinetics approach, using nonlinear mixed effect modeling as the statistical solution to sparse data. Indications and contraindications for phase III trials are discussed: only when there is true "equipoise" in the medical scientific community is it ethical to conduct a randomized clinical trial. The many reasons why a pediatric trial may fail are illustrated with examples. Inadequate sample sizes lead to inconclusive results. Twelve classical strategies to minimize sample sizes are discussed, followed by an introduction to group sequential designs, boundary designs, and adaptive designs. The evidence that these designs reduce sample sizes by between 35% and 70% is reviewed. The advantages and disadvantages of the different approaches are highlighted to give the reader a broad idea of the design types that can be considered. Finally, working with data monitoring committees (DMCs) during the conduct of trials is introduced. The evidence regarding DMC activities, interim analysis results, and early termination of pediatric trials is presented. So far, reporting is incomplete and heterogeneous, and users of trial reports may be misled by the results. A proposal for a checklist for the reporting of DMC issues, interim analyses, and early stopping is presented.
Random matrix approach to categorical data analysis
Patil, Aashay; Santhanam, M. S.
2015-09-01
Correlation and similarity measures are widely used in all the areas of sciences and social sciences. Often the variables are not numbers but are instead qualitative descriptors called categorical data. We define and study similarity matrix, as a measure of similarity, for the case of categorical data. This is of interest due to a deluge of categorical data, such as movie ratings, top-10 rankings, and data from social media, in the public domain that require analysis. We show that the statistical properties of the spectra of similarity matrices, constructed from categorical data, follow random matrix predictions with the dominant eigenvalue being an exception. We demonstrate this approach by applying it to the data for Indian general elections and sea level pressures in the North Atlantic ocean.
Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source
DEFF Research Database (Denmark)
Mashiach, Adam; Østergaard, Jan; Zamir, Ram
2013-01-01
Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach, that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough s...
A random matrix approach to language acquisition
Nicolaidis, A.; Kosmidis, Kosmas; Argyrakis, Panos
2009-12-01
Since language is tied to cognition, we expect the linguistic structures to reflect patterns that we encounter in nature and are analyzed by physics. Within this realm we investigate the process of lexicon acquisition, using analytical and tractable methods developed within physics. A lexicon is a mapping between sounds and referents of the perceived world. This mapping is represented by a matrix and the linguistic interaction among individuals is described by a random matrix model. There are two essential parameters in our approach. The strength of the linguistic interaction β, which is considered as a genetically determined ability, and the number N of sounds employed (the lexicon size). Our model of linguistic interaction is analytically studied using methods of statistical physics and simulated by Monte Carlo techniques. The analysis reveals an intricate relationship between the innate propensity for language acquisition β and the lexicon size N, N~exp(β). Thus a small increase of the genetically determined β may lead to an incredible lexical explosion. Our approximate scheme offers an explanation for the biological affinity of different species and their simultaneous linguistic disparity.
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
Sampling of Complex Networks: A Datamining Approach
Loecher, Markus; Dohrmann, Jakob; Bauer, Gernot
2007-03-01
Efficient and accurate sampling of big complex networks is still an unsolved problem. As the degree distribution is one of the most commonly used attributes to characterize a network, there have been many attempts in recent papers to derive the original degree distribution from the data obtained during a traceroute-like sampling process. This talk describes a strategy for predicting the original degree of a node using the data obtained from a network by traceroute-like sampling, making use of datamining techniques. Only local quantities (the sampled degree k, the redundancy of node detection r, the time of the first discovery of a node t, and the distance to the sampling source d) are used as input for the datamining models. Global properties like the betweenness centrality are ignored. These local quantities are examined theoretically and in simulations to increase their value for the predictions. The accuracy of the models is discussed as a function of the number of sources used in the sampling process and the underlying topology of the network. The purpose of this work is to introduce the techniques of the relatively young field of datamining to the discussion on network sampling.
Randomly Sampled-Data Control Systems. Ph.D. Thesis
Han, Kuoruey
1990-01-01
The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model the stochastic information exchange among decentralized controllers, to name just a few possibilities. A practical suboptimal controller is proposed with the desirable property of mean square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
Sampling approaches for extensive surveys in nematology.
Prot, J C; Ferris, H
1992-12-01
Extensive surveys of the frequency and abundance of plant-parasitic nematodes over large geographic areas provide useful data of unknown reliability. Time, cost, and logistical constraints may limit the sampling intensity that can be invested at any survey site. We developed a computer program to evaluate the probability of detection and the reliability of population estimates obtained by different strategies for collecting one sample of 10 cores from a field. We used data from two fields that had been sampled systematically and extensively as the basis for our analyses. Our analyses indicate that, at least for those two fields, it is possible to have a high probability of detecting the presence of nematode species and to reliably estimate abundance, with a single 10-core soil sample from a field. When species were rare or not uniformly distributed in a field, the probability of detection and reliability of the population estimate were correlated with the distance between core removal sites. Increasing the prescribed distance between cores resulted in the composite sample representing a wider range of microenvironments in the field.
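As a toy illustration of the detection-probability question studied above (not the authors' simulation, and ignoring the spatial correlation between core-removal sites that the paper emphasizes), assume each of the 10 cores independently lands on an infested patch with probability p:

```python
import random

def detection_probability(p_core, n_cores):
    """Chance that at least one of n_cores independent cores hits an
    infested patch, each with per-core probability p_core."""
    return 1.0 - (1.0 - p_core) ** n_cores

def simulate(p_core, n_cores, trials, seed=0):
    """Monte Carlo check of the closed-form detection probability."""
    rng = random.Random(seed)
    hits = sum(
        any(rng.random() < p_core for _ in range(n_cores))
        for _ in range(trials)
    )
    return hits / trials
```

With p = 0.2 per core, a 10-core composite sample already detects the species about 89% of the time, which is consistent with the paper's finding that a single well-spaced 10-core sample can give a high probability of detection.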
Random field estimation approach to robot dynamics
Rodriguez, Guillermo
1990-01-01
The difference equations of Kalman filtering and smoothing recursively factor and invert the covariance of the output of a linear state-space system driven by a white-noise process. Here it is shown that similar recursive techniques factor and invert the inertia matrix of a multibody robot system. The random field models are based on the assumption that all of the inertial (D'Alembert) forces in the system are represented by a spatially distributed white-noise model. They are easier to describe than the models based on classical mechanics, which typically require extensive derivation and manipulation of equations of motion for complex mechanical systems. With the spatially random models, more primitive locally specified computations result in a global collective system behavior equivalent to that obtained with deterministic models. The primary goal of applying random field estimation is to provide a concise analytical foundation for solving robot control and motion planning problems.
A Random Matrix Approach to Credit Risk
Guhr, Thomas
2014-01-01
We estimate generic statistical properties of a structural credit risk model by considering an ensemble of correlation matrices. This ensemble is set up by Random Matrix Theory. We demonstrate analytically that the presence of correlations severely limits the effect of diversification in a credit portfolio if the correlations are not identically zero. The existence of correlations alters the tails of the loss distribution considerably, even if their average is zero. Under the assumption of randomly fluctuating correlations, a lower bound for the estimation of the loss distribution is provided. PMID:24853864
Power and sample size calculations for Mendelian randomization studies using one genetic instrument.
Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary
2013-08-01
Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to some example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure and proportional to the residual variance of the outcome after removing the effect of the exposure, as well as inversely proportional to the square of the effect size.
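The proportionalities stated in the abstract can be turned into a back-of-the-envelope calculator. The sketch below uses the standard normal-approximation form and is not necessarily the exact parameterization in the paper; `rho2` denotes the squared instrument-exposure correlation and `var_residual` the outcome variance net of the exposure effect:

```python
from math import ceil
from statistics import NormalDist

def mr_sample_size(beta, rho2, var_residual, var_exposure,
                   alpha=0.05, power=0.8):
    """Approximate n for a one-instrument Mendelian randomization study.

    beta: causal effect of exposure on outcome (per unit exposure);
    rho2: squared correlation between instrument and exposure;
    var_residual: outcome variance after removing the exposure effect;
    var_exposure: variance of the exposure.
    """
    z = NormalDist().inv_cdf
    z_a = z(1 - alpha / 2)   # two-sided significance quantile
    z_b = z(power)           # power quantile
    return ceil((z_a + z_b) ** 2 * var_residual
                / (beta ** 2 * rho2 * var_exposure))
```

Note how halving `rho2` (a weaker instrument) roughly doubles the required sample size, exactly the inverse-square dependence the abstract highlights.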
A random approach to the Lebesgue integral
Grahl, Jack
2008-04-01
We construct an integral of a measurable real function using randomly chosen Riemann sums and show that it converges in probability to the Lebesgue integral where this exists. We then prove some conditions for the almost sure convergence of this integral.
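The construction can be imitated numerically. The sketch below is an illustration, not the paper's exact construction: draw a random partition of [0,1] and random tags, and the resulting Riemann sums concentrate around the Lebesgue integral as the partition refines.

```python
import random

def random_riemann_sum(f, n, rng):
    """One Riemann sum over [0, 1] with a random n-piece partition
    and a uniformly random tag inside each subinterval."""
    cuts = sorted(rng.random() for _ in range(n - 1))
    pts = [0.0] + cuts + [1.0]
    total = 0.0
    for a, b in zip(pts, pts[1:]):
        tag = rng.uniform(a, b)
        total += f(tag) * (b - a)
    return total
```

For a continuous integrand like f(x) = x^2 (Lebesgue integral 1/3), a single sum with a fine random partition already lands close to the integral, in line with the convergence-in-probability result stated above.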
LOD score exclusion analyses for candidate QTLs using random population samples.
Deng, Hong-Wen
2003-11-01
While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach for exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed and if an LOD score is < or = -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of Vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
Lv, Chao; Zheng, Lianqing; Yang, Wei
2012-01-28
Molecular dynamics sampling can be enhanced via the promotion of potential energy fluctuations, for instance, based on a Hamiltonian modified with the addition of a potential-energy-dependent biasing term. To overcome the diffusion sampling issue, which reflects the fact that enlarging event-irrelevant energy fluctuations may destroy sampling efficiency, the essential energy space random walk (EESRW) approach was proposed earlier. To more effectively accelerate the sampling of solute conformations in an aqueous environment, in the current work we generalized the EESRW method to a two-dimension-EESRW (2D-EESRW) strategy. Specifically, the essential internal energy component of a focused region and the essential interaction energy component between the focused region and the environmental region are employed to define the two-dimensional essential energy space. This proposal is motivated by the general observation that, in different conformational events, the two essential energy components have distinctive interplays. Model studies on the alanine dipeptide and the aspartate-arginine peptide demonstrate sampling improvement over the original one-dimension-EESRW strategy; at the same biasing level, the present generalization allows more effective acceleration of the sampling of conformational transitions in aqueous solution. The 2D-EESRW generalization is readily extended to higher-dimension schemes and can be employed in more advanced enhanced-sampling schemes, such as the recent orthogonal space random walk method. © 2012 American Institute of Physics.
A Randomization Approach for Stochastic Workflow Scheduling in Clouds
Directory of Open Access Journals (Sweden)
Wei Zheng
2016-01-01
In cloud systems consisting of heterogeneous distributed resources, scheduling plays a key role in obtaining good performance when complex applications are run. However, there is unavoidable error in predicting individual task execution times and data transmission times. When this error is not negligible, deterministic scheduling approaches (i.e., scheduling based on accurate time prediction) may suffer. In this paper, we assume the error in time predictions is modelled in a stochastic manner, and a novel randomization approach making use of the properties of random variables is proposed to improve deterministic scheduling. The randomization approach is applied to a classic deterministic scheduling heuristic, but its applicability is not limited to this one heuristic. Evaluation results obtained from extensive simulation show that the randomized scheduling approach can significantly outperform its static counterpart and that the extra overhead introduced is not only controllable but also acceptable.
Energy Technology Data Exchange (ETDEWEB)
Piepel, Gregory F.; Matzke, Brett D.; Sego, Landon H.; Amidan, Brett G.
2013-04-27
This report discusses the methodology, formulas, and inputs needed to make characterization and clearance decisions for Bacillus anthracis-contaminated and uncontaminated (or decontaminated) areas using a statistical sampling approach. Specifically, the report includes the methods and formulas for calculating (1) the number of samples required to achieve a specified confidence in characterization and clearance decisions, and (2) the confidence in making characterization and clearance decisions for a specified number of samples, for two common statistically based environmental sampling approaches. In particular, the report addresses an issue raised by the Government Accountability Office by providing methods and formulas to calculate the confidence that a decision area is uncontaminated (or successfully decontaminated) if all samples collected according to a statistical sampling approach have negative results. Key to addressing this topic is the probability that an individual sample result is a false negative, commonly referred to as the false negative rate (FNR). The two statistical sampling approaches currently discussed in this report are 1) hotspot sampling to detect small isolated contaminated locations during the characterization phase, and 2) combined judgment and random (CJR) sampling during the clearance phase. Typically, if contamination is widely distributed in a decision area, it will be detectable via judgment sampling during the characterization phase. Hotspot sampling is appropriate for characterization situations where contamination is not widely distributed and may not be detected by judgment sampling. CJR sampling is appropriate during the clearance phase when it is desired to augment judgment samples with statistical (random) samples. The hotspot and CJR statistical sampling approaches are discussed in the report for four situations: 1. qualitative data (detect and non-detect) when the FNR = 0 or when using statistical sampling methods that account
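In the simplest random-sampling case, the "all samples negative" confidence argument reduces to a geometric calculation. The sketch below is a deliberately simplified model (independent random samples, a uniform contaminated fraction), not the report's full hotspot or CJR formulas; it folds the FNR into the per-sample detection probability:

```python
from math import ceil, log

def samples_required(conf, frac_contaminated, fnr=0.0):
    """n random samples so that, if at least frac_contaminated of the area
    were contaminated, at least one sample would test positive with
    probability >= conf (false negatives occur at rate fnr)."""
    p_negative = 1.0 - frac_contaminated * (1.0 - fnr)
    return ceil(log(1.0 - conf) / log(p_negative))

def achieved_confidence(n, frac_contaminated, fnr=0.0):
    """Confidence achieved by n all-negative random samples."""
    p_negative = 1.0 - frac_contaminated * (1.0 - fnr)
    return 1.0 - p_negative ** n
```

With a perfect assay this recovers the familiar "299 samples for 95% confidence against 1% contamination"; a 10% FNR pushes the requirement up to 332, showing why the FNR is central to the report's formulas.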
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to estimating the partial variances contributed by the main effects of model parameters, and has not allowed for those contributed by specific interactions among parameters. In this paper, we show theoretically that FAST analysis can be used to estimate partial variances contributed by both main effects and interaction effects of model parameters using different sampling approaches (i.e., traditional search-curve based sampling, simple random sampling, and random balance design sampling). We also analytically calculate the potential errors and biases in the estimation of partial variances. Hypothesis tests are constructed to reduce the effect of sampling errors on the estimation of partial variances. Our results show that, compared to simple random sampling and random balance design sampling, sensitivity indices (ratios of partial variances to the variance of a specific model output) estimated by search-curve based sampling generally have higher precision but larger underestimation. Compared to simple random sampling, random balance design sampling generally provides higher estimation precision for partial variances contributed by the main effects of parameters. The theoretical derivation of partial variances contributed by higher-order interactions, and the calculation of their corresponding estimation errors under different sampling schemes, can help us better understand the FAST method and provide a fundamental basis for FAST applications and further improvements.
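A minimal sketch of the classic search-curve based FAST main-effect estimate, which is the baseline the paper extends (first-order indices only; the frequency choices below are illustrative, not prescribed by the paper). Each input is driven along x_i(s) = 1/2 + arcsin(sin(omega_i s))/pi, so the output spectrum at multiples of omega_i isolates the main effect of input i:

```python
import math

def fast_first_order(model, omegas, n_samples, n_harmonics=4):
    """First-order FAST sensitivity indices via search-curve sampling."""
    N = n_samples
    s = [math.pi * (2 * k - N + 1) / N for k in range(N)]  # one full period
    xs = [[0.5 + math.asin(math.sin(w * sk)) / math.pi for w in omegas]
          for sk in s]
    y = [model(x) for x in xs]

    def power(j):
        # Squared Fourier amplitude of the output at frequency j.
        a = sum(yk * math.cos(j * sk) for yk, sk in zip(y, s)) / N
        b = sum(yk * math.sin(j * sk) for yk, sk in zip(y, s)) / N
        return a * a + b * b

    total = 2.0 * sum(power(j) for j in range(1, (N - 1) // 2 + 1))
    return [2.0 * sum(power(p * w) for p in range(1, n_harmonics + 1)) / total
            for w in omegas]
```

For the additive model y = x1 + 2*x2 with uniform inputs, the analytic indices are S1 = 1/5 and S2 = 4/5, and the estimate recovers them to within the truncation error of the harmonic sum.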
Serang, Oliver
2012-01-01
Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics.
Sample size calculations for 3-level cluster randomized trials
Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.
2008-01-01
BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health
Improved estimator of finite population mean using auxiliary attribute in stratified random sampling
Verma, Hemant K.; Sharma, Prayas; Singh, Rajesh
2014-01-01
The present study discusses the problem of estimating the finite population mean using an auxiliary attribute in stratified random sampling. In this paper, taking advantage of the point bi-serial correlation between the study variable and the auxiliary attribute, we improve the estimation of the population mean in stratified random sampling. The expressions for bias and mean square error have been derived under stratified random sampling. In addition, an empirical study has been carried out to examine...
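For reference, the baseline that such improved estimators are compared against is the usual stratified sample mean, ȳ_st = Σ_h (N_h/N) ȳ_h. A minimal sketch of that baseline (illustrative only, not the paper's improved estimator):

```python
import random

def stratified_mean(strata, n_per_stratum, rng):
    """Usual stratified estimator: weight each stratum's sample mean
    by its population share W_h = N_h / N."""
    N = sum(len(stratum) for stratum in strata)
    estimate = 0.0
    for stratum, n_h in zip(strata, n_per_stratum):
        sample = rng.sample(stratum, n_h)          # SRS within the stratum
        estimate += (len(stratum) / N) * (sum(sample) / n_h)
    return estimate
```

When every stratum is fully enumerated (n_h = N_h), the estimator reproduces the population mean exactly, which is a convenient sanity check.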
Directory of Open Access Journals (Sweden)
Nadia Mushtaq
2017-03-01
In this article, a combined general family of estimators is proposed for estimating the finite population mean of a sensitive variable in stratified random sampling with a non-sensitive auxiliary variable, based on a randomized response technique. Under stratified random sampling without replacement, the expressions for bias and mean square error (MSE) up to first-order approximation are derived. Theoretical and empirical results from a simulation study show that the proposed class of estimators is more efficient than the existing estimators, i.e., the usual stratified random sample mean estimator and the Sousa et al. (2014) ratio and regression estimators of the sensitive variable in stratified sampling.
Small sample approach, and statistical and epidemiological aspects
Offringa, Martin; van der Lee, Hanneke
2011-01-01
In this chapter, the design of pharmacokinetic studies and phase III trials in children is discussed. Classical approaches and relatively novel approaches, which may be more useful in the context of drug research in children, are discussed. The burden of repeated blood sampling in pediatric
THE SAMPLING PROCESS IN THE FINANCIAL AUDIT .TECHNICAL PRACTICE APPROACH
Directory of Open Access Journals (Sweden)
GRIGORE MARIAN
2014-07-01
“Audit sampling” means applying audit procedures to less than 100% of the elements within an account balance or transaction class, such that all sampling units have a chance of selection. This allows the auditor to obtain and evaluate audit evidence about certain characteristics of the selected elements, in order to form, or assist in forming, a conclusion regarding the population from which the sample was extracted. Sampling in audit can follow either a statistical or a non-statistical approach. (THE INTERNATIONAL AUDIT STANDARD 530 – THE SAMPLING IN AUDIT AND OTHER SELECTIVE TESTING PROCEDURES)
THE SAMPLING PROCESS IN THE FINANCIAL AUDIT .TECHNICAL PRACTICE APPROACH
Directory of Open Access Journals (Sweden)
Cardos Vasile-Daniel
2014-12-01
“Audit sampling” means applying audit procedures to less than 100% of the elements within an account balance or transaction class, such that all sampling units have a chance of selection. This allows the auditor to obtain and evaluate audit evidence about certain characteristics of the selected elements, in order to form, or assist in forming, a conclusion regarding the population from which the sample was extracted. Sampling in audit can follow either a statistical or a non-statistical approach. (THE INTERNATIONAL AUDIT STANDARD 530 – THE SAMPLING IN AUDIT AND OTHER SELECTIVE TESTING PROCEDURES)
Computer Corner: A Note on Pascal's Triangle and Simple Random Sampling.
Wright, Tommy
1989-01-01
Describes the algorithm used to select a simple random sample of a certain size without having to list all possible samples, together with a justification based on Pascal's triangle. Provides testing results from various computers. (YP)
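The note does not reproduce the algorithm itself; a well-known algorithm of this kind is sequential selection sampling, whose correctness argument rests on the Pascal's triangle identity C(n, k) = C(n-1, k-1) + C(n-1, k). A minimal sketch (the function name is illustrative, not taken from the article):

```python
import random

def sequential_sample(population, k, rng=random):
    """Draw a simple random sample of size k without enumerating
    all C(n, k) possible samples: scan the population once, including
    each item with probability (still needed) / (still remaining)."""
    n = len(population)
    sample = []
    for i, item in enumerate(population):
        needed = k - len(sample)
        remaining = n - i
        # Equivalent to: include with probability needed / remaining.
        if rng.random() * remaining < needed:
            sample.append(item)
        if len(sample) == k:
            break
    return sample
```

The scheme always returns exactly k items (when remaining equals needed, the inclusion probability reaches 1) and every size-k subset is equally likely.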
U.S. Environmental Protection Agency — This dataset is associated with the following publication: Shah, S., S. Kane, A.M. Erler, and T. Alfaro. Sample Processing Approach for Detection of Ricin in...
Carpenter, Matthew J; Hughes, John R; Gray, Kevin M; Wahlquist, Amy E; Saladin, Michael E; Alberg, Anthony J
2011-11-28
Rates of smoking cessation have not changed in a decade, accentuating the need for novel approaches to prompt quit attempts. Within a nationwide randomized clinical trial (N = 849) to induce further quit attempts and cessation, smokers currently unmotivated to quit were randomized to a practice quit attempt (PQA) alone or to nicotine replacement therapy (hereafter referred to as nicotine therapy) sampling within the context of a PQA. Following a 6-week intervention period, participants were followed up for 6 months to assess outcomes. The PQA intervention was designed to increase motivation, confidence, and coping skills. The combination of a PQA plus nicotine therapy sampling added samples of nicotine lozenges to enhance attitudes toward pharmacotherapy and to promote the use of additional cessation resources. Primary outcomes included the incidence of any ever occurring self-defined quit attempt and 24-hour quit attempt. Secondary measures included 7-day point prevalence abstinence at any time during the study (ie, floating abstinence) and at the final follow-up assessment. Compared with the PQA intervention, nicotine therapy sampling was associated with a significantly higher incidence of any quit attempt (49% vs 40%; relative risk [RR], 1.2; 95% CI, 1.1-1.4) and any 24-hour quit attempt (43% vs 34%; 1.3; 1.1-1.5). Nicotine therapy sampling was marginally more likely to promote floating abstinence (19% vs 15%; RR, 1.3; 95% CI, 1.0-1.7); 6-month point prevalence abstinence rates were no different between groups (16% vs 14%; 1.2; 0.9-1.6). Nicotine therapy sampling during a PQA represents a novel strategy to motivate smokers to make a quit attempt. clinicaltrials.gov Identifier: NCT00706979.
An effective Hamiltonian approach to quantum random walk
Indian Academy of Sciences (India)
In this article we present an effective Hamiltonian approach for discrete time quantum random walk. A form of the Hamiltonian ... TARUN KANTI GHOSH, Inter-University Centre for Astronomy and Astrophysics, Ganeshkhind, Pune 411 007, India; Department of Physics, Indian Institute of Technology, Kanpur 208 016, India ...
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2017-08-01
Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.
Singh, Rajesh; Sharma, Prayas; Smarandache, Florentin
2014-01-01
Singh et al. (2009) introduced a family of exponential ratio and product type estimators in stratified random sampling. Under a stratified random sampling without replacement scheme, the expressions for the bias and mean square error (MSE) of the Singh et al. (2009) estimators and some other estimators, up to the first- and second-order approximations, are derived. The theoretical findings are also supported by a numerical example.
Query-Based Sampling: Can we do Better than Random?
Tigelaar, A.S.; Hiemstra, Djoerd
2010-01-01
Many servers on the web offer content that is only accessible via a search interface. These are part of the deep web. Using conventional crawling to index the content of these remote servers is impossible without some form of cooperation. Query-based sampling provides an alternative to crawling
Stratified random sampling plan for an irrigation customer telephone survey
Energy Technology Data Exchange (ETDEWEB)
Johnston, J.W.; Davis, L.J.
1986-05-01
This report describes the procedures used to design and select a sample for a telephone survey of individuals who use electricity in irrigating agricultural cropland in the Pacific Northwest. The survey is intended to gather information on the irrigated agricultural sector that will be useful for conservation assessment, load forecasting, rate design, and other regional power planning activities.
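A central decision in such a stratified design is how to allocate the total sample across strata. As a hedged illustration of the standard proportional allocation rule (the report's actual allocation rules are not given in this abstract), a minimal sketch:

```python
def proportional_allocation(stratum_sizes, total_sample):
    """Allocate total_sample across strata in proportion to stratum size,
    fixing rounding drift with a largest-remainder pass."""
    total = sum(stratum_sizes.values())
    raw = {h: total_sample * n / total for h, n in stratum_sizes.items()}
    alloc = {h: int(x) for h, x in raw.items()}
    shortfall = total_sample - sum(alloc.values())
    # Hand the leftover units to the largest fractional remainders.
    for h in sorted(raw, key=lambda h: raw[h] - alloc[h], reverse=True)[:shortfall]:
        alloc[h] += 1
    return alloc
```

The largest-remainder pass guarantees the allocations sum exactly to the requested total, which plain rounding does not.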
Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster
DEFF Research Database (Denmark)
Schou, Mads Fristrup
2013-01-01
When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and diminishes ... To obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.
Sampling forest tree regeneration with a transect approach
Directory of Open Access Journals (Sweden)
D. Hessenmöller
2013-05-01
A new transect approach for sampling forest tree regeneration is developed with the aim to minimize the amount of field measurements and to produce an accurate estimation of tree species composition and density, independent of tree height. This approach is based on the "probability proportional to size" (PPS) theory to assess heterogeneous vegetation. This new method is compared with other approaches to assess forest regeneration, based on simulated and measured, real data. The main result is that the transect approach requires about 50% of the time to assess stand density compared to the plot approach, because only 25% of the tree individuals are measured. In addition, tall members of the regeneration are counted with equal probability as small members, which is not the case in the plot approach. The evenness is 0.1 to 0.2 units larger in the transect by PPS than in the plot approach, which means that the plot approach shows a more homogeneous regeneration layer than the PPS approach, even though the stand densities and height distributions are similar. The species diversity is variable in both approaches and needs further investigation.
Approach-Induced Biases in Human Information Sampling
Hunt, Laurence T.; Rutledge, Robb B.; Malalasekera, W. M. Nishantha; Kennerley, Steven W.; Dolan, Raymond J.
2016-01-01
Information sampling is often biased towards seeking evidence that confirms one’s prior beliefs. Despite such biases being a pervasive feature of human behavior, their underlying causes remain unclear. Many accounts of these biases appeal to limitations of human hypothesis testing and cognition, de facto evoking notions of bounded rationality, but neglect more basic aspects of behavioral control. Here, we investigated a potential role for Pavlovian approach in biasing which information humans will choose to sample. We collected a large novel dataset from 32,445 human subjects, making over 3 million decisions, who played a gambling task designed to measure the latent causes and extent of information-sampling biases. We identified three novel approach-related biases, formalized by comparing subject behavior to a dynamic programming model of optimal information gathering. These biases reflected the amount of information sampled (“positive evidence approach”), the selection of which information to sample (“sampling the favorite”), and the interaction between information sampling and subsequent choices (“rejecting unsampled options”). The prevalence of all three biases was related to a Pavlovian approach-avoid parameter quantified within an entirely independent economic decision task. Our large dataset also revealed that individual differences in the amount of information gathered are a stable trait across multiple gameplays and can be related to demographic measures, including age and educational attainment. As well as revealing limitations in cognitive processing, our findings suggest information sampling biases reflect the expression of primitive, yet potentially ecologically adaptive, behavioral repertoires. One such behavior is sampling from options that will eventually be chosen, even when other sources of information are more pertinent for guiding future action. PMID:27832071
Extension of the Multipole Approach to Random Metamaterials
Directory of Open Access Journals (Sweden)
A. Chipouline
2012-01-01
The influence of short-range lateral disorder in the positioning of meta-atoms on the effective parameters of metamaterials is investigated theoretically using the multipole approach. Random variation of the near-field quasi-static interaction between meta-atoms in the form of double wires is shown to be the reason for the changes in effective permittivity and permeability. The obtained analytical results are compared with the known experimental ones.
Random materials modeling : Statistical approach proposal for recycling materials
Jeong, Jena; Wang, L.; Schmidt, Franziska; LEKLOU, NORDINE; Ramezani, Hamidreza
2015-01-01
The current paper aims to promote the application of demolition waste in civil construction. To achieve this, two main physical properties, i.e., dry density and water absorption of the recycled aggregates, have been chosen and studied at the first stage. The material moduli of the recycled materials, i.e., the Lamé coefficients, strongly depend on the porosity. Moreover, recycled materials should be considered as random materials. As a result, the statistical approach...
Flow in Random Microstructures: a Multilevel Monte Carlo Approach
Icardi, Matteo
2016-01-06
In this work we are interested in the fast estimation of effective parameters of random heterogeneous materials using Multilevel Monte Carlo (MLMC). MLMC is an efficient and flexible solution for the propagation of uncertainties in complex models, where an explicit parametrisation of the input randomness is not available or too expensive. We propose a general-purpose algorithm and computational code for the solution of Partial Differential Equations (PDEs) on random heterogeneous materials. We make use of the key idea of MLMC, based on different discretization levels, extending it in a more general context, making use of a hierarchy of physical resolution scales, solvers, models and other numerical/geometrical discretisation parameters. Modifications of the classical MLMC estimators are proposed to further reduce variance in cases where analytical convergence rates and asymptotic regimes are not available. Spheres, ellipsoids and general convex-shaped grains are placed randomly in the domain with different placing/packing algorithms and the effective properties of the heterogeneous medium are computed. These are, for example, effective diffusivities, conductivities, and reaction rates. The implementation of the Monte-Carlo estimators, the statistical samples and each single solver is done efficiently in parallel. The method is tested and applied for pore-scale simulations of random sphere packings.
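The core MLMC idea (splitting a fine-level expectation into a cheap coarse estimate plus a few expensive paired corrections) can be sketched in miniature. This two-level toy is illustrative only, not the paper's general-purpose code:

```python
import random

def two_level_mc(coarse, fine_and_coarse, n_coarse, n_corr, rng):
    """Two-level Monte Carlo estimate of E[P_fine], using
    E[P_fine] = E[P_coarse] + E[P_fine - P_coarse].
    Many cheap coarse samples estimate the first term; a few paired
    fine/coarse samples, sharing the same randomness so their difference
    has small variance, estimate the correction."""
    coarse_mean = sum(coarse(rng) for _ in range(n_coarse)) / n_coarse
    corr = 0.0
    for _ in range(n_corr):
        f, c = fine_and_coarse(rng)
        corr += f - c
    return coarse_mean + corr / n_corr
```

The cost saving comes from n_corr being much smaller than n_coarse whenever the fine and coarse models are strongly correlated.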
A sampling-based approach to probabilistic pursuit evasion
Mahadevan, Aditya
2012-05-01
Probabilistic roadmaps (PRMs) are a sampling-based approach to motion-planning that encodes feasible paths through the environment using a graph created from a subset of valid positions. Prior research has shown that PRMs can be augmented with useful information to model interesting scenarios related to multi-agent interaction and coordination. © 2012 IEEE.
Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian
2016-01-01
Randomized response is a research method to get accurate answers to sensitive questions in structured sample survey. Simple random sampling is widely used in surveys of sensitive questions but hard to apply on large targeted populations. On the other side, more sophisticated sampling regimes and corresponding formulas are seldom employed to sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models by using classic sampling theories and total probability formulas. The performances of the sampling methods and formulas in the survey of premarital sex and cheating on exams at Soochow University were also provided. The reliability of the survey methods and formulas for sensitive question survey was found to be high.
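The abstract names two randomized response models without detailing them; the classic Warner (1965) design illustrates the basic idea under simple random sampling (a hedged sketch, not the authors' cluster or stratified-cluster formulas):

```python
def warner_estimate(yes_count, n, p):
    """Warner randomized response model: each respondent truthfully answers
    'I have the sensitive trait' with probability p, or its negation with
    probability 1 - p (p != 0.5), so no individual answer is revealing.
    Returns the moment estimator of the trait prevalence pi and its
    estimated variance."""
    lam = yes_count / n                             # observed 'yes' proportion
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    var_hat = lam * (1 - lam) / ((n - 1) * (2 * p - 1) ** 2)
    return pi_hat, var_hat
```

For example, with p = 0.7 and 420 "yes" answers out of 1000 respondents, pi_hat = (0.42 - 0.3) / 0.4 = 0.3.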
Random Model Sampling: Making Craig Interpolation Work When It Should Not
Directory of Open Access Journals (Sweden)
Marat Akhin
2014-01-01
One of the most serious problems when doing program analyses is dealing with function calls. While function inlining is the traditional approach to this problem, it nonetheless suffers from increased analysis complexity due to state space explosion. Craig interpolation has been successfully used in recent years in the context of bounded model checking to do function summarization, which allows one to replace the complete function body with its succinct summary and, therefore, reduce the complexity. Unfortunately, this technique can be applied only to a pair of unsatisfiable formulae. In this work-in-progress paper we present an approach to function summarization based on Craig interpolation that overcomes its limitation by using random model sampling. It captures interesting input/output relations, strengthening satisfiable formulae into unsatisfiable ones and thus allowing the use of Craig interpolation. Preliminary experiments show the applicability of this approach; in our future work we plan to do a full evaluation on real-world examples.
National Research Council Canada - National Science Library
Nadia Mushtaq; Noor Ul Amin; Muhammad Hanif
2017-01-01
In this article, a combined general family of estimators is proposed for estimating finite population mean of a sensitive variable in stratified random sampling with non-sensitive auxiliary variable...
A random-permutations-based approach to fast read alignment.
Lederman, Roy
2013-01-01
Read alignment is a computational bottleneck in some sequencing projects. Most of the existing software packages for read alignment are based on two algorithmic approaches: prefix-trees and hash-tables. We propose a new approach to read alignment using random permutations of strings. We present a prototype implementation and experiments performed with simulated and real reads of human DNA. Our experiments indicate that this permutations-based prototype is several times faster than comparable programs for fast read alignment and that it aligns more reads correctly. This approach may lead to improved speed, sensitivity, and accuracy in read alignment. The algorithm can also be used for specialized alignment applications and it can be extended to other related problems, such as assembly. More information: http://alignment.commons.yale.edu.
Notes on interval estimation of the generalized odds ratio under stratified random sampling.
Lui, Kung-Jong; Chang, Kuang-Chao
2013-05-01
It is not rare to encounter patient responses on an ordinal scale in a randomized clinical trial (RCT). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH), and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use data taken from a study of smoking status and breathing test results among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.
A New Estimator For Population Mean Using Two Auxiliary Variables in Stratified random Sampling
Singh, Rajesh; Malik, Sachin
2014-01-01
In this paper, we suggest an estimator using two auxiliary variables in stratified random sampling. The proposed estimator improves on the mean per unit estimator as well as some other considered estimators. Expressions for the bias and MSE of the estimator are derived up to the first degree of approximation. Moreover, these theoretical findings are supported by a numerical example with original data. Key words: study variable, auxiliary variable, stratified random sampling, bias and mean squa...
Conflict-cost based random sampling design for parallel MRI with low rank constraints
Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie
2015-05-01
In compressed sensing MRI, it is very important to design sampling pattern for random sampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition especially in the high density region. Therefore the sampling becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKEs using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
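For orientation, the baseline the authors improve on (variable-density Poisson disc sampling) can be sketched with naive dart throwing. This is illustrative only, not the paper's conflict-cost method:

```python
import math
import random

def vd_poisson_disc(n_target, min_dist_at, rng, max_tries=20000):
    """Naive dart-throwing variable-density Poisson disc sampling on [0,1]^2.
    min_dist_at(r) returns the minimum allowed spacing as a function of the
    distance r from the k-space centre: a smaller spacing near the centre
    yields the denser low-frequency coverage wanted in MRI undersampling."""
    points = []
    for _ in range(max_tries):
        if len(points) >= n_target:
            break
        x, y = rng.random(), rng.random()
        d_min = min_dist_at(math.hypot(x - 0.5, y - 0.5))
        # Reject darts that land too close to any accepted sample.
        if all(math.hypot(x - px, y - py) >= d_min for px, py in points):
            points.append((x, y))
    return points
```

The rejection test is what keeps samples "neither too close nor too far"; acceptance becomes rare in the dense central region, which is exactly the inefficiency the conflict-cost pattern targets.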
Computerized experience-sampling approach for realtime assessment of stress
Directory of Open Access Journals (Sweden)
S. Serino
2013-03-01
The incredible advancement in the ICT sector has challenged technology developers, designers, and psychologists to reflect on how to develop technologies to promote mental health. The computerized experience-sampling method appears to be a promising assessment approach to investigate the real-time fluctuation of experience in daily life in order to detect stressful events. For this purpose, we developed PsychLog (http://psychlog.com), a free open-source mobile experience-sampling platform that allows psychophysiological data to be collected, aggregated, visualized, and collated into reports. Results showed a good classification of relaxing and stressful events, defining the two groups with psychological analysis and verifying the discrimination with physiological measures. Within the paradigm of Positive Technology, our innovative approach offers researchers and clinicians new effective opportunities for the assessment and treatment of psychological stress in daily situations.
Instantaneous GNSS attitude determination: A Monte Carlo sampling approach
Sun, Xiucong; Han, Chao; Chen, Pei
2017-04-01
A novel instantaneous GNSS ambiguity resolution approach which makes use of only single-frequency carrier phase measurements for ultra-short baseline attitude determination is proposed. The Monte Carlo sampling method is employed to obtain the probability density function of ambiguities from a quaternion-based GNSS-attitude model and the LAMBDA method strengthened with a screening mechanism is then utilized to fix the integer values. Experimental results show that 100% success rate could be achieved for ultra-short baselines.
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Tian Yonghong
2010-01-01
Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take intraclass diversity into account for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics, including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, outperforming a canonical MKL.
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Ling-Yu Duan
2010-01-01
Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take intraclass diversity into account for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics, including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, outperforming a canonical MKL.
Williamson, Graham R
2003-11-01
This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
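A randomization (permutation) test of the kind the author proposes justifies its P-value by the random assignment actually performed, not by random sampling from a population. A minimal Monte Carlo sketch (illustrative; the paper does not give code):

```python
import random

def permutation_test(group_a, group_b, n_perm=10000, rng=None):
    """Two-sample randomization test for a difference in means.
    The P-value is the proportion of label reshuffles whose absolute
    mean difference is at least as extreme as the one observed."""
    rng = rng or random.Random()
    n_a, n_b = len(group_a), len(group_b)
    observed = abs(sum(group_a) / n_a - sum(group_b) / n_b)
    pooled = list(group_a) + list(group_b)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = abs(sum(pooled[:n_a]) / n_a - sum(pooled[n_a:]) / n_b)
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)   # add-one smoothing avoids p = 0
```

Because the reference distribution is generated by reshuffling the observed labels, the test remains valid for convenience samples, where a conventional P-value's random-sampling assumption is broken.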
Directory of Open Access Journals (Sweden)
Jiang Houlong
2016-01-01
Sampling methods are important factors that can potentially limit the accuracy of predictions of spatial distribution patterns. A 10 ha tobacco-planted field was selected to compare the accuracy of predicting the spatial distribution of soil properties, using ordinary kriging and cross-validation methods, between a grid sampling scheme and a simple random sampling (SRS) scheme. To achieve this objective, we collected soil samples from the topsoil (0-20 cm) in March 2012. Sample numbers for grid sampling and SRS were both 115 points. Accuracies of spatial interpolation using the two sampling schemes were then evaluated based on validation samples (36 points) and deviations of the estimates. The results suggested that soil pH and nitrate-N (NO3-N) had low variation, whereas all other soil properties exhibited medium variation. Soil pH, organic matter (OM), total nitrogen (TN), cation exchange capacity (CEC), total phosphorus (TP), and available phosphorus (AP) matched the spherical model, whereas the remaining variables fit an exponential model with both sampling methods. The interpolation error of soil pH, TP, and AP was the lowest for SRS. The errors of interpolation for OM, CEC, TN, available potassium (AK), and total potassium (TK) were the lowest for grid sampling. The interpolation precision of soil NO3-N showed no significant differences between the two sampling schemes. Considering our data on interpolation precision and the importance of minerals for the cultivation of flue-cured tobacco, the grid sampling scheme should be used in tobacco-planted fields to determine the spatial distribution of soil properties. The grid sampling method can be applied in a practical and cost-effective manner to facilitate soil sampling in tobacco-planted fields.
A coupled well-balanced and random sampling scheme for computing bubble oscillations*
Directory of Open Access Journals (Sweden)
Jung Jonathan
2012-04-01
We propose a finite volume scheme to study the oscillations of a spherical bubble of gas in a liquid phase. Spherical symmetry implies a geometric source term in the Euler equations. Our scheme satisfies the well-balanced property and is based on the VFRoe approach. In order to avoid spurious pressure oscillations, the well-balanced approach is coupled with an ALE (Arbitrary Lagrangian Eulerian) technique at the interface and a random sampling remap.
Notes on interval estimation of the gamma correlation under stratified random sampling.
Lui, Kung-Jong; Chang, Kuang-Chao
2012-07-01
We have developed four asymptotic interval estimators in closed forms for the gamma correlation under stratified random sampling, including the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT) and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators and note the possible loss of accuracy in application of both Wald's confidence interval and MWLSLR using pooled data without accounting for stratification, we employ Monte Carlo simulation. We use the data taken from a general social survey studying the association between the income level and job satisfaction with strata formed by genders in black Americans published elsewhere to illustrate the practical use of these interval estimators. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient!
van Breukelen, Gerard J.P.; Candel, Math J.J.M.
2012-01-01
Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne
2018-01-01
Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
Treatment noncompliance in randomized experiments: statistical approaches and design issues.
Sagarin, Brad J; West, Stephen G; Ratnikov, Alexander; Homan, William K; Ritchie, Timothy D; Hansen, Edward J
2014-09-01
Treatment noncompliance in randomized experiments threatens the validity of causal inference and the interpretability of treatment effects. This article provides a nontechnical review of 7 approaches: 3 traditional and 4 newer statistical analysis strategies. Traditional approaches include (a) intention-to-treat analysis (which estimates the effects of treatment assignment irrespective of treatment received), (b) as-treated analysis (which reassigns participants to groups reflecting the treatment they actually received), and (c) per-protocol analysis (which drops participants who did not comply with their assigned treatment). Newer approaches include (d) the complier average causal effect (which estimates the effect of treatment on the subpopulation of those who would comply with their assigned treatment), (e) dose-response estimation (which uses degree of compliance to stratify participants, producing an estimate of a dose-response relationship), (f) propensity score analysis (which uses covariates to estimate the probability that individual participants will comply, enabling estimates of treatment effects at different propensities), and (g) treatment effect bounding (which calculates a range of possible treatment effects applicable to both compliers and noncompliers). The discussion considers the areas of application, the quantity estimated, the underlying assumptions, and the strengths and weaknesses of each approach. PsycINFO Database Record (c) 2014 APA, all rights reserved.
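As a concrete illustration of approach (d), the complier average causal effect can be recovered with the standard Wald/instrumental-variables ratio: the intention-to-treat effect divided by the difference in treatment receipt between arms. The simulation below is a minimal sketch with invented numbers (one-sided noncompliance, a true effect of 2.0 among compliers), not code from the article:

```python
import random

random.seed(1)

n = 10000
effect = 2.0  # true treatment effect among compliers (invented)

data = []
for _ in range(n):
    assigned = random.random() < 0.5   # randomized assignment
    complier = random.random() < 0.6   # 60% would comply if assigned
    treated = assigned and complier    # one-sided noncompliance
    y = random.gauss(0.0, 1.0) + (effect if treated else 0.0)
    data.append((assigned, treated, y))

def mean(xs):
    return sum(xs) / len(xs)

y1 = mean([y for a, t, y in data if a])
y0 = mean([y for a, t, y in data if not a])
itt = y1 - y0                          # intention-to-treat effect

# Compliance: difference in treatment receipt between arms.
t1 = mean([t for a, t, y in data if a])
t0 = mean([t for a, t, y in data if not a])
cace = itt / (t1 - t0)                 # Wald/IV estimator of the CACE

print(round(itt, 2), round(cace, 2))
```

With 60% compliance, the ITT estimate is diluted toward 1.2 while the CACE estimate should land near the true effect of 2.0.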
Energy Technology Data Exchange (ETDEWEB)
Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-11-01
When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: central 95% of response; and 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depends on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
A Review of Enhanced Sampling Approaches for Accelerated Molecular Dynamics
Tiwary, Pratyush; van de Walle, Axel
Molecular dynamics (MD) simulations have become a tool of immense use and popularity for simulating a variety of systems. With the advent of massively parallel computer resources, one now routinely sees applications of MD to systems as large as hundreds of thousands to even several million atoms, which is almost the size of most nanomaterials. However, it is not yet possible to reach laboratory timescales of milliseconds and beyond with MD simulations. Due to the essentially sequential nature of time, parallel computers have been of limited use in solving this so-called timescale problem. Instead, over the years a large range of statistical mechanics based enhanced sampling approaches have been proposed for accelerating molecular dynamics, and accessing timescales that are well beyond the reach of the fastest computers. In this review we provide an overview of these approaches, including the underlying theory, typical applications, and publicly available software resources to implement them.
Quantum Ensemble Classification: A Sampling-Based Learning Control Approach.
Chen, Chunlin; Dong, Daoyi; Qi, Bo; Petersen, Ian R; Rabitz, Herschel
2017-06-01
Quantum ensemble classification (QEC) has significant applications in discrimination of atoms (or molecules), separation of isotopes, and quantum information extraction. However, quantum mechanics forbids deterministic discrimination among nonorthogonal states. The classification of inhomogeneous quantum ensembles is very challenging, since there exist variations in the parameters characterizing the members within different classes. In this paper, we recast QEC as a supervised quantum learning problem. A systematic classification methodology is presented by using a sampling-based learning control (SLC) approach for quantum discrimination. The classification task is accomplished via simultaneously steering members belonging to different classes to their corresponding target states (e.g., mutually orthogonal states). First, a new discrimination method is proposed for two similar quantum systems. Then, an SLC method is presented for QEC. Numerical results demonstrate the effectiveness of the proposed approach for the binary classification of two-level quantum ensembles and the multiclass classification of multilevel quantum ensembles.
Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B
2017-04-01
Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
Stratified random sampling for estimating billing accuracy in health care systems.
Buddhakulsomsiri, Jirachai; Parthanadee, Parthana
2008-03-01
This paper presents a stratified random sampling plan for estimating accuracy of bill processing performance for the health care bills submitted to third party payers in health care systems. Bill processing accuracy is estimated with two measures: percent accuracy and total dollar accuracy. Difficulties in constructing a sampling plan arise when the population strata structure is unknown, and when the two measures require different sampling schemes. To efficiently utilize sample resource, the sampling plan is designed to effectively estimate both measures from the same sample. The sampling plan features a simple but efficient strata construction method, called rectangular method, and two accuracy estimation methods, one for each measure. The sampling plan is tested on actual populations from an insurance company. Accuracy estimates obtained are then used to compare the rectangular method to other potential clustering methods for strata construction, and compare the accuracy estimation methods to other eligible methods. Computational study results show effectiveness of the proposed sampling plan.
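The core of any stratified estimate, including the percent-accuracy measure above, is a weighted average of within-stratum sample means. The sketch below uses invented stratum sizes and accuracy rates and does not reproduce the paper's rectangular strata construction:

```python
import random

random.seed(4)

# (population size, per-bill accuracy probability) per stratum -- invented.
strata = [(5000, 0.95), (2000, 0.85), (500, 0.6)]
n_per_stratum = 100

total = sum(size for size, _ in strata)
estimate = 0.0
for size, p_ok in strata:
    # Simulate auditing a within-stratum sample of bills (True = accurate).
    sample = [random.random() < p_ok for _ in range(n_per_stratum)]
    stratum_mean = sum(sample) / n_per_stratum
    estimate += (size / total) * stratum_mean  # weight by stratum share

print(round(estimate, 3))
```

The true population accuracy here is (5000*0.95 + 2000*0.85 + 500*0.6)/7500 = 0.9, so the printed estimate should fall close to that value.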
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests that uses the default mtry parameter to choose the feature subspace will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs. However, such a search is too time-consuming and therefore unfavorable for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs while avoiding the very high computational cost of an exhaustive search for an optimal mtry, and it maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate a better random forest with higher accuracy and a lower error bound than Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and merit further biological investigation.
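The subspace-selection step described above (equal-width discretization of an informativeness score, then equal allocation from each group) can be sketched as follows; the scoring values, group count, and per-group allocation are placeholders, not the paper's settings:

```python
import random

def stratified_subspace(scores, n_groups=3, per_group=2, seed=0):
    """Pick a feature subspace by sampling equally from informativeness strata.

    scores: dict mapping SNP name -> informativeness score (placeholder values).
    Groups are formed by equal-width discretization of the score range.
    """
    rng = random.Random(seed)
    lo, hi = min(scores.values()), max(scores.values())
    width = (hi - lo) / n_groups or 1.0  # guard against a degenerate range
    groups = [[] for _ in range(n_groups)]
    for snp, s in scores.items():
        idx = min(int((s - lo) / width), n_groups - 1)
        groups[idx].append(snp)
    subspace = []
    for g in groups:
        k = min(per_group, len(g))       # equal allocation from each stratum
        subspace.extend(rng.sample(g, k))
    return subspace

scores = {f"snp{i}": i / 10 for i in range(30)}
sub = stratified_subspace(scores)
print(sub)  # two SNPs drawn from each of the three score bands
```

Each tree in the forest would call this with a fresh seed, so every subspace is random yet guaranteed to contain features from the high-informativeness stratum.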
Random Walks on Directed Networks: Inference and Respondent-driven Sampling
Malmros, Jens; Britton, Tom
2013-01-01
Respondent-driven sampling (RDS) is a method often used to estimate population properties (e.g., sexual risk behavior) in hard-to-reach populations. It combines an effective modified snowball sampling methodology with an estimation procedure that yields unbiased population estimates under the assumption that the sampling process behaves like a random walk on the social network of the population. Current RDS estimation methodology assumes that the social network is undirected, i.e. that all edges are reciprocal. However, empirical social networks in general also have non-reciprocated edges. To account for this fact, we develop a new estimation method for RDS in the presence of directed edges on the basis of random walks on directed networks. We distinguish directed and undirected edges and consider the possibility that the random walk returns to its current position in two steps through an undirected edge. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing...
An integrated sampling and analysis approach for improved biodiversity monitoring
DeWan, Amielle A.; Zipkin, Elise F.
2010-01-01
Successful biodiversity conservation requires high quality monitoring data and analyses to ensure scientifically defensible policy, legislation, and management. Although monitoring is a critical component in assessing population status and trends, many governmental and non-governmental organizations struggle to develop and implement effective sampling protocols and statistical analyses because of the magnitude and diversity of species in conservation concern. In this article we describe a practical and sophisticated data collection and analysis framework for developing a comprehensive wildlife monitoring program that includes multi-species inventory techniques and community-level hierarchical modeling. Compared to monitoring many species individually, the multi-species approach allows for improved estimates of individual species occurrences, including rare species, and an increased understanding of the aggregated response of a community to landscape and habitat heterogeneity. We demonstrate the benefits and practicality of this approach to address challenges associated with monitoring in the context of US state agencies that are legislatively required to monitor and protect species in greatest conservation need. We believe this approach will be useful to regional, national, and international organizations interested in assessing the status of both common and rare species.
Machine learning approach for pooled DNA sample calibration.
Hellicar, Andrew D; Rahman, Ashfaqur; Smith, Daniel V; Henshall, John M
2015-07-09
Despite ongoing reduction in genotyping costs, genomic studies involving large numbers of species with low economic value (such as Black Tiger prawns) remain cost prohibitive. In this scenario DNA pooling is an attractive option to reduce genotyping costs. However, genotyping of pooled samples comprising DNA from many individuals is challenging due to the presence of errors that exceed the allele frequency quantisation size and therefore cannot be simply corrected by clustering techniques. The solution to the calibration problem is a correction to the allele frequency to mitigate errors incurred in the measurement process. We highlight the limitations of the existing calibration solutions such as the fact they impose assumptions on the variation between allele frequencies 0, 0.5, and 1.0, and address a limited set of error types. We propose a novel machine learning method to address the limitations identified. The approach is tested on SNPs genotyped with the Sequenom iPLEX platform and compared to existing state of the art calibration methods. The new method is capable of reducing the mean square error in allele frequency to half that achievable with existing approaches. Furthermore for the first time we demonstrate the importance of carefully considering the choice of training data when using calibration approaches built from pooled data. This paper demonstrates that improvements in pooled allele frequency estimates result if the genotyping platform is characterised at allele frequencies other than the homozygous and heterozygous cases. Techniques capable of incorporating such information are described along with aspects of implementation.
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the nice property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and utilizes the spectral properties of the commute time matrix to bipartition the adjacency graph. Thereafter, the proposed method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stop condition becomes true. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
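The commute time underlying the method has a well-known closed form in terms of the Moore-Penrose pseudoinverse L+ of the graph Laplacian: C(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij). The toy graph below is invented for illustration; the querying procedure itself is not reproduced:

```python
import numpy as np

# Adjacency matrix of a small undirected graph: two triangles joined
# by a single bridge edge (2-3).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A   # graph Laplacian
Lp = np.linalg.pinv(L)           # Moore-Penrose pseudoinverse
vol = A.sum()                    # total degree (2 * number of edges)

def commute_time(i, j):
    # Expected number of steps for a random walk to go i -> j -> i.
    return vol * (Lp[i, i] + Lp[j, j] - 2 * Lp[i, j])

# Nodes inside the same triangle are "closer" (many short paths) than
# nodes separated by the bridge.
print(commute_time(0, 1), commute_time(0, 4))
```

The first value is smaller than the second, reflecting the property quoted in the abstract: more short paths between two nodes means a smaller commute time.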
A martingale approach for the elephant random walk
Bercu, Bernard
2018-01-01
The purpose of this paper is to establish, via a martingale approach, some refinements on the asymptotic behavior of the one-dimensional elephant random walk (ERW). The asymptotic behavior of the ERW mainly depends on a memory parameter p which lies between zero and one. This behavior is totally different in the diffusive regime 0 ≤ p < 3/4, the critical regime p = 3/4, and the superdiffusive regime 3/4 < p ≤ 1. In the diffusive and critical regimes, we establish some new results on the almost sure asymptotic behavior of the ERW, such as the quadratic strong law and the law of the iterated logarithm. In the superdiffusive regime, we provide the first rigorous mathematical proof that the limiting distribution of the ERW is not Gaussian.
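A minimal simulation of the ERW dynamics (not the martingale analysis) helps fix the model: at each step the walker recalls a uniformly chosen earlier step and repeats it with probability p, or reverses it otherwise. The symmetric first step below is a simplifying assumption:

```python
import random

def elephant_walk(n_steps, p, seed=0):
    """Simulate a 1-D elephant random walk with memory parameter p.

    At each step the walker recalls a uniformly chosen past step and
    repeats it with probability p, or reverses it with probability 1 - p.
    The first step is +1 or -1 with equal probability (a simplification).
    """
    rng = random.Random(seed)
    steps = [rng.choice([-1, 1])]
    for _ in range(n_steps - 1):
        remembered = rng.choice(steps)
        steps.append(remembered if rng.random() < p else -remembered)
    return sum(steps)  # final position

# In the diffusive regime (p < 3/4) the walk behaves like ordinary
# diffusion, so the mean final position over many walks stays near zero.
positions = [elephant_walk(1000, p=0.5, seed=s) for s in range(200)]
print(sum(positions) / len(positions))
```

Raising p above 3/4 makes early steps dominate, which is the superdiffusive, non-Gaussian regime studied in the paper.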
Schmidt, Jennifer; Martin, Alexandra
2016-09-01
Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating. Copyright © 2016 John Wiley & Sons, Ltd and Eating Disorders Association.
Energy Technology Data Exchange (ETDEWEB)
Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM
2008-01-01
Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
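The key ingredient of DREAM-style samplers is the differential-evolution proposal, which jumps a chain along the difference of two other randomly chosen chains. The sketch below is a bare-bones DE-MC sampler on a one-dimensional Gaussian target, with standard tuning constants; it is not the full DREAM algorithm (no randomized subspaces, outlier handling, or convergence diagnostics):

```python
import math
import random

random.seed(2)

def log_target(x):
    return -0.5 * x * x  # standard normal target (up to a constant)

n_chains, n_iter = 8, 3000
gamma = 2.38 / math.sqrt(2)  # common DE-MC scale for dimension 1
chains = [random.uniform(-5, 5) for _ in range(n_chains)]
samples = []

for _ in range(n_iter):
    for i in range(n_chains):
        r1, r2 = random.sample([j for j in range(n_chains) if j != i], 2)
        # Proposal: jump along the difference of two other chains, plus jitter.
        prop = chains[i] + gamma * (chains[r1] - chains[r2]) + random.gauss(0, 1e-2)
        # Metropolis accept/reject step.
        if math.log(random.random()) < log_target(prop) - log_target(chains[i]):
            chains[i] = prop
        samples.append(chains[i])

burn = len(samples) // 2  # discard the first half as burn-in
tail = samples[burn:]
mean = sum(tail) / len(tail)
var = sum((x - mean) ** 2 for x in tail) / len(tail)
print(round(mean, 2), round(var, 2))
```

Because the proposal scale adapts automatically to the spread of the chain population, the sampler needs no hand-tuned step size; the post-burn-in mean and variance should approach 0 and 1 for this target.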
Monte Carlo path sampling approach to modeling aeolian sediment transport
Hardin, E. J.; Mitasova, H.; Mitas, L.
2011-12-01
but evolve the system according to rules that are abstractions of the governing physics. This work presents the Green function solution to the continuity equations that govern sediment transport. The Green function solution is implemented using a path sampling approach whereby sand mass is represented as an ensemble of particles that evolve stochastically according to the Green function. In this approach, particle density is a particle representation that is equivalent to the field representation of elevation. Because aeolian transport is nonlinear, particles must be propagated according to their updated field representation with each iteration. This is achieved using a particle-in-cell technique. The path sampling approach offers a number of advantages. The integral form of the Green function solution makes it robust to discontinuities in complex terrains. Furthermore, this approach is spatially distributed, which can help elucidate the role of complex landscapes in aeolian transport. Finally, path sampling is highly parallelizable, making it ideal for execution on modern clusters and graphics processing units.
Directory of Open Access Journals (Sweden)
Kai Yang
2016-01-01
This work investigates a bioinspired microimmune optimization algorithm to solve a general kind of single-objective nonlinear constrained expected-value programming problem without any prior distribution. In the study of the algorithm, two lower-bound sample estimates of random variables are theoretically developed to estimate the empirical values of individuals. Two adaptive racing sampling schemes are designed to identify competitive individuals in a given population, by which high-quality individuals can obtain a large sampling size. An immune evolutionary mechanism, along with a local search approach, is constructed to evolve the current population. Comparative experiments have shown that the proposed algorithm can effectively solve higher-dimensional benchmark problems and has potential for further applications.
Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling
Directory of Open Access Journals (Sweden)
Bo Yu
2015-01-01
This paper considers the problem of estimation for binomial proportions of sensitive or stigmatizing attributes in the population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing the response bias while eliciting information on sensitive attributes. In many sensitive question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply successive sampling scheme to improve the estimation of the sensitive proportion on current occasion.
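For a single occasion, the classic Warner randomized response design illustrates how a sensitive proportion is recovered: with probability p the respondent answers the sensitive question, otherwise its complement, and the analyst inverts the mixture. The parameters below are invented for illustration; the paper's successive-sampling estimator is more elaborate:

```python
import random

random.seed(3)

p = 0.7        # probability the randomizing device selects the sensitive question
pi_true = 0.2  # true (unknown) proportion carrying the sensitive attribute
n = 20000

# Each respondent answers truthfully about the sensitive attribute with
# probability p, and about its complement with probability 1 - p, so no
# individual answer reveals the attribute.
yes = 0
for _ in range(n):
    sensitive = random.random() < pi_true
    asked_sensitive = random.random() < p
    answer = sensitive if asked_sensitive else not sensitive
    yes += answer

lam = yes / n                            # observed proportion of "yes" answers
pi_hat = (lam - (1 - p)) / (2 * p - 1)   # Warner-type moment estimator
print(round(pi_hat, 2))
```

Since E[lam] = p*pi + (1 - p)*(1 - pi), inverting that linear relation gives an unbiased estimate of pi despite every individual response being protected.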
A Novel Approach for Sampling in Approximate Dynamic Programming Based on $F$-Discrepancy.
Cervellera, Cristiano; Maccio, Danilo
2017-10-01
Approximate dynamic programming (ADP) is the standard tool to solve Markovian decision problems under general hypotheses on the system and the cost equations. It is known that one of the key issues of the procedure is how to generate an efficient sampling of the state space, needed for the approximation of the value function, in order to cope with the well-known phenomenon of the curse of dimensionality. The most common approaches in the literature are either aimed at a uniform covering of the state space or driven by the actual evolution of the system trajectories. Concerning the latter approach, F-discrepancy, a quantity closely related to the Kolmogorov-Smirnov statistic that measures how closely a set of random points represents a probability distribution, has been recently proposed for an efficient ADP framework in the finite-horizon case. In this paper, we extend this framework to infinite-horizon discounted problems, providing a constructive algorithm to generate efficient sampling points driven by the system behavior. Then, the algorithm is refined with the aim of acquiring a more balanced covering of the state space, thus addressing possible drawbacks of a pure system-driven sampling approach to obtain, in fact, an efficient hybrid between the latter and a pure uniform design. A theoretical analysis is provided through the introduction of an original notion of the F-discrepancy and the proof of its properties. Simulation tests are provided to showcase the behavior of the proposed sampling method.
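In one dimension, a discrepancy of this kind reduces to the Kolmogorov-Smirnov distance between the empirical CDF of the point set and the target CDF; the helper below sketches that computation (function name and example point sets are illustrative, not from the paper):

```python
def f_discrepancy(points, cdf):
    """Kolmogorov-Smirnov-style discrepancy of a 1-D point set w.r.t. a CDF.

    Returns sup_x |F_n(x) - F(x)|, where F_n is the empirical CDF of the
    points; the supremum is attained just before or at a sample point.
    """
    xs = sorted(points)
    n = len(xs)
    worst = 0.0
    for i, x in enumerate(xs):
        fx = cdf(x)
        # Check the empirical CDF just after (i+1)/n and just before (i/n) x.
        worst = max(worst, abs((i + 1) / n - fx), abs(i / n - fx))
    return worst

uniform_cdf = lambda x: min(max(x, 0.0), 1.0)
# A regular grid represents the uniform distribution well (low discrepancy)...
grid = [(i + 0.5) / 10 for i in range(10)]
# ...while a clumped set leaves a large uncovered gap and scores much worse.
clumped = [0.05 * i for i in range(10)]
print(f_discrepancy(grid, uniform_cdf), f_discrepancy(clumped, uniform_cdf))
```

A sampling scheme for ADP would favor point sets whose discrepancy against the state-visitation distribution is small, which is the intuition behind the hybrid design described in the abstract.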
Recidivism among Child Sexual Abusers: Initial Results of a 13-Year Longitudinal Random Sample
Patrick, Steven; Marsh, Robert
2009-01-01
In the initial analysis of data from a random sample of all those charged with child sexual abuse in Idaho over a 13-year period, only one predictive variable was found that related to recidivism of those convicted. Variables such as ethnicity, relationship, gender, and age differences did not show a significant or even large association with…
HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA
Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...
Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA
Taylor, Laura; Doehler, Kirsten
2015-01-01
This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
Combining in silico and in cerebro approaches for virtual screening and pose prediction in SAMPL4
Voet, Arnout R. D.; Kumar, Ashutosh; Berenger, Francois; Zhang, Kam Y. J.
2014-04-01
The SAMPL challenges provide an ideal opportunity for unbiased evaluation and comparison of different approaches used in computational drug design. During the fourth round of this SAMPL challenge, we participated in the virtual screening and binding pose prediction on inhibitors targeting the HIV-1 integrase enzyme. For virtual screening, we used well known and widely used in silico methods combined with personal in cerebro insights and experience. Regular docking only performed slightly better than random selection, but the performance was significantly improved upon incorporation of additional filters based on pharmacophore queries and electrostatic similarities. The best performance was achieved when logical selection was added. For the pose prediction, we utilized a similar consensus approach that amalgamated the results of the Glide-XP docking with structural knowledge and rescoring. The pose prediction results revealed that docking displayed reasonable performance in predicting the binding poses. However, prediction performance can be improved utilizing scientific experience and rescoring approaches. In both the virtual screening and pose prediction challenges, the top performance was achieved by our approaches. Here we describe the methods and strategies used in our approaches and discuss the rationale of their performances.
Flexible sampling large-scale social networks by self-adjustable random walk
Xu, Xiao-Ke; Zhu, Jonathan J. H.
2016-12-01
Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity and frequent unavailability of OSN population data. Sampling is perhaps the only feasible solution to these problems. Drawing samples that represent the underlying OSN remains a formidable task for a number of conceptual and methodological reasons. In particular, most empirically-driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes together in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study contributes to the practice of OSN research by providing a highly needed sampling tool, to the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and to the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge/assumptions and large-scale real OSN data.
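SARW itself self-adjusts its transition probabilities, and the abstract does not specify the rule; as a baseline, the classic random walk it is compared against can be sketched as follows (toy graph and function names are ours). A plain RW visits nodes in proportion to their degree, which is exactly the sampling bias that revised walks such as MHRW, and methods like SARW, aim to correct.

```python
import random

def random_walk_sample(adj, start, n_steps, seed=0):
    # Classic random walk: at each step hop to a uniformly chosen neighbour.
    # Visit frequencies converge to being proportional to node degree.
    rng = random.Random(seed)
    node, visits = start, {}
    for _ in range(n_steps):
        visits[node] = visits.get(node, 0) + 1
        node = rng.choice(adj[node])
    return visits

# Toy undirected graph: node 0 is a hub joined to every other node.
adj = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
visits = random_walk_sample(adj, start=0, n_steps=10000)
# The hub (degree 4) is visited roughly twice as often as any degree-2 node.
```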
Random Vector and Matrix Theories: A Renormalization Group Approach
Zinn-Justin, Jean
2014-09-01
Random matrices in the large N expansion and the so-called double scaling limit can be used as toy models for quantum gravity: 2D quantum gravity coupled to conformal matter. This has generated a tremendous expansion of random matrix theory, tackled with increasingly sophisticated mathematical methods, and a number of matrix models have been solved exactly. However, the somewhat paradoxical situation is that either models can be solved exactly or little can be said. Since the solved models display critical points and universal properties, it is tempting to use renormalization group ideas to determine universal properties without solving models explicitly. Initiated by Brézin and Zinn-Justin, the approach has led to encouraging results, first for matrix integrals and then for quantum mechanics with matrices, but has not yet become the universal tool initially hoped for. In particular, general quantum field theories with matrix fields require more detailed investigation. To better understand some of the encountered difficulties, we first apply analogous ideas to the simpler O(N) symmetric vector models, which can be solved quite generally in the large N limit. Unlike other attempts, our method is a close extension of Brézin and Zinn-Justin's. Discussing vector and matrix models with a similar approximation scheme, we notice that in all cases (vector and matrix integrals, vector and matrix path integrals in the local approximation), at leading order, non-trivial fixed points satisfy the same universal algebraic equation; this is the main result of this work. However, its precise meaning and role remain to be better understood.
Energy Technology Data Exchange (ETDEWEB)
Kovarik, Libor; Stevens, Andrew J.; Liyu, Andrey V.; Browning, Nigel D.
2016-10-17
Aberration correction for scanning transmission electron microscopes (STEM) has dramatically increased spatial image resolution for beam-stable materials, but it is often sample stability rather than the microscope that limits the practical resolution of STEM images. To extract physical information from images of beam-sensitive materials, it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate delivered to the sample during imaging. By characterizing the induction-limited rise time and hysteresis in the scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for sparse sampling are shown to be decreased by a factor of five relative to conventional acquisition, permitting imaging of beam-sensitive materials without changing the microscope operating parameters. The use of the sparse line-hopping scan to acquire STEM images is demonstrated with atomic-resolution aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dose issues.
Sample size calculations for micro-randomized trials in mHealth.
Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A
2016-05-30
The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near-future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized at hundreds or thousands of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.
Occupational position and its relation to mental distress in a random sample of Danish residents
DEFF Research Database (Denmark)
Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D
2010-01-01
PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self… somatization symptoms (OR = 6.28, 95% CI = 1.39-28.46). CONCLUSIONS: Unskilled manual workers, the unemployed, and, to a lesser extent, the low-grade self-employed showed an increased level of mental distress. Activities to promote mental health in the Danish population should be directed toward these groups.
Thompson, Steven K
2012-01-01
Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data Sampling provides an up-to-date treat…
Random sampling for a mental health survey in a deprived multi-ethnic area of Berlin.
Mundt, Adrian P; Aichberger, Marion C; Kliewe, Thomas; Ignatyev, Yuriy; Yayla, Seda; Heimann, Hannah; Schouler-Ocak, Meryam; Busch, Markus; Rapp, Michael; Heinz, Andreas; Ströhle, Andreas
2012-12-01
The aim of the study was to assess the response to random sampling for a mental health survey in a deprived multi-ethnic area of Berlin, Germany, with a large Turkish-speaking population. A random list of 1,000 persons stratified by age and gender was retrieved from the population registry, and these persons were contacted using a three-stage design including written information, telephone calls and personal contact at home. A female bilingual interviewer contacted persons with Turkish names. Of the persons on the list, 202 were not living in the area, one was deceased, and 502 did not respond. Of the 295 responders, 152 (51.5%) explicitly refused to participate. We retained a sample of 143 participants (48.5%) representing the rate of multi-ethnicity in the area (52.1% migrants in the sample vs. 53.5% in the population). Turkish migrants were over-represented (28.9% in the sample vs. 18.6% in the population). Polish migrants (2.1% vs. 5.3% in the population) and persons from the former Yugoslavia (1.4% vs. 4.8% in the population) were under-represented. Bilingual contact procedures can improve the response rates of the most common migrant populations to random sampling if migrants of the same origin gate the contact. High non-contact and non-response rates for migrant and non-migrant populations in deprived urban areas remain a challenge for obtaining representative random samples.
Assessment of proteinuria by using protein: creatinine index in random urine sample.
Khan, Dilshad Ahmed; Ahmad, Tariq Mahmood; Qureshil, Ayaz Hussain; Halim, Abdul; Ahmad, Mumtaz; Afzal, Saeed
2005-10-01
To assess the quantitative measurement of proteinuria by using the random urine protein:creatinine index/ratio in comparison with 24-hour urinary protein excretion in patients with renal diseases having a normal glomerular filtration rate. One hundred and thirty patients, 94 males and 36 females, with an age range of 5 to 60 years and proteinuria of more than 150 mg/day, were included in this study. Qualitative urinary protein estimation was done on random urine specimens by dipstick. Quantitative measurement of protein in the random and 24-hour urine specimens was carried out by a method based on the formation of a red complex of protein with pyrogallol red in acid medium on a Microlab 200 (Merck). Estimation of creatinine was done on a Selectra-2 (Merck) by Jaffe's reaction. The urine protein:creatinine index and ratio were calculated by dividing the urine protein concentration (mg/L) by the urine creatinine concentration (mmol/L), multiplied by 10 for the index and expressed as mg/mg for the ratio. A protein:creatinine index of more than 140, or a ratio of more than 0.18, in a random urine sample indicated pathological proteinuria. An excellent correlation (r = 0.96) was found between the random urine protein:creatinine index/ratio and standard 24-hour urinary protein excretion in these patients (p < …). The protein:creatinine index in random urine is a convenient, quick and reliable method of estimating proteinuria as compared to 24-hour urinary protein excretion for diagnosis and monitoring of renal diseases in our medical setup.
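The index definition given above is simple enough to compute directly; a minimal sketch (the 140 cutoff is taken from the abstract, the sample concentrations are invented):

```python
def protein_creatinine_index(protein_mg_per_l, creatinine_mmol_per_l):
    # Index = urine protein (mg/L) / urine creatinine (mmol/L) * 10,
    # as defined in the abstract above.
    return protein_mg_per_l / creatinine_mmol_per_l * 10

def is_pathological(index, cutoff=140):
    # An index above ~140 in a random urine sample indicated
    # pathological proteinuria in this study.
    return index > cutoff

idx = protein_creatinine_index(protein_mg_per_l=180.0, creatinine_mmol_per_l=9.0)
# idx = 200.0, above the 140 cutoff
```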
Wang, Mingjun; Feng, Shaodong; Wu, Jigang
2017-10-06
We report a multilayer lensless in-line holographic microscope (LIHM) with improved imaging resolution obtained by using the pixel super-resolution technique and random sample movement. In our imaging system, a laser beam illuminated the sample and a CMOS imaging sensor located behind the sample recorded the in-line hologram for image reconstruction. During the imaging process, the sample was moved by hand randomly and the in-line holograms were acquired sequentially. The sample image was then reconstructed from an enhanced-resolution hologram obtained from multiple low-resolution in-line holograms by applying the pixel super-resolution (PSR) technique. We studied the resolution enhancement effects by using the U.S. Air Force (USAF) target as the sample in numerical simulation and experiment. We also showed that multilayer pixel super-resolution images can be obtained by imaging a triple-layer sample made with filamentous algae on the middle layer and microspheres with a diameter of 2 μm on the top and bottom layers. Our pixel super-resolution LIHM provides a compact and low-cost solution for microscopic imaging and is promising for many biomedical applications.
Stemflow estimation in a redwood forest using model-based stratified random sampling
Jack Lewis
2003-01-01
Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...
Giabbanelli, Philippe J; Crutzen, Rik
2014-12-12
Controlling bias is key to successful randomized controlled trials for behaviour change. Bias can be generated at multiple points during a study, for example, when participants are allocated to different groups. Several allocation methods exist to randomly distribute participants across groups such that their prognostic factors (e.g., socio-demographic variables) are similar, in an effort to keep participants' outcomes comparable at baseline. Since it is challenging to create such groups when all prognostic factors are taken together, these factors are often balanced in isolation, or only the ones deemed most relevant are balanced. However, the complex interactions among prognostic factors may lead to a poor estimate of behaviour, causing unbalanced groups at baseline, which may introduce accidental bias. We present a novel computational approach for allocating participants to different groups. Our approach automatically uses participants' experiences to model (the interactions among) their prognostic factors and infer how their behaviour is expected to change under a given intervention. Participants are then allocated based on their inferred behaviour rather than on selected prognostic factors. To assess the potential of our approach, we collected two datasets regarding the behaviour of participants (n = 430 and n = 187). The potential of the approach on larger sample sizes was examined using synthetic data. All three datasets highlighted that our approach could lead to groups with similar expected behavioural changes. The computational approach proposed here can complement existing statistical approaches when behaviours involve numerous complex relationships and quantitative data are not readily available to model these relationships. The software implementing our approach and commonly used alternatives is provided at no charge to assist practitioners in the design of their own studies and to compare participants' allocations.
Machine Learning Approaches to Rare Events Sampling and Estimation
Elsheikh, A. H.
2014-12-01
Given the severe impacts of rare events, we try to quantitatively answer two questions: How can we estimate the probability of a rare event? And what are the factors affecting this probability? We utilize machine learning classification methods to define the failure boundary (in the stochastic space) corresponding to a specific threshold of a rare event. The training samples for the classification algorithm are obtained using multilevel splitting and Monte Carlo (MC) simulations. Once the classifier is trained, a full MC simulation can be performed efficiently using the classifier as a reduced-order model replacing the full physics simulator. We apply the proposed method to a standard benchmark for CO2 leakage through an abandoned well. In this idealized test case, CO2 is injected into a deep aquifer, spreads within the aquifer and, upon reaching an abandoned well, rises to a shallower aquifer. In the current study, we evaluate the probability of leakage of a pre-defined amount of the injected CO2 given a heavy-tailed distribution of the leaky well permeability. We show that machine learning based approaches significantly outperform direct MC and multilevel splitting methods in terms of efficiency and precision. The proposed algorithm's efficiency and reliability enabled us to perform a sensitivity analysis with respect to the different modeling assumptions, including the different prior distributions on the probability of CO2 leakage.
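The surrogate idea, train a classifier on a small set of expensive simulator runs and then estimate the failure probability with a large, cheap Monte Carlo pass over the classifier, can be sketched with a one-dimensional toy problem. The simulator, threshold and input distribution below are invented for illustration; the paper's actual setup uses multilevel splitting and a CO2 leakage model.

```python
import random

def expensive_simulator(x):
    # Stand-in for a costly physics run: "failure" when the response x^3
    # exceeds 8 (i.e., x > 2). Invented for illustration.
    return x ** 3 > 8.0

def train_stump(xs, labels):
    # One-dimensional threshold classifier learned from the labelled
    # Monte Carlo training set: split halfway between the largest safe
    # point and the smallest failing point.
    safe = [x for x, failed in zip(xs, labels) if not failed]
    fail = [x for x, failed in zip(xs, labels) if failed]
    return (max(safe) + min(fail)) / 2.0

rng = random.Random(42)

# Small, "expensive" training set run through the real simulator...
train_x = [rng.gauss(0.0, 1.5) for _ in range(5000)]
train_y = [expensive_simulator(x) for x in train_x]
threshold = train_stump(train_x, train_y)

# ...then a large, cheap Monte Carlo pass that uses only the surrogate.
big_x = [rng.gauss(0.0, 1.5) for _ in range(200000)]
p_hat = sum(x > threshold for x in big_x) / len(big_x)
```

The learned threshold sits very close to the true failure boundary at 2, and the surrogate-based estimate approaches the true failure probability without further calls to the expensive simulator.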
Greene, Tom
2015-01-01
Performing well-powered randomized controlled trials is of fundamental importance in clinical research. The goal of sample size calculations is to assure that statistical power is acceptable while maintaining a small probability of a type I error. This chapter overviews the fundamentals of sample size calculation for standard types of outcomes for two-group studies. It considers (1) the problems of determining the size of the treatment effect that the studies will be designed to detect, (2) the modifications to sample size calculations to account for loss to follow-up and nonadherence, (3) the options when initial calculations indicate that the feasible sample size is insufficient to provide adequate power, and (4) the implication of using multiple primary endpoints. Sample size estimates for longitudinal cohort studies must take account of confounding by baseline factors.
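For the standard two-group comparison of means described above, the textbook sample size formula, together with the usual inflation for loss to follow-up, looks like this (a generic sketch, not taken from the chapter):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    # Two-sided, two-sample comparison of means:
    #   n = 2 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

n = n_per_group(delta=0.5, sigma=1.0)   # detect a 0.5 SD difference
# Inflate for an anticipated 20% loss to follow-up.
n_inflated = ceil(n / (1 - 0.20))
```

With a standardized effect of 0.5, 5% two-sided alpha and 80% power, this gives the familiar 63 per group, rising to 79 once 20% attrition is budgeted for.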
Characterization of Electron Microscopes with Binary Pseudo-random Multilayer Test Samples
Energy Technology Data Exchange (ETDEWEB)
V Yashchuk; R Conley; E Anderson; S Barber; N Bouet; W McKinney; P Takacs; D Voronov
2011-12-31
Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1] and [2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
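The overdispersion the abstract describes is easy to reproduce: even when every true eigenvalue equals 1, the eigenvalues of a sample covariance matrix spread out, with the leading ones biased upward and the smallest downward. A minimal numpy sketch (dimensions are invented, and this uses a plain sample covariance rather than a REML genetic covariance estimate):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50

# iid data: the true covariance is the identity, so every true eigenvalue is 1.
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(S))[::-1]

# Sampling error overdisperses the spectrum even though no structure exists:
# the leading eigenvalues are pushed above 1 and the trailing ones below it.
spread = eigvals[0] - eigvals[-1]
```

This is the phenomenon that, for the leading eigenvalue, the Tracy-Widom distribution describes, and why large sample eigenvalues alone are weak evidence of real genetic variance.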
Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing
2017-11-01
Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen-plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution that updates the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. In particular, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with a conventional CS-based cryptosystem that generates all the random entries of the measurement matrix anew, our scheme offers superior efficiency while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
Suliman, Mohamed
2016-01-01
In this supplementary appendix we provide proofs and additional simulation results that complement the paper "Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory."
On analysis-based two-step interpolation methods for randomly sampled seismic data
Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao
2013-02-01
Interpolating the missing traces of regularly or irregularly sampled seismic record is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms, the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms, using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
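A minimal POCS-style loop of the kind the abstract describes alternates a sparsity projection (thresholding in the transform domain) with a data-consistency projection (re-inserting the known samples). The sketch below uses a 1D Fourier-sparse signal rather than seismic traces, and the fixed relative threshold is our own simplification of the thresholding strategies the paper compares.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 256
t = np.arange(n)

# A signal that is sparse in the Fourier domain (three harmonics).
signal = (np.sin(2 * np.pi * 5 * t / n)
          + 0.5 * np.sin(2 * np.pi * 12 * t / n)
          + 0.3 * np.sin(2 * np.pi * 40 * t / n))

mask = rng.random(n) < 0.5          # keep roughly half the samples at random
observed = signal * mask

x = observed.copy()
for _ in range(100):
    X = np.fft.fft(x)
    X[np.abs(X) < 0.1 * np.abs(X).max()] = 0   # sparsity projection
    x = np.real(np.fft.ifft(X))
    x[mask] = signal[mask]                      # data-consistency projection

rel_err = np.linalg.norm(x - signal) / np.linalg.norm(signal)
```

Because random sampling turns coherent aliasing into low-level noise in the Fourier domain, the thresholding step isolates the true harmonics and the loop drives the reconstruction error down; FISTA and FPOCS accelerate exactly this kind of iteration.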
Hemodynamic and glucometabolic factors fail to predict renal function in a random population sample
DEFF Research Database (Denmark)
Pareek, M.; Nielsen, M.; Olesen, Thomas Bastholm
2015-01-01
Objective: To determine whether baseline hemodynamic and/or glucometabolic risk factors could predict renal function at follow-up, independently of baseline serum creatinine, in survivors from a random population sample. Design and method: We examined associations between baseline serum creatinine, … indices of beta-cell function (HOMA-2B), insulin sensitivity (HOMA-2S), and insulin resistance (HOMA-2IR), traditional cardiovascular risk factors (age, sex, smoking status, body mass index, diabetes mellitus, total serum cholesterol), and later renal function determined as serum cystatin C in 238 men and 7 women aged 38 to 49 years at the time of inclusion, using multivariable linear regression analysis (p-entry 0.05, p-removal 0.20). Study subjects came from a random population-based sample and were included 1974-1992, whilst the follow-up with cystatin C measurement was performed 2002…
An inversion method based on random sampling for real-time MEG neuroimaging
Pascarella, Annalisa
2016-01-01
Magnetoencephalography (MEG) has gained great interest in neurorehabilitation training due to its high temporal resolution. The challenge is to localize the active regions of the brain in a fast and accurate way. In this paper we use an inversion method based on random spatial sampling to solve the real-time MEG inverse problem. Several numerical tests on synthetic but realistic data show that the method takes just a few hundredths of a second on a laptop to produce an accurate map of the electric activity inside the brain. Moreover, it requires very little memory storage. For these reasons the random sampling method is particularly attractive in real-time MEG applications.
Özel, Gamze
2015-01-01
In this paper, a new exponential-type estimator is developed in stratified random sampling for the population mean using auxiliary variable information. In order to evaluate the efficiency of the introduced estimator, we first review some estimators and study the optimum property of the suggested strategy. To judge the merits of the suggested class of estimators over others under the optimal condition, a simulation study and real data applications are conducted. The results show that the introduc…
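The paper's exponential-type estimator is not reproduced in the abstract; as a point of reference, the classic separate ratio estimator in stratified random sampling, one of the standard competitors such estimators are judged against, can be sketched as follows (data and function names are invented; y = 2x exactly, so the ratio estimate recovers the true total):

```python
import random

def stratified_ratio_estimate(strata, aux_totals, seed=7):
    # Separate ratio estimator: in each stratum h draw an SRS of n_h units,
    # estimate the ratio R_h = ybar_h / xbar_h from the sample, and scale by
    # the known auxiliary total X_h; the estimated population total of y is
    # the sum over strata.
    rng = random.Random(seed)
    total = 0.0
    for (units, n_h), X_h in zip(strata, aux_totals):
        sample = rng.sample(units, n_h)     # units are (x, y) pairs
        ybar = sum(y for _, y in sample) / n_h
        xbar = sum(x for x, _ in sample) / n_h
        total += (ybar / xbar) * X_h
    return total

# Two strata where y = 2x exactly, so the estimator is exact.
s1 = [(x, 2 * x) for x in range(1, 21)]
s2 = [(x, 2 * x) for x in range(10, 30)]
strata = [(s1, 5), (s2, 5)]
aux_totals = [sum(x for x, _ in s1), sum(x for x, _ in s2)]
estimate = stratified_ratio_estimate(strata, aux_totals)
```

When y is proportional to the auxiliary x, the ratio estimator has zero error regardless of which units are sampled; exponential-type estimators aim to retain most of this gain under weaker y-x relationships.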
Effectiveness of hand hygiene education among a random sample of women from the community
Ubheeram, J.; Biranjia-Hurdoyal, S.D.
2017-01-01
Summary Objective. The effectiveness of hand hygiene education was investigated by studying hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. Methods. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. Results. A better hand hygiene awareness score was significantly associated with age, scarce bacterial gro…
Control Capacity and A Random Sampling Method in Exploring Controllability of Complex Networks
Jia, Tao; Barabási, Albert-László
2013-01-01
Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDSs suggests that nodes do not participate in control equally, prompting us to quantify their participation. Here we introduce control capacity, quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. Thi…
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-01-01
Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential fea…
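The first step, simple random sampling from the time domain, can be sketched generically (segment sizes, feature choices and names are ours, not the paper's):

```python
import random
from statistics import mean, stdev

def srs_features(channel, n_samples, n_groups, seed=0):
    # Simple random sampling (SRS) from the time domain: draw n_samples
    # points without replacement, split them into n_groups groups, and
    # summarize each group with a few basic statistics used as features.
    rng = random.Random(seed)
    drawn = rng.sample(channel, n_samples)
    size = n_samples // n_groups
    feats = []
    for g in range(n_groups):
        chunk = drawn[g * size:(g + 1) * size]
        feats.extend([mean(chunk), stdev(chunk), min(chunk), max(chunk)])
    return feats

sig_rng = random.Random(1)
channel = [sig_rng.gauss(0.0, 1.0) for _ in range(4096)]  # stand-in EEG channel
features = srs_features(channel, n_samples=1024, n_groups=4)
```

Each channel is thus reduced from thousands of raw samples to a short, fixed-length feature vector, which a sequential feature selector can then prune further.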
McGarvey, Richard; Burch, Paul; Matthews, Janet M
2016-01-01
Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise than the random design. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (v) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
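The precision comparison described above can be reproduced in miniature. The sketch below is illustrative only: a toy patchy population on a grid, transects taken as grid rows, and all parameters invented rather than taken from the paper. It contrasts the replicate variance of random versus one-start systematic transect means.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy clustered population on a 100 x 100 region: organisms confined to patches
grid = np.zeros((100, 100))
for _ in range(12):                                  # 12 habitat patches
    cx, cy = rng.integers(10, 90, size=2)
    n_pts = rng.poisson(80)
    xs = np.clip(rng.normal(cx, 3, n_pts).astype(int), 0, 99)
    ys = np.clip(rng.normal(cy, 3, n_pts).astype(int), 0, 99)
    np.add.at(grid, (xs, ys), 1)

counts = grid.sum(axis=1)                            # organisms per transect (row)
true_mean = counts.mean()

n_rep, n_tr = 2000, 10
# simple random sample of 10 transects
rand_means = np.array([counts[rng.choice(100, n_tr, replace=False)].mean()
                       for _ in range(n_rep)])
# one-start aligned systematic sample: every 10th transect from a random start
sys_means = np.array([counts[s::10].mean()
                      for s in rng.integers(0, 10, n_rep)])

var_random = rand_means.var()
var_system = sys_means.var()
```

Because the systematic grid spreads transects evenly across the patchy habitat, its replicate variance comes out well below that of the random design, mirroring the paper's qualitative finding.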
Willan, Andrew; Kowgier, Matthew
2008-01-01
Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the cost of doing the trial and the value of the information gained from the results. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.
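The expected-net-gain idea can be illustrated with a toy single-stage normal-normal model. Every number here (prior, per-patient benefit scale, horizon, costs) is invented for illustration and is not from Willan and Kowgier: the expected value of sample information (EVSI) is computed from the preposterior distribution of the posterior mean, and the sample size is chosen to maximize ENG = value of information minus trial cost.

```python
import math

# Incremental net benefit per patient delta ~ N(mu0, sd0^2) a priori (dollars);
# a trial with n patients per arm observes xbar ~ N(delta, 2*sigma^2/n).
mu0, sd0, sigma = 500.0, 2000.0, 10000.0
horizon = 200_000                       # future patients affected by the decision
c_fixed, c_pp = 1_000_000.0, 5_000.0    # fixed cost, cost per enrolled patient

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def evsi_per_patient(n):
    # preposterior sd of the posterior mean of delta
    tau = sd0 ** 2 / math.sqrt(sd0 ** 2 + 2 * sigma ** 2 / n)
    # E[max(0, posterior mean)] - max(0, prior mean): value of deciding after data
    return mu0 * norm_cdf(mu0 / tau) + tau * norm_pdf(mu0 / tau) - max(0.0, mu0)

def expected_net_gain(n):
    return horizon * evsi_per_patient(n) - (c_fixed + c_pp * 2 * n)

best_n = max(range(10, 3001, 10), key=expected_net_gain)
```

The optimum is interior: below it, another patient buys more information than it costs; above it, the information value has flattened while costs keep accruing.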
Chen, Maggie H; Willan, Andrew R
2013-02-01
Most often, sample size determinations for randomized clinical trials are based on frequentist approaches that depend on somewhat arbitrarily chosen factors, such as type I and II error probabilities and the smallest clinically important difference. As an alternative, many authors have proposed decision-theoretic (full Bayesian) approaches, often referred to as value of information methods that attempt to determine the sample size that maximizes the difference between the trial's expected utility and its expected cost, referred to as the expected net gain. Taking an industry perspective, Willan proposes a solution in which the trial's utility is the increase in expected profit. Furthermore, Willan and Kowgier, taking a societal perspective, show that multistage designs can increase expected net gain. The purpose of this article is to determine the optimal sample size using value of information methods for industry-based, multistage adaptive randomized clinical trials, and to demonstrate the increase in expected net gain realized. At the end of each stage, the trial's sponsor must decide between three actions: continue to the next stage, stop the trial and seek regulatory approval, or stop the trial and abandon the drug. A model for expected total profit is proposed that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, and the relationship between trial results and probability of regulatory approval. The proposed method is extended to include multistage designs with a solution provided for a two-stage design. An example is given. Significant increases in the expected net gain are realized by using multistage designs. The complexity of the solutions increases with the number of stages, although far simpler near-optimal solutions exist. The method relies on the central limit theorem, assuming that the sample size is sufficiently large so that the relevant statistics are normally distributed. From a value of
Sampling Methodologies and Approaches for Ballast Water Management Compliance Monitoring
Stephan Gollasch; Matej David
2011-01-01
The human-mediated transfer of harmful organisms via shipping, especially via ballast water transport, has raised considerable attention especially in the last decade due to the negative associated impacts. Ballast water sampling is important to assess the compliance with ballast water management requirements (i.e. compliance monitoring). The complexity of ballast water sampling is a result of organism diversity and behaviour which may require different sampling strategies, as well as ship de...
Estimating the Size of a Large Network and its Communities from a Random Sample.
Chen, Lin; Karbasi, Amin; Crawford, Forrest W
2016-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
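PULSE itself is not reproduced here, but the underlying intuition — a sampled vertex's within-sample degree is a "thinned" version of its total degree — supports a much simpler method-of-moments sketch. The example below uses a one-block Erdős–Rényi stand-in for the SBM, with invented parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden population graph: N_true vertices, edge probability p; we sample n.
N_true, p, n = 5000, 0.004, 400

# For each sampled vertex we observe its TOTAL degree; each of its neighbours
# lands in the sample hypergeometrically (n-1 draws from the other N-1 vertices).
d_total = rng.binomial(N_true - 1, p, size=n)
d_within = rng.hypergeometric(d_total, N_true - 1 - d_total, n - 1)

# Method of moments: E[d_within] = d_total * (n-1)/(N-1), so solve for N.
N_hat = 1 + (n - 1) * d_total.sum() / d_within.sum()
```

PULSE refines this kind of reasoning considerably (block memberships, likelihood-based estimation); the moment estimator merely shows why total-degree observations carry information about the hidden population size.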
A double SIMEX approach for bivariate random-effects meta-analysis of diagnostic accuracy studies
Directory of Open Access Journals (Sweden)
Annamaria Guolo
2017-01-01
Abstract Background Bivariate random-effects models represent a widely accepted and recommended approach for meta-analysis of test accuracy studies. Standard likelihood methods routinely used for inference are prone to several drawbacks. Small sample size can give rise to unreliable inferential conclusions and convergence issues make the approach unappealing. This paper suggests a different methodology to address such difficulties. Methods A SIMEX methodology is proposed. The method is a simulation-based technique originally developed as a correction strategy within the measurement error literature. It suits the meta-analysis framework as the diagnostic accuracy measures provided by each study are prone to measurement error. SIMEX can be straightforwardly adapted to cover different measurement error structures and to deal with covariates. The effortless implementation with standard software is an interesting feature of the method. Results Extensive simulation studies highlight the improvement provided by SIMEX over likelihood approach in terms of empirical coverage probabilities of confidence intervals under different scenarios, independently of the sample size and the values of the correlation between sensitivity and specificity. A remarkable amelioration is obtained in case of deviations from the normality assumption for the random-effects distribution. From a computational point of view, the application of SIMEX is shown to be neither involved nor subject to the convergence issues affecting likelihood-based alternatives. Application of the method to a diagnostic review of the performance of transesophageal echocardiography for assessing ascending aorta atherosclerosis enables overcoming limitations of the likelihood procedure. Conclusions The SIMEX methodology represents an interesting alternative to likelihood-based procedures for inference in meta-analysis of diagnostic accuracy studies. The approach can provide more accurate inferential
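The SIMEX idea — add extra measurement error of known size, watch how the estimate degrades, then extrapolate back to zero error — can be sketched for the simplest case of a linear regression slope. This is a generic measurement-error illustration with invented data, not the bivariate meta-analysis model of the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# True model y = a + b*x; x is observed with error of KNOWN variance sigma_u^2.
n, a, b = 2000, 1.0, 2.0
sigma_u = 0.8
x = rng.normal(0, 1, n)
y = a + b * x + rng.normal(0, 0.5, n)
w = x + rng.normal(0, sigma_u, n)          # error-prone observation of x

def slope(xv, yv):
    return np.polyfit(xv, yv, 1)[0]

naive = slope(w, y)                        # attenuated toward zero

# SIMulation step: inflate the error variance by a factor (1 + zeta), refit.
zetas = np.array([0.5, 1.0, 1.5, 2.0])
sim_slopes = []
for z in zetas:
    reps = [slope(w + rng.normal(0, np.sqrt(z) * sigma_u, n), y)
            for _ in range(50)]
    sim_slopes.append(np.mean(reps))

# EXtrapolation step: quadratic fit in zeta, evaluated at zeta = -1 (no error).
coef = np.polyfit(np.concatenate(([0.0], zetas)),
                  np.concatenate(([naive], sim_slopes)), 2)
simex = np.polyval(coef, -1.0)
```

The naive slope is biased toward zero by the attenuation factor σ²_x/(σ²_x + σ²_u); the SIMEX extrapolant recovers much of the lost signal, though the quadratic extrapolant typically still under-corrects slightly.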
Novel Sample-handling Approach for XRD Analysis with Minimal Sample Preparation
Sarrazin, P.; Chipera, S.; Bish, D.; Blake, D.; Feldman, S.; Vaniman, D.; Bryson, C.
2004-01-01
Sample preparation and sample handling are among the most critical operations associated with X-ray diffraction (XRD) analysis. These operations require attention in a laboratory environment, but they become a major constraint in the deployment of XRD instruments for robotic planetary exploration. We are developing a novel sample handling system that dramatically relaxes the constraints on sample preparation by allowing characterization of coarse-grained material that would normally be impossible to analyze with conventional powder-XRD techniques.
Directory of Open Access Journals (Sweden)
Alireza Goli
2015-09-01
Full Text Available Distribution and optimum allocation of emergency resources are the most important tasks, which need to be accomplished during crisis. When a natural disaster such as earthquake, flood, etc. takes place, it is necessary to deliver rescue efforts as quickly as possible. Therefore, it is important to find optimum location and distribution of emergency relief resources. When a natural disaster occurs, it is not possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this study, there is no need to visit all the places and some demand points receive their needs from the nearest possible location. The proposed study is implemented for some randomly generated numbers in different sizes. The preliminary results indicate that the proposed method was capable of reaching desirable solutions in reasonable amount of time.
ESTIMATION OF FINITE POPULATION MEAN USING RANDOM NON–RESPONSE IN SURVEY SAMPLING
Directory of Open Access Journals (Sweden)
Housila P. Singh
2010-12-01
This paper considers the problem of estimating the population mean under three different situations of random non-response envisaged by Singh et al. (2000). Some ratio- and product-type estimators have been proposed and their properties are studied under the assumption that the number of sampling units on which information cannot be obtained owing to random non-response follows some distribution. The suggested estimators are compared with the usual ratio and product estimators. An empirical study is carried out to show the performance of the suggested estimators over the usual unbiased, ratio, and product estimators. A generalized version of the proposed ratio and product estimators is also given.
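For reference, the classical ratio estimator that such proposals are benchmarked against exploits a known auxiliary-variable mean X̄. A small simulation (invented population, not the paper's empirical data) shows its variance advantage over the plain sample mean when y is roughly proportional to x.

```python
import numpy as np

rng = np.random.default_rng(3)

# Population where the study variable y tracks the auxiliary variable x
N = 10_000
x = rng.gamma(4.0, 25.0, N)               # auxiliary variable; X-bar is known
y = 1.2 * x + rng.normal(0, 10, N)
X_bar = x.mean()

n, reps = 100, 3000
mean_err, ratio_err = [], []
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)
    ys, xs = y[idx], x[idx]
    mean_err.append(ys.mean() - y.mean())                  # usual unbiased estimator
    ratio_err.append(ys.mean() / xs.mean() * X_bar - y.mean())  # ratio estimator

mse_mean = np.mean(np.square(mean_err))
mse_ratio = np.mean(np.square(ratio_err))
```

The ratio estimator's error is driven by the residual scatter of y about the line through the origin, not by the full variance of y, hence the large MSE reduction when the y–x correlation is high.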
Kovarik, L.; Stevens, A.; Liyu, A.; Browning, N. D.
2016-10-01
While aberration correction for scanning transmission electron microscopes (STEMs) dramatically increased the spatial resolution obtainable in the images of materials that are stable under the electron beam, the practical resolution of many STEM images is now limited by the sample stability rather than the microscope. To extract physical information from the images of beam sensitive materials, it is becoming clear that there is a critical dose/dose-rate below which the images can be interpreted as representative of the pristine material, while above it the observation is dominated by beam effects. Here, we describe an experimental approach for sparse sampling in the STEM and in-painting image reconstruction in order to reduce the electron dose/dose-rate to the sample during imaging. By characterizing the induction limited rise-time and hysteresis in the scan coils, we show that a sparse line-hopping approach to scan randomization can be implemented that optimizes both the speed of the scan and the amount of the sample that needs to be illuminated by the beam. The dose and acquisition time for the sparse sampling is shown to be effectively decreased by at least a factor of 5× relative to conventional acquisition, permitting imaging of beam sensitive materials to be obtained without changing the microscope operating parameters. The use of sparse line-hopping scans to acquire STEM images is demonstrated with atomic-resolution, aberration-corrected Z-contrast images of CaCO3, a material that is traditionally difficult to image by TEM/STEM because of dosage issues.
Shee, James; Zhang, Shiwei; Reichman, David R; Friesner, Richard A
2017-06-13
The exact and phaseless variants of auxiliary-field quantum Monte Carlo (AFQMC) have been shown to be capable of producing accurate ground-state energies for a wide variety of systems including those which exhibit substantial electron correlation effects. The computational cost of performing these calculations has to date been relatively high, impeding many important applications of these approaches. Here we present a correlated sampling methodology for AFQMC which relies on error cancellation to dramatically accelerate the calculation of energy differences of relevance to chemical transformations. In particular, we show that our correlated sampling-based AFQMC approach is capable of calculating redox properties, deprotonation free energies, and hydrogen abstraction energies in an efficient manner without sacrificing accuracy. We validate the computational protocol by calculating the ionization potentials and electron affinities of the atoms contained in the G2 test set and then proceed to utilize a composite method, which treats fixed-geometry processes with correlated sampling-based AFQMC and relaxation energies via MP2, to compute the ionization potential, deprotonation free energy, and the O-H bond dissociation energy of methanol, all to within chemical accuracy. We show that the efficiency of correlated sampling relative to uncorrelated calculations increases with system and basis set size and that correlated sampling greatly reduces the required number of random walkers to achieve a target statistical error. This translates to CPU-time speed-up factors of 55, 25, and 24 for the ionization potential of the K atom, the deprotonation of methanol, and hydrogen abstraction from the O-H bond of methanol, respectively. We conclude with a discussion of further efficiency improvements that may open the door to the accurate description of chemical processes in complex systems.
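The variance-cancellation mechanism behind correlated sampling can be demonstrated generically with common random numbers: when the same samples are used for both systems, their statistical errors largely cancel in the difference. The integrands below are a toy pair, not AFQMC itself.

```python
import numpy as np

rng = np.random.default_rng(5)

def f_reactant(x):
    return np.exp(-0.5 * x ** 2)

def f_product(x):
    # slightly shifted system; the small DIFFERENCE is the quantity of interest
    return np.exp(-0.5 * (x - 0.05) ** 2)

n, reps = 1000, 400
d_indep, d_corr = [], []
for _ in range(reps):
    xa = rng.normal(size=n)
    xb = rng.normal(size=n)
    # independent "walkers" for the two systems vs. shared (correlated) samples
    d_indep.append(f_reactant(xa).mean() - f_product(xb).mean())
    d_corr.append(f_reactant(xa).mean() - f_product(xa).mean())

var_indep = np.var(d_indep)
var_corr = np.var(d_corr)
speedup = var_indep / var_corr          # walkers saved for a fixed error target
```

The correlated-difference variance scales with the (small) difference between the two integrands rather than with their full individual fluctuations, which is the same error-cancellation principle the AFQMC work exploits.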
Boston Harbor and approaches samples (WILLETT72 shapefile)
U.S. Geological Survey, Department of the Interior — Boston Harbor (and its approaches) is a glacially carved, tidally dominated estuary in western Massachusetts Bay. Characterized by low river discharge and...
Randomized controlled trial on timing and number of sampling for bile aspiration cytology.
Tsuchiya, Tomonori; Yokoyama, Yukihiro; Ebata, Tomoki; Igami, Tsuyoshi; Sugawara, Gen; Kato, Katsuyuki; Shimoyama, Yoshie; Nagino, Masato
2014-06-01
The issue of the timing and number of bile samplings for exfoliative bile cytology is still unsettled. A total of 100 patients with cholangiocarcinoma undergoing resection after external biliary drainage were randomized into two groups: a 2-day group where bile was sampled five times per day for 2 days; and a 10-day group where bile was sampled once per day for 10 days (registered University Hospital Medical Information Network/ID 000005983). The outcome of 87 patients who underwent laparotomy was analyzed, 44 in the 2-day group and 43 in the 10-day group. There were no significant differences in patient characteristics between the two groups. Positivity after one sampling session was significantly lower in the 2-day group than in the 10-day group (17.0 ± 3.7% vs. 20.7 ± 3.5%, P = 0.034). However, cumulative positivity curves were similar and overlapped each other between both groups. The final cumulative positivity by the 10th sampling session was 52.3% in the 2-day group and 51.2% in the 10-day group. We observed a small increase in cumulative positivity after the 5th or 6th session in both groups. Bile cytology positivity is unlikely to be affected by sampling time. © 2013 Japanese Society of Hepato-Biliary-Pancreatic Surgery.
Estimating the Size of a Large Network and its Communities from a Random Sample
Chen, Lin; Crawford, Forrest W
2016-01-01
Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...
Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data
Sree, David
1992-01-01
Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows, because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that reliable spectral estimates can be obtained at frequencies up to about five times the mean sampling rate.
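The correlation-based slotting technique mentioned above can be sketched compactly: products of sample pairs are accumulated into discrete lag bins ("slots"), yielding an autocorrelation estimate from randomly timed samples. The example below uses a synthetic Poisson-sampled sine with invented parameters and recovers R(τ) ≈ 0.5 cos(2πf₀τ).

```python
import numpy as np

rng = np.random.default_rng(6)

# Randomly sampled sine: Poisson arrival times, as from an individual-
# realization laser velocimeter
f0, T, rate = 5.0, 20.0, 60.0            # signal Hz, record length s, samples/s
t = np.cumsum(rng.exponential(1.0 / rate, int(rate * T * 1.2)))
t = t[t < T]
u = np.sin(2 * np.pi * f0 * t)

# Slotted autocorrelation: average u_i*u_j products into lag bins of width dt
dt, max_lag = 0.005, 0.5
n_slots = int(max_lag / dt)
acc = np.zeros(n_slots)
cnt = np.zeros(n_slots)
for i in range(len(t)):
    j = i + 1
    while j < len(t) and t[j] - t[i] < max_lag:
        k = int((t[j] - t[i]) / dt)      # which slot this pair's lag falls in
        acc[k] += u[i] * u[j]
        cnt[k] += 1
        j += 1

R = acc / np.maximum(cnt, 1)             # autocorrelation estimate per slot
lags = (np.arange(n_slots) + 0.5) * dt   # slot-centre lags
```

A power spectral density estimate then follows by Fourier-transforming R; the slotting step is what allows frequencies above the mean sampling rate to be resolved at all.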
Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël
2016-11-17
Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, opens up new avenues in terms of prevention and regulation policies.
Scholefield, P. A.; Arnscheidt, J.; Jordan, P.; Beven, K.; Heathwaite, L.
2007-12-01
The uncertainties associated with stream nutrient transport estimates are frequently overlooked and the sampling strategy is rarely if ever investigated. Indeed, the impact of sampling strategy and estimation method on the bias and precision of stream phosphorus (P) transport calculations is little understood despite the use of such values in the calibration and testing of models of phosphorus transport. The objectives of this research were to investigate the variability and uncertainty in the estimates of total phosphorus transfers at an intensively monitored agricultural catchment. The Oona Water, which is located in the Irish border region, is part of a long-term monitoring program focusing on water quality. The Oona Water is a rural river catchment with grassland agriculture and scattered dwelling houses and has been monitored for total phosphorus (TP) at 10 min resolution for several years (Jordan et al., 2007). Concurrent sensitive measurements of discharge are also collected. The water quality and discharge data were provided at 1 hour resolution (averaged) and this meant that a robust estimate of the annual flow-weighted concentration could be obtained by simple interpolation between points. A two-strata approach (Kronvang and Bruhn, 1996) was used to estimate flow-weighted concentrations using randomly sampled storm events from the 400 identified within the time series and also base flow concentrations. Using a random stratified sampling approach for the selection of events, a series ranging from 10 through to the full 400 were used, each time generating a flow-weighted mean using a load-discharge relationship identified through log-log regression and Monte Carlo simulation. These values were then compared to the observed total phosphorus concentration for the catchment. Analysis of these results shows the impact of sampling strategy, the inherent bias in any estimate of phosphorus concentrations, and the uncertainty associated with such estimates.
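The flow-weighted mean concentration at the core of such load estimates is simple to compute. The sketch below uses an invented synthetic year of hourly data (not the Oona Water series) in which TP concentration rises with discharge, as in storm-driven P transport.

```python
import numpy as np

rng = np.random.default_rng(7)

# One synthetic year of hourly data (illustrative model only)
hours = 24 * 365
q = rng.lognormal(mean=0.0, sigma=0.8, size=hours)            # discharge, m3/s
c = np.clip(0.02 + 0.01 * q + rng.normal(0, 0.005, hours),    # TP conc., mg/L
            0.001, None)

# Flow-weighted mean concentration and annual load
fwmc = np.sum(c * q) / np.sum(q)          # mg/L, weighting each hour by flow
load_kg = np.sum(c * q) * 3600 / 1e3      # (g/m3)*(m3/s)*s -> g, then -> kg
c_bar = c.mean()                          # naive time-weighted mean
```

Because high-flow hours carry both more water and higher concentrations, the flow-weighted mean exceeds the simple time average; sampling schemes that under-represent storms therefore bias the load estimate low.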
Protein/creatinine ratio on random urine samples for prediction of proteinuria in preeclampsia.
Roudsari, F Vahid; Ayati, S; Ayatollahi, H; Shakeri, M T
2012-01-01
To evaluate the protein/creatinine ratio on random urine samples for prediction of proteinuria in preeclampsia. This study was performed on 150 pregnant women who were hospitalized with preeclampsia in Ghaem Hospital during 2006. First, a random urine sample was collected from each patient to determine the protein/creatinine ratio; then a 24-hour urine collection was analyzed for the evaluation of proteinuria. Statistical analysis was performed with SPSS software. A total of 150 patients entered the study. There was a significant relation between the 24-hour urine protein and the protein/creatinine ratio (r = 0.659, P < 0.001). Since measurement of the protein/creatinine ratio is more accurate, reliable, and cost-effective, it can replace measurement of the 24-hour urine protein.
Short Note: An integrated remote sampling approach for aquatic ...
African Journals Online (AJOL)
The study aimed to determine whether the methods and apparatus presented would sample a similar diversity and abundance of macroinvertebrates in comparison with a standard method. A total of 18 aquatic invertebrate families were collected, with no significant differences in diversity between the methods but, using the ...
Constrained optimisation of spatial sampling: a geostatistical approach
Groenigen, van J.W.
1999-01-01
This thesis aims at the development of optimal sampling strategies for geostatistical studies. Special emphasis is on the optimal use of ancillary data, such as co-related imagery, preliminary observations and historic knowledge. Although the object of all studies
A Geostatistical Approach to Indoor Surface Sampling Strategies
DEFF Research Database (Denmark)
Schneider, Thomas; Petersen, Ole Holm; Nielsen, Allan Aasbjerg
1990-01-01
contamination, sampled from small areas on a table, have been used to illustrate the method. First, the spatial correlation is modelled and the parameters estimated from the data. Next, it is shown how the contamination at positions not measured can be estimated with kriging, a minimum mean square error method...
Schultz, WCMW; Gianotten, WL; van der Meijden, WI; van de Wiel, HBM; Blindeman, L; Chadha, S; Drogendijk, AC
This article describes the outcome of a behavioral approach with or without preceding surgical intervention in 48 women with the vulvar vestibulitis syndrome. In the first part of the study, 14 women with the vulvar vestibulitis syndrome were randomly assigned to one of two treatment programs:
A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling
Directory of Open Access Journals (Sweden)
Ying Yan
2017-01-01
Due to the complexity of the system and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed from the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and possesses good compatibility; it avoids both the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained rather objectively throughout the processing procedure. The cumulative-probability construction and random sampling steps require no human intervention or judgment and can easily be implemented in computer programs, an apparent advantage when evaluating fairly large index systems.
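A heavily simplified sketch of the final Monte Carlo step is shown below: uniform sampling within illustrative expert intervals followed by normalization. The intervals are invented, and uniform draws stand in for the paper's interval-evidence fusion.

```python
import numpy as np

rng = np.random.default_rng(8)

# Each expert gives an interval judgment (0-1) of each index's importance.
# These intervals are illustrative only.
intervals = {
    "cost":        [(0.6, 0.9), (0.5, 0.8), (0.7, 0.9)],
    "reliability": [(0.3, 0.6), (0.4, 0.7), (0.2, 0.5)],
    "usability":   [(0.1, 0.4), (0.2, 0.4), (0.1, 0.3)],
}

n_draw = 100_000
names = list(intervals)
draws = np.empty((n_draw, len(names)))
for k, name in enumerate(names):
    # sample each expert's interval uniformly, then average across experts
    per_expert = np.stack([rng.uniform(lo, hi, n_draw)
                           for lo, hi in intervals[name]])
    draws[:, k] = per_expert.mean(axis=0)

# normalize each Monte Carlo draw to a weight vector, then average the draws
weights = (draws / draws.sum(axis=1, keepdims=True)).mean(axis=0)
```

The resulting weights sum to one and preserve the ordering implied by the experts' intervals; the paper's method additionally accounts for conflict between experts via evidence-theoretic fusion before this sampling step.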
Directory of Open Access Journals (Sweden)
Thomson Denise
2010-12-01
Abstract Background Randomized controlled trials (RCTs) are the gold standard for trials assessing the effects of therapeutic interventions; therefore it is important to understand how they are conducted. Our objectives were to provide an overview of a representative sample of pediatric RCTs published in 2007 and assess the validity of their results. Methods We searched the Cochrane Central Register of Controlled Trials using a pediatric filter and randomly selected 300 RCTs published in 2007. We extracted data on trial characteristics; outcomes; methodological quality; reporting; and registration and protocol characteristics. Trial registration and protocol availability were determined for each study based on the publication, an Internet search, and an author survey. Results Most studies (83%) were efficacy trials, 40% evaluated drugs, and 30% were placebo-controlled. Primary outcomes were specified in 41%; 43% reported on adverse events. At least one statistically significant outcome was reported in 77% of trials; 63% favored the treatment group. Trial registration was declared in 12% of publications and 23% were found through an Internet search. Risk of bias (ROB) was high in 59% of trials, unclear in 33%, and low in 8%. Registered trials were more likely to have low ROB than non-registered trials (16% vs. 5%; p = 0.008). Effect sizes tended to be larger for trials at high vs. low ROB (0.28, 95% CI 0.21-0.35 vs. 0.16, 95% CI 0.07-0.25). Among survey respondents (50% response rate), the most common reason for trial registration was a publication requirement and, for non-registration, a lack of familiarity with the process. Conclusions More than half of this random sample of pediatric RCTs published in 2007 was at high ROB and three quarters of trials were not registered. There is an urgent need to improve the design, conduct, and reporting of child health research.
Density Functional Approach and Random Matrix Theory in Proteogenesis
Yamanaka, Masanori
2017-02-01
We study the energy-level statistics of amino acids by random matrix theory. The molecular orbital and the Kohn-Sham orbital energies are calculated using ab initio and density-functional formalisms for 20 different amino acids. To generate statistical data, we performed a multipoint calculation on 10000 molecular structures produced via a molecular dynamics simulation. For the valence orbitals, the energy-level statistics exhibit repulsion, but the universality in the random matrix cannot be determined. For the unoccupied orbitals, the energy-level statistics indicate an intermediate distribution between the Gaussian orthogonal ensemble and the semi-Poisson statistics for all 20 different amino acids. These amino acids are considered to be in a type of critical state.
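The level-statistics analysis can be illustrated with the standard nearest-neighbour spacing-ratio statistic, which distinguishes level repulsion (GOE, ⟨r⟩ ≈ 0.53) from uncorrelated Poisson levels (⟨r⟩ ≈ 0.39). The sketch below uses a generic GOE matrix, not the amino-acid orbital spectra of the paper.

```python
import numpy as np

rng = np.random.default_rng(9)

def spacing_ratio(levels):
    """Mean nearest-neighbour spacing ratio <r> of a spectrum."""
    s = np.diff(np.sort(levels))                     # consecutive level spacings
    r = np.minimum(s[1:], s[:-1]) / np.maximum(s[1:], s[:-1])
    return r.mean()

# GOE: real symmetric Gaussian random matrix -> level repulsion, <r> ~ 0.53
a = rng.normal(size=(400, 400))
goe = (a + a.T) / np.sqrt(2)
r_goe = spacing_ratio(np.linalg.eigvalsh(goe))

# Poisson reference (uncorrelated levels): <r> ~ 0.39
r_poisson = spacing_ratio(rng.uniform(0.0, 1.0, 400))
```

The ratio statistic is convenient precisely because it needs no spectral unfolding: the local level density cancels in the ratio, so it can be applied directly to computed orbital energies.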
Unit dose sampling and final product performance: an alternative approach.
Geoffroy, J M; Leblond, D; Poska, R; Brinker, D; Hsu, A
2001-08-01
This article documents a proposed plan for validation testing of the content uniformity of final blends and finished solid oral dosage forms (SODFs). The testing logic and statistical justification of the plan are presented. The plan provides good assurance that a passing lot will perform well against the USP tablet content uniformity test. The operating characteristics of the test and the probability of needing to test for blend sampling bias are reported. A case study is presented.
Directory of Open Access Journals (Sweden)
R Drew Carleton
Estimation of pest density is a basic requirement for integrated pest management in agriculture and forestry, and efficiency in density estimation is a common goal. Sequential sampling techniques promise efficient sampling, but their application can involve cumbersome mathematics and/or intensive warm-up sampling when pests have complex within- or between-site distributions. We provide tools for assessing the efficiency of sequential sampling and of alternative, simpler sampling plans, using computer simulation with "pre-sampling" data. We illustrate our approach using data for balsam gall midge (Paradiplosis tumifex) attack in Christmas tree farms. Paradiplosis tumifex proved recalcitrant to sequential sampling techniques. Midge distributions could not be fit by a common negative binomial distribution across sites. Local parameterization, using warm-up samples to estimate the clumping parameter k for each site, performed poorly: k estimates were unreliable even for samples of n ∼ 100 trees. These methods were further confounded by significant within-site spatial autocorrelation. Much simpler sampling schemes, involving random or belt-transect sampling to preset sample sizes, were effective and efficient for P. tumifex. Sampling via belt transects (through the longest dimension of a stand) was the most efficient, with sample means converging on true mean density for sample sizes of n ∼ 25-40 trees. Pre-sampling and simulation techniques provide a simple method for assessing sampling strategies for estimating insect infestation. We suspect that many pests will resemble P. tumifex in challenging the assumptions of sequential sampling methods. Our software will allow practitioners to optimize sampling strategies before they are brought to real-world applications, while potentially avoiding the need for the cumbersome calculations required for sequential sampling methods.
Sampling Approaches for Multi-Domain Internet Performance Measurement Infrastructures
Energy Technology Data Exchange (ETDEWEB)
Calyam, Prasad
2014-09-15
The next-generation of high-performance networks being developed in DOE communities are critical for supporting current and emerging data-intensive science applications. The goal of this project is to investigate multi-domain network status sampling techniques and tools to measure/analyze performance, and thereby provide “network awareness” to end-users and network operators in DOE communities. We leverage the infrastructure and datasets available through perfSONAR, which is a multi-domain measurement framework that has been widely deployed in high-performance computing and networking communities; the DOE community is a core developer and the largest adopter of perfSONAR. Our investigations include development of semantic scheduling algorithms, measurement federation policies, and tools to sample multi-domain and multi-layer network status within perfSONAR deployments. We validate our algorithms and policies with end-to-end measurement analysis tools for various monitoring objectives such as network weather forecasting, anomaly detection, and fault-diagnosis. In addition, we develop a multi-domain architecture for an enterprise-specific perfSONAR deployment that can implement monitoring-objective based sampling and that adheres to any domain-specific measurement policies.
A Random Finite Set Approach to Space Junk Tracking and Identification
2014-09-03
Final report, dates covered 31 Jan 2013 – 29 Apr 2014. Title: A Random Finite Set Approach to Space Junk Tracking and Identification. Contract number: FA2386-13... Authors: Ba-Ngu Vo, Ba-Tuong Vo (Department of ...).
Valero, Antonio; Pasquali, Frédérique; De Cesare, Alessandra; Manfreda, Gerardo
2014-08-01
Current sampling plans assume a random distribution of microorganisms in food. However, food-borne pathogens are estimated to be heterogeneously distributed in powdered foods. This spatial distribution, together with very low levels of contamination, raises concerns about the efficiency of current sampling plans for the detection of food-borne pathogens such as Cronobacter and Salmonella in powdered foods like powdered infant formula or powdered eggs. An alternative approach based on a Poisson distribution of the contaminated part of the lot (the Habraken approach) was used to evaluate the probability of falsely accepting a contaminated lot of powdered food when different sampling strategies were simulated, considering variables such as lot size, sample size, microbial concentration in the contaminated part of the lot, and proportion of the lot contaminated. The simulated results suggest that a sample size of 100 g or more requires the lowest number of samples to be tested, in comparison with sample sizes of 10 or 1 g. Moreover, the number of samples to be tested decreases greatly if the microbial concentration is 1 CFU/g instead of 0.1 CFU/g, or if the proportion of contamination is 0.05 instead of 0.01. Mean contaminations higher than 1 CFU/g or proportions higher than 0.05 did not impact the number of samples. The Habraken approach represents a useful tool for risk management in designing a fit-for-purpose sampling plan for the detection of low levels of food-borne pathogens in heterogeneously contaminated powdered food. However, it must be noted that although such sampling plans are effective in detecting pathogens, they are difficult to apply because of the huge number of samples that need to be tested. Sampling does not seem to be an effective measure to control pathogens in powdered food. Copyright © 2014 Elsevier B.V. All rights reserved.
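The acceptance-probability calculation at the heart of such a plan can be sketched in a few lines, under simplifying assumptions stated in the comments (a sample tests positive if it contains at least one cell; cells are Poisson-distributed within the contaminated fraction). The parameter values are illustrative, not the authors' exact simulation:

```python
import math

def p_accept(n, w, c, p):
    """Probability that all n samples of w grams test negative when a
    fraction p of the lot is contaminated at c CFU/g, with cells
    Poisson-distributed within the contaminated part of the lot."""
    p_negative = (1.0 - p) + p * math.exp(-c * w)
    return p_negative ** n

def samples_needed(w, c, p, alpha=0.05):
    """Smallest n keeping the false-acceptance probability below alpha."""
    p_negative = (1.0 - p) + p * math.exp(-c * w)
    return math.ceil(math.log(alpha) / math.log(p_negative))

# Larger analytical samples sharply reduce the number of tests required:
for w in (1, 10, 100):  # sample size in grams
    print(w, samples_needed(w, c=0.1, p=0.01))
```

Consistent with the abstract, the required number of samples drops with larger sample mass, higher concentration, or a larger contaminated proportion, yet remains in the hundreds for realistic low-level contamination.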
Inflammatory Biomarkers and Risk of Schizophrenia: A 2-Sample Mendelian Randomization Study.
Hartwig, Fernando Pires; Borges, Maria Carolina; Horta, Bernardo Lessa; Bowden, Jack; Davey Smith, George
2017-12-01
Positive associations between inflammatory biomarkers and risk of psychiatric disorders, including schizophrenia, have been reported in observational studies. However, conventional observational studies are prone to bias, such as reverse causation and residual confounding, thus limiting our understanding of the effect (if any) of inflammatory biomarkers on schizophrenia risk. To evaluate whether inflammatory biomarkers have an effect on the risk of developing schizophrenia. Two-sample mendelian randomization study using genetic variants associated with inflammatory biomarkers as instrumental variables to improve inference. Summary association results from large consortia of candidate gene or genome-wide association studies, including several epidemiologic studies with different designs, were used. Gene-inflammatory biomarker associations were estimated in pooled samples ranging from 1645 to more than 80 000 individuals, while gene-schizophrenia associations were estimated in more than 30 000 cases and more than 45 000 ancestry-matched controls. In most studies included in the consortia, participants were of European ancestry, and the prevalence of men was approximately 50%. All studies were conducted in adults, with a wide age range (18 to 80 years). Genetically elevated circulating levels of C-reactive protein (CRP), interleukin-1 receptor antagonist (IL-1Ra), and soluble interleukin-6 receptor (sIL-6R). Risk of developing schizophrenia. Individuals with schizophrenia or schizoaffective disorders were included as cases. Given that many studies contributed to the analyses, different diagnostic procedures were used. The pooled odds ratio estimate using 18 CRP genetic instruments was 0.90 (random effects 95% CI, 0.84-0.97; P = .005) per 2-fold increment in CRP levels; consistent results were obtained using different mendelian randomization methods and a more conservative set of instruments. The odds ratio for sIL-6R was 1.06 (95% CI, 1.01-1.12; P = .02).
Planetary Protection Approaches for a Mars Atmospheric Sample Return
Clark, B.; Leshin, L.; Barengoltz, J.
The Sample Collection for Investigation of Mars (SCIM) mission proposes to fly through the upper atmosphere of Mars at hypervelocity to collect airborne dust and gas, and return the material to Earth for detailed analysis in a variety of specialized and sophisticated laboratories. SCIM would accomplish the first low-cost return of martian material, and could provide crucial insights into the poorly understood history of water and weathering processes on Mars. Planetary protection forward-contamination requirements can be satisfied by straightforward, established procedures. The more challenging concern of back-contamination of Earth has been directly addressed through a number of detailed engineering analyses to identify which portions of the spacecraft are susceptible to contamination by surviving organisms, combined with in-space heating to sterilize the aerogel collecting medium after acquisition of samples. Systems for "breaking the chain" of back contamination have been designed. Review of established heat sterilization procedures on Earth has provided a rationale for specifying a conservative temperature-time cycle for sterilization onboard the spacecraft. In-flight monitoring of onboard systems will provide the Planetary Protection Office with confirmatory information needed to enable approval for final re-targeting of the trajectory to return to Earth.
Random matrix approach to the distribution of genomic distance.
Alexeev, Nikita; Zograf, Peter
2014-08-01
The cycle graph introduced by Bafna and Pevzner is an important tool for evaluating the distance between two genomes, that is, the minimal number of rearrangements needed to transform one genome into another. We interpret this distance in topological terms and relate it to random matrix theory. Namely, the number of genomes at a given 2-break distance from a fixed one (the Hultman number) is represented by a coefficient in the genus expansion of a matrix integral over the space of complex matrices with the Gaussian measure. We study generating functions for the Hultman numbers and prove that the 2-break distance distribution is asymptotically normal.
Random Matrix Theory Approach to Chaotic Coherent Perfect Absorbers
Li, Huanan; Suwunnarat, Suwun; Fleischmann, Ragnar; Schanz, Holger; Kottos, Tsampikos
2017-01-01
We employ random matrix theory in order to investigate coherent perfect absorption (CPA) in lossy systems with complex internal dynamics. The loss strength γCPA and energy ECPA, for which a CPA occurs, are expressed in terms of the eigenmodes of the isolated cavity—thus carrying over the information about the chaotic nature of the target—and their coupling to a finite number of scattering channels. Our results are tested against numerical calculations using complex networks of resonators and chaotic graphs as CPA cavities.
Spectral rigidity of vehicular streams (random matrix theory approach)
Energy Technology Data Exchange (ETDEWEB)
Krbalek, Milan [Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, Prague (Czech Republic); Seba, Petr [Doppler Institute for Mathematical Physics and Applied Mathematics, Faculty of Nuclear Sciences and Physical Engineering, Czech Technical University, Prague (Czech Republic)
2009-08-28
Using a method originally developed for random matrix theory, we derive an approximate mathematical formula for the number variance Δ_N(L) describing the rigidity of particle ensembles with a power-law repulsion. The resulting relation is compared with the relevant statistics of single-vehicle data measured on the Dutch freeway A9. The detected value of the inverse temperature β, which can be identified as a coefficient of the mental strain of the car drivers, is then discussed in detail in relation to the traffic density ρ and flow J.
Brunner, N. M.; Mladinich, C. S.; Caldwell, M. K.; Beal, Y. J. G.
2014-12-01
The U.S. Geological Survey is generating a suite of Essential Climate Variables (ECVs) products, as defined by the Global Climate Observing System, from the Landsat data archive. Validation protocols for these products are being established, incorporating the Committee on Earth Observing Satellites Land Product Validation Subgroup's best practice guidelines and validation hierarchy stages. The sampling design and accuracy measures follow the methodology developed by the European Space Agency's Climate Change Initiative Fire Disturbance (fire_cci) project (Padilla and others, 2014). A rigorous validation was performed on the 2008 Burned Area ECV (BAECV) prototype product, using a stratified random sample of 48 Thiessen scene areas overlaying Landsat path/rows distributed across several terrestrial biomes throughout North America. The validation reference data consisted of fourteen sample sites acquired from the fire_cci project, with the remaining sample sites generated from a densification of the stratified sampling for North America. The reference burned-area polygons were generated using the ABAMS (Automatic Burned Area Mapping) software (Bastarrika and others, 2011; Izagirre, 2014). Accuracy results will be presented indicating strengths and weaknesses of the BAECV algorithm. References: Bastarrika, A., Chuvieco, E., and Martín, M.P., 2011, Mapping burned areas from Landsat TM/ETM+ data with a two-phase algorithm: balancing omission and commission errors: Remote Sensing of Environment, v. 115, no. 4, p. 1003-1012. Izagirre, A.B., 2014, Automatic Burned Area Mapping Software (ABAMS), preliminary documentation, version 10 v4: Vitoria-Gasteiz, Spain, University of Basque Country, p. 27. Padilla, M., Chuvieco, E., Hantson, S., Theis, R., and Sandow, C., 2014, D2.1 - Product Validation Plan: UAH - University of Alcalá de Henares (Spain), 37 p.
GOKPINAR, Esra; GUL, Hasan; GOKPINAR, Fikri; BAYRAK, Hülya; OZONUR, Deniz
2013-01-01
Randomized complete block design is one of the most frequently used experimental designs in statistical analysis. For testing ordered alternatives in a randomized complete block design, parametric tests are used if random samples are drawn from a normal distribution. If the normality assumption does not hold, nonparametric methods are used. In this study we are interested in nonparametric tests, and we briefly introduce such tests as the Page, modified Page, and Hollander tests. We also give Permutat...
Hildebrandt, Thomas; Pick, Denis; Einax, Jürgen W
2012-02-01
The pollution of soil and the environment as a result of human activity is a major problem. Nowadays, the determination of local contaminations is of interest for environmental remediation. These hotspots can have various toxic effects on plants, animals, humans, and the whole ecological system. However, economic and juridical consequences are also possible, e.g., high costs for remediation measures. In this study three sampling strategies (simple random sampling, stratified sampling, and systematic sampling) were applied to randomly distributed hotspot contaminations to test their efficiency in terms of finding hotspots. The results were used for the validation of a computerized simulation. This application can simulate the contamination on a field, the sampling pattern, and a virtual sampling. A constant hit rate showed that none of the sampling patterns could reach better results than the others. Furthermore, the uncertainty associated with the results is described by confidence intervals. It should be considered that the uncertainty during sampling is enormous and decreases only slightly, even when the number of samples applied is increased to an unreasonable amount. It is hardly possible to identify the exact number of randomly distributed hotspot contaminations by statistical sampling, but a range of possible results can be calculated. Depending on various parameters such as the shape and size of the area, the number of hotspots, and the sample quantity, optimal sampling strategies can be derived. Furthermore, an estimation of the bias arising from the sampling methodology is possible. The developed computerized simulation is an innovative tool for optimizing sampling strategies in terrestrial compartments for hotspot distributions.
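A stripped-down version of such a hotspot-sampling simulation illustrates why the hit rate is essentially scheme-independent for rare, randomly placed hotspots. The field size, hotspot count, and sample number below are hypothetical stand-ins, not the study's parameters:

```python
import random

def simulate_hit_rate(field=100, hotspots=5, n_samples=50,
                      scheme="random", trials=2000, seed=1):
    """Fraction of simulated fields in which at least one hotspot cell is
    sampled, on a field x field grid with randomly placed single-cell
    hotspots."""
    rng = random.Random(seed)
    cells = field * field
    hits = 0
    for _ in range(trials):
        hot = set(rng.sample(range(cells), hotspots))
        if scheme == "random":          # simple random sampling
            picks = rng.sample(range(cells), n_samples)
        else:                           # systematic: evenly spaced, random start
            step = cells // n_samples
            picks = range(rng.randrange(step), cells, step)
        hits += any(c in hot for c in picks)
    return hits / trials

print(simulate_hit_rate(scheme="random"))
print(simulate_hit_rate(scheme="systematic"))
```

Both schemes land near the analytic value 1 - (1 - n/cells)^hotspots, so neither pattern outperforms the other for randomly scattered hotspots, mirroring the constant hit rate reported above.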
Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch
2017-06-06
An important aspect of chemoinformatics and materials informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets of noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development, and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate of the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in materials informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
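For readers unfamiliar with the algorithm, the core RANSAC consensus loop can be sketched for a plain linear model. This is a generic illustration only, not the authors' QSAR pipeline (descriptor selection and applicability-domain estimation are not shown):

```python
import numpy as np

def ransac_fit(X, y, n_iter=200, sample_size=10, resid_thresh=1.0,
               rng=np.random.default_rng(0)):
    """Minimal RANSAC for linear regression: repeatedly fit on a random
    subset, count inliers within resid_thresh, keep the largest consensus
    set, then refit on it."""
    Xb = np.column_stack([X, np.ones(len(X))])   # add intercept column
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(X), size=sample_size, replace=False)
        coef, *_ = np.linalg.lstsq(Xb[idx], y[idx], rcond=None)
        inliers = np.abs(Xb @ coef - y) < resid_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    coef, *_ = np.linalg.lstsq(Xb[best_inliers], y[best_inliers], rcond=None)
    return coef, best_inliers

# Toy data: y = 2x + 1 with 20% gross outliers injected
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(200, 1))
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.2, 200)
y[:40] += rng.uniform(5, 15, 40)
coef, inliers = ransac_fit(X, y)
print(coef)   # slope and intercept recovered close to [2, 1]
```

The consensus step is what makes RANSAC attractive for noisy QSAR datasets: outliers are excluded from the final fit rather than merely down-weighted.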
Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory
Suliman, Mohamed; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.
2016-12-01
In this supplementary appendix we provide proofs and additional extensive simulations that complement the analysis of the main paper (constrained perturbation regularization approach for signal estimation using random matrix theory).
Directory of Open Access Journals (Sweden)
P. M. A. Diaz
2016-06-01
This paper presents a method to estimate the temporal interaction in a Conditional Random Field (CRF) based approach for crop recognition from multitemporal remote sensing image sequences. This approach models the phenology of different crop types as a CRF. Interaction potentials are assumed to depend only on the class labels of an image site at two consecutive epochs. In the proposed method, the estimation of the temporal interaction parameters is treated as an optimization problem whose goal is to find the transition matrix that maximizes the CRF performance on a set of labelled data. The objective functions underlying the optimization procedure can be formulated in terms of different accuracy metrics, such as overall and average class accuracy per crop or phenological stage. To validate the proposed approach, experiments were carried out on a dataset consisting of 12 co-registered LANDSAT images of a region in the southeast of Brazil. Pattern Search was used as the optimization algorithm. The experimental results demonstrate that the proposed method substantially outperforms estimates based on joint or conditional class transition probabilities, which rely on training samples.
Sample size and power for a stratified doubly randomized preference design.
Cameron, Briana; Esserman, Denise A
2016-11-21
The two-stage (or doubly) randomized preference trial design is an important tool for researchers seeking to disentangle the role of patient treatment preference on treatment response through estimation of selection and preference effects. Up until now, these designs have been limited by their assumption of equal preference rates and effect sizes across the entire study population. We propose a stratified two-stage randomized trial design that addresses this limitation. We begin by deriving stratified test statistics for the treatment, preference, and selection effects. Next, we develop a sample size formula for the number of patients required to detect each effect. The properties of the model and the efficiency of the design are established using a series of simulation studies. We demonstrate the applicability of the design using a study of Hepatitis C treatment modality, specialty clinic versus mobile medical clinic. In this example, a stratified preference design (stratified by alcohol/drug use) may more closely capture the true distribution of patient preferences and allow for a more efficient design than a design which ignores these differences (unstratified version). © The Author(s) 2016.
2010-03-01
AFRL-RY-HS-TR-2010-0029: Remarks on the Radiative Transfer Approach to Scattering of Electromagnetic Waves in Layered Random Media (AFRL in-house technical report).
Control capacity and a random sampling method in exploring controllability of complex networks.
Jia, Tao; Barabási, Albert-László
2013-01-01
Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participations. Here we introduce control capacity quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. This algorithm not only provides a statistical estimate of the control capacity, but also bridges the gap between multiple microscopic control configurations and macroscopic properties of the network under control. We demonstrate that the possibility of being a driver node decreases with a node's in-degree and is independent of its out-degree. Given the inherent multiplicity of MDS's, our findings offer tools to explore control in various complex systems.
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, and sleep problems. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, a simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS-SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90%, 99.80%, and 100% for classification accuracy, sensitivity, and specificity, respectively.
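The SRS step can be illustrated with a toy sketch: random subsets of a signal segment are summarized by simple time-domain statistics, yielding a fixed-length feature vector. The subset sizes and the particular statistics below are illustrative assumptions, not the authors' exact choices:

```python
import random
import statistics

def srs_features(signal, n_groups=4, group_size=64, seed=0):
    """Simple-random-sampling (SRS) feature extraction, illustrative sketch:
    draw several random subsets of a 1-D EEG segment and summarize each
    subset with basic time-domain statistics."""
    rng = random.Random(seed)
    feats = []
    for _ in range(n_groups):
        subset = rng.sample(list(signal), group_size)
        feats += [statistics.mean(subset), statistics.stdev(subset),
                  min(subset), max(subset)]
    return feats   # n_groups * 4 features, independent of segment length

# Toy "EEG segment": white noise stands in for a real recording
gen = random.Random(1)
segment = [gen.gauss(0.0, 1.0) for _ in range(4096)]
features = srs_features(segment)
print(len(features))   # 16
```

In the full pipeline described above, vectors like this would then pass through SFS for feature selection before classification with an LS-SVM.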
A hybrid partial least squares and random forest approach to ...
African Journals Online (AJOL)
Up to date forest inventory data has become increasingly essential for sustainable planning and management of a commercial forest plantation. Forest inventory data may be collected in the form of traditional field based approaches or using remote sensing techniques. The aim of this study was to examine the utility of the ...
Rational and random approaches to adenoviral vector engineering
Uil, Taco Gilles
2011-01-01
The overall aim of this thesis is to contribute to the engineering of more selective and effective oncolytic Adenovirus (Ad) vectors. Two general approaches are taken for this purpose: (i) genetic capsid modification to achieve Ad retargeting (Chapters 2 to 4), and (ii) directed evolution to improve
An effective Hamiltonian approach to quantum random walk
Indian Academy of Sciences (India)
2017-02-09
We showed that in the case of a two-step walk, the time evolution operator can effectively have a multiplicative form. In the case of a square lattice, the quantum walk has been studied computationally for different coins, and the results for both the additive and the multiplicative approaches have been compared.
Random matrix theory approach to vibrations near the jamming transition
Beltukov, Y. M.
2015-03-01
It has been shown that the dynamical matrix M describing harmonic oscillations in granular media can be represented in the form M = AAᵀ, where the rows of the matrix A correspond to the degrees of freedom of individual granules and its columns correspond to elastic contacts between granules. Such a representation of the dynamical matrix makes it possible to estimate the density of vibrational states with the use of random matrix theory. The resulting density of vibrational states is approximately constant in a wide frequency range ω₋ < ω < ω₊, which is determined by the ratio of the number of degrees of freedom to the total number of contacts in the system, in good agreement with the results of numerical experiments.
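The spectral statement can be reproduced numerically: for a random contact matrix A, the eigenvalues of M = AAᵀ follow the Marchenko-Pastur law, so the vibrational frequencies ω = √λ fill a band whose edges depend only on the ratio of degrees of freedom to contacts. A minimal sketch, with the caveat that a dense Gaussian A is our simplifying assumption (real contact matrices are sparse):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 1500                 # degrees of freedom vs. elastic contacts
A = rng.normal(size=(N, K)) / np.sqrt(K)
M = A @ A.T                       # M = A A^T is positive semi-definite
lam = np.linalg.eigvalsh(M)
omega = np.sqrt(np.clip(lam, 0.0, None))   # vibrational frequencies

# Marchenko-Pastur prediction: frequencies fill the band
#   omega_- = 1 - sqrt(N/K)  <  omega  <  omega_+ = 1 + sqrt(N/K),
# i.e. the band edges are set by the ratio N/K alone.
r = N / K
w_lo, w_hi = 1 - np.sqrt(r), 1 + np.sqrt(r)
inside = np.mean((omega > w_lo - 0.05) & (omega < w_hi + 0.05))
print(round(w_lo, 3), round(w_hi, 3), inside)
```

Essentially all frequencies fall in the predicted band, consistent with the wide, approximately flat range ω₋ < ω < ω₊ described above.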
Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.
2016-01-01
Objective: Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first attention bias modification (ABM) trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomenon of attention bias in a more ecologically valid, dynamic way than traditional attention bias scores. Method: Adult participants (N=86; 41% female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results: Multilevel models estimated the trajectories for each measure within individuals, and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were no significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions: These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than including only traditional bias scores. PMID:27591918
Brus, D.J.; Gruijter, de J.J.
1997-01-01
Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based
A hybrid partial least squares and random forest approach to ...
African Journals Online (AJOL)
Nicole Reddy
Satellite-based remote sensing data have been used to predict ... (Equation 1). The linear regression model is then fit to the latent variables, known as the PLS factors, in an orthogonal space (M) (Equation 2). ... The root mean square error (RMSE) for the validation sample data was computed (Equation 3).
Chaudhuri, Arijit
2014-01-01
Table of contents (abridged): Exposure to Sampling: Concepts of Population, Sample, and Sampling. Initial Ramifications: Sampling Design, Sampling Scheme; Random Numbers and Their Uses in Simple Random Sampling (SRS); Drawing Simple Random Samples with and without Replacement; Estimation of Mean, Total, Ratio of Totals/Means: Variance and Variance Estimation; Determination of Sample Sizes; Appendix to Chapter 2: More on Equal Probability Sampling; Horvitz-Thompson Estimator; Sufficiency; Likelihood; Non-Existence Theorem. More Intricacies: Unequal Probability Sampling Strategies; PPS Sampling. Exploring Improved Ways: Stratified Sampling; Cluster Sampling; Multi-Stage Sampling; Multi-Phase Sampling: Ratio and Regression Estimation; Controlled Sampling. Modeling: Super-Population Modeling; Prediction Approach; Model-Assisted Approach; Bayesian Methods; Spatial Smoothing; Sampling on Successive Occasions: Panel Rotation; Non-Response and Not-at-Homes; Weighting Adj...
Active music therapy approach in amyotrophic lateral sclerosis: a randomized-controlled trial.
Raglio, Alfredo; Giovanazzi, Elena; Pain, Debora; Baiardi, Paola; Imbriani, Chiara; Imbriani, Marcello; Mora, Gabriele
2016-12-01
This randomized controlled study assessed the efficacy of active music therapy (AMT) on anxiety, depression, and quality of life in amyotrophic lateral sclerosis (ALS). Communication and relationship during AMT treatment were also evaluated. Thirty patients were assigned randomly to experimental [AMT plus standard of care (SC)] or control (SC) groups. AMT consisted of 12 sessions (three times a week), whereas the SC treatment was based on physical and speech rehabilitation sessions, occupational therapy, and psychological support. ALS Functional Rating Scale-Revised, Hospital Anxiety and Depression Scale, McGill Quality of Life Questionnaire, and Music Therapy Rating Scale were administered to assess functional, psychological, and music therapy outcomes. The AMT group improved significantly in McGill Quality of Life Questionnaire global scores (P=0.035) and showed a positive trend in nonverbal and sonorous-music relationship during the treatment. Further studies involving larger samples in a longer AMT intervention are needed to confirm the effectiveness of this approach in ALS.
Song, Zhuoyi; Zhou, Yu; Juusola, Mikko
2016-01-01
Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
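The scaling argument here (multi-photon coincidences are rare when microvilli vastly outnumber the photons arriving within one coincidence window) can be checked with a toy Poisson calculation. The photon counts below are hypothetical, chosen only to contrast many vs. few sampling units:

```python
import math

def sublinear_loss(photons, microvilli):
    """Fraction of photon hits that land on an already-hit sampling unit,
    i.e. the gain reduction from quantum-gain-nonlinearity, assuming
    photons land uniformly at random (Poisson approximation)."""
    lam = photons / microvilli            # mean hits per microvillus
    return 1 - (1 - math.exp(-lam)) / lam  # 1 - (distinct units hit)/(photons)

print(sublinear_loss(300, 30_000))   # ~0.005: marginal with many microvilli
print(sublinear_loss(300, 300))      # ~0.37: severe with few sampling units
```

With tens of thousands of microvilli the coincidence loss stays below 1%, consistent with the marginal contribution reported above, while cells with few sampling units lose a large fraction of their gain as light brightens.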
Mohsenzadeh, Yalda; Sheikhzadeh, Hamid; Reza, Ali M; Bathaee, Najmehsadat; Kalayeh, Mahdi M
2013-12-01
This paper introduces a novel sparse Bayesian machine-learning algorithm for embedded feature selection in classification tasks. Our proposed algorithm, called the relevance sample feature machine (RSFM), is able to simultaneously choose the relevance samples and also the relevance features for regression or classification problems. We propose a separable model in feature and sample domains. Adopting a Bayesian approach and using Gaussian priors, the learned model by RSFM is sparse in both sample and feature domains. The proposed algorithm is an extension of the standard RVM algorithm, which only opts for sparsity in the sample domain. Experimental comparisons on synthetic as well as benchmark data sets show that RSFM is successful in both feature selection (eliminating the irrelevant features) and accurate classification. The main advantages of our proposed algorithm are: less system complexity, better generalization and avoiding overfitting, and less computational cost during the testing stage.
Fractional calculus approach to the statistical characterization of random variables and vectors
Cottone, Giulio; Di Paola, Mario; Metzler, Ralf
2010-03-01
Fractional moments have been investigated by many authors to represent the density of univariate and bivariate random variables in different contexts. Fractional moments are indeed important when the density of the random variable has inverse power-law tails and consequently lacks integer-order moments. In this paper, starting from the Mellin transform of the characteristic function and using fractional calculus methods, we present a new perspective on the statistics of random variables. Introducing the class of complex moments, which includes both integer and fractional moments, we show that every random variable can be represented within this approach, even if its integer moments diverge. Applications to the statistical characterization of raw data and to the representation of both random variables and vectors are provided, showing that the good numerical convergence makes the proposed approach a reliable tool for practical data analysis.
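A minimal sketch of the standard Mellin-transform relations this abstract builds on, for a positive random variable with density p(x); the notation here is assumed for illustration, not taken from the paper itself:

```latex
% Complex moments as the Mellin transform of the density:
M_p(\gamma) = \int_0^{\infty} x^{\gamma-1}\, p(x)\, \mathrm{d}x
            = \mathbb{E}\!\left[X^{\gamma-1}\right],
\qquad \gamma = \rho + i\eta \in \mathbb{C}.

% Inverse Mellin transform: complex moments along a vertical line
% in the strip of convergence recover the density,
p(x) = \frac{1}{2\pi i} \int_{\rho - i\infty}^{\rho + i\infty}
       x^{-\gamma}\, M_p(\gamma)\, \mathrm{d}\gamma .
```

The point of the second relation is that moments of complex order $\gamma$ along one vertical line can determine the density even when the integer moments $\mathbb{E}[X^k]$ diverge, as happens for power-law tails.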
Exploring multicollinearity using a random matrix theory approach.
Feher, Kristen; Whelan, James; Müller, Samuel
2012-01-01
Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with `low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
Bouhanick, B; Berrut, G; Chameau, A M; Hallar, M; Bled, F; Chevet, B; Vergely, J; Rohmer, V; Fressinaud, P; Marre, M
1992-01-01
The predictive value of a random urine sample taken during an outpatient visit for predicting persistent microalbuminuria was studied in 76 Type 1, insulin-dependent diabetic subjects, 61 Type 2, non-insulin-dependent diabetic subjects, and 72 Type 2, insulin-treated diabetic subjects. Seventy-six patients attended the outpatient clinic during the morning, and 133 during the afternoon. Microalbuminuria was suspected if urinary albumin excretion (UAE) exceeded 20 mg/l. All patients were hospitalized within 6 months following the outpatient visit, and persistent microalbuminuria was diagnosed if UAE was between 30 and 300 mg/24 h on 2-3 occasions in 3 urine samples. Of these 209 subjects, eighty-three were also screened with Microbumintest (Ames-Bayer), a semi-quantitative method. Among the 209 subjects, 71 were positive both for microalbuminuria during the outpatient visit and for persistent microalbuminuria during hospitalization: sensitivity 91.0%, specificity 83.2%, concordance 86.1%, and positive predictive value 76.3% (chi-squared test: 191; p < 10^-4). Results did not differ between subjects examined in the morning and in the afternoon. Among the 83 subjects also screened with Microbumintest, 22 displayed both a positive reaction and persistent microalbuminuria: sensitivity 76%, specificity 81%, concordance 80%, and positive predictive value 69% (chi-squared test: 126; p < 10^-4). Both types of screening appeared equally effective during the outpatient visit. Hence, persistent microalbuminuria can be predicted during an outpatient visit in a diabetic clinic.
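The reported figures follow from a standard 2x2 screening table. A minimal sketch of that arithmetic; the cell counts below are illustrative values reconstructed to be consistent with the percentages quoted in the abstract, not cell counts taken from the paper:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard 2x2 diagnostic metrics for a screening test."""
    sensitivity = tp / (tp + fn)          # positives correctly flagged
    specificity = tn / (tn + fp)          # negatives correctly cleared
    concordance = (tp + tn) / (tp + fp + fn + tn)  # overall agreement
    ppv = tp / (tp + fp)                  # positive predictive value
    return sensitivity, specificity, concordance, ppv

# Hypothetical counts (total 209, 71 true positives) chosen so the
# metrics match the abstract's 91.0 / 83.2 / 86.1 / 76.3 percentages:
sens, spec, conc, ppv = screening_metrics(tp=71, fp=22, fn=7, tn=109)
```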
Effectiveness of hand hygiene education among a random sample of women from the community.
Ubheeram, J; Biranjia-Hurdoyal, S D
2017-03-01
The effectiveness of hand hygiene education was investigated by studying hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. A better hand hygiene awareness score was significantly associated with age, scarce bacterial growth and absence of potential pathogens. Of the 170 hand samples, bacterial growth was noted in 155 (91.2%), which included 91 (53.5%) with heavy growth, 53 (31.2%) with moderate growth and 11 (6.47%) with scanty growth. The presence of enteric bacteria was associated with long nails (49.4% vs 29.2%; p = 0.007; OR = 2.3; 95% CI: 1.25-4.44), while finger rings were associated with higher bacterial load (p = 0.003). Coliforms were significantly more frequent among women who had a lower hand hygiene awareness score, washed their hands less frequently (59.0% vs 32.8%; p = 0.003; OR = 2.9; 95% CI: 1.41-6.13) and used plain soap rather than antiseptic soap (69.7% vs 30.3%; p = 0.000; OR = 4.11; 95% CI: 1.67-10.12). The level of hand hygiene awareness among the participants was satisfactory, but compliance with hand washing practice was not, especially among the elders.
Association between stalking victimisation and psychiatric morbidity in a random community sample.
Purcell, Rosemary; Pathé, Michele; Mullen, Paul E
2005-11-01
No studies have assessed psychopathology among victims of stalking who have not sought specialist help. To examine the associations between stalking victimisation and psychiatric morbidity in a representative community sample. A random community sample (n=1844) completed surveys examining the experience of harassment and current mental health. The 28-item General Health Questionnaire (GHQ-28) and the Impact of Event Scale were used to assess symptomatology in those reporting brief harassment (n=196) or protracted stalking (n=236) and a matched control group reporting no harassment (n=432). Rates of caseness on the GHQ-28 were higher among stalking victims (36.4%) than among controls (19.3%) and victims of brief harassment (21.9%). Psychiatric morbidity did not differ according to the recency of victimisation, with 34.1% of victims meeting caseness criteria 1 year after stalking had ended. In a significant minority of victims, stalking victimisation is associated with psychiatric morbidity that may persist long after it has ceased. Recognition of the immediate and long-term impacts of stalking is necessary to assist victims and help alleviate distress and long-term disability.
Random sample community-based health surveys: does the effort to reach participants matter?
Messiah, Antoine; Castro, Grettel; Rodríguez de la Vega, Pura; Acuna, Juan M
2014-12-15
Conducting health surveys with community-based random samples is essential to capture an otherwise unreachable population, but these surveys can be biased if the effort to reach participants is insufficient. This study determines the desirable amount of effort to minimise such bias. A household-based health survey with random sampling and face-to-face interviews was conducted; up to 11 visits, organised by canvassing rounds, were made to obtain an interview. The setting was single-family homes in an underserved and understudied population in North Miami-Dade County, Florida, USA. Of a probabilistic sample of 2200 household addresses, 30 corresponded to empty lots, 74 were abandoned houses, 625 households declined to participate and 265 could not be reached and interviewed within 11 attempts. Analyses were performed on the 1206 remaining households. Each household was asked if any of their members had been told by a doctor that they had high blood pressure, heart disease including heart attack, cancer, diabetes, anxiety/depression, obesity or asthma. Responses to these questions were analysed by the number of visit attempts needed to obtain the interview. Return per visit fell below 10% after four attempts, below 5% after six attempts and below 2% after eight attempts. As the effort increased, household size decreased, while household income and the percentage of interviewees active and employed increased; the proportion of each of the seven health conditions decreased, four of them significantly: heart disease from 20.4% to 9.2%, high blood pressure from 63.5% to 58.1%, anxiety/depression from 24.4% to 9.2% and obesity from 21.8% to 12.6%. Beyond the fifth attempt, however, cumulative percentages varied by less than 1% and precision varied by less than 0.1%. In spite of the early and steep drop, sustaining at least five attempts to reach participants is necessary to reduce selection bias. Published by the BMJ Publishing Group Limited.
Random matrix approach to the dynamics of stock inventory variations
Zhou, Wei-Xing; Mu, Guo-Hua; Kertész, János
2012-09-01
It is well accepted that investors can be classified into groups owing to distinct trading strategies, which forms the basic assumption of many agent-based models for financial markets when agents are not zero-intelligent. However, empirical tests of these assumptions are still very rare due to the lack of order flow data. Here we adopt the order flow data of Chinese stocks to tackle this problem by investigating the dynamics of inventory variations for individual and institutional investors, which contain rich information about the trading behavior of investors and have a crucial influence on price fluctuations. We find that the distributions of the cross-correlation coefficient C_ij have power-law forms in the bulk that are followed by exponential tails, and there are more positive coefficients than negative ones. In addition, it is more likely that two individuals or two institutions have a stronger inventory variation correlation than one individual and one institution. We find that the largest and the second largest eigenvalues (λ_1 and λ_2) of the correlation matrix cannot be explained by random matrix theory, and the projections of investors' inventory variations on the first eigenvector u(λ_1) are linearly correlated with stock returns, where individual investors play a dominant role. The investors are classified into three categories based on the cross-correlation coefficients C_VR between inventory variations and stock returns. A strong Granger causality is unveiled from stock returns to inventory variations, which means that a large proportion of individuals hold the reversing trading strategy and a small part of individuals hold the trending strategy. Our empirical findings have scientific significance in the understanding of investors' trading behavior and in the construction of agent-based models for emerging stock markets.
Blind Measurement Selection: A Random Matrix Theory Approach
Elkhalil, Khalil
2016-12-14
This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values of the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution to the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and support the efficiency of the proposed blind methods in reaching the performance of channel-aware algorithms.
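A generic greedy measurement-selection sketch in the spirit of the abstract's second algorithm. The error measure used here (log-determinant of the selected Gram matrix, a common proxy for least-squares estimation error) and the ridge term are assumptions for illustration, not the paper's exact criteria:

```python
import numpy as np

def greedy_select(H, k, ridge=1e-6):
    """Greedily pick k of the n rows of H (n x m) to maximize
    log det(H_S^T H_S), a proxy for low parameter-estimation error.
    A sketch only; the paper works with its own (asymptotic) measures."""
    n, m = H.shape
    selected, remaining = [], list(range(n))
    for _ in range(k):
        best, best_val = None, -np.inf
        for i in remaining:
            S = selected + [i]
            G = H[S].T @ H[S] + ridge * np.eye(m)  # regularized Gram matrix
            val = np.linalg.slogdet(G)[1]          # log |det G|
            if val > best_val:
                best, best_val = i, val
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
H = rng.standard_normal((20, 3))   # 20 candidate measurements, 3 parameters
rows = greedy_select(H, k=5)
```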
Sustar-Vozlic, Jelka; Rostohar, Katja; Blejec, Andrej; Kozjak, Petra; Cergan, Zoran; Meglic, Vladimir
2010-03-01
In order to comply with the European Union regulatory threshold for the adventitious presence of genetically modified organisms (GMOs) in food and feed, it is important to trace GMOs from the field. Appropriate sampling methods are needed to accurately predict the presence of GMOs at the field level. A 2-year field experiment with two maize varieties differing in kernel colour was conducted in Slovenia. Based on the results of data mining analyses and modelling, it was concluded that spatial relations between the donor and receptor field were the most important factors influencing the distribution of outcrossing rate (OCR) in the field. The approach for estimation fitting function parameters in the receptor (non-GM) field at two distances from the donor (GM) field (10 and 25 m) for estimation of the OCR (GMO content) in the whole receptor field was developed. Different sampling schemes were tested; a systematic random scheme in rows was proposed to be applied for sampling at the two distances for the estimation of fitting function parameters for determination of OCR. The sampling approach had already been validated with some other OCR data and was practically applied in the 2009 harvest in Poland. The developed approach can be used for determination of the GMO presence at the field level and for making appropriate labelling decisions. The importance of this approach lies in its possibility to also address other threshold levels beside the currently prescribed labelling threshold of 0.9% for food and feed.
Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model
Dean, David S.; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb
2016-09-01
The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting a logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0, L) and then periodically repeated over the whole real line, and study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f) ~ A/f^2, i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with a finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary order k, negative or positive, and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).
Kaye, Linda K.; Brewer, Gayle
2013-01-01
The current study examined approaches to teaching in a postgraduate psychology sample. This included considering teaching-focused (information transfer) and student-focused (conceptual changes in understanding) approaches to teaching. Postgraduate teachers of psychology (N = 113) completed a questionnaire measuring their use of a teacher- or…
Discriminative motif discovery via simulated evolution and random under-sampling.
Directory of Open Access Journals (Sweden)
Tao Song
Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the data preprocessing stage, and at the Hidden Markov Model (HMM) training stage, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. It is shown that, in the task of discovering targeting motifs of nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that ignore the data imbalance problem, and recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
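Random under-sampling of the majority class, as used above for the positive/negative imbalance, can be sketched in a few lines. The sequence names below are placeholders, and the "discard at random until classes balance" policy is the generic textbook form of the technique, not the paper's exact pipeline:

```python
import random

def random_undersample(pos, neg, seed=0):
    """Randomly discard majority-class examples so both classes
    contribute equally to training (generic sketch of the idea)."""
    rng = random.Random(seed)
    if len(neg) > len(pos):
        neg = rng.sample(neg, len(pos))   # keep a random subset of negatives
    elif len(pos) > len(neg):
        pos = rng.sample(pos, len(neg))   # or of positives, if they dominate
    return pos, neg

pos = ["pos_%d" % i for i in range(50)]    # hypothetical positive sequences
neg = ["neg_%d" % i for i in range(500)]   # hypothetical negative sequences
p, n = random_undersample(pos, neg)        # both classes now size 50
```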
Stumpf, Felix; Schmidt, Karsten; Behrens, Thorsten; Schoenbrodt-Stitt, Sarah; Scholten, Thomas
2014-05-01
One crucial component of a Digital Soil Mapping (DSM) framework is the set of geo-referenced soil observations. Nevertheless, highly informative legacy soil information, acquired by traditional soil surveys, is often neglected owing to a lack of accordance with specific statistical DSM designs. The focus of this study is to integrate legacy data into a state-of-the-art DSM approach based on a modified conditioned Latin Hypercube Sampling (cLHS) design and Random Forest. Furthermore, by means of the cLHS modification, the scope of the actually unique cLHS sampling locations is widened in order to compensate for limited accessibility in the field, while the maximally stratified cLHS design is not diluted by the modification. The target variables of the modelling are the sand and clay fractions. The study site is a small mountainous hydrological catchment of 4.2 km² in the reservoir of the Three Gorges Dam in Central China. The modification is accomplished by demarcating the histogram borders of each cLHS stratum, which are based on the multivariate cLHS feature space. Thereby, all potential sample locations per stratum are identified. This provides a possibility to integrate legacy data samples that match one of the newly created sample locations, and flexibility with respect to field accessibility. Consequently, six legacy data samples, taken from a total sample size of n = 30, were integrated into the sampling design, and for all strata several potential sample locations were identified. The comparability of the modified and standard cLHS data sets is confirmed by (i) identifying their feature space coverage with respect to the cLHS stratifying variables, and (ii) assessing the Random Forest accuracy estimates.
A sub-sampled approach to extremely low-dose STEM
Energy Technology Data Exchange (ETDEWEB)
Stevens, A. [OptimalSensing, Southlake, Texas 76092, USA; Duke University, ECE, Durham, North Carolina 27708, USA; Luzi, L. [Rice University, ECE, Houston, Texas 77005, USA; Yang, H. [Lawrence Berkeley National Laboratory, Berkeley, California 94720, USA; Kovarik, L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Mehdi, B. L. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom; Liyu, A. [Pacific NW National Laboratory, Richland, Washington 99354, USA; Gehm, M. E. [Duke University, ECE, Durham, North Carolina 27708, USA; Browning, N. D. [Pacific NW National Laboratory, Richland, Washington 99354, USA; University of Liverpool, Materials Engineering, Liverpool L69 3GH, United Kingdom
2018-01-22
The inpainting of randomly sub-sampled images acquired by scanning transmission electron microscopy (STEM) is an attractive method for imaging under low-dose conditions (≤ 1 e^{-}Å^{2}) without changing either the operation of the microscope or the physics of the imaging process. We show that 1) adaptive sub-sampling increases acquisition speed, resolution, and sensitivity; and 2) random (non-adaptive) sub-sampling is equivalent, but faster than, traditional low-dose techniques. Adaptive sub-sampling opens numerous possibilities for the analysis of beam sensitive materials and in-situ dynamic processes at the resolution limit of the aberration corrected microscope and is demonstrated here for the analysis of the node distribution in metal-organic frameworks (MOFs).
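The random (non-adaptive) sub-sampling described above amounts to visiting only a random fraction of probe positions, which reduces dose proportionally; the skipped pixels are then inpainted. A minimal sketch of generating such a scan mask; the function name and the uniform Bernoulli sampling are illustrative assumptions, not the paper's acquisition code:

```python
import numpy as np

def random_scan_mask(ny, nx, fraction, seed=0):
    """Boolean mask of probe positions for random sub-sampled STEM:
    True pixels are visited, so the delivered dose scales with
    `fraction` relative to a full raster scan."""
    rng = np.random.default_rng(seed)
    return rng.random((ny, nx)) < fraction

mask = random_scan_mask(64, 64, fraction=0.2)  # visit ~20% of pixels
dose_ratio = mask.mean()                        # fraction of full-scan dose
```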
Directory of Open Access Journals (Sweden)
Gunter Spöck
2015-05-01
Recently, Spock and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spock and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
Practical Approaches For Determination Of Sample Size In Paired Case-Control Studies
Demirel, Neslihan; Ozlem EGE ORUC; Gurler, Selma
2016-01-01
Objective: Cross-over designs and paired case-control studies used in clinical research are experimental designs that require dependent samples. Sample size determination is generally a difficult step in planning the statistical design. The aim of this study is to provide researchers with a practical approach for determining the sample size in paired case-control studies. Material and Methods: In this study, determination of sample size is discussed in detail ...
Zhang, Xiaojia Shelly; de Sturler, Eric; Paulino, Glaucio H.
2017-10-01
We propose an efficient probabilistic method to solve a deterministic problem: a randomized optimization approach that drastically reduces the enormous computational cost of optimizing designs under many load cases, for both continuum and truss topology optimization. Practical structural designs by topology optimization typically involve many load cases, possibly hundreds or more. The optimal design minimizes a, possibly weighted, average of the compliance under each load case (or some other objective). This means that in each optimization step a large finite element problem must be solved for each load case, leading to an enormous computational effort. In contrast, the proposed randomized optimization method with stochastic sampling requires the solution of only a few (e.g., 5 or 6) finite element problems (large linear systems) per optimization step. Building on simulated annealing, we introduce a damping scheme for the randomized approach. Through numerical examples in two and three dimensions, we demonstrate that the stochastic algorithm drastically reduces the computational cost needed to obtain similar final topologies and results (e.g., compliance) compared with the standard algorithms. The results indicate that the damping scheme is effective and leads to rapid convergence of the proposed algorithm.
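The "few solves instead of many" idea can be illustrated with a Hutchinson-style randomized trace estimator: the average compliance over M load cases equals (1/M) trace(F^T K^{-1} F), which can be estimated from a handful of solves against random +/-1 combinations of the load cases. The toy matrices below stand in for the stiffness matrix and load cases and are assumptions for illustration, not the paper's finite element problems:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins: an SPD "stiffness" matrix K and M = 100 load cases F.
m, M = 40, 100
A = rng.standard_normal((m, m))
K = A @ A.T + m * np.eye(m)          # symmetric positive definite
F = rng.standard_normal((m, M))

# Exact average compliance: (1/M) * sum_i f_i^T K^{-1} f_i  (M solves).
exact = np.trace(F.T @ np.linalg.solve(K, F)) / M

# Randomized estimate: only n_s solves, each against a random
# Rademacher (+/-1) combination of all load cases.
n_s = 6
est = 0.0
for _ in range(n_s):
    w = rng.choice([-1.0, 1.0], size=M)
    g = F @ w / np.sqrt(M)           # combined load vector
    est += g @ np.linalg.solve(K, g)
est /= n_s
```

With the estimator unbiased, a few samples per optimization step already track the exact average closely, which is what makes the cost reduction possible.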
A Selective Dynamic Sampling Back-Propagation Approach for Handling the Two-Class Imbalance Problem
Directory of Open Access Journals (Sweden)
Roberto Alejo
2016-07-01
In this work, we developed a Selective Dynamic Sampling Approach (SDSA) to deal with the class imbalance problem. It is based on the idea of using only the most appropriate samples during the neural network training stage. The "average samples" are the best to train the neural network: they are neither hard nor easy to learn, and they can improve classifier performance. The experimental results show that the proposed method deals successfully with the two-class imbalance problem. It is very competitive with respect to well-known over-sampling and dynamic sampling approaches, often outperforming the under-sampling and standard back-propagation methods. SDSA is a very simple and efficient method for automatically selecting the most appropriate ("average") samples during back-propagation training. In the training stage, SDSA uses significantly fewer samples than the popular over-sampling approaches, and even fewer than standard back-propagation trained with the original dataset.
Kazemzadeh, Farnoud; Shafiee, Mohammad J.; Wong, Alexander; Clausi, David A.
2014-09-01
The prevalence of compressive sensing is continually growing in all facets of imaging science. Compressive sensing allows for the capture and reconstruction of an entire signal from a sparse (under-sampled), yet sufficient, set of measurements that is representative of the target being observed. This compressive sensing strategy reduces the duration of the data capture, the size of the acquired data, and the cost and complexity of the imaging hardware while preserving the necessary underlying information. Compressive sensing systems require the accompaniment of advanced reconstruction algorithms to reconstruct complete signals from the sparse measurements made. Here, a new reconstruction algorithm is introduced specifically for the reconstruction of compressive multispectral (MS) sensing data that allows for high-quality reconstruction from acquisitions at sub-Nyquist rates. We propose a multilayered conditional random field (MCRF) model, which extends the CRF model by incorporating two joint layers of certainty and estimated states. The proposed algorithm treats the reconstruction of each spectral channel as an MCRF given the sparse MS measurements. Since the observations are incomplete, the MCRF incorporates an extra layer determining the certainty of the measurements. The proposed MCRF approach was evaluated using simulated compressive MS data acquisitions, and is shown to enable fast acquisition of MS sensing data with reduced imaging hardware cost and complexity.
Directory of Open Access Journals (Sweden)
Timothy C. Guetterman
2015-05-01
Although recommendations exist for determining qualitative sample sizes, the literature appears to contain few instances of research on the topic. Practical guidance is needed for determining sample sizes to conduct rigorous qualitative research, to develop proposals, and to budget resources. The purpose of this article is to describe qualitative sample size and sampling practices within published studies in education and the health sciences by research design: case study, ethnography, grounded theory methodology, narrative inquiry, and phenomenology. I analyzed the 51 most highly cited studies using predetermined content categories and noteworthy sampling characteristics that emerged. In brief, the findings revealed a mean sample size of 87. Less than half of the studies identified a sampling strategy. I include a description of findings by approach and recommendations for sampling to assist methodologists, reviewers, program officers, graduate students, and other qualitative researchers in understanding qualitative sampling practices in recent studies. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs1502256
Mendelian Randomization as an Approach to Assess Causality Using Observational Data.
Sekula, Peggy; Del Greco M, Fabiola; Pattaro, Cristian; Köttgen, Anna
2016-11-01
Mendelian randomization refers to an analytic approach to assess the causality of an observed association between a modifiable exposure or risk factor and a clinically relevant outcome. It presents a valuable tool, especially when randomized controlled trials to examine causality are not feasible and observational studies provide biased associations because of confounding or reverse causality. These issues are addressed by using genetic variants as instrumental variables for the tested exposure: the alleles of this exposure-associated genetic variant are randomly allocated and not subject to reverse causation. This, together with the wide availability of published genetic associations to screen for suitable genetic instrumental variables, makes Mendelian randomization a time- and cost-efficient approach and contributes to its increasing popularity for assessing and screening for potentially causal associations. An observed association between the genetic instrumental variable and the outcome supports the hypothesis that the exposure in question is causally related to the outcome. This review provides an overview of the Mendelian randomization method, addresses assumptions and implications, and includes illustrative examples. We also discuss special issues in nephrology, such as inverse risk factor associations in advanced disease, and outline opportunities to design Mendelian randomization studies around kidney function and disease. Copyright © 2016 by the American Society of Nephrology.
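For a single genetic instrument, the simplest Mendelian randomization estimator is the Wald ratio. A minimal sketch with a first-order delta-method standard error; the input effect sizes below are made-up illustrative numbers, and this is one common estimator rather than the review's only method:

```python
def wald_ratio(beta_gx, se_gx, beta_gy, se_gy):
    """Wald-ratio MR estimate for one instrument:
    causal effect ~ beta(G -> outcome) / beta(G -> exposure).
    The SE keeps only the leading delta-method term, ignoring
    uncertainty in beta_gx (se_gx is listed for completeness)."""
    est = beta_gy / beta_gx
    se = abs(se_gy / beta_gx)
    return est, se

# Hypothetical summary statistics: variant raises the exposure by 0.10
# units per allele and the outcome by 0.05 units per allele.
est, se = wald_ratio(beta_gx=0.10, se_gx=0.01, beta_gy=0.05, se_gy=0.02)
```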
Experiments with central-limit properties of spatial samples from locally covariant random fields
Barringer, T.H.; Smith, T.E.
1992-01-01
When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.
RIJCKEN, B; SCHOUTEN, JP; WEISS, ST; ROSNER, B; DEVRIES, K; VANDERLENDE, R
1993-01-01
Long-term variability of bronchial responsiveness has been studied in a random population sample of adults. During a follow-up period of 18 yr, 2,216 subjects contributed 5,012 observations to the analyses. Each subject could have as many as seven observations. Bronchial responsiveness was assessed
Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia
Directory of Open Access Journals (Sweden)
Fady S. Moiety
2014-06-01
Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.
Optimization of Integrative Passive Sampling Approaches for Use in the Epibenthic Environment
2016-12-23
...manufactured containing PRCs, including 13C-caffeine. Calibration studies were conducted under static and flow conditions. The relationship of...approaches for UXO sites. Benefits: The improved ability to accurately sample and quantify concentrations of MCs, and other moderately polar organic...uptake of caffeine concurrent with MCs. The sampler did not integratively sample caffeine by design; the concentration of caffeine on the sampler...
Datema, Frank R; Moya, Ana; Krause, Peter; Bäck, Thomas; Willmes, Lars; Langeveld, Ton; Baatenburg de Jong, Robert J; Blom, Henk M
2012-01-01
Electronic patient files generate an enormous amount of medical data. These data can be used for research, such as prognostic modeling. Automatization of statistical prognostication processes allows automatic updating of models when new data are gathered. The increased power behind an automated prognostic model makes its predictive capability more reliable. Cox proportional hazards regression is most frequently used in prognostication. Automatization of a Cox model is possible, but we expect the updating process to be time-consuming. A possible solution lies in an alternative modeling technique called random survival forests (RSFs). RSF is easily automated and is known to handle the proportionality assumption coherently and automatically. The performance of RSF has not yet been tested on a large head and neck oncological dataset. This study investigates the performance of RSF models for overall survival in head and neck cancer. Performances are compared to a Cox model as the "gold standard." RSF might be an interesting alternative modeling approach for automatization when performances are similar. RSF models were created in R (Cox also in SPSS). Four RSF splitting rules were used: log-rank, conservation of events, log-rank score, and log-rank approximation. Models were based on historical data of 1371 patients with primary head and neck cancer, diagnosed between 1981 and 1998. Models contain 8 covariates: tumor site, T classification, N classification, M classification, age, sex, prior malignancies, and comorbidity. Model performances were determined by Harrell's concordance error rate, in which 33% of the original data served as a validation sample. RSF and Cox models delivered similar error rates. The Cox model performed slightly better (error rate, 0.2826). The log-rank splitting approach gave the best RSF performance (error rate, 0.2873). In accord with Cox and RSF models, high T classification, high N classification, and severe comorbidity are very important covariates in the
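Harrell's concordance index, whose complement is the error rate used to compare the models above, can be computed by pairwise comparison. A minimal pure-Python sketch on hypothetical data (ignoring tied event times):

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C: fraction of comparable pairs in which the higher-risk
    subject fails earlier. events[i] is 1 if the failure time is observed,
    0 if censored. Tied risk scores count as half-concordant."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # The pair is comparable if subject i is observed to fail
            # before subject j's (event or censoring) time.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# Hypothetical survival times (months), event flags, and model risk scores
times  = [5, 10, 15, 20]
events = [1, 1, 0, 1]
risk   = [0.9, 0.7, 0.5, 0.2]
c = concordance_index(times, events, risk)
print(c)      # perfectly concordant toy data -> 1.0
print(1 - c)  # the "error rate" reported in the study -> 0.0
```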
Exploratory factor analysis with small sample sizes: a comparison of three approaches.
Jung, Sunho
2013-07-01
Exploratory factor analysis (EFA) has emerged in the field of animal behavior as a useful tool for determining and assessing latent behavioral constructs. Because the small sample size problem often occurs in this field, a traditional approach, unweighted least squares, has been considered the most feasible choice for EFA. Two new approaches were recently introduced in the statistical literature as viable alternatives to EFA when sample size is small: regularized exploratory factor analysis and generalized exploratory factor analysis. A simulation study is conducted to evaluate the relative performance of these three approaches in terms of factor recovery under various experimental conditions of sample size, degree of overdetermination, and level of communality. In this study, overdetermination and sample size are the meaningful conditions in differentiating the performance of the three approaches in factor recovery. Specifically, when there are a relatively large number of factors, regularized exploratory factor analysis tends to recover the correct factor structure better than the other two approaches. Conversely, when few factors are retained, unweighted least squares tends to recover the factor structure better. Finally, generalized exploratory factor analysis exhibits very poor performance in factor recovery compared to the other approaches. This tendency is particularly prominent as sample size increases. Thus, generalized exploratory factor analysis may not be a good alternative to EFA. Regularized exploratory factor analysis is recommended over unweighted least squares unless small expected number of factors is ensured. Copyright © 2013 Elsevier B.V. All rights reserved.
Analogies between colored Lévy noise and random channel approach to disordered kinetics
Vlad, Marcel O.; Velarde, Manuel G.; Ross, John
2004-02-01
We point out some interesting analogies between colored Lévy noise and the random channel approach to disordered kinetics. These analogies are due to the fact that the probability density of the Lévy noise source plays a similar role as the probability density of rate coefficients in disordered kinetics. Although the equations for the two approaches are not identical, the analogies can be used for deriving new, useful results for both problems. The random channel approach makes it possible to generalize the fractional Uhlenbeck-Ornstein processes (FUO) for space- and time-dependent colored noise. We describe the properties of colored noise in terms of characteristic functionals, which are evaluated by using a generalization of Huber's approach to complex relaxation [Phys. Rev. B 31, 6070 (1985)]. We start out by investigating the properties of symmetrical white noise and then define the Lévy colored noise in terms of a Langevin equation with a Lévy white noise source. We derive exact analytical expressions for the various characteristic functionals, which characterize the noise, and a functional fractional Fokker-Planck equation for the probability density functional of the noise at a given moment in time. Second, by making an analogy between the theory of colored noise and the random channel approach to disordered kinetics, we derive fractional equations for the evolution of the probability densities of the random rate coefficients in disordered kinetics. These equations serve as a basis for developing methods for the evaluation of the statistical properties of the random rate coefficients from experimental data. Special attention is paid to the analysis of systems for which the observed kinetic curves can be described by linear or nonlinear stretched exponential kinetics.
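A Langevin equation with a Lévy white-noise source can be illustrated numerically in its simplest special case. The sketch below assumes alpha = 1 (Cauchy) noise drawn by inverse-CDF sampling; the step size, damping, and noise scale are arbitrary illustrative choices, not parameters from the paper:

```python
import math
import random

def cauchy(rng):
    """Standard Cauchy variate via the inverse CDF. The Cauchy law is the
    symmetric alpha-stable (Levy) distribution with alpha = 1."""
    return math.tan(math.pi * (rng.random() - 0.5))

def levy_ou_path(n_steps, dt=0.01, gamma=1.0, scale=0.1, seed=1):
    """Euler sketch of the Langevin equation dx = -gamma*x*dt + noise,
    driven by Cauchy white noise. For alpha-stable noise the increment
    over dt scales as dt**(1/alpha); with alpha = 1 that is simply dt."""
    rng = random.Random(seed)
    x, path = 0.0, []
    for _ in range(n_steps):
        x += -gamma * x * dt + scale * dt * cauchy(rng)
        path.append(x)
    return path

path = levy_ou_path(1000)
print(len(path))  # 1000
```

Unlike a Gaussian Ornstein-Uhlenbeck path, this trajectory shows occasional large jumps, reflecting the heavy tails of the stable noise source.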
Lee, Chul-Ho; Eun, Do Young
2012-01-01
Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and the Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks, namely slow diffusion over the space, can cause poor estimation accuracy. In this paper, we propose the non-backtracking random walk with re-weighting (NBRW-rw) and the MH algorithm with delayed acceptance (MHDA), which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm,...
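The baseline MH algorithm for uniform node sampling can be sketched in a few lines: propose a uniformly chosen neighbor and accept with probability min(1, deg(current)/deg(proposed)). The toy star graph below is invented; on it, an unadjusted random walk would oversample the hub:

```python
import random
from collections import Counter

def mh_uniform_walk(adj, start, n_steps, seed=0):
    """Metropolis-Hastings random walk targeting the uniform node
    distribution: propose a uniform neighbor, accept with probability
    min(1, deg(current)/deg(proposed)); otherwise stay put (self-loop)."""
    rng = random.Random(seed)
    node, samples = start, []
    for _ in range(n_steps):
        proposal = rng.choice(adj[node])
        accept_p = min(1.0, len(adj[node]) / len(adj[proposal]))
        if rng.random() < accept_p:
            node = proposal
        samples.append(node)
    return samples

# Toy star graph: hub 0 has degree 3, leaves have degree 1.
adj = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
counts = Counter(mh_uniform_walk(adj, 0, 200000))
# Each node's visit frequency is close to 0.25, i.e. uniform sampling.
print({k: round(v / 200000, 2) for k, v in sorted(counts.items())})
```

The slow diffusion the paper addresses is visible here too: the walk frequently rejects moves and lingers at leaves, which is exactly what NBRW-rw and MHDA are designed to mitigate.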
Baudron, Paul; Alonso-Sarría, Francisco; García-Aróstegui, José Luís; Cánovas-García, Fulgencio; Martínez-Vicente, David; Moreno-Brotóns, Jesús
2013-08-01
Accurate identification of the origin of groundwater samples is not always possible in complex multilayered aquifers. This poses a major difficulty for a reliable interpretation of geochemical results. The problem is especially severe when information on the tubewell design is hard to obtain. This paper shows a supervised classification method based on the Random Forest (RF) machine learning technique to identify the layer from which groundwater samples were extracted. The classification rules were based on the major ion composition of the samples. We applied this method to the Campo de Cartagena multi-layer aquifer system, in southeastern Spain. A large amount of hydrogeochemical data was available, but only a limited fraction of the sampled tubewells included a reliable determination of the borehole design and, consequently, of the aquifer layer being exploited. An added difficulty was the very similar compositions of water samples extracted from different aquifer layers. Moreover, not all groundwater samples included the same geochemical variables. Despite such a difficult background, the Random Forest classification reached accuracies over 90%. These results were much better than those of the Linear Discriminant Analysis (LDA) and Decision Trees (CART) supervised classification methods. From a total of 1549 samples, 805 proceeded from one unique identified aquifer, 409 proceeded from a possible blend of waters from several aquifers and 335 were of unknown origin. Only 468 of the 805 unique-aquifer samples included all the chemical variables needed to calibrate and validate the models. Finally, 107 of the groundwater samples of unknown origin could be classified. Most unclassified samples did not feature a complete dataset. The uncertainty in the identification of training samples was taken into account to enhance the model. Most of the samples that could not be identified had an incomplete dataset.
DEFF Research Database (Denmark)
Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan
...of European Communities (CEC, 1985), respectively, both using the FAO 1974 classification. Furthermore, the effects of implementing soil-landscape relationships, using area-proportional sampling instead of per-polygon sampling, and replacing the default C5.0 classification tree algorithm with a random forest algorithm were evaluated. The resulting maps were validated on 777 soil profiles situated in a grid covering Denmark. The experiments showed that the results obtained with Jacobsen's map were more accurate than the results obtained with the CEC map, despite a nominally coarser scale of 1:2,000,000 vs. 1:1,000,000. This finding is probably related to the fact that Jacobsen's map was more detailed, with a larger number of polygons, soil map units and soil types, despite its coarser scale. The results showed that the implementation of soil-landscape relationships, area-proportional sampling and the random forest...
Bajard, Agathe; Chabaud, Sylvie; Cornu, Catherine; Castellan, Anne-Charlotte; Malik, Salma; Kurbatova, Polina; Volpert, Vitaly; Eymard, Nathalie; Kassai, Behrouz; Nony, Patrice
2016-01-01
The main objective of our work was to compare different randomized clinical trial (RCT) experimental designs in terms of power, accuracy of the estimation of treatment effect, and number of patients receiving active treatment, using in silico simulations. A virtual population of patients was simulated and randomized in potential clinical trials. Treatment effect was modeled using a dose-effect relation for quantitative or qualitative outcomes. Different experimental designs were considered, and performances between designs were compared. One thousand clinical trials were simulated for each design based on an example of modeled disease. According to the simulation results, the number of patients needed to reach 80% power was 50 for crossover, 60 for parallel or randomized withdrawal, 65 for drop the loser (DL), and 70 for early escape or play the winner (PW). For a given sample size, each design had its own advantage: short duration (parallel, early escape), high statistical power and precision (crossover), and a higher number of patients receiving the active treatment (PW and DL). Our approach can help to identify the best experimental design, population, and outcome for future RCTs. This may be particularly useful for drug development in rare diseases, theragnostic approaches, or personalized medicine. Copyright © 2016 Elsevier Inc. All rights reserved.
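Power figures like those above come from simulating many virtual trials per design. A heavily simplified sketch for the two-arm parallel design only, using a two-sided z-test with known standard deviation and an invented standardized effect size:

```python
import math
import random

def simulated_power(n_per_arm, effect, sd=1.0, n_trials=2000, seed=42):
    """Fraction of simulated two-arm parallel trials in which a two-sided
    z-test (known SD, a deliberate simplification) rejects H0 of no
    treatment effect at the 5% level."""
    rng = random.Random(seed)
    z_crit = 1.96  # two-sided 5% critical value of the standard normal
    rejections = 0
    for _ in range(n_trials):
        control = [rng.gauss(0.0, sd) for _ in range(n_per_arm)]
        treated = [rng.gauss(effect, sd) for _ in range(n_per_arm)]
        diff = sum(treated) / n_per_arm - sum(control) / n_per_arm
        se = sd * math.sqrt(2.0 / n_per_arm)
        if abs(diff / se) > z_crit:
            rejections += 1
    return rejections / n_trials

# Hypothetical standardized effect of 0.8 with 25 patients per arm;
# the analytic power is about 0.80, and the simulation should agree.
power = simulated_power(25, 0.8)
print(round(power, 2))
```

The paper's simulations are far richer (dose-effect models, multiple designs, qualitative outcomes), but the Monte Carlo skeleton is the same: simulate, test, and count rejections.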
Sampling maternal care behaviour in domestic dogs: What's the best approach?
Czerwinski, Veronika H; Smith, Bradley P; Hynd, Philip I; Hazel, Susan J
2017-07-01
Our understanding of the frequency and duration of maternal care behaviours in the domestic dog during the first two postnatal weeks is limited, largely due to the inconsistencies in the sampling methodologies that have been employed. In order to develop a more concise picture of maternal care behaviour during this period, and to help establish the sampling method that represents these behaviours best, we compared a variety of time sampling methods. Six litters were continuously observed for a total of 96 h over postnatal days 3, 6, 9 and 12 (24 h per day). Frequent (dam presence, nursing duration, contact duration) and infrequent maternal behaviours (anogenital licking duration and frequency) were coded using five different time sampling methods: 12-h night (1800-0600 h), 12-h day (0600-1800 h), one hour during the night (1800-0600 h), one hour during the day (0600-1800 h) and one hour at any time. Each of the one-hour time sampling methods consisted of four randomly chosen 15-min periods. Two random sets of four 15-min periods were also analysed to ensure reliability. We then determined which of the time sampling methods, averaged over the three 24-h periods, best represented the frequency and duration of behaviours. As might be expected, frequently occurring behaviours were adequately represented by short (one hour) sampling periods; however, this was not the case with the infrequent behaviour. Thus, we argue that the time sampling methodology employed must match the behaviour of interest. This caution applies to maternal behaviour in altricial species, such as canids, as well as all systematic behavioural observations utilising time sampling methodology. Copyright © 2017. Published by Elsevier B.V.
Are Flow Injection-based Approaches Suitable for Automated Handling of Solid Samples?
DEFF Research Database (Denmark)
Miró, Manuel; Hansen, Elo Harald; Cerdà, Victor
Flow-based approaches were originally conceived for liquid-phase analysis, implying that constituents in solid samples generally had to be transferred into the liquid state, via appropriate batch pretreatment procedures, prior to analysis. Yet, in recent years, much effort has been focused on the design and characterisation of sample processing units coupled with flowing systems, aiming to enable the direct introduction and treatment of solid samples of environmental and agricultural origin in an automated fashion [1]. In this respect, various sample pre-treatment techniques including..., multisyringe flow injection, and micro-Lab-on-valve are presented as appealing approaches for on-line handling of solid samples. Special emphasis is given to the capability of flow systems to accommodate sequential extraction protocols for partitioning of trace elements and nutrients in environmental solids (e...
Boyacı, Ezel; Rodríguez-Lafuente, Ángel; Gorynski, Krzysztof; Mirnaghi, Fatemeh; Souza-Silva, Érica A; Hein, Dietmar; Pawliszyn, Janusz
2015-05-11
In chemical analysis, sample preparation is frequently considered the bottleneck of the entire analytical method. The success of the final method strongly depends on understanding the entire process of analysis of a particular type of analyte in a sample, namely: the physicochemical properties of the analytes (solubility, volatility, polarity etc.), the environmental conditions, and the matrix components of the sample. Various sample preparation strategies have been developed based on exhaustive or non-exhaustive extraction of analytes from matrices. Undoubtedly, amongst all sample preparation approaches, liquid extraction, including liquid-liquid (LLE) and solid phase extraction (SPE), are the most well-known, widely used, and commonly accepted methods by many international organizations and accredited laboratories. Both methods are well documented and there are many well defined procedures, which make them, at first sight, the methods of choice. However, many challenging tasks, such as complex matrix applications, on-site and in vivo applications, and determination of matrix-bound and free concentrations of analytes, are not easily attainable with these classical approaches for sample preparation. In the last two decades, the introduction of solid phase microextraction (SPME) has brought significant progress in the sample preparation area by facilitating on-site and in vivo applications, time weighted average (TWA) and instantaneous concentration determinations. Recently introduced matrix compatible coatings for SPME facilitate direct extraction from complex matrices and fill the gap in direct sampling from challenging matrices. Following introduction of SPME, numerous other microextraction approaches evolved to address limitations of the above mentioned techniques. There is not a single method that can be considered as a universal solution for sample preparation. This review aims to show the main advantages and limitations of the above mentioned sample
Lyn, Jennifer A; Ramsey, Michael H; Damant, Andrew P; Wood, Roger
2007-12-01
Measurement uncertainty is a vital issue within analytical science. There are strong arguments that primary sampling should be considered the first and perhaps the most influential step in the measurement process. Increasingly, analytical laboratories are required to report measurement results to clients together with estimates of the uncertainty. Furthermore, these estimates can be used when pursuing regulation enforcement to decide whether a measured analyte concentration is above a threshold value. With its recognised importance in analytical measurement, the question arises of 'what is the most appropriate method to estimate the measurement uncertainty?'. Two broad methods for uncertainty estimation are identified, the modelling method and the empirical method. In modelling, the estimation of uncertainty involves the identification, quantification and summation (as variances) of each potential source of uncertainty. This approach has been applied to purely analytical systems, but becomes increasingly problematic in identifying all of such sources when it is applied to primary sampling. Applications of this methodology to sampling often utilise long-established theoretical models of sampling and adopt the assumption that a 'correct' sampling protocol will ensure a representative sample. The empirical approach to uncertainty estimation involves replicated measurements from either inter-organisational trials and/or internal method validation and quality control. A more simple method involves duplicating sampling and analysis, by one organisation, for a small proportion of the total number of samples. This has proven to be a suitable alternative to these often expensive and time-consuming trials, in routine surveillance and one-off surveys, especially where heterogeneity is the main source of uncertainty. A case study of aflatoxins in pistachio nuts is used to broadly demonstrate the strengths and weakness of the two methods of uncertainty estimation. The estimate
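The duplicate method mentioned above can be reduced to its simplest form: for duplicate pairs on the same sampling target, half the mean squared difference estimates the combined sampling-plus-analytical variance. A sketch on invented duplicate results:

```python
import math

def duplicate_sd(pairs):
    """Standard deviation estimated from duplicate pairs: for duplicates
    x1, x2 of the same target, Var(x1 - x2) = 2 * sigma**2, so
    sigma = sqrt(mean(d**2) / 2). When each duplicate involves a fresh
    primary sample, this covers sampling plus analytical variation."""
    n = len(pairs)
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / (2 * n))

# Invented duplicate results (e.g. aflatoxin, ug/kg) from 8 targets
pairs = [(4.1, 3.8), (5.0, 5.4), (3.2, 3.1), (6.8, 6.1),
         (4.4, 4.9), (5.5, 5.2), (2.9, 3.3), (7.0, 6.6)]
print(round(duplicate_sd(pairs), 3))  # 0.297
```

A full empirical study would instead use nested ANOVA on duplicated sampling *and* duplicated analysis to separate the two variance components; this sketch only shows why duplicating a small proportion of samples is informative at all.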
Edgington, Eugene
2007-01-01
Contents: Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani...
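An exact two-sample randomization test, of the kind the book advocates for randomized experiments without random sampling, recomputes the test statistic under every possible reassignment of units to groups. A sketch with hypothetical scores:

```python
from itertools import combinations

def randomization_test(group_a, group_b):
    """Exact two-sided randomization test on the difference in means:
    enumerate every reassignment of the pooled data into groups of the
    original sizes and count those at least as extreme as observed."""
    data = group_a + group_b
    n_a = len(group_a)
    observed = sum(group_a) / n_a - sum(group_b) / len(group_b)
    count = total = 0
    for idx in combinations(range(len(data)), n_a):
        a = [data[i] for i in idx]
        b = [data[i] for i in range(len(data)) if i not in idx]
        stat = sum(a) / len(a) - sum(b) / len(b)
        total += 1
        if abs(stat) >= abs(observed) - 1e-12:
            count += 1
    return count / total  # two-sided p-value

# Hypothetical scores from a small randomized experiment
treatment = [12.0, 14.5, 13.8]
control = [9.9, 10.4, 11.2]
print(randomization_test(treatment, control))  # 0.1 (2 of 20 reassignments)
```

Note that the inference rests only on the random assignment of units to conditions, not on random sampling from a population, which is exactly the book's point.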
Energy Technology Data Exchange (ETDEWEB)
Puls, R.W.
1994-01-01
It is generally accepted that monitoring wells must be purged to access formation water to obtain 'representative' ground water quality samples. Historically, anywhere from 3 to 5 well casing volumes have been removed prior to sample collection to evacuate the standing well water and access the adjacent formation water. However, a common result of such purging practice is highly turbid samples from excessive downhole disturbance to the sampling zone. An alternative purging strategy has been proposed using pumps which permit much lower flow rates (<1 liter/min) and placement within the screened interval of the monitoring well. The advantages of this approach include increased spatial resolution of sampling points, less variability, less purge time (and volume), and low-turbidity samples. The overall objective is a more passive approach to sample extraction, with the ideal approach being to match the intake velocity with the natural ground water flow velocity. The volume of water extracted to access formation water is generally independent of well size and capacity and dependent upon well construction, development, hydrogeologic variability and pump flow rate.
Automatic training sample selection for a multi-evidence based crop classification approach
DEFF Research Database (Denmark)
Chellasamy, Menaka; Ferre, Ty; Greve, Mogens Humlekrog
An approach to use the available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples. The available vector data representing crop boundaries with corresponding crop codes are used as a source for training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed...
Directory of Open Access Journals (Sweden)
Dhruba Das
2015-04-01
Full Text Available In this article, based on Zadeh's extension principle, we apply the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, based on Baruah's Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. In this article, two fuzzy queues, FM/M/1 and M/FM/1, have been studied and their membership functions of the system characteristics constructed based on the aforesaid principle. The former represents a queue with fuzzy exponential arrivals and exponential service rate while the latter represents a queue with exponential arrival rate and fuzzy exponential service rate.
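The parametric (alpha-cut) programming idea can be sketched for the simplest FM/M/1 quantity. Below, the arrival rate is a hypothetical triangular fuzzy number, the service rate is crisp, and the mean sojourn time W = 1/(mu - lambda) is monotone in lambda, so each alpha-cut of lambda maps directly to an interval for W (the numbers are invented, not from the paper):

```python
def triangular_cut(a, b, c, alpha):
    """Alpha-cut [lo, hi] of a triangular fuzzy number (a, b, c)."""
    return a + alpha * (b - a), c - alpha * (c - b)

def fm_m_1_sojourn_cuts(lam_tri, mu, alphas):
    """Alpha-cuts of the mean sojourn time W = 1/(mu - lambda) for an
    FM/M/1 queue with fuzzy arrival rate and crisp service rate mu.
    Since W is increasing in lambda, the interval endpoints map directly."""
    cuts = {}
    for alpha in alphas:
        lo, hi = triangular_cut(*lam_tri, alpha)
        cuts[alpha] = (1.0 / (mu - lo), 1.0 / (mu - hi))
    return cuts

# Hypothetical fuzzy arrival rate "about 3" per hour, crisp mu = 5
cuts = fm_m_1_sojourn_cuts((2.0, 3.0, 4.0), 5.0, [0.0, 0.5, 1.0])
for alpha, (w_lo, w_hi) in sorted(cuts.items()):
    print(alpha, round(w_lo, 3), round(w_hi, 3))
```

Stacking the intervals over all alpha levels traces out the membership function of W, which is the construction the article carries out for both queue types.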
Bayesian adaptive approach to estimating sample sizes for seizures of illicit drugs.
Moroni, Rossana; Aalberg, Laura; Reinikainen, Tapani; Corander, Jukka
2012-01-01
A considerable amount of discussion can be found in the forensics literature about the issue of using statistical sampling to obtain for chemical analyses an appropriate subset of units from a police seizure suspected to contain illicit material. Use of the Bayesian paradigm has been suggested as the most suitable statistical approach to solving the question of how large a sample needs to be to ensure legally and practically acceptable purposes. Here, we introduce a hypergeometric sampling model combined with a specific prior distribution for the homogeneity of the seizure, where a parameter for the analyst's expectation of homogeneity (α) is included. Our results show how an adaptive approach to sampling can minimize the practical efforts needed in the laboratory analyses, as the model allows the scientist to decide sequentially how to proceed, while maintaining a sufficiently high confidence in the conclusions. © 2011 American Academy of Forensic Sciences.
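The flavor of the hypergeometric Bayesian calculation can be sketched as follows. The prior here is uniform over the unknown number of illicit units, a deliberate simplification of the paper's homogeneity prior with parameter α:

```python
from math import comb

def posterior_prob_at_least(N, n, k_min, prior=None):
    """Posterior probability that a seizure of N units contains at least
    k_min illicit units, after n randomly sampled units all tested
    positive. The likelihood is hypergeometric; the prior over the
    unknown count K is uniform on 0..N unless one is supplied."""
    if prior is None:
        prior = [1.0 / (N + 1)] * (N + 1)

    def likelihood(K):
        # P(all n sampled units illicit | K illicit among N)
        return comb(K, n) / comb(N, n)

    post = [prior[K] * likelihood(K) for K in range(N + 1)]
    total = sum(post)
    return sum(post[k_min:]) / total

# Seizure of 100 units, 5 sampled, all positive: posterior probability
# that at least 90% of the seizure is illicit.
print(round(posterior_prob_at_least(100, 5, 90), 3))
```

The adaptive element of the paper enters when this posterior is updated sequentially after each batch of analyses, letting the scientist stop as soon as the required confidence is reached.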
Alcohol and marijuana use in adolescents' daily lives: a random sample of experiences.
Larson, R; Csikszentmihalyi, M; Freeman, M
1984-07-01
High school students filled out reports on their experiences at random times during their daily lives, including 48 occasions when they were using alcohol or marijuana. Alcohol use was reported primarily in the context of Friday and Saturday night social gatherings and was associated with a happy and gregarious subjective state. Marijuana use was reported across a wider range of situations and was associated with an average state that differed much less from ordinary experience.
A dimensional approach to personality disorders in a sample of juvenile offenders
Directory of Open Access Journals (Sweden)
Daniela Cantone
2012-03-01
Full Text Available In a sample of 60 male Italian subjects imprisoned at a juvenile detention institute (JDI), psychopathological aspects of Axis II were described and the validity of a psychopathological dimensional approach for describing criminological issues was examined. The data show that the sample has psychopathological characteristics which revolve around ego weakness and poor management of relations and aggression. Statistically, these psychopathological characteristics explain 85% of criminal behavior.
Random or systematic sampling to detect a localised microbial contamination within a batch of food
Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.
2011-01-01
Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, like for instance peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed
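The difference between the two plans can be illustrated by simulation: when the contaminated cluster is at least as long as the systematic sampling interval, systematic sampling is certain to hit it, while simple random sampling is not. A simplified 1-D sketch with invented batch dimensions (a real powdered-food batch is of course three-dimensional):

```python
import random

def detection_prob(batch_size, cluster_size, n_samples, systematic,
                   n_trials=5000, seed=7):
    """Probability that at least one sampled unit falls inside a single
    contiguous contaminated cluster placed uniformly at random in a 1-D
    batch. Systematic sampling takes every k-th unit from a random
    offset; random sampling draws without replacement."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        start = rng.randrange(batch_size - cluster_size + 1)
        cluster = range(start, start + cluster_size)
        if systematic:
            step = batch_size // n_samples
            offset = rng.randrange(step)
            picks = range(offset, batch_size, step)
        else:
            picks = rng.sample(range(batch_size), n_samples)
        if any(u in cluster for u in picks):
            hits += 1
    return hits / n_trials

# Batch of 1000 units, one cluster of 50 contaminated units, 20 samples
p_rand = detection_prob(1000, 50, 20, systematic=False)
p_syst = detection_prob(1000, 50, 20, systematic=True)
print(round(p_rand, 2), round(p_syst, 2))
```

Here the systematic interval (50 units) equals the cluster length, so systematic sampling detects the cluster in every trial, while random sampling detects it only about two-thirds of the time; with clusters shorter than the interval the advantage shrinks.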
Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation
Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine
2002-01-01
New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the...
A margin based approach to determining sample sizes via tolerance bounds.
Energy Technology Data Exchange (ETDEWEB)
Newcomer, Justin T.; Freeland, Katherine Elizabeth
2013-09-01
This paper proposes a tolerance bound approach for determining sample sizes. With this new methodology we begin to think of sample size in the context of uncertainty exceeding margin. As the sample size decreases the uncertainty in the estimate of margin increases. This can be problematic when the margin is small and only a few units are available for testing. In this case there may be a true underlying positive margin to requirements but the uncertainty may be too large to conclude we have sufficient margin to those requirements with a high level of statistical confidence. Therefore, we provide a methodology for choosing a sample size large enough such that an estimated QMU uncertainty based on the tolerance bound approach will be smaller than the estimated margin (assuming there is positive margin). This ensures that the estimated tolerance bound will be within performance requirements and the tolerance ratio will be greater than one, supporting a conclusion that we have sufficient margin to the performance requirements. In addition, this paper explores the relationship between margin, uncertainty, and sample size and provides an approach and recommendations for quantifying risk when sample sizes are limited.
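A related, distribution-free illustration of the sample-size/uncertainty trade-off is the classic one-sided tolerance bound: the sample maximum covers a fraction `coverage` of the population with a stated confidence only once n is large enough. (This nonparametric rule is an illustration of the same logic, not the paper's QMU-specific methodology.)

```python
import math

def min_sample_size(coverage, confidence):
    """Smallest n such that the sample maximum is a one-sided,
    distribution-free upper tolerance bound covering `coverage` of the
    population with `confidence` confidence: require
    1 - coverage**n >= confidence, i.e. coverage**n <= 1 - confidence."""
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

print(min_sample_size(0.95, 0.95))  # the classic 95/95 answer: 59
```

The same inverse relationship drives the paper's argument: as n shrinks, the bound loosens, and a true positive margin can be swamped by the uncertainty of its estimate.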
Shemtov-Yona, K; Rittel, D
2016-09-01
The fatigue performance of dental implants is usually assessed on the basis of cyclic S/N curves. This neither provides information on the anticipated service performance of the implant, nor does it allow for detailed comparisons between implants unless a thorough statistical analysis is performed, of the kind not currently required by certification standards. The notion of an endurance limit is deemed to be of limited applicability, given the unavoidable stress concentrations and random load excursions which characterize dental implants and their service conditions. We propose a completely different approach, based on random spectrum loading, as long used in aeronautical design. The implant is randomly loaded by a sequence of loads encompassing all load levels it would endure during its service life. This approach provides a quantitative and comparable estimate of its performance in terms of lifetime, based on the very fact that the implant will fracture sooner or later, instead of defining a fatigue endurance limit of limited practical application. Five commercial monolithic Ti-6Al-4V implants were tested under cyclic, and another 5 under spectrum loading conditions, at room temperature in dry air. The failure modes and fracture planes were identical for all implants. The approach is discussed, including its potential applications, for systematic, straightforward and reliable comparisons of various implant designs and environments, without the need for cumbersome statistical analyses. It is believed that spectrum loading can be considered for the generation of new standardization procedures and design applications. Copyright © 2016 Elsevier Ltd. All rights reserved.
Clarke, Diana E; Narrow, William E; Regier, Darrel A; Kuramoto, S Janet; Kupfer, David J; Kuhl, Emily A; Greiner, Lisa; Kraemer, Helena C
2013-01-01
This article discusses the design, sampling strategy, implementation, and data analytic processes of the DSM-5 Field Trials. The DSM-5 Field Trials were conducted by using a test-retest reliability design with a stratified sampling approach across six adult and four pediatric sites in the United States and one adult site in Canada. A stratified random sampling approach was used to enhance precision in the estimation of the reliability coefficients. A web-based research electronic data capture system was used for simultaneous data collection from patients and clinicians across sites and for centralized data management. Weighted descriptive analyses, intraclass kappa and intraclass correlation coefficients for stratified samples, and receiver operating curves were computed. The DSM-5 Field Trials capitalized on advances since DSM-III and DSM-IV in statistical measures of reliability (i.e., intraclass kappa for stratified samples) and other recently developed measures to determine confidence intervals around kappa estimates. Diagnostic interviews using DSM-5 criteria were conducted by 279 clinicians of varied disciplines who received training comparable to what would be available to any clinician after publication of DSM-5. Overall, 2,246 patients with various diagnoses and levels of comorbidity were enrolled, of which over 86% were seen for two diagnostic interviews. A range of reliability coefficients was observed for the categorical diagnoses and dimensional measures. Multisite field trials and training comparable to what would be available to any clinician after publication of DSM-5 provided "real-world" testing of DSM-5 proposed diagnoses.
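Test-retest designs of this kind rest on chance-corrected agreement statistics. As a simplified illustration, a plain Cohen's kappa (not the stratified intraclass kappa actually used in the field trials) can be computed as:

```python
def cohen_kappa(r1, r2):
    """Chance-corrected agreement between two raters' categorical ratings.
    kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n            # observed agreement
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 1.0
```

Kappa of 1 means perfect agreement beyond chance; 0 means agreement no better than chance.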
Fablet, C; Marois, C; Kobisch, M; Madec, F; Rose, N
2010-07-14
Four sampling techniques for Mycoplasma hyopneumoniae detection, namely nasal swabbing, oral-pharyngeal brushing, tracheo-bronchial swabbing and tracheo-bronchial washing, were compared in naturally infected live pigs. In addition, a quantitative real-time PCR assay for M. hyopneumoniae quantification was validated with the same samples. Sixty finishing pigs were randomly selected from a batch of contemporary pigs on a farm chronically affected by respiratory disorders. Each pig was submitted to nasal swabbing, oral-pharyngeal brushing, tracheo-bronchial swabbing and tracheo-bronchial washing. Nested-PCR and real-time PCR assays were performed on all samples. A Bayesian approach was used to analyze the nested-PCR results of the four sampling methods (i.e. positive or negative) to estimate the sensitivity and specificity of each method. M. hyopneumoniae was detected by nested-PCR in at least one sample from 70% of the pigs. The most sensitive sampling methods for detecting M. hyopneumoniae in live naturally infected pigs were tracheo-bronchial swabbing and tracheo-bronchial washing, as compared to oral-pharyngeal brushing and nasal swabbing. Swabbing the nasal cavities appeared to be the least sensitive method. Significantly higher amounts of M. hyopneumoniae DNA were found at the sites of tracheo-bronchial sampling than in the nasal cavities or at the oral-pharyngeal site (p < 0.05). Our study indicated that tracheo-bronchial swabbing associated with real-time PCR could be an accurate diagnostic tool for assessing infection dynamics in pig herds. (c) 2009 Elsevier B.V. All rights reserved.
Bioagent Sample Matching using Elemental Composition Data: an Approach to Validation
Energy Technology Data Exchange (ETDEWEB)
Velsko, S P
2006-04-21
Sample matching is a fundamental capability that can have high probative value in a forensic context if proper validation studies are performed. In this report we discuss the potential utility of using the elemental composition of two bioagent samples to decide if they were produced in the same batch, or by the same process. Using guidance from the recent NRC study of bullet lead analysis and other sources, we develop a basic likelihood ratio framework for evaluating the evidentiary weight of elemental analysis data for sample matching. We define an objective metric for comparing two samples, and propose a method for constructing an unbiased population of test samples. We illustrate the basic methodology with some existing data on dry Bacillus thuringiensis preparations, and outline a comprehensive plan for experimental validation of this approach.
Sancho-Garnier, H; Tamalet, C; Halfon, P; Leandri, F X; Le Retraite, L; Djoufelkit, K; Heid, P; Davies, P; Piana, L
2013-12-01
Today in France, low attendance to cervical screening by Papanicolaou cytology (Pap-smear) is a major contributor to the 3,000 new cervical cancer cases and 1,000 deaths that occur from this disease every year. Nonattenders are mostly from lower socioeconomic groups, and testing of self-obtained samples for high-risk Human Papilloma virus (HPV) types has been proposed as a method to increase screening participation in these groups. In 2011, we conducted a randomized study of women aged 35-69 from very low-income populations around Marseille who had not responded to an initial invitation for a free Pap-smear. After randomization, one group received a second invitation for a free Pap-smear and the other group was offered a free self-sampling kit for HPV testing. Participation rates were significantly different between the two groups, with only 2.0% of women attending for a Pap-smear while 18.3% of women returned a self-sample for HPV testing (p ≤ 0.001). The detection rate of high-grade lesions (≥CIN2) was 0.2‰ in the Pap-smear group and 1.25‰ in the self-sampling group (p = 0.01). Offering self-sampling increased participation rates, while the use of HPV testing increased the detection of cervical lesions (≥CIN2) in comparison to the group of women receiving a second invitation for a Pap-smear. However, low compliance to follow-up in the self-sampling group reduces the effectiveness of this screening approach among nonattending women and must be carefully managed. Copyright © 2013 UICC.
Farzamian, Mohammad; Monteiro Santos, Fernando A.; Khalil, Mohamed A.
2017-09-01
The coupled hydrogeophysical approach has proved to be a valuable tool for improving the use of geoelectrical data for hydrological model parameterization. In the coupled approach, hydrological parameters are directly inferred from geoelectrical measurements in a forward manner to eliminate the uncertainty connected to the independent inversion of electrical resistivity data. Several numerical studies have been conducted to demonstrate the advantages of a coupled approach; however, only a few attempts have been made to apply the coupled approach to actual field data. In this study, we developed a 1D coupled hydrogeophysical code to estimate the van Genuchten-Mualem model parameters, K_s, n, θ_r and α, from time-lapse vertical electrical sounding data collected during a constant inflow infiltration experiment. van Genuchten-Mualem parameters were sampled using the Latin hypercube sampling method to provide full coverage of the range of each parameter from their distributions. By applying the coupled approach, vertical electrical sounding data were coupled to hydrological models inferred from van Genuchten-Mualem parameter samples to investigate the feasibility of constraining the hydrological model. The key approaches taken in the study are to (1) integrate electrical resistivity and hydrological data while avoiding data inversion, (2) estimate the total water mass recovery of electrical resistivity data and consider it in the evaluation of the van Genuchten-Mualem parameters and (3) correct the influence of subsurface temperature fluctuations during the infiltration experiment on electrical resistivity data. The results of the study revealed that the coupled hydrogeophysical approach can improve the value of geophysical measurements in hydrological model parameterization. However, the approach cannot overcome the technical limitations of the geoelectrical method associated with resolution and water mass recovery.
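Latin hypercube sampling, as used here for the van Genuchten-Mualem parameters, divides each parameter's range into n equal-probability strata and draws exactly one value per stratum in each dimension, with strata shuffled independently across dimensions. This is a generic sketch assuming uniform marginals; the parameter bounds below are illustrative, not the study's:

```python
import numpy as np

def latin_hypercube(n_samples, bounds, rng=None):
    """Stratified LHS: one sample per equal-probability stratum per dimension.
    bounds is a list of (lo, hi) pairs, one per parameter."""
    rng = np.random.default_rng(rng)
    d = len(bounds)
    # For each dimension, a random permutation of stratum indices 0..n-1,
    # jittered uniformly within each stratum, then scaled to (0, 1).
    u = (rng.permuted(np.tile(np.arange(n_samples), (d, 1)), axis=1).T
         + rng.random((n_samples, d))) / n_samples
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Illustrative bounds for (K_s, n, theta_r, alpha); not the paper's values
samples = latin_hypercube(10, [(0.001, 1.0), (1.1, 9.0), (0.0, 0.2), (0.001, 0.3)], rng=0)
```

Each column then covers its full range with exactly one draw per decile, which is why LHS needs far fewer samples than plain Monte Carlo for full coverage.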
Stressful Situations at Work and in Private Life among Young Workers: An Event Sampling Approach
Grebner, Simone; Elfering, Achim; Semmer, Norbert K.; Kaiser-Probst, Claudia; Schlapbach, Marie-Louise
2004-01-01
Most studies on occupational stress concentrate on chronic conditions, whereas research on stressful situations is rather sparse. Using an event-sampling approach, 80 young workers reported stressful events over 7 days (409 work-related and 127 private events). Content analysis showed the newcomers' work experiences to be similar to what is…
Predicting Different Types of School Dropouts: A Typological Approach with Two Longitudinal Samples.
Janosz, Michel; Le Blanc, Marc; Boulerice, Bernard; Tremblay, Richard E.
2000-01-01
Explores the heuristic value of a typological approach for preventing and studying school dropout. Empirically builds a typology of dropouts based on individual school experiences, tests the typology's reliability by replicating the classification with two different longitudinal samples, and examines the typology's predictive and discriminant…
Evaluation of PCR Approaches for Detection of Bartonella bacilliformis in Blood Samples.
Directory of Open Access Journals (Sweden)
Cláudia Gomes
2016-03-01
The lack of an effective diagnostic tool for Carrion's disease leads to misdiagnosis, wrong treatments and perpetuation of asymptomatic carriers living in endemic areas. Conventional PCR approaches have been reported as a diagnostic technique, but the detection limit of these techniques is unclear, as is their usefulness in low-bacteremia cases. The aim of this study was to evaluate the detection limit of 3 different PCR approaches: Bartonella-specific 16S rRNA, fla and its genes. We also evaluated the viability of dry blood spots for use as a sample transport system. Our results show that 16S rRNA PCR is the approach with the lowest detection limit, 5 CFU/μL, and thus the best diagnostic PCR tool studied. Dry blood spots diminish the sensitivity of the assay. Of the tested PCRs, the 16S rRNA PCR approach is the best for direct blood detection of acute cases of Carrion's disease. Although dry blood spots would ease the management and transport of samples from rural areas, a slight decrease in sensitivity was observed with them. The usefulness of PCR for detecting low-bacteremic or asymptomatic carriers is doubtful, showing the need to search for new, more sensitive techniques.
Wier, Timothy P.; Moser, Cameron S.; Grant, Jonathan F.; Riley, Scott C.; Robbins-Wamsley, Stephanie H.; First, Matthew R.; Drake, Lisa A.
2017-10-01
Both L-shaped ("L") and straight ("Straight") sample probes have been used to collect water samples from a main ballast line in land-based or shipboard verification testing of ballast water management systems (BWMS). A series of experiments was conducted to quantify and compare the sampling efficiencies of L and Straight sample probes. The finding from this research, that both L and Straight probes sample organisms with similar efficiencies, permits increased flexibility for positioning sample probes aboard ships.
Min, M.
2017-10-01
Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the high number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (about 3.5 × 10^5 lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
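The core observation behind such a sampler is that the Voigt profile is the convolution of a Gaussian and a Lorentzian, so a Voigt-distributed frequency offset is simply the sum of a Gaussian deviate and a Cauchy deviate. Binning those samples deposits each line's integrated opacity onto the grid by construction, with more samples allocated to stronger lines. This is a rough sketch of the idea, not the paper's code; the sample-allocation rule and names are illustrative:

```python
import numpy as np

def sample_line_opacity(nu0, strength, sigma, gamma, nu_grid, n_samples, rng):
    """Monte Carlo sample one line's Voigt profile onto a spectral grid.
    A Voigt deviate = Gaussian deviate + Lorentzian (Cauchy) deviate."""
    dv = (rng.normal(0.0, sigma, n_samples)
          + gamma * np.tan(np.pi * (rng.random(n_samples) - 0.5)))
    idx = np.searchsorted(nu_grid, nu0 + dv) - 1
    opacity = np.zeros(nu_grid.size - 1)
    inside = (idx >= 0) & (idx < opacity.size)
    # Each sample deposits an equal share of the line's integrated strength
    np.add.at(opacity, idx[inside], strength / n_samples)
    return opacity  # integrated opacity preserved, up to samples outside the grid

def opacity_spectrum(lines, nu_grid, rng=None, n_min=10, n_max=10000):
    """Allocate more samples to stronger lines, fewer to weak continuum lines."""
    rng = np.random.default_rng(rng)
    smax = max(s for _, s, _, _ in lines)
    total = np.zeros(nu_grid.size - 1)
    for nu0, s, sig, gam in lines:
        n = int(np.clip(n_max * s / smax, n_min, n_max))
        total += sample_line_opacity(nu0, s, sig, gam, nu_grid, n, rng)
    return total
```

Weak lines get only a handful of samples each, yet their summed contribution still reproduces the continuum because every sample carries its line's full share of integrated strength.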
Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.
2015-11-01
Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples.
Yadav, B K; Adhikari, S; Gyawali, P; Shrestha, R; Poudel, B; Khanal, M
2010-06-01
The present study was undertaken during a period of 6 months (September 2008-February 2009) to assess the correlation of 24-hour urine protein estimation with the random spot protein:creatinine (P:C) ratio among diabetic patients. The study comprised 144 patients aged 30-70 years, recruited from Kantipur hospital, Kathmandu. The 24-hr urine sample was collected, followed by a random spot urine sample. Both samples were analyzed for protein and creatinine excretion. Informed consent was taken from all participants. Sixteen inadequately collected urine samples, as defined by (predicted creatinine - measured creatinine)/predicted creatinine > 0.2, were excluded from analysis. The Spearman's rank correlation between the spot urine P:C ratio and 24-hr total protein was computed with the Statistical Package for the Social Sciences. At the P:C ratio cutoff of 0.15 and the reference method (24-hr urine protein) cutoff of 150 mg/day, the correlation coefficient was found to be 0.892 (p < 0.001). The random spot P:C ratio can thus serve as an alternative to 24-hr urine collection, but the cutoff should be carefully selected for different patient groups under different laboratory procedures and settings.
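The analysis here amounts to a rank correlation between paired measurements plus classification agreement at the chosen cutoffs. A sketch with hypothetical paired values (not the study's data):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical paired measurements, for illustration only:
# 24-hr urine protein (mg/day) and the matching random spot P:C ratio
protein_24h = np.array([120, 180, 95, 300, 450, 150, 800, 210, 130, 500])
spot_pc     = np.array([0.11, 0.18, 0.09, 0.31, 0.47, 0.14, 0.85, 0.20, 0.12, 0.52])

# Spearman's rank correlation between the two measurements
rho, p = spearmanr(protein_24h, spot_pc)

# Agreement when each measurement is classified against its cutoff
# (150 mg/day for 24-hr protein, 0.15 for the spot P:C ratio)
agree = np.mean((protein_24h > 150) == (spot_pc > 0.15))
```

A high rho together with high cutoff agreement is what supports substituting the spot ratio for the cumbersome 24-hr collection.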
Lusinchi, Dominic
2017-03-01
The scientific pollsters (Archibald Crossley, George H. Gallup, and Elmo Roper) emerged onto the American news media scene in 1935. Much of what they did in the following years (1935-1948) was to promote both the political and scientific legitimacy of their enterprise. They sought to be recognized as the sole legitimate producers of public opinion. In this essay I examine the, mostly overlooked, rhetorical work deployed by the pollsters to publicize the scientific credentials of their polling activities, and the central role the concept of sampling has had in that pursuit. First, they distanced themselves from the failed straw poll by claiming that their sampling methodology based on quotas was informed by science. Second, although in practice they did not use random sampling, they relied on it rhetorically to derive the symbolic benefits of being associated with the "laws of probability." © 2017 Wiley Periodicals, Inc.
The effect of dead time on randomly sampled power spectral estimates
DEFF Research Database (Denmark)
Buchhave, Preben; Velte, Clara Marika; George, William K.
2014-01-01
We consider both the effect on the measured spectrum of a finite sampling time, i.e., a finite time during which the signal is acquired, and the effect of a finite dead time, that is, a time in which the signal processor is busy evaluating a data point and therefore unable to measure a subsequent data point arriving within the dead time delay.
Phase microscopy of technical and biological samples through random phase modulation with a diffuser
DEFF Research Database (Denmark)
Almoro, Percival; Pedrini, Giancarlo; Gundu, Phanindra Narayan
2010-01-01
A technique for phase microscopy using a phase diffuser and a reconstruction algorithm is proposed. A magnified specimen wavefront is projected on the diffuser plane that modulates the wavefront into a speckle field. The speckle patterns at axially displaced planes are sampled and used in an iterative…
Directory of Open Access Journals (Sweden)
Rachel L Goldfeder
The ability to generate whole genome data is rapidly becoming commoditized. For example, a mammalian sized genome (∼3Gb can now be sequenced using approximately ten lanes on an Illumina HiSeq 2000. Since lanes from different runs are often combined, verifying that each lane in a genome's build is from the same sample is an important quality control. We sought to address this issue in a post hoc bioinformatic manner, instead of using upstream sample or "barcode" modifications. We rely on the inherent small differences between any two individuals to show that genotype concordance rates can be effectively used to test if any two lanes of HiSeq 2000 data are from the same sample. As proof of principle, we use recent data from three different human samples generated on this platform. We show that the distributions of concordance rates are non-overlapping when comparing lanes from the same sample versus lanes from different samples. Our method proves to be robust even when different numbers of reads are analyzed. Finally, we provide a straightforward method for determining the gender of any given sample. Our results suggest that examining the concordance of detected genotypes from lanes purported to be from the same sample is a relatively simple approach for confirming that combined lanes of data are of the same identity and quality.
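The lane-identity check described above reduces to a pairwise concordance rate over genotype calls at sites shared between two lanes. A toy sketch; the 95% threshold is illustrative, not the paper's:

```python
def concordance(calls_a, calls_b):
    """Fraction of shared sites with identical genotype calls.
    calls_*: dict mapping site (e.g. 'chr1:100') to genotype (e.g. 'AG')."""
    shared = set(calls_a) & set(calls_b)
    if not shared:
        return None
    return sum(calls_a[s] == calls_b[s] for s in shared) / len(shared)

def same_sample(calls_a, calls_b, threshold=0.95):
    """Lanes from one individual show near-total concordance; lanes from
    different individuals agree at a visibly lower rate (illustrative cutoff)."""
    return concordance(calls_a, calls_b) >= threshold
```

Because the two distributions of concordance rates are non-overlapping, any cutoff between them cleanly separates same-sample from different-sample lane pairs.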
Energy Technology Data Exchange (ETDEWEB)
Berkolaiko, G., E-mail: berko@math.tamu.edu [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J., E-mail: Jack.Kuipers@physik.uni-regensburg.de [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)
2013-11-15
To study electronic transport through chaotic quantum dots, there are two main theoretical approaches. One involves substituting the quantum system with a random scattering matrix and performing appropriate ensemble averaging. The other treats the transport in the semiclassical approximation and studies correlations among sets of classical trajectories. There are established evaluation procedures within the semiclassical evaluation that, for several linear and nonlinear transport moments to which they were applied, have always resulted in the agreement with random matrix predictions. We prove that this agreement is universal: any semiclassical evaluation within the accepted procedures is equivalent to the evaluation within random matrix theory. The equivalence is shown by developing a combinatorial interpretation of the trajectory sets as ribbon graphs (maps) with certain properties and exhibiting systematic cancellations among their contributions. Remaining trajectory sets can be identified with primitive (palindromic) factorisations whose number gives the coefficients in the corresponding expansion of the moments of random matrices. The equivalence is proved for systems with and without time reversal symmetry.
Mario, John R
2010-04-15
A probability-based analytical sampling approach for seized containers of cocaine, Cannabis, or heroin, to answer questions of both content weight and identity, is described. It utilizes the Student's t distribution, and, because of the lack of normality in studied populations, the power of the Central Limit Theorem with samples of size 20 to calculate the mean net weights of multiple item drug seizures. Populations studied ranged between 50 and 1200 units. Identity determination is based on chemical testing and sampling using the hypergeometric distribution fit to a program macro - created by the European Network of Forensic Science Institutes (ENFSI) Drugs Working Group. Formal random item selection is effected through use of an Excel-generated list of random numbers. Included, because of their impact on actual practice, are discussions of admissibility, sufficiency of proof, method validation, and harmony with the guidelines of international standardizing bodies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
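The hypergeometric sampling plan referred to here answers the identity question: how many units must be sampled, with all testing positive, to state with a given confidence that at least a given proportion of the seizure contains the drug? A sketch of that calculation (the function name is illustrative):

```python
from math import ceil, comb

def min_sample_size(N, proportion=0.9, confidence=0.95):
    """Smallest n such that, if all n sampled units test positive, one can
    state with the given confidence that at least `proportion` of the N
    seized units are positive (hypergeometric, sampling without replacement)."""
    m = ceil(proportion * N) - 1          # worst case just below the claimed proportion
    alpha = 1.0 - confidence
    for n in range(1, N + 1):
        # P(all n sampled units positive | only m of N are positive)
        if comb(m, n) / comb(N, n) <= alpha:
            return n
    return N
```

Because sampling is without replacement, the required n is smaller for small seizures than the binomial (with-replacement) approximation would suggest.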
A Random Matrix Approach for Quantifying Model-Form Uncertainties in Turbulence Modeling
Xiao, Heng; Ghanem, Roger G
2016-01-01
With the ever-increasing use of Reynolds-Averaged Navier--Stokes (RANS) simulations in mission-critical applications, the quantification of model-form uncertainty in RANS models has attracted attention in the turbulence modeling community. Recently, a physics-based, nonparametric approach for quantifying model-form uncertainty in RANS simulations has been proposed, where Reynolds stresses are projected to physically meaningful dimensions and perturbations are introduced only in the physically realizable limits. However, a challenge associated with this approach is to assess the amount of information introduced in the prior distribution and to avoid imposing unwarranted constraints. In this work we propose a random matrix approach for quantifying model-form uncertainties in RANS simulations with the realizability of the Reynolds stress guaranteed. Furthermore, the maximum entropy principle is used to identify the probability distribution that satisfies the constraints from available information but without introducing additional, unwarranted information.
Estimating a DIF decomposition model using a random-weights linear logistic test model approach.
Paek, Insu; Fukuhara, Hirotaka
2015-09-01
A differential item functioning (DIF) decomposition model separates a testlet item DIF into two sources: item-specific differential functioning and testlet-specific differential functioning. This article provides an alternative model-building framework and estimation approach for a DIF decomposition model that was proposed by Beretvas and Walker (2012). Although their model is formulated under multilevel modeling with the restricted pseudolikelihood estimation method, our approach illustrates DIF decomposition modeling that is directly built upon the random-weights linear logistic test model framework with the marginal maximum likelihood estimation method. In addition to demonstrating our approach's performance, we provide detailed information on how to implement this new DIF decomposition model using an item response theory software program; using DIF decomposition may be challenging for practitioners, yet practical information on how to implement it has previously been unavailable in the measurement literature.
Development and Testing of Harpoon-Based Approaches for Collecting Comet Samples
Purves, Lloyd (Compiler); Nuth, Joseph (Compiler); Amatucci, Edward (Compiler); Wegel, Donald; Smith, Walter; Church, Joseph; Leary, James; Kee, Lake; Hill, Stuart; Grebenstein, Markus;
2017-01-01
Comets, having bright tails visible to the unassisted human eye, are considered to have been known about since pre-historic times. In fact, 3,000-year-old written records of comet sightings have been identified. In comparison, asteroids, being so dim that telescopes are required for observation, were not discovered until 1801. Yet, despite their later discovery, a space mission returned the first samples of an asteroid in 2010, and two more asteroid sample return missions have already been launched. By contrast, no comet sample return mission has ever been funded, despite the fact that comets in certain ways are far more scientifically interesting than asteroids. Why is this? The basic answer is the greater difficulty, and consequently higher cost, of a comet sample return mission. Comets typically are in highly elliptical heliocentric orbits, which require much more time and propulsion for spacecraft (SC) to reach from Earth and then return to Earth, as compared to many asteroids which are in Earth-like orbits. It is also harder for a SC to maneuver safely near a comet, given the generally longer communication distances and the challenge of navigating in the comet's coma when the comet is close to perihelion, which turns out to be one of the most interesting times for a SC to get close to the comet surface because of the science value of better understanding the sublimation of volatiles there. Other contributors to higher cost are the desire to get sample material from both the comet surface and a little below it, to preserve the stratigraphy of the sample, and to return the sample in a storage state where it does not undergo undesirable alterations, such as aqueous alteration. In response to these challenges of comet sample return missions, the NASA Goddard Space Flight Center (GSFC) has worked for about a decade (2006 to this time) to develop and test approaches for comet sample return that would enable such a mission to be scientifically valuable at an acceptable cost.
Flexible automated approach for quantitative liquid handling of complex biological samples.
Palandra, Joe; Weller, David; Hudson, Gary; Li, Jeff; Osgood, Sarah; Hudson, Emily; Zhong, Min; Buchholz, Lisa; Cohen, Lucinda H
2007-11-01
A fully automated protein precipitation technique for biological sample preparation has been developed for the quantitation of drugs in various biological matrixes. All liquid handling during sample preparation was automated using a Hamilton MicroLab Star Robotic workstation, which included the preparation of standards and controls from a Watson laboratory information management system generated work list, shaking of 96-well plates, and vacuum application. Processing time is less than 30 s per sample or approximately 45 min per 96-well plate, which is then immediately ready for injection onto an LC-MS/MS system. An overview of the process workflow is discussed, including the software development. Validation data are also provided, including specific liquid class data as well as comparative data of automated vs manual preparation using both quality controls and actual sample data. The efficiencies gained from this automated approach are described.
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
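For reference, the classical Ewens sampling formula that these generalized ESFs extend gives the probability of an allelic configuration a = (a_1, ..., a_n), where a_j counts the allele types observed exactly j times in a sample of size n. A direct sketch:

```python
from math import factorial, prod

def esf_prob(a, theta):
    """Ewens sampling formula: probability of allelic configuration a under
    the neutral infinitely-many-alleles model with mutation parameter theta.
    P(a) = n! / (theta)_n * prod_j (theta/j)^{a_j} / a_j!"""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = prod(theta + i for i in range(n))       # rising factorial (theta)_n
    num = prod((theta / j) ** aj / factorial(aj) for j, aj in enumerate(a, start=1))
    return factorial(n) * num / rising
```

Summing over all configurations of a fixed sample size yields 1, which is a convenient sanity check on any generalized formula's normalizing constant.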
Dual to Ratio-Cum-Product Estimator in Simple and Stratified Random Sampling
Yunusa Olufadi
2013-01-01
New estimators for estimating the finite population mean using two auxiliary variables under simple and stratified sampling designs are proposed. Their properties (e.g., mean square error) are studied to the first order of approximation. Moreover, some existing estimators are shown to be particular members of this estimator. Furthermore, comparison of the proposed estimator with the usual unbiased estimator and other estimators considered in this paper reveals interesting results. These results are further…
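The classical ratio and product estimators that such proposals generalize can be stated in a few lines; the dual and ratio-cum-product forms in the paper combine these building blocks. A sketch (symbols: y is the study variable, x the auxiliary variable, X_bar its known population mean):

```python
import numpy as np

def ratio_estimator(y, x, X_bar):
    """Classical ratio estimator of the population mean of y.
    Efficient when y and x are strongly positively correlated."""
    return np.mean(y) * X_bar / np.mean(x)

def product_estimator(y, x, X_bar):
    """Product estimator: the preferred counterpart when y and x
    are negatively correlated."""
    return np.mean(y) * np.mean(x) / X_bar
```

When y is exactly proportional to x, the ratio estimator recovers the true population mean regardless of which units were sampled, which is the intuition behind its first-order MSE advantage.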
The psychometric properties of the AUDIT: a survey from a random sample of elderly Swedish adults.
Källmén, Håkan; Wennberg, Peter; Ramstedt, Mats; Hallgren, Mats
2014-07-01
Increasing alcohol consumption and related harms have been reported among the elderly population of Europe. Consequently, it is important to monitor patterns of alcohol use, and to use a valid and reliable tool when screening for risky consumption in this age group. The aim was to evaluate the internal consistency reliability and construct validity of the Alcohol Use Disorders Identification Test (AUDIT) in elderly Swedish adults, and to compare the results with the general Swedish population. Another aim was to calculate the level of alcohol consumption (AUDIT-C) to be used for comparison in future studies. The questionnaire was sent to 1459 Swedish adults aged 79-80 years, with a response rate of 73.3%. Internal consistency reliability was assessed using Cronbach's alpha, and confirmatory factor analysis assessed the construct validity of the AUDIT in the elderly population as compared to a Swedish general population sample. The results showed that the AUDIT was more reliable and valid in the Swedish general population sample than among the elderly, and that Items 1 and 4 of the AUDIT were less reliable and valid among the elderly. While the AUDIT showed acceptable psychometric properties in the general population sample, its performance was of lower quality among the elderly respondents. Further psychometric assessments of the AUDIT in elderly populations are required before it is implemented more widely.
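Internal consistency here means Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score), for k items. A minimal sketch on an item-score matrix:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return k / (k - 1) * (1.0 - item_var / total_var)
```

Perfectly parallel items give alpha = 1; values around 0.7-0.9 are the range usually read as acceptable for screening scales like the AUDIT.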
Lee, Paul H; Tse, Andy C Y
2017-05-01
There are limited data on the quality of reporting of the information essential for replicating a sample size calculation, as well as on the accuracy of such calculations. We examined the quality of reporting of the sample size calculation in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (IQR -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with a higher impact factor. A total of 98 papers provided a targeted sample size in trial registries; about two-thirds of these papers (n=62) reported a sample size calculation, but only 25 (40.3%) showed no discrepancy with the number reported in the trial registries. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and in trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
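A typical a priori calculation of the kind this review checks for: for a two-arm trial comparing means, the normal-approximation sample size per arm is n = 2(z_{1-alpha/2} + z_{1-beta})^2 sigma^2 / delta^2, where delta is the minimum clinically important difference and sigma the outcome standard deviation. A sketch:

```python
from math import ceil
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided, two-sample comparison of means
    (normal approximation; no adjustment for dropout)."""
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_b = norm.ppf(power)           # quantile corresponding to the desired power
    return ceil(2 * (z_a + z_b) ** 2 * sigma ** 2 / delta ** 2)
```

Reporting alpha, power, delta, and sigma, exactly the inputs above, is what makes a published calculation replicable.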
Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses
Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.
2011-01-01
Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373
Subramanian, Nithya
Optimization under uncertainty accounts for design variables and external parameters or factors with probabilistic distributions instead of fixed deterministic values; it enables problem formulations that maximize or minimize an expected value while satisfying constraints expressed with probabilities. For discrete optimization under uncertainty, a Monte Carlo Sampling (MCS) approach enables high-accuracy estimation of expectations, but it also incurs high computational expense. The Genetic Algorithm (GA) with a Population-Based Sampling (PBS) technique enables optimization under uncertainty with discrete variables at a lower computational expense than using Monte Carlo sampling for every fitness evaluation. Population-Based Sampling uses fewer samples in the exploratory phase of the GA and a larger number of samples once 'good designs' start emerging over the generations. This sampling technique therefore reduces the computational effort spent on 'poor designs' found in the initial phase of the algorithm. Parallel computation evaluates the expected value of the objective and constraints in parallel to reduce wall-clock time. A customized stopping criterion is also developed for the GA with Population-Based Sampling. The stopping criterion requires that the design with the minimum expected fitness value have at least 99% constraint satisfaction and have accumulated at least 10,000 samples. The average change in expected fitness values over the last ten consecutive generations is also monitored. The optimization of composite laminates using ply orientation angle as a discrete variable provides an example to demonstrate further developments of the GA with Population-Based Sampling for discrete optimization under uncertainty. The focus problem aims to reduce the expected weight of the composite laminate while treating the laminate's fiber volume fraction and externally applied loads as uncertain quantities following normal distributions. Construction of
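The PBS schedule described above can be sketched with a toy GA. Everything below is an illustrative assumption rather than the paper's implementation: the 6-bit design space, the noisy Hamming-distance objective, and the 5-samples-per-generation schedule. The key PBS idea is that each design keeps an accumulating pool of Monte Carlo samples, and later generations demand larger pools:

```python
import random

random.seed(7)
TARGET = (1, 0, 1, 1, 0, 1)   # hypothetical optimal design
pools = {}                    # accumulated Monte Carlo samples per design

def noisy_fitness(design):
    # Hypothetical stochastic objective: Hamming distance to TARGET plus noise
    return sum(a != b for a, b in zip(design, TARGET)) + random.gauss(0, 0.1)

def evaluate(design, generation):
    # Population-Based Sampling: few samples early, more as the GA matures;
    # samples for 'good designs' that survive generations accumulate.
    n_samples = 5 * (generation + 1)
    pool = pools.setdefault(design, [])
    while len(pool) < n_samples:      # reuse earlier samples, add only new ones
        pool.append(noisy_fitness(design))
    return sum(pool) / len(pool)      # expected-fitness estimate

def ga(pop_size=20, generations=15, length=6):
    pop = [tuple(random.randint(0, 1) for _ in range(length))
           for _ in range(pop_size)]
    for g in range(generations):
        scored = sorted(pop, key=lambda d: evaluate(d, g))
        parents = scored[:pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, length)     # one-point crossover
            child = list(a[:cut] + b[cut:])
            if random.random() < 0.2:             # bit-flip mutation
                i = random.randrange(length)
                child[i] ^= 1
            children.append(tuple(child))
        pop = parents + children
    return min(pop, key=lambda d: evaluate(d, generations - 1))

best = ga()
```

Because poor designs are discarded before their pools grow, most sampling effort lands on the designs that survive into late generations.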
An inversion-relaxation approach for sampling stationary points of spin model Hamiltonians
Hughes, Ciaran; Mehta, Dhagash; Wales, David J.
2014-05-01
Sampling the stationary points of a complicated potential energy landscape is a challenging problem. Here, we introduce a sampling method based on relaxation from stationary points of the highest index of the Hessian matrix. We illustrate how this approach can find all the stationary points for potentials or Hamiltonians bounded from above, which includes a large class of important spin models, and we show that it is far more efficient than previous methods. For potentials unbounded from above, the relaxation part of the method is still efficient in finding minima and transition states, which are usually the primary focus of attention for atomistic systems.
The Randomized CRM: An Approach to Overcoming the Long-Memory Property of the CRM.
Koopmeiners, Joseph S; Wey, Andrew
2017-03-01
The primary objective of a Phase I clinical trial is to determine the maximum tolerated dose (MTD). Typically, the MTD is identified using a dose-escalation study, where initial subjects are treated at the lowest dose level and subsequent subjects are treated at progressively higher dose levels until the MTD is identified. The continual reassessment method (CRM) is a popular model-based dose-escalation design, which utilizes a formal model for the relationship between dose and toxicity to guide dose finding. Recently, it was shown that the CRM has a tendency to get "stuck" on a dose level, with little escalation or de-escalation in the late stages of the trial, due to the long-memory property of the CRM. We propose the randomized CRM (rCRM), which introduces random escalation and de-escalation into the standard CRM dose-finding algorithm, as well as a hybrid approach that incorporates escalation and de-escalation only when certain criteria are met. Our simulation results show that both the rCRM and the hybrid approach reduce the trial-to-trial variability in the number of cohorts treated at the MTD but that the hybrid approach has a more favorable tradeoff with respect to the average number treated at the MTD.
Seeking mathematics success for college students: a randomized field trial of an adapted approach
Gula, Taras; Hoessler, Carolyn; Maciejewski, Wes
2015-11-01
Many students enter the Canadian college system with insufficient mathematical ability and leave the system with little improvement. Those students who enter with poor mathematics ability typically take a developmental mathematics course as their first and possibly only mathematics course. The educational experiences that comprise a developmental mathematics course vary widely and are, too often, ineffective at improving students' ability. This trend is concerning, since low mathematics ability is known to be related to lower rates of success in subsequent courses. To date, little attention has been paid to the selection of an instructional approach to consistently apply across developmental mathematics courses. Prior research suggests that an appropriate instructional method would involve explicit instruction and practising mathematical procedures linked to a mathematical concept. This study reports on a randomized field trial of a developmental mathematics approach at a college in Ontario, Canada. The new approach is an adaptation of the JUMP Math program, an explicit instruction method designed for primary and secondary school curricula, to the college learning environment. In this study, a subset of courses was assigned to JUMP Math and the remainder was taught in the same style as in the previous years. We found consistent, modest improvement in the JUMP Math sections compared to the non-JUMP sections, after accounting for potential covariates. The findings from this randomized field trial, along with prior research on effective education for developmental mathematics students, suggest that JUMP Math is a promising way to improve college student outcomes.
Directory of Open Access Journals (Sweden)
P. Friederichs
2008-10-01
Probability distributions of multivariate random variables are generally more complex than their univariate counterparts, owing to possible nonlinear dependence between the random variables. One approach to this problem is the use of copulas, which have become popular over recent years, especially in fields like econometrics, finance, risk management, and insurance. Since this newly emerging field includes various practices, a controversial discussion, and a vast literature, it is difficult to get an overview. The aim of this paper is therefore to provide a brief overview of copulas for application in meteorology and climate research. We examine the advantages and disadvantages compared to alternative approaches such as mixture models, summarize the current problem of goodness-of-fit (GOF) tests for copulas, and discuss the connection with multivariate extremes. An application to station data shows the simplicity and the capabilities as well as the limitations of this approach. Observations of daily precipitation and temperature are fitted to a bivariate model and demonstrate that copulas are a valuable complement to the commonly used methods.
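The copula construction can be sketched in a few lines. This is a minimal illustration, not the paper's fitted model: it assumes a Gaussian copula with dependence parameter 0.6 and purely illustrative gamma/normal marginals standing in for precipitation and temperature. Correlated normals are mapped to uniforms via the normal CDF, then to the marginals via inverse CDFs; rank correlation survives the monotone transforms:

```python
import numpy as np
from scipy.stats import norm, gamma, spearmanr

rng = np.random.default_rng(42)
rho = 0.6                                    # assumed dependence parameter

# 1) Draw correlated standard normals (the Gaussian copula)
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal([0.0, 0.0], cov, size=20_000)

# 2) Map to uniforms on (0, 1) via the standard normal CDF
u = norm.cdf(z)

# 3) Apply illustrative marginals: gamma "precipitation", normal "temperature"
precip = gamma.ppf(u[:, 0], a=2.0, scale=3.0)
temp = norm.ppf(u[:, 1], loc=10.0, scale=5.0)

# Rank (Spearman) correlation is preserved by the monotone marginal transforms
s, _ = spearmanr(precip, temp)
```

The same skeleton works with any parametric copula family; only step 1 changes.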
A Sub-Sampling Approach for Data Acquisition in Gamma Ray Emission Tomography
Fysikopoulos, Eleftherios; Kopsinis, Yannis; Georgiou, Maria; Loudos, George
2016-06-01
State-of-the-art data acquisition systems for small animal imaging gamma ray detectors often rely on free-running Analog to Digital Converters (ADCs) and high-density Field Programmable Gate Array (FPGA) devices for digital signal processing. In this work, a sub-sampling acquisition approach is proposed that exploits a priori information regarding the shape of the obtained detector pulses. The output pulse shape depends on the response of the scintillation crystal, the photodetector's properties, and the amplifier/shaper operation. Using these known characteristics of the detector pulses prior to digitization, one can model the voltage pulse derived from the shaper (a low-pass filter, last in the front-end electronics chain) in order to reduce the required sampling rate of the ADCs. Pulse-shape estimation is then feasible by fitting with a small number of measurements. In particular, the proposed sub-sampling acquisition approach relies on a bi-exponential model of the pulse shape. We show that the properties of the pulse that are relevant for Single Photon Emission Computed Tomography (SPECT) event detection (i.e., position and energy) can be calculated by collecting just a small fraction of the number of samples usually collected in data acquisition systems used so far. Compared to the standard digitization process, the proposed sub-sampling approach allows the use of free-running ADCs with the sampling rate reduced by a factor of 5. Two small detectors consisting of Cerium doped Gadolinium Aluminum Gallium Garnet (Gd3Al2Ga3O12:Ce or GAGG:Ce) pixelated arrays (array elements: 2 × 2 × 5 mm3 and 1 × 1 × 10 mm3, respectively) coupled to a Position Sensitive Photomultiplier Tube (PSPMT) were used for experimental evaluation. The two detectors were used to obtain raw images and energy histograms under 140 keV and 661.7 keV irradiation, respectively. The sub-sampling acquisition technique (10 MHz sampling rate) was compared with a standard acquisition method (52 MHz sampling
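The fitting step can be sketched as follows. This assumes the bi-exponential pulse model named in the abstract; the time constants, the 100 ns (10 MHz) spacing, and the area-based energy proxy are illustrative assumptions, not the authors' actual front-end settings:

```python
import numpy as np
from scipy.optimize import curve_fit

def pulse(t, amp, tau_rise, tau_decay):
    # Bi-exponential model of the shaper output (times in ns)
    return amp * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))

# Simulate a "true" pulse with illustrative parameters
true_amp, true_rise, true_decay = 1.0, 20.0, 150.0
t_slow = np.arange(10.0, 1000.0, 100.0)   # sub-sampled: 10 MHz -> 100 ns spacing
v = pulse(t_slow, true_amp, true_rise, true_decay)

# Fit the three model parameters to the handful of sub-sampled points
popt, _ = curve_fit(pulse, t_slow, v, p0=(0.5, 10.0, 100.0))

# Energy proxy: area under the fitted pulse;
# integral of the bi-exponential over [0, inf) is amp * (tau_decay - tau_rise)
energy = popt[0] * (popt[2] - popt[1])
```

With only ten samples per event, the model still pins down amplitude, rise, and decay, which is the premise that lets the ADC run five times slower.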
Fitó, Montserrat; Estruch, Ramón; Salas-Salvadó, Jordi; Martínez-Gonzalez, Miguel Angel; Arós, Fernando; Vila, Joan; Corella, Dolores; Díaz, Oscar; Sáez, Guillermo; de la Torre, Rafael; Mitjavila, María-Teresa; Muñoz, Miguel Angel; Lamuela-Raventós, Rosa-María; Ruiz-Gutierrez, Valentina; Fiol, Miquel; Gómez-Gracia, Enrique; Lapetra, José; Ros, Emilio; Serra-Majem, Lluis; Covas, María-Isabel
2014-05-01
Scarce data are available on the effect of the traditional Mediterranean diet (TMD) on heart failure biomarkers. We assessed the effect of the TMD on biomarkers related to heart failure in a population at high cardiovascular disease risk. A total of 930 subjects at high cardiovascular risk (420 men and 510 women) were recruited in the framework of a multicentre, randomized, controlled, parallel-group clinical trial directed at testing the efficacy of the TMD for the primary prevention of cardiovascular disease (the PREDIMED Study). Participants were assigned to a low-fat diet (control, n = 310) or one of two TMDs [TMD + virgin olive oil (VOO) or TMD + nuts]. Depending on group assignment, participants received free provision of extra-virgin olive oil, mixed nuts, or small non-food gifts. After 1 year of intervention, both TMDs decreased plasma N-terminal pro-brain natriuretic peptide, with changes reaching significance vs. the control group (P cardiovascular disease (CVD) who improved their diet toward a TMD pattern reduced their N-terminal pro-brain natriuretic peptide compared with those assigned to a low-fat diet. The same was found for in vivo oxidized low-density lipoprotein and lipoprotein(a) plasma concentrations after the TMD + VOO diet. Our results suggest that the TMD could mitigate risk factors for heart failure and modify markers of heart failure toward a more protective mode. © 2014 The Authors. European Journal of Heart Failure © 2014 European Society of Cardiology.
An Efficient Approach for Mars Sample Return Using Emerging Commercial Capabilities.
Gonzales, Andrew A; Stoker, Carol R
2016-06-01
Mars Sample Return is the highest priority science mission for the next decade as recommended by the 2011 Decadal Survey of Planetary Science [1]. This article presents the results of a feasibility study for a Mars Sample Return mission that efficiently uses emerging commercial capabilities expected to be available in the near future. The motivation of our study was the recognition that emerging commercial capabilities might be used to perform Mars Sample Return with an Earth-direct architecture, and that this may offer a desirable simpler and lower cost approach. The objective of the study was to determine whether these capabilities can be used to optimize the number of mission systems and launches required to return the samples, with the goal of achieving the desired simplicity. All of the major elements required for the Mars Sample Return mission are described. Mission system elements were analyzed with either direct techniques or by using parametric mass estimating relationships. The analysis shows the feasibility of a complete and closed Mars Sample Return mission design based on the following scenario: A SpaceX Falcon Heavy launch vehicle places a modified version of a SpaceX Dragon capsule, referred to as "Red Dragon", onto a Trans Mars Injection trajectory. The capsule carries all the hardware needed to return to Earth orbit the samples collected by a prior mission, such as the planned NASA Mars 2020 sample collection rover. The payload includes a fully fueled Mars Ascent Vehicle; a fueled Earth Return Vehicle, support equipment, and a mechanism to transfer samples from the sample cache system onboard the rover to the Earth Return Vehicle. The Red Dragon descends to land on the surface of Mars using Supersonic Retropropulsion. After collected samples are transferred to the Earth Return Vehicle, the single-stage Mars Ascent Vehicle launches the Earth Return Vehicle from the surface of Mars to a Mars phasing orbit. After a brief phasing period, the Earth Return
Diederich, Adele; Oswald, Peter
2014-01-01
A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as for how long each attribute is considered - the attention time - influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time with different variances are investigated. Depending on the time and order schedule the model predicts a rich choice probability/choice response time pattern including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability p_0 > 0 of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions are also true for the widely applied Wiener process.
Directory of Open Access Journals (Sweden)
Adele eDiederich
2014-09-01
A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as for how long each attribute is considered - the attention time - influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time, including deterministic, Poisson, binomial, geometric, and uniform, with different variances are investigated. Depending on the time and order schedule the model predicts a rich choice probability/choice response time pattern including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability $p_0 > 0$ of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions are also true for the widely applied Wiener process.
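The discrete birth-death approximation of an Ornstein-Uhlenbeck process mentioned above can be sketched as follows. The parameters, grid spacing, and time step below are illustrative assumptions; the chain steps up or down on a grid with rates chosen to match the OU drift $-\theta x$ and diffusion $\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(3)

# OU parameters (assumed): dX = -theta * X dt + sigma dW
theta, sigma = 1.0, 1.0
dx, dt = 0.05, 0.002           # grid spacing and time step of the discrete chain
n_steps = 400_000

def rates(x):
    """Birth/death rates at state x, matched to OU drift and diffusion:
    dx*(lam - mu) = drift and dx**2*(lam + mu) ~ sigma**2."""
    drift = -theta * x
    base = sigma**2 / (2 * dx**2)
    lam = base + max(drift, 0.0) / dx    # birth (step +dx) rate
    mu = base + max(-drift, 0.0) / dx    # death (step -dx) rate
    return lam, mu

x = 0.0
path = np.empty(n_steps)
for k in range(n_steps):
    lam, mu = rates(x)
    u = rng.random()
    if u < lam * dt:
        x += dx
    elif u < (lam + mu) * dt:
        x -= dx
    path[k] = x

mean, var = path.mean(), path.var()
# Stationary OU moments: mean 0, variance sigma**2 / (2 * theta) = 0.5
```

The empirical mean and variance of the chain approach the OU stationary values as the grid and time step shrink.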
Xu, Chonggang; Gertner, George
2011-01-01
Fourier Amplitude Sensitivity Test (FAST) is one of the most popular uncertainty and sensitivity analysis techniques. It uses a periodic sampling approach and a Fourier transformation to decompose the variance of a model output into partial variances contributed by different model parameters. Until now, FAST analysis has mainly been confined to the estimation of partial variances contributed by the main effects of model parameters, but does not allow for those contributed by specific interactio...
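The periodic-sampling-plus-Fourier idea can be sketched for a toy model. The search-curve form, the frequencies (11 and 21), the number of harmonics, and the linear test model are illustrative choices, not the authors' settings. Each input is driven at its own frequency; the power at the harmonics of that frequency estimates the input's partial variance:

```python
import numpy as np

def fast_first_order(model, omegas, n=2049, harmonics=4):
    """Classic FAST sketch: first-order sensitivity indices."""
    s = np.linspace(-np.pi, np.pi, n, endpoint=False)
    # Search curve maps the scalar s onto each input, uniform on (0, 1)
    x = 0.5 + np.arcsin(np.sin(np.outer(omegas, s))) / np.pi
    y = model(x)
    total_var = y.var()
    indices = []
    for w in omegas:
        d_i = 0.0
        for p in range(1, harmonics + 1):
            # Fourier coefficients at the p-th harmonic of frequency w
            a = np.mean(y * np.cos(p * w * s))
            b = np.mean(y * np.sin(p * w * s))
            d_i += 2.0 * (a * a + b * b)   # power attributed to this input
        indices.append(d_i / total_var)
    return np.array(indices)

# Toy model y = 2*x1 + x2: variance shares should be about 0.8 and 0.2
S = fast_first_order(lambda x: 2 * x[0] + x[1], omegas=[11, 21])
```

The frequencies must be chosen so that their low-order harmonics do not overlap, otherwise power from one input is misattributed to another.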
Sörensen, Kenneth; Sevaux, Marc
2007-01-01
In this paper, we investigate how robust and flexible solutions of a number of stochastic variants of the capacitated vehicle routing problem can be obtained. To this end, we develop and discuss a method that combines a sampling-based approach to estimating the robustness or flexibility of a solution with a metaheuristic optimization technique. This combination allows us to solve larger problems with more complex stochastic structures than traditional methods based on stochastic programming. It...
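The sampling-based evaluation step that such a method plugs into a metaheuristic can be sketched as follows. The route, capacity, demand distribution, and recourse penalty below are all hypothetical; the point is that a solution's robustness is scored by its Monte Carlo expected cost under demand realizations, not by its deterministic cost:

```python
import random

random.seed(5)

CAPACITY = 10
# Hypothetical route: (customer, mean demand) in visiting order
route = [("a", 3), ("b", 4), ("c", 2)]

def route_cost_once(route, capacity):
    """Cost under one demand realization: each travel leg costs 1; a capacity
    failure forces a recourse trip back to the depot (penalty 2)."""
    cost, load = 0, 0
    for _, mean_demand in route:
        demand = max(0, round(random.gauss(mean_demand, 1.5)))
        load += demand
        cost += 1                     # travel leg to the customer
        if load > capacity:           # recourse: return, empty, resume
            cost += 2
            load = demand
    return cost

def expected_cost(route, capacity, n_samples=5_000):
    # Sampling-based robustness estimate, used as the fitness value
    # inside the metaheuristic's evaluation step
    return sum(route_cost_once(route, capacity)
               for _ in range(n_samples)) / n_samples

planned = len(route)                  # deterministic cost, no failures
robust = expected_cost(route, CAPACITY)
```

A metaheuristic (tabu search, GA, etc.) would simply rank candidate routes by `expected_cost` instead of the deterministic objective.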
Usami, Satoshi
2017-03-01
Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated, with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite-sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased, due to complex effects of unreliability of contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.
Teoh, Jeremy Yuen-Chun; Chan, Eddie Shu-Yin; Yip, Siu-Ying; Tam, Ho-Man; Chiu, Peter Ka-Fung; Yee, Chi-Hang; Wong, Hon-Ming; Chan, Chi-Kwok; Hou, Simon See-Ming; Ng, Chi-Fai
2017-05-01
Our aim was to investigate the detrusor muscle sampling rate after monopolar versus bipolar transurethral resection of bladder tumor (TURBT). This was a single-center, prospective, randomized, phase III trial on monopolar versus bipolar TURBT. Baseline patient characteristics, disease characteristics and perioperative outcomes were compared, with the primary outcome being the detrusor muscle sampling rate in the TURBT specimen. Multivariate logistic regression analyses on detrusor muscle sampling were performed. From May 2012 to December 2015, a total of 160 patients with similar baseline characteristics were randomized to receive monopolar or bipolar TURBT. Fewer patients in the bipolar TURBT group required postoperative irrigation than patients in the monopolar TURBT group (18.7 vs. 43%; p = 0.001). In the whole cohort, no significant difference in the detrusor muscle sampling rates was observed between the bipolar and monopolar TURBT groups (77.3 vs. 63.3%; p = 0.057). In patients with urothelial carcinoma, bipolar TURBT achieved a higher detrusor muscle sampling rate than monopolar TURBT (84.6 vs. 67.7%; p = 0.025). On multivariate analyses, bipolar TURBT (odds ratio [OR] 2.23, 95% confidence interval [CI] 1.03-4.81; p = 0.042) and larger tumor size (OR 1.04, 95% CI 1.01-1.08; p = 0.022) were significantly associated with detrusor muscle sampling in the whole cohort. In addition, bipolar TURBT (OR 2.88, 95% CI 1.10-7.53; p = 0.031), larger tumor size (OR 1.05, 95% CI 1.01-1.10; p = 0.035), and female sex (OR 3.25, 95% CI 1.10-9.59; p = 0.033) were significantly associated with detrusor muscle sampling in patients with urothelial carcinoma. There was a trend towards a superior detrusor muscle sampling rate after bipolar TURBT. Further studies are needed to determine its implications on disease recurrence and progression.
A randomized control study of instructional approaches for struggling adult readers.
Greenberg, Daphne; Wise, Justin; Morris, Robin; Fredrick, Laura; Nanda, Alice O; Pae, Hye-K
2011-01-01
This study measured the effectiveness of various instructional approaches on the reading outcomes of 198 adults who read single words at the 3.0 through 5.9 grade equivalency levels. The students were randomly assigned to one of the following interventions: Decoding and Fluency; Decoding, Comprehension, and Fluency; Decoding, Comprehension, Fluency, and Extensive Reading; Extensive Reading; and a Control/Comparison approach. The Control/Comparison approach employed a curriculum common to community-based adult literacy programs, and the Extensive Reading approach focused on wide exposure to literature. The Fluency component was a guided repeated oral reading approach, and the Decoding/Comprehension components were SRA/McGraw-Hill Direct Instruction Corrective Reading Programs. Results indicated continued weaknesses in and poor integration of participants' skills. Although students made significant gains independent of reading instruction group, all improvements were associated with small effect sizes. When reading instruction group was considered, only one significant finding was detected, with the Comparison/Control group, the Decoding and Fluency group, and the Decoding, Comprehension, Extensive Reading and Fluency group showing stronger word attack outcomes than the Extensive Reading group.
Monte Carlo approaches for determining power and sample size in low-prevalence applications.
Williams, Michael S; Ebel, Eric D; Wagner, Bruce A
2007-11-15
The prevalence of disease in many populations is often low. For example, the prevalence of tuberculosis, brucellosis, and bovine spongiform encephalopathy ranges from 1 per 100,000 to less than 1 per 1,000,000 in many countries. When an outbreak occurs, epidemiological investigations often require comparing the prevalence in an exposed population with that of an unexposed population. To determine if the level of disease in the two populations is significantly different, the epidemiologist must consider the test to be used and the desired power of the test, and determine the appropriate sample size for both the exposed and unexposed populations. Commonly available software packages provide estimates of the required sample sizes for this application. This study shows that these estimated sample sizes can exceed the necessary number of samples by more than 35% when the prevalence is low. We provide a Monte Carlo-based solution and show that in low-prevalence applications this approach can lead to reductions in the total sample size of more than 10,000 samples.
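A minimal Monte Carlo power estimate of the kind described above might look like this sketch. The scenario numbers (20 vs. 5 cases per 10,000) are hypothetical, and Fisher's exact test stands in for whichever test the investigator would actually use; power is simply the fraction of simulated studies that reject at the chosen alpha:

```python
import numpy as np
from scipy.stats import fisher_exact

def mc_power(n_per_group, p_exposed, p_unexposed,
             alpha=0.05, reps=400, seed=11):
    """Monte Carlo power for comparing two low prevalences."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        x1 = rng.binomial(n_per_group, p_exposed)    # exposed cases
        x2 = rng.binomial(n_per_group, p_unexposed)  # unexposed cases
        table = [[x1, n_per_group - x1], [x2, n_per_group - x2]]
        if fisher_exact(table)[1] < alpha:           # p-value below alpha
            hits += 1
    return hits / reps

# Hypothetical outbreak scenario: prevalence 0.002 vs. 0.0005
low = mc_power(2_000, 0.002, 0.0005)      # underpowered sample size
high = mc_power(20_000, 0.002, 0.0005)    # much larger sample size
```

Scanning `n_per_group` until the estimated power crosses the target (e.g., 0.8) gives the Monte Carlo sample size directly, which is how the savings over asymptotic formulas become visible at low prevalence.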
B.M. Craig (Benjamin); J.J. van Busschbach (Jan)
2009-01-01
BACKGROUND: To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. METHODS: First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common
Directory of Open Access Journals (Sweden)
Elena Hilario
Genotyping by sequencing (GBS) is a restriction enzyme based targeted approach developed to reduce the genome complexity and discover genetic markers when a priori sequence information is unavailable. Sufficient coverage at each locus is essential to distinguish heterozygous from homozygous sites accurately. The number of GBS samples able to be pooled in one sequencing lane is limited by the number of restriction sites present in the genome and the read depth required at each site per sample for accurate calling of single-nucleotide polymorphisms. Loci bias was observed using a slight modification of the Elshire et al. method: some restriction enzyme sites were represented in higher proportions while others were poorly represented or absent. This bias could be due to the quality of genomic DNA, the endonuclease and ligase reaction efficiency, the distance between restriction sites, the preferential amplification of small library restriction fragments, or bias towards cluster formation of small amplicons during the sequencing process. To overcome these issues, we have developed a GBS method based on randomly tagging genomic DNA (rtGBS). By randomly landing on the genome, we can, with less bias, find restriction sites that are far apart and undetected by the standard GBS (stdGBS) method. The study comprises two types of biological replicates: six different kiwifruit plants and two independent DNA extractions per plant; and three types of technical replicates: four samples of each DNA extraction, stdGBS vs. rtGBS methods, and two independent library amplifications, each sequenced in separate lanes. A statistically significant unbiased distribution of restriction fragment sizes by rtGBS showed that this method targeted 49% (39,145) of the BamH I sites shared with the reference genome, compared to only 14% (11,513) by stdGBS.
Li, Long; Chen, Guojin; Jin, Tingdu
2017-06-01
As the number of RNA-protein complex sequences has mounted, and to overcome the time-consuming nature of traditional methods for identifying RNA-protein interaction sites (RPIS), there is an urgent need for an intelligent recognition approach that identifies RPIS quickly and reliably. To address this, we developed a new method named iRPIS-PseNNC, in which each sample is a nineteen-nucleotide segment, obtained by a sliding window, whose centre is an RPIS for positive samples and a non-RPIS for negative samples. The RNA samples were formulated by combining the dipeptide position-specific propensity with a random forest approach, using random sampling to balance the training dataset. Eleven random forests were combined through a voting system to construct an ensemble classifier. Rigorous cross-validations show that the new predictor "iRPIS-PseNNC" achieved higher accuracy than existing algorithms in this field, indicating that the iRPIS-PseNNC predictor will be an effective tool for predicting RNA-protein interaction sites.
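The balance-by-random-sampling-plus-voting idea can be sketched with synthetic data. Every number below is an illustrative assumption, and a nearest-class-mean rule stands in for the random forests (the structure, not the learner, is the point): each of eleven members trains on all positives plus a fresh random under-sample of negatives, and predictions are made by majority vote:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for imbalanced RPIS data: few positives, many negatives
X_pos = rng.normal(1.0, 1.0, size=(100, 8))
X_neg = rng.normal(0.0, 1.0, size=(1_100, 8))

def fit_member(X_pos, X_neg):
    """One ensemble member: random under-sample of negatives balances the
    classes; a nearest-class-mean rule stands in for a random forest."""
    idx = rng.choice(len(X_neg), size=len(X_pos), replace=False)
    return X_pos.mean(axis=0), X_neg[idx].mean(axis=0)

def member_predict(member, X):
    mu_pos, mu_neg = member
    d_pos = np.linalg.norm(X - mu_pos, axis=1)
    d_neg = np.linalg.norm(X - mu_neg, axis=1)
    return (d_pos < d_neg).astype(int)     # 1 = predicted interaction site

members = [fit_member(X_pos, X_neg) for _ in range(11)]

def vote(X):
    votes = sum(member_predict(m, X) for m in members)
    return (votes > len(members) // 2).astype(int)  # majority vote

# Held-out points from each class; balanced accuracy of the ensemble
test_pos = rng.normal(1.0, 1.0, size=(200, 8))
test_neg = rng.normal(0.0, 1.0, size=(200, 8))
acc = (vote(test_pos).mean() + (1 - vote(test_neg)).mean()) / 2
```

Because each member sees a different negative sub-sample, the ensemble uses far more of the majority class than any single balanced classifier could, without biasing individual members toward the negatives.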
On Generating Optimal Signal Probabilities for Random Tests: A Genetic Approach
Directory of Open Access Journals (Sweden)
M. Srinivas
1996-01-01
Genetic Algorithms are robust search and optimization techniques. A Genetic Algorithm based approach for determining the optimal input distributions for generating random test vectors is proposed in this paper. A cost function based on the COP testability measure for determining the efficacy of the input distributions is discussed. A brief overview of Genetic Algorithms (GAs) and the specific details of our implementation are described. Experimental results based on ISCAS-85 benchmark circuits are presented. The performance of our GA-based approach is compared with previous results. While the GA generates more efficient input distributions than the previous methods, which are based on gradient descent search, the overheads of the GA in computing the input distributions are larger.
A New Approach To Soil Sampling For Risk Assessment Of Nutrient Mobilisation.
Jonczyk, J. C.; Owen, G. J.; Snell, M. A.; Barber, N.; Benskin, C.; Reaney, S. M.; Haygarth, P.; Quinn, P. F.; Barker, P. A.; Aftab, A.; Burke, S.; Cleasby, W.; Surridge, B.; Perks, M. T.
2016-12-01
Traditionally, risks of nutrient and sediment losses from soils are assessed through a combination of field-average soil nutrient values, from samples taken over the whole field, and the proximity of the field to watercourses. The field-average nutrient concentration of the soil is used by farmers to determine fertiliser needs. These data are often used by scientists to assess the risk of nutrient losses to watercourses, though they are not really 'fit' for this purpose. The Eden Demonstration Test Catchment (http://www.edendtc.org.uk/) is a research project based in the River Eden catchment, NW UK, with the aim of cost-effectively mitigating diffuse pollution from agriculture whilst maintaining agricultural productivity. Three instrumented focus catchments have been monitored since 2011, providing high-resolution in-stream chemistry and ecological data, alongside some spatial data on soils, land use and nutrient inputs. An approach to mitigation was demonstrated in a small sub-catchment, where surface runoff was identified as the key driver of nutrient losses, using a suite of runoff attenuation features. Other issues identified were the management of hard-standings and soil compaction. A new approach for evaluating nutrient losses from soils is assessed in the Eden DTC project. The Sensitive Catchment Integrated Modelling and Prediction (SCIMAP) model is a risk-mapping framework designed to identify where in the landscape diffuse pollution is most likely to originate (http://www.scimap.org.uk) and was used to look at the spatial pattern of erosion potential. The aim of this work was to assess whether erosion potential identified through the model could be used to inform a new soil sampling strategy, to better assess the risk of erosion and the risk of transport of sediment-bound phosphorus. Soil samples were taken from areas with different erosion potential. The chemical analyses of these targeted samples are compared to those obtained using more traditional sampling approaches
A new approach for inversion of large random matrices in massive MIMO systems.
Anjum, Muhammad Ali Raza; Ahmed, Muhammad Mansoor
2014-01-01
We report a novel approach for the inversion of large random matrices in massive Multiple-Input Multiple-Output (MIMO) systems. It is based on the concept of inverse vectors, in which an inverse vector is defined for each column of the principal matrix. Such an inverse vector has to satisfy two constraints. Firstly, it has to lie in the null space of all the remaining columns; we call this the null-space problem. Secondly, it has to form a projection of value equal to one in the direction of the selected column; we term this the normalization problem. The process essentially decomposes the inversion problem and distributes it over the columns. Each column can be thought of as a node in a network, or a particle in a swarm, seeking its own solution (the inverse vector), which lightens the computational load. Another benefit of this approach is its applicability to all three cases pertaining to a linear system: the fully determined, the over-determined, and the under-determined case. It eliminates the need to form the generalized inverse in the last two cases by providing a new way to solve the least-squares problem and the Moore-Penrose pseudoinverse problem. The approach makes no assumption regarding the size, structure or sparsity of the matrix, making it fully applicable to the large random matrices arising in massive MIMO systems. Also, the null-space problem opens the door for the plethora of null-space computation methods available in the literature to enter the realm of matrix inversion. There is even the flexibility of finding an exact or approximate inverse, depending on the null-space method employed. We employ Householder's null-space method for an exact solution and present a complete exposition of the new approach. A detailed comparison with well-established matrix inversion methods in the literature is also given.
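The two constraints can be checked with a small numerical sketch. This is not the paper's Householder-based algorithm; it is a hedged NumPy illustration (the least-squares solver and the demo matrix are stand-ins of my own): for a square invertible matrix, the inverse vector of column i is exactly row i of the matrix inverse.

```python
import numpy as np

def inverse_vector(A, i):
    """Per-column 'inverse vector' sketch: find v such that
    (1) v is orthogonal to every column of A except column i
        (the null-space constraint), and
    (2) v . A[:, i] == 1 (the normalization constraint).
    For a square invertible A this v is the i-th row of A^{-1}.
    Least squares is used here as a stand-in solver, not the
    paper's Householder null-space method."""
    n = A.shape[1]
    e = np.zeros(n)
    e[i] = 1.0
    # Solving A^T v = e_i enforces both constraints at once.
    v, *_ = np.linalg.lstsq(A.T, e, rcond=None)
    return v

# Demo: stack the inverse vectors row-wise and compare with np.linalg.inv.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Ainv = np.vstack([inverse_vector(A, i) for i in range(4)])
print(np.allclose(Ainv, np.linalg.inv(A)))  # True for an invertible A
```

Each column's inverse vector is computed independently of the others, which is the decomposition-over-columns idea the abstract describes.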
Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas
Basilevsky, Alexander T.; Head, James W., III
1995-01-01
The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae, which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.) and lie on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of these units and structures in 36 randomly distributed large regions (each approximately 10^6 sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation
Use of pornography in a random sample of Norwegian heterosexual couples.
Daneback, Kristian; Traeen, Bente; Månsson, Sven-Axel
2009-10-01
This study examined the use of pornography in couple relationships to enhance the sex life. The study contained a representative sample of 398 heterosexual couples aged 22-67 years. Data collection was carried out by self-administered postal questionnaires. The majority (77%) of the couples did not report any kind of pornography use to enhance their sex life. In 15% of the couples, both had used pornography; in 3% of the couples, only the female partner had used pornography; and, in 5% of the couples, only the male partner had used pornography for this purpose. Based on the results of a discriminant function analysis, it is suggested that couples where one or both partners used pornography had a more permissive erotic climate compared with couples who did not use pornography. In couples where only one partner used pornography, we found more problems related to arousal (males) and negative self-perception (females). These findings could be of importance for clinicians who work with couples.
Conceição-Neto, Nádia; Zeller, Mark; Lefrère, Hanne; De Bruyn, Pieter; Beller, Leen; Deboutte, Ward; Yinda, Claude Kwe; Lavigne, Rob; Maes, Piet; Van Ranst, Marc; Heylen, Elisabeth; Matthijnssens, Jelle
2015-11-12
A major limitation for better understanding the role of the human gut virome in health and disease is the lack of validated methods that allow high-throughput virome analysis. To overcome this, we evaluated the quantitative effect of homogenisation, centrifugation, filtration, chloroform treatment and random amplification on a mock virome (containing nine highly diverse viruses) and a bacterial mock community (containing four faecal bacterial species) using quantitative PCR and next-generation sequencing. This resulted in an optimised protocol that recovered all viruses present in the mock virome and strongly altered the ratio of viral versus bacterial and 16S rRNA genetic material in favour of viruses (from 43.2% to 96.7% viral reads and from 47.6% to 0.19% bacterial reads). Furthermore, our study indicated that most of the currently used virome protocols, using small filter pores and/or stringent centrifugation conditions, may have largely overlooked large viruses present in viromes. We propose NetoVIR (novel enrichment technique of VIRomes), which allows fast, reproducible and high-throughput sample preparation for viral metagenomics studies, introducing minimal bias. This procedure is optimised mainly for faecal samples, but with appropriate concentration steps it can also be used for other sample types with lower initial viral loads.
This document describes three general approaches to the design of a sampling plan for biological monitoring of coral reefs. Status assessment, trend detection and targeted monitoring each require a different approach to site selection and statistical analysis. For status assessm...
Gradl-Dietsch, Gertraud; Lübke, Cavan; Horst, Klemens; Simon, Melanie; Modabber, Ali; Sönmez, Tolga T; Münker, Ralf; Nebelung, Sven; Knobe, Matthias
2016-11-03
The objectives of this prospective randomized trial were to assess the impact of Peyton's four-step approach on the acquisition of complex psychomotor skills and to examine the influence of gender on learning outcomes. We randomly assigned 95 third to fifth year medical students to an intervention group which received instructions according to Peyton (PG) or a control group, which received conventional teaching (CG). Both groups attended four sessions on the principles of manual therapy and specific manipulative and diagnostic techniques for the spine. We assessed differences in theoretical knowledge (multiple choice (MC) exam) and practical skills (Objective Structured Practical Examination (OSPE)) with respect to type of intervention and gender. Participants took a second OSPE 6 months after completion of the course. There were no differences between groups with respect to the MC exam. Students in the PG group scored significantly higher in the OSPE. Gender had no additional impact. Results of the second OSPE showed a significant decline in competency regardless of gender and type of intervention. Peyton's approach is superior to standard instruction for teaching complex spinal manipulation skills regardless of gender. Skills retention was equally low for both techniques.
Biomarker discovery in heterogeneous tissue samples -taking the in-silico deconfounding approach
Directory of Open Access Journals (Sweden)
Parida Shreemanta K
2010-01-01
Background: For heterogeneous tissues, such as blood, measurements of gene expression are confounded by the relative proportions of the cell types involved. Conclusions have to rely on estimation of gene expression signals for homogeneous cell populations, e.g. by applying micro-dissection, fluorescence-activated cell sorting, or in-silico deconfounding. We studied the feasibility and validity of a non-negative matrix decomposition algorithm using experimental gene expression data for blood and sorted cells from the same donor samples. Our objective was to optimize the algorithm regarding detection of differentially expressed genes and to enable its use for classification in the difficult scenario of reversely regulated genes. This would be of importance for the identification of candidate biomarkers in heterogeneous tissues. Results: Experimental data and simulation studies involving noise parameters estimated from these data revealed that for valid detection of differential gene expression, quantile normalization and use of non-log data are optimal. We demonstrate the feasibility of predicting proportions of constituting cell types from gene expression data of single samples, as a prerequisite for a deconfounding-based classification approach. Classification cross-validation errors with and without using deconfounding results are reported, as well as sample-size dependencies. Implementation of the algorithm, simulation and analysis scripts are available. Conclusions: The deconfounding algorithm without decorrelation, using quantile normalization on non-log data, is proposed for biomarkers that are difficult to detect, and for cases where confounding by varying proportions of cell types is the suspected reason. In this case, a deconfounding ranking approach can be used as a powerful alternative to, or complement of, other statistical learning approaches to define candidate biomarkers for molecular diagnosis and prediction in biomedicine, in
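The mixture model behind deconfounding can be illustrated with a tiny numerical sketch. This is not the paper's algorithm: the signature matrix, proportions and the plain least-squares solver are invented stand-ins (a real method would enforce non-negativity, e.g. NNLS or NMF), but the noise-free toy shows how cell-type proportions are recoverable from a single mixed profile.

```python
import numpy as np

# Toy deconfounding sketch: the observed expression of a heterogeneous
# sample is modeled as a mixture of known cell-type signature profiles
# weighted by the (unknown) cell-type proportions.
# Signature matrix: genes x cell types (made-up illustrative numbers).
signatures = np.array([
    [10.0, 1.0],
    [2.0, 8.0],
    [5.0, 5.0],
    [1.0, 3.0],
])
true_props = np.array([0.7, 0.3])
mixed = signatures @ true_props  # noise-free mixed-sample profile

# Recover proportions; ordinary least squares suffices for this
# noise-free toy, then renormalize so the proportions sum to one.
est, *_ = np.linalg.lstsq(signatures, mixed, rcond=None)
est = est / est.sum()
print(np.round(est, 3))  # recovers [0.7 0.3] in the noise-free case
```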
Henry, David; Dymnicki, Allison B.; Mohatt, Nathaniel; Allen, James; Kelly, James G.
2016-01-01
Qualitative methods potentially add depth to prevention research, but can produce large amounts of complex data even with small samples. Studies conducted with culturally distinct samples often produce voluminous qualitative data, but may lack sufficient sample sizes for sophisticated quantitative analysis. Currently lacking in mixed methods research are methods allowing for more fully integrating qualitative and quantitative analysis techniques. Cluster analysis can be applied to coded qualitative data to clarify the findings of prevention studies by aiding efforts to reveal such things as the motives of participants for their actions and the reasons behind counterintuitive findings. By clustering groups of participants with similar profiles of codes in a quantitative analysis, cluster analysis can serve as a key component in mixed methods research. This article reports two studies. In the first study, we conduct simulations to test the accuracy of cluster assignment using three different clustering methods with binary data as produced when coding qualitative interviews. Results indicated that hierarchical clustering, K-Means clustering, and latent class analysis produced similar levels of accuracy with binary data, and that the accuracy of these methods did not decrease with samples as small as 50. Whereas the first study explores the feasibility of using common clustering methods with binary data, the second study provides a “real-world” example using data from a qualitative study of community leadership connected with a drug abuse prevention project. We discuss the implications of this approach for conducting prevention research, especially with small samples and culturally distinct communities. PMID:25946969
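Clustering binary code profiles, as in the first (simulation) study, can be illustrated with a minimal K-means sketch. This is a hand-rolled stand-in, not the authors' analysis; the data matrix and its two-group structure are invented for illustration.

```python
import numpy as np

def kmeans_binary(X, k, iters=50, seed=0):
    """Minimal K-means on binary code profiles (rows = participants,
    columns = presence/absence of a qualitative code)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each profile to the nearest center; squared Euclidean
        # distance on 0/1 data is the Hamming distance.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

# Two obvious groups: codes 0-2 active vs. codes 3-5 active.
X = np.array([[1, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [0, 0, 0, 1, 1, 1],
              [0, 0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1, 1]])
labels = kmeans_binary(X, k=2)
print(labels[:3], labels[3:])  # first three share one cluster, last three the other
```

With profiles this well separated the method recovers the two groups regardless of which rows seed the centers, which mirrors the simulation finding that common clustering methods stay accurate on binary data even with small samples.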
MetSizeR: selecting the optimal sample size for metabolomic studies using an analysis based approach
Nyamundanda, Gift; Gormley, Isobel Claire; Fan, Yue; Gallagher, William M.; Brennan, Lorraine
2013-01-01
Background: Determining sample sizes for metabolomic experiments is important, but due to the complexity of these experiments there are currently no standard methods for sample size estimation in metabolomics. Since pilot studies are rarely done in metabolomics, currently existing sample size estimation approaches which rely on pilot data cannot be applied. Results: In this article, an analysis-based approach called MetSizeR is developed to estimate sample size for metabolomic experime...
Arrow, Peter; Klobas, Elizabeth
2015-12-01
A pragmatic randomized controlled trial was undertaken to compare the minimum intervention dentistry (MID) approach, based on the atraumatic restorative treatment procedures (MID-ART: Test), against the standard care approach (Control) to treat early childhood caries in a primary care setting. Consenting parent/child dyads were allocated to the Test or Control group using stratified block randomization. Inclusion and exclusion criteria were applied. Participants were examined at baseline and at follow-up by two calibrated examiners blind to group allocation status (κ = 0.77), and parents completed a questionnaire at baseline and follow-up. Dental therapists trained in MID-ART provided treatment to the Test group, and dentists treated the Control group using standard approaches. The primary outcome of interest was the number of children who were referred for specialist pediatric care. Secondary outcomes were the number of teeth treated, changes in child oral-health-related quality of life and dental anxiety, and parental perceptions of care received. Data were analyzed on an intention-to-treat basis; risk ratio for referral for specialist care, test of proportions, Wilcoxon rank test and logistic regression were used. Three hundred and seventy parents/carers were initially screened; 273 children were examined at baseline and 254 were randomized (Test = 127; Control = 127): mean age = 3.8 years, SD 0.90; 59% male; mean dmft = 4.9, SD 4.0. There was no statistically significant difference in age, sex, baseline caries experience or child oral-health-related quality of life between the Test and Control groups. At follow-up (mean interval 11.4 months, SD 3.1 months), 220 children were examined: Test = 115, Control = 105. Case-notes review of 231 children showed Test = 6 (5%) and Control = 53 (49%) were referred for specialist care. The MID-ART approach significantly reduced the likelihood of referral for specialist care, and more children and teeth were
Directory of Open Access Journals (Sweden)
Korf Ulrike
2011-07-01
Background: Network inference from high-throughput data has become an important means of current analysis of biological systems. For instance, in cancer research, the functional relationships of cancer-related proteins, summarised into signalling networks, are of central interest for the identification of pathways that influence tumour development. Cancer cell lines can be used as model systems to study the cellular response to drug treatments in a time-resolved way. Based on these kinds of data, modelling approaches for the signalling relationships are needed that allow hypotheses on potential interference points in the networks to be generated. Results: We present the R package 'ddepn', which implements our recent approach to network reconstruction from longitudinal data generated after external perturbation of network components. We extend our approach by two novel methods: a Markov chain Monte Carlo method for sampling network structures with two edge types (activation and inhibition), and an extension of a prior model that penalises deviations from a given reference network while incorporating these two types of edges. Further, as an alternative prior we include a model that learns signalling networks with the scale-free property. Conclusions: The package 'ddepn' is freely available on R-Forge and CRAN (http://ddepn.r-forge.r-project.org, http://cran.r-project.org). It allows network inference from longitudinal high-throughput data to be performed conveniently, using two different sampling-based network structure search algorithms.
Zur, Richard M; Pesce, Lorenzo L; Jiang, Yulei
2015-05-01
To evaluate stratified random sampling (SRS) of screening mammograms by (1) Breast Imaging Reporting and Data System (BI-RADS) assessment categories, and (2) the presence of breast cancer in mammograms, for estimation of screening-mammography receiver operating characteristic (ROC) curves in retrospective observer studies. We compared observer study case sets constructed by (1) random sampling (RS); (2) SRS with proportional allocation (SRS-P) with BI-RADS 1 and 2 noncancer cases accounting for 90.6% of all noncancer cases; (3) SRS with disproportional allocation (SRS-D) with BI-RADS 1 and 2 noncancer cases accounting for 10%-80%; and (4) SRS-D and multiple imputation (SRS-D + MI) with missing BI-RADS 1 and 2 noncancer cases imputed to recover the 90.6% proportion. Monte Carlo simulated case sets were drawn from a large case population modeled after published Digital Mammography Imaging Screening Trial data. We compared the bias, root-mean-square error, and coverage of 95% confidence intervals of area under the ROC curve (AUC) estimates from the sampling methods (200-2000 cases, of which 25% were cancer cases) versus from the large case population. AUC estimates were unbiased from RS, SRS-P, and SRS-D + MI, but biased from SRS-D. AUC estimates from SRS-P and SRS-D + MI had 10% smaller root-mean-square error than RS. Both SRS-P and SRS-D + MI can be used to obtain unbiased and 10% more efficient estimate of screening-mammography ROC curves. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
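The proportional-allocation idea (SRS-P) can be sketched in a few lines. The helper below is a hypothetical illustration, not code from the study; the stratum labels and the 90%/10% split are made-up stand-ins for the BI-RADS 1-2 versus other noncancer categories.

```python
import numpy as np

def stratified_sample(categories, n, rng):
    """Proportional-allocation stratified sampling sketch: each stratum
    contributes a share of the n draws equal to its share of the
    population, sampled without replacement within the stratum."""
    categories = np.asarray(categories)
    chosen = []
    strata, counts = np.unique(categories, return_counts=True)
    for stratum, count in zip(strata, counts):
        idx = np.flatnonzero(categories == stratum)
        take = round(n * count / len(categories))
        chosen.extend(rng.choice(idx, size=take, replace=False))
    return np.array(chosen)

# Toy population: 90% "low" (BI-RADS 1-2-like) vs. 10% "high" cases.
rng = np.random.default_rng(1)
population = np.array(["low"] * 900 + ["high"] * 100)
sample = stratified_sample(population, n=200, rng=rng)
labels = population[sample]
print((labels == "low").sum(), (labels == "high").sum())  # 180 20
```

Because allocation is proportional, the sample reproduces the population's stratum shares exactly, which is what makes the resulting AUC estimates unbiased relative to plain random sampling while reducing variance.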
A Sampling Based Approach to Spacecraft Autonomous Maneuvering with Safety Specifications
Starek, Joseph A.; Barbee, Brent W.; Pavone, Marco
2015-01-01
This paper presents a method for safe spacecraft autonomous maneuvering that applies robotic motion-planning techniques to spacecraft control. Specifically, the scenario we consider is an in-plane rendezvous of a chaser spacecraft in proximity to a target spacecraft at the origin of the Clohessy-Wiltshire-Hill frame. The trajectory for the chaser spacecraft is generated in a receding-horizon fashion by executing a sampling-based robotic motion-planning algorithm named Fast Marching Trees (FMT), which efficiently grows a tree of trajectories over a set of probabilistically drawn samples in the state space. To enforce safety, the tree is only grown over actively safe samples, for which there exists a one-burn collision-avoidance maneuver that circularizes the spacecraft orbit along a collision-free coasting arc and that can be executed under potential thruster failures. The overall approach establishes a provably correct framework for the systematic encoding of safety specifications into the spacecraft trajectory generation process and appears amenable to real-time implementation on orbit. Simulation results are presented for a two-fault-tolerant spacecraft during autonomous approach to a single client in Low Earth Orbit.
Paul, Joshua S; Song, Yun S
2010-09-01
The multilocus conditional sampling distribution (CSD) describes the probability that an additionally sampled DNA sequence is of a certain type, given that a collection of sequences has already been observed. The CSD has a wide range of applications in both computational biology and population genomics analysis, including phasing genotype data into haplotype data, imputing missing data, estimating recombination rates, inferring local ancestry in admixed populations, and importance sampling of coalescent genealogies. Unfortunately, the true CSD under the coalescent with recombination is not known, so approximations, formulated as hidden Markov models, have been proposed in the past. These approximations have led to a number of useful statistical tools, but it is important to recognize that they were not derived from, though were certainly motivated by, principles underlying the coalescent process. The goal of this article is to develop a principled approach to derive improved CSDs directly from the underlying population genetics model. Our approach is based on the diffusion process approximation and the resulting mathematical expressions admit intuitive genealogical interpretations, which we utilize to introduce further approximations and make our method scalable in the number of loci. The general algorithm presented here applies to an arbitrary number of loci and an arbitrary finite-alleles recurrent mutation model. Empirical results are provided to demonstrate that our new CSDs are in general substantially more accurate than previously proposed approximations.
Puberty Predicts Approach But Not Avoidance on the Iowa Gambling Task in a Multinational Sample.
Icenogle, Grace; Steinberg, Laurence; Olino, Thomas M; Shulman, Elizabeth P; Chein, Jason; Alampay, Liane P; Al-Hassan, Suha M; Takash, Hanan M S; Bacchini, Dario; Chang, Lei; Chaudhary, Nandita; Di Giunta, Laura; Dodge, Kenneth A; Fanti, Kostas A; Lansford, Jennifer E; Malone, Patrick S; Oburu, Paul; Pastorelli, Concetta; Skinner, Ann T; Sorbring, Emma; Tapanya, Sombat; Uribe Tirado, Liliana M
2017-09-01
According to the dual systems model of adolescent risk taking, sensation seeking and impulse control follow different developmental trajectories across adolescence and are governed by two different brain systems. The authors tested whether different underlying processes also drive age differences in reward approach and cost avoidance. Using a modified Iowa Gambling Task in a multinational, cross-sectional sample of 3,234 adolescents (ages 9-17; M = 12.87, SD = 2.36), pubertal maturation, but not age, predicted reward approach, mediated through higher sensation seeking. In contrast, age, but not pubertal maturation, predicted increased cost avoidance, mediated through greater impulse control. These findings add to evidence that adolescent behavior is best understood as the product of two interacting, but independently developing, brain systems. © 2016 The Authors. Child Development © 2016 Society for Research in Child Development, Inc.
Zarzycki, Paweł K; Slączka, Magdalena M; Włodarczyk, Elżbieta; Baran, Michał J
2013-01-01
In this work we demonstrate the analytical capability of the micro-planar (micro-TLC) technique, comprising one-dimensional (1D) and two-dimensional (2D) separation modes, to generate fingerprints of environmental samples originating from sewage and ecosystem waters. We show that the elaborated separation and detection protocols are complementary to a previously invented HPLC method based on temperature-dependent inclusion chromatography and UV-DAD detection. The presented 1D and 2D micro-TLC chromatograms of SPE (solid-phase extraction) extracts were optimized for fast and low-cost screening of water samples collected from lakes and rivers located in the area of Middle Pomerania in the northern part of Poland. Moreover, we studied the highly organic compounds loaded in the treated and untreated sewage waters obtained from the municipal wastewater treatment plant "Jamno" near Koszalin City (Poland). The analyzed environmental samples contained a number of substances characterized by a polarity range from estetrol to progesterone, as well as chlorophyll-related dyes previously isolated and pre-purified by a simple SPE protocol involving C18 cartridges. Optimization of the micro-TLC separation and quantification protocols for such samples is discussed from a practical point of view using simple separation efficiency criteria, including total peak number, log(product ΔhRF), signal intensity and peak asymmetry. Outcomes of the presented analytical approach, especially using detection involving direct fluorescence (UV366/Vis) and phosphomolybdic acid (PMA) visualization, are compared with the UV-DAD HPLC-generated data reported previously. A chemometric investigation based on principal components analysis revealed that SPE extracts separated by micro-TLC and detected under fluorescence and PMA visualization modes can be used for robust sample fingerprinting, even after long-term storage of the extracts (up to 4 years) at subambient temperature (-20 °C). Such an approach allows characterization of a wide range of sample components.
Directory of Open Access Journals (Sweden)
Eric S Walsh
Modeling the magnitude and distribution of sediment-bound pollutants in estuaries is often limited by incomplete knowledge of the site and inadequate sample density. To address these modeling limitations, a decision-support tool framework was conceived that predicts sediment contamination from the sub-estuary to broader estuary extent. For this study, a Random Forest (RF) model was implemented to predict the distribution of a model contaminant, triclosan (5-chloro-2-(2,4-dichlorophenoxy)phenol) (TCS), in Narragansett Bay, Rhode Island, USA. TCS is an unregulated contaminant used in many personal care products. The RF explanatory variables were associated with TCS transport and fate (proxies) and direct and indirect environmental entry. The continuous RF TCS concentration predictions were discretized into three levels of contamination (low, medium, and high) for three different quantile thresholds. The RF model explained 63% of the variance with a minimum number of variables. Total organic carbon (TOC), a transport and fate proxy, was a strong predictor of TCS contamination, causing a mean squared error increase of 59% when compared to permutations of randomized values of TOC. Additionally, combined sewer overflow discharge (environmental entry) and sand (transport and fate proxy) were strong predictors. The discretization models identified a TCS area of greatest concern in the northern reach of Narragansett Bay (Providence River sub-estuary), which was validated with independent test samples. This decision-support tool performed well at the sub-estuary extent and provided the means to identify areas of concern and prioritize bay-wide sampling.
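The quantile-threshold discretization step lends itself to a short sketch. The predictions below are random stand-ins for RF model output, and the gamma distribution and 0.33/0.66 thresholds are assumptions for illustration, not the study's values.

```python
import numpy as np

# Sketch of the discretization step: continuous contaminant predictions
# are cut into three contamination levels at chosen quantile thresholds.
rng = np.random.default_rng(2)
predictions = rng.gamma(shape=2.0, scale=1.0, size=1000)  # fake TCS levels

lo, hi = np.quantile(predictions, [0.33, 0.66])
levels = np.digitize(predictions, bins=[lo, hi])  # 0=low, 1=medium, 2=high
counts = np.bincount(levels)
print(counts)  # roughly a third of the sites fall in each level
```

Varying the quantile thresholds gives the different discretization models the abstract mentions, trading off how conservatively a site is flagged as "high" concern.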
Directory of Open Access Journals (Sweden)
Rosa Catarino
Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95% CI: 53.7-70.2) detected by s-DRY, 56.2% (95% CI: 47.6-64.4) by Dr-WET, and 54.6% (95% CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95% CI: 44.5-79.8) for s-FTA, 84.6% (95% CI: 66.5-93.9) for s-DRY, and 76.9% (95% CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 USD per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
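The kappa statistic used to measure agreement between collection methods is simple to compute for paired binary results. The function below is a minimal sketch of Cohen's kappa for two raters (not code from the study; the toy ratings are invented).

```python
import numpy as np

def cohens_kappa(a, b):
    """Chance-corrected agreement between two paired binary ratings:
    kappa = (observed agreement - expected agreement) / (1 - expected),
    where expected agreement assumes the two raters are independent."""
    a, b = np.asarray(a), np.asarray(b)
    observed = np.mean(a == b)
    p_a, p_b = a.mean(), b.mean()
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)

# Perfect agreement gives kappa = 1; chance-level agreement gives 0.
print(cohens_kappa([1, 1, 0, 0], [1, 1, 0, 0]))  # 1.0
```

On this scale the study's kappa = 0.34 between s-FTA and s-DRY indicates only fair agreement, while 0.56 between self-HPV and Dr-WET is moderate.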
A non-uniform sampling approach enables studies of dilute and unstable proteins.
Miljenović, Tomas; Jia, Xinying; Lavrencic, Peter; Kobe, Bostjan; Mobli, Mehdi
2017-06-01
NMR spectroscopy is a powerful method in structural and functional analysis of macromolecules and has become particularly prevalent in studies of protein structure, function and dynamics. Unique to NMR spectroscopy are the relatively low constraints on sample preparation and the high level of control of sample conditions. Proteins can be studied in a wide range of buffer conditions, e.g. different pHs and variable temperatures, allowing studies of proteins under conditions that are closer to their native environment compared to other structural methods such as X-ray crystallography and electron microscopy. The key disadvantage of NMR is the relatively low sensitivity of the method, requiring either concentrated samples or very lengthy data-acquisition times. Thus, proteins that are unstable or can only be studied in dilute solutions are often considered practically unfeasible for NMR studies. Here, we describe a general method, where non-uniform sampling (NUS) allows for signal averaging to be monitored in an iterative manner, enabling efficient use of spectrometer time, ultimately leading to savings in costs associated with instrument and isotope-labelled protein use. The method requires preparation of multiple aliquots of the protein sample that are flash-frozen and thawed just before acquisition of a short NMR experiment, carried out while the protein is stable (12 h in the presented case). Non-uniform sampling enables sufficient resolution to be acquired for each short experiment. Identical NMR datasets are acquired and sensitivity is monitored after each co-added spectrum is reconstructed. The procedure is repeated until sufficient signal-to-noise is obtained. We discuss how maximum entropy reconstruction is used to process the data, and propose a variation on the previously described method of automated parameter selection. We conclude that combining NUS with iterative co-addition is a general approach, and particularly powerful when applied to unstable proteins.
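The core of the iterative co-addition idea is that signal-to-noise grows roughly with the square root of the number of co-added scans, so acquisition can stop as soon as a target SNR is reached. The toy sketch below illustrates this on a synthetic 1D "spectrum"; all numbers (peak shape, noise level, target SNR) are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1D "spectrum": one Gaussian peak plus noise; each short experiment
# is one noisy acquisition of the same underlying signal.
x = np.linspace(-1, 1, 512)
signal = np.exp(-(x / 0.05) ** 2)

def acquire():
    return signal + rng.normal(scale=1.0, size=x.size)

def snr(spectrum):
    # Peak height over noise std estimated from a signal-free region.
    return spectrum.max() / spectrum[:100].std()

target_snr = 10.0
co_added, n_scans = np.zeros_like(x), 0
while True:
    co_added += acquire()       # co-add one more short experiment
    n_scans += 1
    if snr(co_added / n_scans) >= target_snr:
        break                   # stop as soon as sensitivity suffices
print(f"reached SNR >= {target_snr} after {n_scans} scans")
```

Because noise averages down as 1/sqrt(n) while the signal is unchanged, the loop terminates after on the order of target_snr squared scans, which is exactly the behavior the iterative monitoring scheme exploits.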
Directory of Open Access Journals (Sweden)
Fuqun Zhou
2016-10-01
Full Text Available Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA, and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2–3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.
Perfluoroalkyl substances in aquatic environment-comparison of fish and passive sampling approaches.
Cerveny, Daniel; Grabic, Roman; Fedorova, Ganna; Grabicova, Katerina; Turek, Jan; Kodes, Vit; Golovko, Oksana; Zlabek, Vladimir; Randak, Tomas
2016-01-01
The concentrations of seven perfluoroalkyl substances (PFASs) were investigated in 36 European chub (Squalius cephalus) individuals from six localities in the Czech Republic. Chub muscle and liver tissue were analysed at all sampling sites. In addition, analyses of 16 target PFASs were performed in Polar Organic Chemical Integrative Samplers (POCISs) deployed in the water at the same sampling sites. We evaluated the possibility of using passive samplers as a standardized method for monitoring PFAS contamination in aquatic environments and the mutual relationships between determined concentrations. Only perfluorooctane sulphonate was above the LOQ in fish muscle samples and 52% of the analysed fish individuals exceeded the Environmental Quality Standard for water biota. Fish muscle concentration is also particularly important for risk assessment of fish consumers. The comparison of fish tissue results with published data showed the similarity of the Czech results with those found in Germany and France. However, fish liver analysis and the passive sampling approach resulted in different fish exposure scenarios. The total concentration of PFASs in fish liver tissue was strongly correlated with POCIS data, but pollutant patterns differed between these two matrices. The differences could be attributed to the metabolic activity of the living organism. In addition to providing a different view regarding the real PFAS cocktail to which the fish are exposed, POCISs fulfil the Three Rs strategy (replacement, reduction, and refinement) in animal testing. Copyright © 2015 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Mohd Fo'ad Rohani
2010-11-01
Full Text Available This paper proposes a Multi-Level Sampling (MLS) approach for continuous Loss of Self-Similarity (LoSS) detection using an iterative window. The method defines LoSS based on the Second Order Self-Similarity (SOSS) statistical model. The Optimization Method (OM) is used to estimate the self-similarity parameter, since it is fast and more accurate in comparison with other estimation methods known in the literature. A probability of LoSS detection is introduced to measure continuous LoSS detection performance. The proposed method has been tested with a real Internet traffic simulation dataset. The results demonstrate that normal traces have a probability of LoSS detection below the threshold at all sampling levels. Meanwhile, false positive detection can occur where abnormal traces have a probability of LoSS that imitates normal behavior at sampling levels below 100 ms. However, the LoSS probability exceeds the threshold at sampling levels larger than 100 ms. Our results show the possibility of detecting anomalous traffic behavior based on continuous LoSS detection monitoring.
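As a rough illustration of second-order self-similarity estimation, here is the aggregated-variance method for the Hurst parameter. This is a simple textbook stand-in, not the paper's Optimization Method (whose details are not given in the abstract); for exactly second-order self-similar traffic, the variance of the m-aggregated series scales as m^(2H-2).

```python
import numpy as np

rng = np.random.default_rng(2)

def hurst_aggvar(x, block_sizes=(1, 2, 4, 8, 16, 32)):
    """Estimate the Hurst parameter H by the aggregated-variance method:
    fit the slope of log Var(X^(m)) against log m, then H = 1 + slope/2."""
    sizes, variances = [], []
    for m in block_sizes:
        n = len(x) // m
        agg = x[: n * m].reshape(n, m).mean(axis=1)  # m-aggregated series
        sizes.append(m)
        variances.append(agg.var())
    slope, _ = np.polyfit(np.log(sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0

# White noise has no long-range dependence, so its estimate is H ~ 0.5;
# self-similar traffic would give H clearly above 0.5, and a drop toward
# 0.5 in a sliding window is one way to operationalize "loss" of
# self-similarity.
h = hurst_aggvar(rng.normal(size=65536))
print(f"H ~ {h:.2f}")
```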
Liran, Levy; Rottem, Kuint; Gregorio, Fridlender Zvi; Avi, Abutbul; Neville, Berkman
2017-09-07
Since the introduction of endobronchial ultrasound-guided transbronchial needle aspiration (EBUS-TBNA), most pulmonary centers use this technique exclusively for mediastinal lymph node (LN) sampling. Conventional "blind" TBNA (cTBNA), however, is cheaper, more accessible, provides more tissue, and requires less training. We evaluated whether sampling of mediastinal LN using EBUS-TBNA or cTBNA according to a predefined set of criteria provides acceptable diagnostic yield. The sampling method was determined prospectively according to a predefined set of criteria based on LN station, LN size, and presumed diagnosis. Sensitivity, specificity, and positive and negative predictive value were evaluated for each modality. One hundred and eighty-six biopsies were carried out over a 3-year period (86 cTBNA, 100 EBUS-TBNA). Seventy-seven percent of LN biopsied by EBUS-TBNA were <20 mm, while 83% of cTBNA biopsies were ≥20 mm. The most common sites of cTBNA sampling were stations 7, 4R, and 11R, as opposed to 7, 11R, 4R, and 4L in the case of EBUS-TBNA. The most common diagnosis was malignancy for EBUS-TBNA versus sarcoidosis for cTBNA. EBUS-TBNA and cTBNA both had a true positive yield of 65%, but EBUS-TBNA had a higher true negative rate (21% vs. 2% for cTBNA) and a lower false negative rate (7% vs. 28%). Sensitivity, specificity, positive predictive value, and negative predictive value for EBUS-TBNA were 90%, 100%, 100%, and 75%, respectively, and for cTBNA were 68%, 100%, 100%, and 7%, respectively. A stepwise approach based on LN size, station, and presumed diagnosis may be a reasonable, cost-effective approach in choosing between cTBNA and EBUS-TBNA.
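The reported yield metrics follow from the standard confusion-matrix formulas. The sketch below uses hypothetical counts (not the study's raw data) chosen only to illustrate how a large number of false negatives depresses the negative predictive value, as seen for cTBNA.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard yield metrics for a diagnostic modality."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts, not taken from the study: many true positives,
# very few true negatives, and many false negatives.
sens, spec, ppv, npv = diagnostic_metrics(tp=56, fp=0, tn=2, fn=26)
print(f"sens={sens:.0%} spec={spec:.0%} PPV={ppv:.0%} NPV={npv:.0%}")
```

With zero false positives, specificity and PPV are both 100% regardless of the other counts, while NPV collapses when false negatives dominate true negatives.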
Mission Planning and Decision Support for Underwater Glider Networks: A Sampling on-Demand Approach
Directory of Open Access Journals (Sweden)
Gabriele Ferri
2015-12-01
Full Text Available This paper describes an optimal sampling approach to support glider fleet operators and marine scientists during the complex task of planning the missions of fleets of underwater gliders. Optimal sampling, which has gained considerable attention in the last decade, consists in planning the paths of gliders to minimize a specific criterion pertinent to the phenomenon under investigation. Different criteria (e.g., A, G, or E optimality), used in geosciences to obtain an optimum design, lead to different sampling strategies. In particular, the A criterion produces paths for the gliders that minimize the overall level of uncertainty over the area of interest. However, there are commonly operative situations in which the marine scientists may prefer not to minimize the overall uncertainty of a certain area, but instead they may be interested in achieving an acceptable uncertainty sufficient for the scientific or operational needs of the mission. We propose and discuss here an approach named sampling on-demand that explicitly addresses this need. In our approach the user provides an objective map, setting both the amount and the geographic distribution of the uncertainty to be achieved after assimilating the information gathered by the fleet. A novel optimality criterion, called Aη, is proposed and the resulting minimization problem is solved by using a Simulated Annealing based optimizer that takes into account the constraints imposed by the glider navigation features, the desired geometry of the paths and the problems of reachability caused by ocean currents. This planning strategy has been implemented in a Matlab toolbox called SoDDS (Sampling on-Demand and Decision Support). The tool is able to automatically download ocean field data from the MyOcean repository and also provides graphical user interfaces to ease the input process of mission parameters and targets. The results obtained by running SoDDS on three different scenarios are provided.
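A minimal generic simulated-annealing minimizer of the kind such a planner relies on might look as follows. This is a sketch only: the 1-D multimodal cost function is a toy stand-in for the paper's path-uncertainty criterion, and the cooling schedule and step size are arbitrary choices.

```python
import math
import random

random.seed(3)

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=5000):
    """Generic simulated annealing: accept worse candidates with
    probability exp(-delta/T) so the search can escape local minima,
    while the temperature T is gradually lowered."""
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        d = cost(y) - c
        if d < 0 or random.random() < math.exp(-d / t):
            x, c = y, c + d
            if c < best_c:
                best, best_c = x, c
        t *= cooling
    return best, best_c

# Toy multimodal cost with two basins; the linear term makes the
# negative basin the global minimum.
cost = lambda x: (x * x - 1) ** 2 + 0.3 * x
neighbor = lambda x: x + random.uniform(-0.2, 0.2)
x_opt, c_opt = simulated_annealing(cost, neighbor, x0=2.0)
print(f"x ~ {x_opt:.2f}, cost ~ {c_opt:.3f}")
```

In the planner described above, the state would be the set of glider paths rather than a scalar, and the neighbor move would perturb waypoints subject to navigation and current-reachability constraints.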
Mukherjee, Shubhabrata; Walter, Stefan; Kauwe, John S K; Saykin, Andrew J; Bennett, David A; Larson, Eric B; Crane, Paul K; Glymour, M Maria
2015-12-01
Observational research shows that higher body mass index (BMI) increases Alzheimer's disease (AD) risk, but it is unclear whether this association is causal. We applied genetic variants that predict BMI in Mendelian randomization analyses, an approach that is not biased by reverse causation or confounding, to evaluate whether higher BMI increases AD risk. We evaluated individual-level data from the AD Genetics Consortium (ADGC: 10,079 AD cases and 9613 controls), the Health and Retirement Study (HRS: 8403 participants with algorithm-predicted dementia status), and published associations from the Genetic and Environmental Risk for AD consortium (GERAD1: 3177 AD cases and 7277 controls). No evidence from individual single-nucleotide polymorphisms or polygenic scores indicated BMI increased AD risk. Mendelian randomization effect estimates per BMI point (95% confidence intervals) were as follows: ADGC, odds ratio (OR) = 0.95 (0.90-1.01); HRS, OR = 1.00 (0.75-1.32); GERAD1, OR = 0.96 (0.87-1.07). One subscore (cellular processes not otherwise specified) unexpectedly predicted lower AD risk. Copyright © 2015 The Alzheimer's Association. Published by Elsevier Inc. All rights reserved.
Cheval, Boris; Sarrazin, Philippe; Pelletier, Luc; Friese, Malte
2016-12-01
Promoting regular physical activity (PA) and lessening sedentary behaviors (SB) constitute a public health priority. Recent evidence suggests that PA and SB are not only related to reflective processes (eg, behavioral intentions), but also to impulsive approach-avoidance tendencies (IAAT). This study aims to test the effect of a computerized IAAT intervention on an exercise task. Participants (N = 115) were randomly assigned to 1 of 3 experimental conditions, in which they were either trained to approach PA and avoid SB (ApPA-AvSB condition), to approach SB and avoid PA (ApSB-AvPA condition), or to approach and avoid PA and SB equally often (active control condition). The main outcome variable was the time spent carrying out a moderate intensity exercise task. IAAT toward PA decreased in the ApSB-AvPA condition, tended to increase in the ApPA-AvSB condition, and remained stable in the control condition. Most importantly, the ApPA-AvSB manipulation led to more time spent exercising than the ApSB-AvPA condition. Sensitivity analyses excluding individuals who were highly physically active further revealed that participants in the ApPA-AvSB condition spent more time exercising than participants in the control condition. These findings provide preliminary evidence that a single intervention session can successfully change impulsive approach tendencies toward PA and can increase the time devoted to an exercise task, especially among individuals who need to be more physically active. Potential implications for health behavior theories and behavior change interventions are outlined.
Liu, Xiao
2017-03-21
Privacy risks of recommender systems have attracted increasing attention. Users' private data is often collected by a possibly untrusted recommender system in order to provide high-quality recommendations. Meanwhile, malicious attackers may utilize recommendation results to make inferences about other users' private data. Existing approaches focus either on keeping users' private data protected during recommendation computation or on preventing the inference of any single user's data from the recommendation result. However, none is designed for both hiding users' private data and preventing privacy inference. To achieve this goal, we propose in this paper a hybrid approach for privacy-preserving recommender systems by combining differential privacy (DP) with randomized perturbation (RP). We theoretically show that the noise added by RP has limited effect on recommendation accuracy and that the noise added by DP can be well controlled based on the sensitivity analysis of functions on the perturbed data. Extensive experiments on three large-scale real-world datasets show that the hybrid approach generally provides more privacy protection with acceptable recommendation accuracy loss, and surprisingly sometimes achieves better privacy without sacrificing accuracy, thus validating its feasibility in practice.
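The differential-privacy half of such a hybrid is commonly realized with the Laplace mechanism: add noise scaled to sensitivity/epsilon to each value. The sketch below is a minimal illustration of that standard mechanism, not the paper's scheme; the rating range, epsilon, and sample size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def laplace_perturb(ratings, epsilon, sensitivity=1.0):
    """Perturb each rating with Laplace noise of scale sensitivity/epsilon,
    the standard mechanism for epsilon-differential privacy."""
    scale = sensitivity / epsilon
    return ratings + rng.laplace(0.0, scale, size=ratings.shape)

# Illustrative 1..5 star ratings.
ratings = rng.integers(1, 6, size=10_000).astype(float)
noisy = laplace_perturb(ratings, epsilon=1.0)

# Zero-mean noise cancels in aggregates, so statistics computed on the
# perturbed data stay close to the true ones -- the intuition behind
# "acceptable recommendation accuracy loss".
print(f"true mean={ratings.mean():.2f}  noisy mean={noisy.mean():.2f}")
```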
Zhang, Zhiwei; Cheon, Kyeongmi
2017-04-01
A common problem in randomized clinical trials is nonignorable missingness, namely that the clinical outcome(s) of interest can be missing in a way that is not fully explained by the observed quantities. This happens when the continued participation of patients depends on the current outcome after adjusting for the observed history. Standard methods for handling nonignorable missingness typically require specification of the response mechanism, which can be difficult in practice. This article proposes a reverse regression approach that does not require a model for the response mechanism. Instead, the proposed approach relies on the assumption that missingness is independent of treatment assignment upon conditioning on the relevant outcome(s). This conditional independence assumption is motivated by the observation that, when patients are effectively masked to the assigned treatment, their decision to either stay in the trial or drop out cannot depend on the assigned treatment directly. Under this assumption, one can estimate parameters in the reverse regression model, test for the presence of a treatment effect, and in some cases estimate the outcome distributions. The methodology can be extended to longitudinal outcomes under natural conditions. The proposed approach is illustrated with real data from a cardiovascular study.
Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey
Directory of Open Access Journals (Sweden)
Steven R. Corman
2013-12-01
Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media is still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.
Negrão, Mariana; Pereira, Mariana; Soares, Isabel; Mesman, Judi
2014-01-01
This study tested the attachment-based intervention program Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a randomized controlled trial with poor families of toddlers screened for professional's concerns about the child's caregiving environment. The VIPP-SD is an evidence-based intervention, but has not yet been tested in the context of poverty. The sample included 43 families with 1- to 4-year-old children: mean age at the pretest was 29 months and 51% were boys. At the pretest and posttest, mother-child interactions were observed at home, and mothers reported on family functioning. The VIPP-SD proved to be effective in enhancing positive parent-child interactions and positive family relations in a severely deprived context. Results are discussed in terms of implications for support services provided to such poor families in order to reduce intergenerational risk transmission.
Buller, David B.; Andersen, Peter A.; Walkosz, Barbara J.; Scott, Michael D.; Beck, Larry; Cutter, Gary R.
2016-01-01
Introduction: Exposure to solar ultraviolet radiation during recreation is a risk factor for skin cancer. This trial evaluated an intervention to promote advanced sun protection (sunscreen pre-application/reapplication; protective hats and clothing; use of shade) during vacations. Materials and Methods: Adult visitors to hotels/resorts with outdoor recreation (i.e., vacationers) participated in a group-randomized pretest-posttest controlled quasi-experimental design in 2012–14. Hotels/resorts were pair-matched and randomly assigned to the intervention or untreated control group. Sun protection (e.g., clothing, hats, shade and sunscreen) was measured in cross-sectional samples by observation and a face-to-face intercept survey during two-day visits. Results: Initially, 41 hotels/resorts (11%) participated but 4 dropped out before posttest. Hotels/resorts were diverse (employees = 30 to 900; latitude = 24°78′ N to 50°52′ N; elevation = 2 ft. to 9,726 ft. above sea level), and had a variety of outdoor venues (beaches/pools, court/lawn games, golf courses, common areas, and chairlifts). At pretest, 4,347 vacationers were observed and 3,531 surveyed. More females were surveyed (61%) than observed (50%). Vacationers were mostly 35–60 years old, highly educated (college education = 68%) and non-Hispanic white (93%), with high-risk skin types (22%). Vacationers reported covering 60% of their skin with clothing. Also, 40% of vacationers used shade; 60% applied sunscreen; and 42% had been sunburned. Conclusions: The trial faced challenges recruiting resorts, but results show that the large, multi-state sample of vacationers was at high risk for solar UV exposure. PMID:26593781
Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M
2018-01-15
Epidemiological studies consistently show co-occurrence of use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine, and cannabis use. Two-sample MR was employed to estimate bi-directional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week), and cannabis (initiation). Based on existing genome-wide association results we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible we applied sensitivity analyses (MR-Egger and weighted median) more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these did not hold up with the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and smoking initiation on cannabis initiation (only with weighted median). None of the suggestive causal associations survived corrections for multiple testing. Two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine, and cannabis use. This article is protected by copyright. All rights reserved.
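Two-sample MR estimates of the kind described above are typically formed by combining per-variant Wald ratios (variant-outcome association divided by variant-exposure association) with inverse-variance weights. The sketch below implements the standard first-order IVW estimator; the summary statistics are made up for illustration and are not from the cited GWAS results.

```python
import numpy as np

def ivw_mr(beta_exposure, beta_outcome, se_outcome):
    """Inverse-variance-weighted (IVW) two-sample MR estimate: combine
    per-variant Wald ratios beta_outcome/beta_exposure, weighting by
    first-order precision (exposure-side uncertainty is ignored)."""
    bx = np.asarray(beta_exposure, float)
    by = np.asarray(beta_outcome, float)
    sy = np.asarray(se_outcome, float)
    ratios = by / bx                  # per-variant Wald ratios
    weights = (bx / sy) ** 2          # first-order IVW weights
    est = np.sum(weights * ratios) / np.sum(weights)
    se = np.sqrt(1.0 / np.sum(weights))
    return est, se

# Illustrative per-variant summary statistics (hypothetical numbers).
est, se = ivw_mr(
    beta_exposure=[0.10, 0.08, 0.12],
    beta_outcome=[0.02, 0.015, 0.025],
    se_outcome=[0.02, 0.02, 0.02],
)
print(f"IVW causal estimate = {est:.2f} (SE {se:.2f})")
```

The sensitivity analyses mentioned in the abstract (MR-Egger, weighted median) modify this basic estimator to be more robust to pleiotropic variants.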
Bayesian and variational Bayesian approaches for flows in heterogeneous random media
Yang, Keren; Guha, Nilabja; Efendiev, Yalchin; Mallick, Bani K.
2017-09-01
In this paper, we study porous media flows in heterogeneous stochastic media. We propose an efficient forward simulation technique that is tailored for variational Bayesian inversion. As a starting point, the proposed forward simulation technique decomposes the solution into the sum of separable functions (with respect to randomness and the space), where each term is calculated based on a variational approach. This is similar to Proper Generalized Decomposition (PGD). Next, we apply a multiscale technique to solve for each term (as in [1]) and, further, decompose the random function into 1D fields. As a result, our proposed method provides an approximation hierarchy for the solution as we increase the number of terms in the expansion and, also, increase the spatial resolution of each term. We use the hierarchical solution distributions in a variational Bayesian approximation to perform uncertainty quantification in the inverse problem. We conduct a detailed numerical study to explore the performance of the proposed uncertainty quantification technique and show the theoretical posterior concentration.
ELEMENTARY APPROACH TO SELF-ASSEMBLY AND ELASTIC PROPERTIES OF RANDOM COPOLYMERS
Energy Technology Data Exchange (ETDEWEB)
S. M. CHITANVIS
2000-10-01
The authors have mapped the physics of a system of random copolymers onto a time-dependent density functional-type field theory using techniques of functional integration. Time in the theory is merely a label for the location of a given monomer along the extent of a flexible chain. We derive heuristically within this approach a non-local constraint which prevents segments on chains in the system from straying too far from each other, and leads to self-assembly. The structure factor is then computed in a straightforward fashion. The long wave-length limit of the structure factor is used to obtain the elastic modulus of the network. It is shown that there is a surprising competition between the degree of micro-phase separation and the elastic moduli of the system.
A stochastic control approach to Slotted-ALOHA random access protocol
Pietrabissa, Antonio
2013-12-01
ALOHA random access protocols are distributed protocols based on transmission probabilities, that is, each node decides upon packet transmissions according to a transmission probability value. In the literature, ALOHA protocols are analysed by giving necessary and sufficient conditions for the stability of the queues of the node buffers under a control vector (whose elements are the transmission probabilities assigned to the nodes), given an arrival rate vector (whose elements represent the rates of the packets arriving in the node buffers). The innovation of this work is that, given an arrival rate vector, it computes the optimal control vector by defining and solving a stochastic control problem aimed at maximising the overall transmission efficiency, while keeping a grade of fairness among the nodes. Furthermore, a more general case in which the arrival rate vector changes in time is considered. The increased efficiency of the proposed solution with respect to the standard ALOHA approach is evaluated by means of numerical simulations.
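The classical expression underlying such stability and efficiency analyses is easy to verify numerically: with n symmetric nodes each transmitting with probability p, a slot carries a successful packet only when exactly one node transmits. The sketch below evaluates that throughput at the well-known optimum p = 1/n (which approaches 1/e for large n); the node count is illustrative.

```python
def slotted_aloha_throughput(n_nodes, p):
    """Expected per-slot throughput of slotted ALOHA with n symmetric
    nodes, each transmitting with probability p: a slot succeeds only
    when exactly one of the n nodes transmits."""
    return n_nodes * p * (1 - p) ** (n_nodes - 1)

# The throughput-maximising probability is p = 1/n, giving ~1/e ~ 0.37
# for large n.
n = 20
print(f"throughput at p=1/n: {slotted_aloha_throughput(n, 1 / n):.3f}")
```

A stochastic-control formulation, as in the paper, replaces this single symmetric probability with a per-node control vector optimized for efficiency and fairness under a given arrival rate vector.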
Pinto, Miguel; Antelo, Minia; Ferreira, Rita; Azevedo, Jacinta; Santo, Irene; Borrego, Maria José; Gomes, João Paulo
2017-03-01
Syphilis is the sexually transmitted disease caused by Treponema pallidum, a pathogen highly adapted to the human host. As a multistage disease, syphilis presents distinct clinical manifestations that pose different implications for diagnosis. Nevertheless, the inherent factors leading to diverse disease progressions are still unknown. We aimed to assess the association between treponemal loads and dissimilar disease outcomes, to better understand syphilis. We retrospectively analyzed 309 DNA samples from distinct anatomic sites associated with particular syphilis manifestations. All samples had previously tested positive by a PCR-based diagnostic kit. An absolute quantitative real-time PCR procedure was used to precisely quantify the number of treponemal and human cells to determine T. pallidum loads in each sample. In general, lesion exudates presented the highest T. pallidum loads in contrast with blood-derived samples. Within the latter, a higher dispersion of T. pallidum quantities was observed for secondary syphilis. T. pallidum was detected in substantial amounts in 37 samples of seronegative individuals and in 13 cases considered as syphilis-treated. No association was found between treponemal loads and serological results or HIV status. This study suggests a scenario where syphilis may be characterized by: i) heterogeneous and high treponemal loads in primary syphilis, regardless of the anatomic site, reflecting dissimilar duration of chancres development and resolution; ii) high dispersion of bacterial concentrations in secondary syphilis, potentially suggesting replication capability of T. pallidum while in the bloodstream; and iii) bacterial evasiveness, either to the host immune system or antibiotic treatment, while remaining hidden in privileged niches. This work highlights the importance of using molecular approaches to study uncultivable human pathogens, such as T. pallidum, in the infection process. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Preksedis M. Ndomba
2008-01-01
Full Text Available This paper presents preliminary findings on the adequacy of one hydrological year of sampling programme data for developing an excellent sediment rating curve. The study case is the 1DD1 subcatchment in the upstream of the Pangani River Basin (PRB), located in the north-eastern part of Tanzania. 1DD1 is the major runoff-sediment contributing tributary to the downstream hydropower reservoir, the Nyumba Ya Mungu (NYM). In the literature the sediment rating curve method is known to underestimate the actual sediment load. In the case of developing countries, long-term sediment sampling monitoring or conservation campaigns have been reported as unworkable options. Besides, to the best knowledge of the authors, to date there is no consensus on how to develop an excellent rating curve. Daily midway and intermittent cross-section sediment samples from a depth-integrating sampler (D-74) were used to calibrate the near-bank point samples of the subdaily automatic sediment pumping sampler (ISCO 6712) for developing the rating curve. Sediment load correction factors were derived from both statistical bias estimators and actual sediment load approaches. It should be noted that the ongoing study is guided by findings of other studies in the same catchment. For instance, the long-term sediment yield rate estimated from a reservoir survey validated the performance of the developed rating curve. The result suggests that an excellent rating curve could be developed from one hydrological year of sediment sampling programme data. This study has also found that an uncorrected rating curve underestimates sediment load. The degree of underestimation depends on the type of rating curve developed and the data used.
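A sediment rating curve of the usual power-law form, Qs = a·Q^b, is fit by least squares in log-log space; the back-transformation to real units is biased low (one reason uncorrected curves underestimate load), and a standard statistical correction multiplies by exp(s²/2) where s² is the residual variance of natural-log residuals. The sketch below illustrates this on synthetic data; the coefficients, scatter, and record length are illustrative, and the correction shown is the lognormal back-transformation factor, not necessarily the specific estimators used in the study.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_rating_curve(discharge, sediment):
    """Fit Qs = a * Q**b by log-log least squares and return (a, b, cf),
    where cf = exp(s^2/2) is the lognormal back-transformation bias
    correction from the residual variance s^2 in log space."""
    logq, logs = np.log(discharge), np.log(sediment)
    b, log_a = np.polyfit(logq, logs, 1)
    resid = logs - (log_a + b * logq)
    s2 = resid.var(ddof=2)        # residual variance (2 fitted params)
    cf = np.exp(s2 / 2.0)         # multiply predictions by cf
    return np.exp(log_a), b, cf

# Synthetic one-year record: true curve 0.5 * Q**1.8 with lognormal scatter.
q = rng.uniform(1, 100, size=365)
qs_obs = 0.5 * q ** 1.8 * np.exp(rng.normal(0, 0.4, size=365))

a, b, cf = fit_rating_curve(q, qs_obs)
print(f"a={a:.2f}, b={b:.2f}, correction factor={cf:.2f}")
```

The correction factor exceeds 1 whenever there is scatter about the curve, which is precisely why the uncorrected curve underestimates total load.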
Burger, Rulof P; McLaren, Zoë M
2017-09-01
The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.
Amirabadizadeh, Alireza; Nezami, Hossein; Vaughn, Michael G; Nakhaee, Samaneh; Mehrpour, Omid
2017-11-27
Substance abuse exacts considerable social and health care burdens throughout the world. The aim of this study was to create a prediction model to better identify risk factors for drug use. A prospective cross-sectional study was conducted in South Khorasan Province, Iran. Of the total of 678 eligible subjects, 70% (n = 474) were randomly selected to provide a training set for constructing decision tree and multiple logistic regression (MLR) models. The remaining 30% (n = 204) were employed as a holdout sample to test the performance of the decision tree and MLR models. Predictive performance of the different models was analyzed by the receiver operating characteristic (ROC) curve using the testing set. Independent variables were selected from demographic characteristics and history of drug use. For the decision tree model, the sensitivity and specificity for identifying people at risk for drug abuse were 66% and 75%, respectively, while the MLR model was somewhat less effective at 60% and 73%. Key independent variables in the analyses included first substance experience, age at first drug use, age, place of residence, history of cigarette use, and occupational and marital status. While the study findings are exploratory and lack generalizability, they do suggest that the decision tree model holds promise as an effective classification approach for identifying risk factors for drug use. Convergent with prior research in Western contexts is that age of drug use initiation was a critical factor predicting a substance use disorder.
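The holdout evaluation described above reduces, for each model, to a confusion matrix over the 204 test cases. A minimal sketch with hypothetical 0/1 labels (not the study's data):

```python
def sensitivity_specificity(actual, predicted):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP).
    Labels: 1 = at risk for drug abuse, 0 = not at risk."""
    tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))
    tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical holdout labels and classifier predictions:
actual    = [1, 1, 1, 0, 0, 0, 1, 0]
predicted = [1, 1, 0, 0, 1, 0, 1, 0]
sens, spec = sensitivity_specificity(actual, predicted)
```

Sweeping a classifier's decision threshold and recomputing these two quantities traces out the ROC curve the abstract refers to.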
The effect of sampling rate and lowpass filters on saccades - A modeling approach.
Mack, David J; Belfanti, Sandro; Schwarz, Urs
2017-01-27
The study of eye movements has become popular in many fields of science. However, using the preprocessed output of an eye tracker without scrutiny can lead to low-quality or even erroneous data. For example, the sampling rate of the eye tracker influences saccadic peak velocity, while inadequate filters fail to suppress noise or introduce artifacts. Despite previously published guiding values, most filter choices still seem motivated by a trial-and-error approach, and a thorough analysis of filter effects is missing. Therefore, we developed a simple and easy-to-use saccade model that incorporates measured amplitude-velocity main sequences and produces saccades with a similar frequency content to real saccades. We also derived a velocity divergence measure to rate deviations between velocity profiles. In total, we simulated 155 saccades ranging from 0.5° to 60° and subjected them to different sampling rates, noise compositions, and various filter settings. The final goal was to compile a list with the best filter settings for each of these conditions. Replicating previous findings, we observed reduced peak velocities at lower sampling rates. However, this effect was highly non-linear over amplitudes and increasingly stronger for smaller saccades. Interpolating the data to a higher sampling rate significantly reduced this effect. We hope that our model and the velocity divergence measure will be used to provide a quickly accessible ground truth without the need for recording and manually labeling saccades. The comprehensive list of filters allows one to choose the correct filter for analyzing saccade data without resorting to trial-and-error methods.
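The sampling-rate effect on measured peak velocity can be illustrated with a toy profile. The Gaussian bump below is a hypothetical stand-in for the model's main-sequence-based waveform, and all parameter values are assumptions for demonstration:

```python
import math

def velocity(t, peak=300.0, t0=0.025, sigma=0.008):
    """Hypothetical saccadic velocity profile (deg/s): a Gaussian bump
    standing in for a measured main-sequence waveform."""
    return peak * math.exp(-((t - t0) ** 2) / (2 * sigma ** 2))

def sampled_peak(rate_hz, duration=0.05, offset=0.0):
    """Peak velocity as seen by an eye tracker sampling at rate_hz."""
    n = int(duration * rate_hz)
    return max(velocity(offset + i / rate_hz) for i in range(n))

# A 1000-Hz tracker lands on (or very near) the true 300 deg/s peak;
# a 50-Hz tracker whose samples straddle the peak reports a lower value.
hi = sampled_peak(1000.0)
lo = sampled_peak(50.0, offset=0.009)
```

Because small saccades have shorter, sharper velocity profiles, the same straddling error removes a larger fraction of the peak, which is consistent with the non-linear, amplitude-dependent attenuation the authors report.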
Leung, Michael; Bassani, Diego G; Racine-Poon, Amy; Goldenberg, Anna; Ali, Syed Asad; Kang, Gagandeep; Premkumar, Prasanna S; Roth, Daniel E
2017-09-10
Conditioning child growth measures on baseline accounts for regression to the mean (RTM). Here, we present the "conditional random slope" (CRS) model, based on a linear-mixed effects model that incorporates a baseline-time interaction term that can accommodate multiple data points for a child while also directly accounting for RTM. In two birth cohorts, we applied five approaches to estimate child growth velocities from 0 to 12 months to assess the effect of increasing data density (number of measures per child) on the magnitude of RTM of unconditional estimates, and the correlation and concordance between the CRS and four alternative metrics. Further, we demonstrated the differential effect of the choice of velocity metric on the magnitude of the association between infant growth and stunting at 2 years. RTM was minimally attenuated by increasing data density for unconditional growth modeling approaches. CRS and classical conditional models gave nearly identical estimates with two measures per child. Compared to the CRS estimates, unconditional metrics had moderate correlation (r = 0.65-0.91), but poor agreement in the classification of infants with relatively slow growth (kappa = 0.38-0.78). Estimates of the velocity-stunting association were the same for CRS and classical conditional models but differed substantially between conditional versus unconditional metrics. The CRS can leverage the flexibility of linear mixed models while addressing RTM in longitudinal analyses. © 2017 The Authors American Journal of Human Biology Published by Wiley Periodicals, Inc.
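The "classical conditional" metric that the CRS nearly matches with two measures per child can be sketched directly: regress the 12-month measure on baseline and take residuals, so each child's score is growth relative to expectation given where they started, which is exactly what accounts for regression to the mean. A generic illustration with made-up lengths in cm, not the cohort data:

```python
def conditional_growth(baseline, followup):
    """Classical conditional growth scores: residuals from an OLS fit of
    the follow-up measure on the baseline measure. Faster-than-expected
    growers get positive scores; RTM is accounted for by construction."""
    n = len(baseline)
    mx = sum(baseline) / n
    my = sum(followup) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(baseline, followup))
         / sum((x - mx) ** 2 for x in baseline))
    a = my - b * mx
    return [y - (a + b * x) for x, y in zip(baseline, followup)]

# Hypothetical lengths (cm) at 0 and 12 months for four infants:
baseline = [60.0, 62.0, 64.0, 66.0]
followup = [75.0, 76.0, 80.0, 79.0]
scores = conditional_growth(baseline, followup)
```

The CRS generalizes this idea to a linear mixed model with a baseline-time interaction, so more than two measures per child can be used.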
Development and Testing of Harpoon-Based Approaches for Collecting Comet Samples (Video Supplement)
Purves, Lloyd (Compiler); Nuth, Joseph (Compiler); Amatucci, Edward (Compiler); Wegel, Donald; Smith, Walter; Leary, James; Kee, Lake; Hill, Stuart; Grebenstein, Markus; Voelk, Stefan;
2017-01-01
This video supplement contains a set of videos created during the approximately 10-year-long course of developing and testing the Goddard Space Flight Center (GSFC) harpoon-based approach for collecting comet samples. The purpose of the videos is to illustrate various design concepts used in this method of acquiring samples of comet material, the testing used to verify the concepts, and the evolution of the designs and testing. To play the videos, this PDF needs to be opened in the freeware Adobe Reader; they do not seem to play while within a browser. While this supplement can be used as a stand-alone document, it is intended to augment its parent document of the same title, Development and Testing of Harpoon-Based Approaches for Collecting Comet Samples (NASA/CR-2017-219018; this document is accessible from the website: https://ssed.gsfc.nasa.gov/harpoon/SAS_Paper-V1.pdf). The parent document, which contains only text and figures, describes the overall development and testing effort and contains references to each of the videos in this supplement. Thus, the videos are primarily intended to augment the information provided by the text and figures in the parent document. This approach was followed to allow the file size of the parent document to remain small enough to facilitate downloading and storage. Some of the videos were created by other organizations, the Johns Hopkins University Applied Physics Laboratory (JHU APL) and the German Aerospace Center (Deutsches Zentrum für Luft- und Raumfahrt, DLR), which are partnering with GSFC on developing this technology. Each video is accompanied by text that provides a summary description of its nature and purpose, as well as the identity of the authors. All videos have been edited to show only key parts of the testing. Although not all videos have sound, the sound has been retained in those that have it. Also, each video has been given one or more title screens to clarify what is going on in different phases of the video.
High Field In Vivo 13C Magnetic Resonance Spectroscopy of Brain by Random Radiofrequency Heteronuclear Decoupling and Data Sampling
Li, Ningzhi; Li, Shizhe; Shen, Jun
2017-06-01
In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radiofrequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectral range of 10-65 ppm. The amplitude of decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH = 125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under an inhomogeneous B1 field. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields in order to keep RF power deposition within the safety limits set by the US Food and Drug Administration (FDA). Alternatively, carboxylic/amide carbons are coupled to protons via weak long-range 1H-13C scalar couplings, which can be decoupled using low-power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low-power random RF heteronuclear decoupling was safely applied to human brain studies at 7T. Here, we review the two major decoupling methods and carboxylic/amide 13C-MRS with the low-power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. Low RF power decoupling opens the possibility of performing in vivo 13C experiments on the human brain at very high magnetic fields (such as 11.7T), where the signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than at lower fields.
Directory of Open Access Journals (Sweden)
Nguyen Phuong H
2012-10-01
Full Text Available Abstract Background Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned, given that many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The PRECONCEPT trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of the study design, methodology and sample characteristics from baseline survey data, and key lessons learned. Methods/design We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either: (1) 2800 μg folic acid; (2) 60 mg iron and 2800 μg folic acid; or (3) MM. Women who become pregnant receive daily IFA and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion The study is timely and responds to the WHO Global Expert Consultation, which identified the need to evaluate the long-term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and
Ethics and law in research with human biological samples: a new approach.
Petrini, Carlo
2014-01-01
During the last century a large number of documents (regulations, ethical codes, treatises, declarations, conventions) were published on the subject of ethics and clinical trials, many of them focusing on the protection of research participants. More recently various proposals have been put forward to relax some of the constraints imposed on research by these documents and regulations. It is important to distinguish between risks deriving from direct interventions on human subjects and other types of risk. In Italy the Data Protection Authority has acted in the question of research using previously collected health data and biological samples to simplify the procedures regarding informed consent. The new approach may be of help to other researchers working outside Italy.
van Leth, Frank; den Heijer, Casper; Beerepoot, Mariëlle; Stobberingh, Ellen; Geerlings, Suzanne; Schultsz, Constance
2017-04-01
Increasing antimicrobial resistance (AMR) requires rapid surveillance tools, such as Lot Quality Assurance Sampling (LQAS). LQAS classifies AMR as high or low based on set parameters. We compared classifications with the underlying true AMR prevalence using data on 1335 Escherichia coli isolates from surveys of community-acquired urinary tract infection in women, by assessing operating curves, sensitivity and specificity. Sensitivity and specificity of any set of LQAS parameters was above 99% and between 79 and 90%, respectively. Operating curves showed high concordance of the LQAS classification with true AMR prevalence estimates. LQAS-based AMR surveillance is a feasible approach that provides timely and locally relevant estimates, and the necessary information to formulate and evaluate guidelines for empirical treatment.
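The LQAS operating characteristics compared above follow directly from the binomial distribution. The sketch below uses a hypothetical design, not the study's actual parameter sets: n = 20 isolates, decision threshold d = 4, "high" AMR defined as prevalence ≥ 40% and "low" as ≤ 10%.

```python
from math import comb

def prob_classified_high(n, d, p):
    """P(more than d resistant isolates in a sample of n | true prevalence p).
    The LQAS rule classifies AMR as 'high' when the count exceeds d."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(d + 1, n + 1))

# Hypothetical design: n = 20, d = 4, high >= 0.40, low <= 0.10.
n, d = 20, 4
sensitivity = prob_classified_high(n, d, 0.40)      # flag truly high AMR
specificity = 1 - prob_classified_high(n, d, 0.10)  # pass truly low AMR
```

Evaluating `prob_classified_high` over a grid of prevalences traces the operating curve used to judge concordance between the classification and the true prevalence.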
A nuclear reload optimization approach using a real coded genetic algorithm with random keys
Energy Technology Data Exchange (ETDEWEB)
Lima, Alan M.M. de; Schirru, Roberto; Medeiros, Jose A.C.C., E-mail: alan@lmp.ufrj.b, E-mail: schirru@lmp.ufrj.b, E-mail: canedo@lmp.ufrj.b [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-Graduacao de Engenharia. Programa de Engenharia Nuclear
2009-07-01
The fuel reload of a pressurized water reactor is performed whenever the burnup of the fuel assemblies in the core of the reactor reaches a value at which it is no longer possible to maintain a critical reactor producing energy at nominal power. The fuel reload optimization problem consists of determining the positioning of the fuel assemblies within the core in a way that minimizes the cost-benefit relationship of fuel assembly cost per maximum burnup, while also satisfying symmetry and safety restrictions. The difficulty of the fuel reload optimization problem grows exponentially with the number of fuel assemblies in the core. For decades the problem was solved manually by experts who used their knowledge and experience to build configurations of the reactor core and test them to verify that the safety restrictions of the plant were satisfied. To reduce this burden, several optimization techniques have been used, including the binary-coded genetic algorithm. In this work we show the use of a real-valued coded approach to the genetic algorithm, with different recombination methods, together with a transformation mechanism called random keys, which transforms the real values of the genes of each chromosome into a combination of discrete fuel assemblies for evaluation of the reload optimization. Four different recombination methods were tested: discrete recombination, intermediate recombination, linear recombination and extended linear recombination. For each of the four recombination methods, 10 tests using different seeds for the random number generator were conducted, totaling 40 tests. The results of applying the genetic algorithm with the real-number formulation are shown for the nuclear reload problem of the Angra 1 PWR plant. Since the best results in the literature for this problem were found by the parallel PSO, we use it for comparison.
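The random-keys mechanism itself is compact: sorting a chromosome of real-valued genes induces a permutation, so any real vector decodes to a valid discrete loading pattern. A minimal sketch with a hypothetical six-assembly core (the actual Angra 1 problem involves many more assemblies plus symmetry and safety constraints):

```python
def decode_random_keys(keys, assemblies):
    """Random-keys decoding: the argsort of the real-valued genes gives a
    permutation that maps the chromosome to a discrete loading pattern."""
    order = sorted(range(len(keys)), key=lambda i: keys[i])
    return [assemblies[i] for i in order]

# Hypothetical 6-assembly core. Any real-valued chromosome decodes to a
# valid permutation, so standard real-valued crossover operators can never
# produce an infeasible pattern with duplicated assemblies.
assemblies = ["FA1", "FA2", "FA3", "FA4", "FA5", "FA6"]
chromosome = [0.42, 0.07, 0.93, 0.55, 0.18, 0.71]
pattern = decode_random_keys(chromosome, assemblies)
```

This is precisely why random keys pair well with the real-valued recombination operators tested in the paper: crossover acts on the keys, and feasibility is restored for free at decoding time.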
Craig, Benjamin M; Busschbach, Jan Jv
2009-01-13
To present an episodic random utility model that unifies time trade-off and discrete choice approaches in health state valuation. First, we introduce two alternative random utility models (RUMs) for health preferences: the episodic RUM and the more common instant RUM. For the interpretation of time trade-off (TTO) responses, we show that the episodic model implies a coefficient estimator, and the instant model implies a mean slope estimator. Secondly, we demonstrate these estimators and the differences between the estimates for 42 health states using TTO responses from the seminal Measurement and Valuation in Health (MVH) study conducted in the United Kingdom. Mean slopes are estimated with and without Dolan's transformation of worse-than-death (WTD) responses. Finally, we demonstrate an exploded probit estimator, an extension of the coefficient estimator for discrete choice data that accommodates both TTO and rank responses. By construction, mean slopes are less than or equal to coefficients, because slopes are fractions and, therefore, magnify downward errors in WTD responses. The Dolan transformation of WTD responses causes mean slopes to increase in similarity to coefficient estimates, yet they are not equivalent (i.e., absolute mean difference = 0.179). Unlike mean slopes, coefficient estimates demonstrate strong concordance with rank-based predictions (Lin's rho = 0.91). Combining TTO and rank responses under the exploded probit model improves the identification of health state values, decreasing the average width of confidence intervals from 0.057 to 0.041 compared to TTO-only results. The episodic RUM expands upon the theoretical framework underlying health state valuation and contributes to health econometrics by motivating the selection of coefficient and exploded probit estimators for the analysis of TTO and rank responses. In future MVH surveys, sample size requirements may be reduced through the incorporation of multiple responses under a single
Directory of Open Access Journals (Sweden)
Alanis Kelly L
2006-02-01
Full Text Available Abstract Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance that has long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for the licit substances alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and the risk potential for their developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens on their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test illustrated that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure
Ginsburg, Harvey J; Raffeld, Paul; Alanis, Kelly L; Boyce, Angela S
2006-01-01
Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance that has long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for the licit substances alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular "crack baby" stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and the risk potential for their developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens on their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test illustrated that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure than for the non-exposed teen
A Quantitative Proteomics Approach to Clinical Research with Non-Traditional Samples.
Licier, Rígel; Miranda, Eric; Serrano, Horacio
2016-10-17
The proper handling of samples to be analyzed by mass spectrometry (MS) can guarantee excellent results and a greater depth of analysis when working in quantitative proteomics. This is critical when trying to assess non-traditional sources such as ear wax, saliva, vitreous humor, aqueous humor, tears, nipple aspirate fluid, breast milk/colostrum, cervical-vaginal fluid, nasal secretions, bronchoalveolar lavage fluid, and stools. We intend to provide the investigator with relevant aspects of quantitative proteomics and to recognize the most recent clinical research work conducted with atypical samples and analyzed by quantitative proteomics. Having as reference the most recent and different approaches used with non-traditional sources allows us to compare new strategies in the development of novel experimental models. On the other hand, these references help us to contribute significantly to the understanding of the proportions of proteins in different proteomes of clinical interest and may lead to potential advances in the emerging field of precision medicine.
A Quantitative Proteomics Approach to Clinical Research with Non-Traditional Samples
Licier, Rígel; Miranda, Eric; Serrano, Horacio
2016-01-01
The proper handling of samples to be analyzed by mass spectrometry (MS) can guarantee excellent results and a greater depth of analysis when working in quantitative proteomics. This is critical when trying to assess non-traditional sources such as ear wax, saliva, vitreous humor, aqueous humor, tears, nipple aspirate fluid, breast milk/colostrum, cervical-vaginal fluid, nasal secretions, bronchoalveolar lavage fluid, and stools. We intend to provide the investigator with relevant aspects of quantitative proteomics and to recognize the most recent clinical research work conducted with atypical samples and analyzed by quantitative proteomics. Having as reference the most recent and different approaches used with non-traditional sources allows us to compare new strategies in the development of novel experimental models. On the other hand, these references help us to contribute significantly to the understanding of the proportions of proteins in different proteomes of clinical interest and may lead to potential advances in the emerging field of precision medicine. PMID:28248241
Naccarato, Attilio; Pawliszyn, Janusz
2016-09-01
This work proposes the novel PDMS/DVB/PDMS fiber as a greener strategy for analysis by direct immersion solid phase microextraction (SPME) in vegetables. SPME is an established sample preparation approach that has not yet been adequately explored for food analysis in direct immersion mode due to the limitations of the available commercial coatings. The robustness and endurance of this new coating were investigated by direct immersion extractions in raw blended vegetables without any further sample preparation steps. The PDMS/DVB/PDMS coating exhibited superior features related to the capability of the external PDMS layer to protect the commercial coating, and showed improvements in terms of extraction capability and in the cleanability of the coating surface. In addition to having contributed to the recognition of the superior features of this new fiber concept before commercialization, the outcomes of this work serve to confirm advancements in the matrix compatibility of the PDMS-modified fiber, and open new prospects for the development of greener high-throughput analytical methods in food analysis using solid phase microextraction in the near future. Copyright © 2016 Elsevier Ltd. All rights reserved.
Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao
2014-10-07
In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
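The weighted binary matrix sampling step can be sketched on its own: each row of the matrix is a sub-model, and each variable enters it with probability equal to its current weight, so higher-weighted variables appear in more sub-models. The weights and dimensions below are illustrative, not taken from the paper:

```python
import random

def weighted_binary_matrix_sampling(weights, n_submodels, seed=0):
    """WBMS sketch: draw a binary inclusion matrix whose rows are
    sub-models; variable j is included with probability weights[j]."""
    rng = random.Random(seed)
    return [[1 if rng.random() < w else 0 for w in weights]
            for _ in range(n_submodels)]

# Hypothetical weights after a few shrinkage steps: variable 0 is nearly
# always kept, variable 3 is nearly eliminated from the variable space.
weights = [0.95, 0.60, 0.50, 0.05]
matrix = weighted_binary_matrix_sampling(weights, n_submodels=1000)
freq = [sum(col) / 1000 for col in zip(*matrix)]
```

In VISSA the weights are then updated from the performance of the best sub-models, which is what makes the variable space shrink from step to step.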
Tulpan, Dan; Regoui, Chaouki; Durand, Guillaume; Belliveau, Luc; Léger, Serge
2013-01-01
This paper presents a novel hybrid DNA encryption (HyDEn) approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
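The key-seeded randomized assignment at the heart of the scheme can be sketched as follows. This shows only the assignment idea: the 4-base words below cover all 256 extended-ASCII values but are plain quaternary words, not the error-correcting Hamming code words HyDEn actually uses, and the cyclic permutation stage is omitted.

```python
import random

BASES = "ACGT"

def build_codebook(key, length=4):
    """Key-seeded randomized assignment of quaternary DNA words to bytes.
    4**4 = 256 words, one per extended-ASCII value."""
    words = [''.join(BASES[(i >> (2 * p)) & 3] for p in range(length))
             for i in range(4 ** length)]
    rng = random.Random(key)     # the private key seeds the assignment
    rng.shuffle(words)
    return {byte: words[byte] for byte in range(256)}

def encode(text, key):
    book = build_codebook(key)
    return ''.join(book[ord(c)] for c in text)

def decode(dna, key):
    inverse = {w: b for b, w in build_codebook(key).items()}
    return ''.join(chr(inverse[dna[i:i + 4]]) for i in range(0, len(dna), 4))
```

Without the key, the codeword-to-character mapping is one of 256! possible assignments, which is what gives the randomized assignment its steganographic value.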
Directory of Open Access Journals (Sweden)
Dan Tulpan
2013-01-01
Full Text Available This paper presents a novel hybrid DNA encryption (HyDEn approach that uses randomized assignments of unique error-correcting DNA Hamming code words for single characters in the extended ASCII set. HyDEn relies on custom-built quaternary codes and a private key used in the randomized assignment of code words and the cyclic permutations applied on the encoded message. Along with its ability to detect and correct errors, HyDEn equals or outperforms existing cryptographic methods and represents a promising in silico DNA steganographic approach.
Methodologies for the Extraction of Phenolic Compounds from Environmental Samples: New Approaches
Directory of Open Access Journals (Sweden)
Cristina Mahugo Santana
2009-01-01
Full Text Available Phenolic derivatives are among the most important contaminants present in the environment. These compounds are used in several industrial processes to manufacture chemicals such as pesticides, explosives, drugs and dyes. They are also used in the bleaching process of paper manufacturing. Apart from these sources, phenolic compounds have substantial applications in agriculture as herbicides, insecticides and fungicides. However, phenolic compounds are not only generated by human activity; they are also formed naturally, e.g., during the decomposition of leaves or wood. As a result of these applications, they are found in soils and sediments, and this often leads to wastewater and ground water contamination. Owing to their high toxicity and persistence in the environment, both the US Environmental Protection Agency (EPA) and the European Union have included some of them in their lists of priority pollutants. Current standard methods for the analysis of phenolic compounds in water samples are based on liquid-liquid extraction (LLE), while Soxhlet extraction is the most used technique for isolating phenols from solid matrices. However, these techniques require extensive cleanup procedures that are time-intensive and involve expensive and hazardous organic solvents, which are undesirable for health and disposal reasons. In recent years, the use of new methodologies such as solid-phase extraction (SPE) and solid-phase microextraction (SPME) has increased for the extraction of phenolic compounds from liquid samples. In the case of solid samples, microwave-assisted extraction (MAE) has been demonstrated to be an efficient technique for the extraction of these compounds. In this work we review the methods developed for the extraction and determination of phenolic derivatives in different types of environmental matrices such as water, sediments and soils. Moreover, we present a new approach in the use of micellar media coupled with the SPME process for the
Methodologies for the extraction of phenolic compounds from environmental samples: new approaches.
Mahugo Santana, Cristina; Sosa Ferrera, Zoraida; Esther Torres Padrón, M; Juan Santana Rodríguez, José
2009-01-09
Phenolic derivatives are among the most important contaminants present in the environment. These compounds are used in several industrial processes to manufacture chemicals such as pesticides, explosives, drugs and dyes. They also are used in the bleaching process of paper manufacturing. Apart from these sources, phenolic compounds have substantial applications in agriculture as herbicides, insecticides and fungicides. However, phenolic compounds are not only generated by human activity, but they are also formed naturally, e.g., during the decomposition of leaves or wood. As a result of these applications, they are found in soils and sediments and this often leads to wastewater and ground water contamination. Owing to their high toxicity and persistence in the environment, both the US Environmental Protection Agency (EPA) and the European Union have included some of them in their lists of priority pollutants. Current standard methods of phenolic compounds analysis in water samples are based on liquid-liquid extraction (LLE) while Soxhlet extraction is the most used technique for isolating phenols from solid matrices. However, these techniques require extensive cleanup procedures that are time-intensive and involve expensive and hazardous organic solvents, which are undesirable for health and disposal reasons. In recent years, the use of new methodologies such as solid-phase extraction (SPE) and solid-phase microextraction (SPME) has increased for the extraction of phenolic compounds from liquid samples. In the case of solid samples, microwave-assisted extraction (MAE) has been demonstrated to be an efficient technique for the extraction of these compounds. In this work we review the methods developed for the extraction and determination of phenolic derivatives in different types of environmental matrices such as water, sediments and soils. Moreover, we present the new approach of using micellar media coupled with the SPME process for the extraction of phenolic
Ahluwalia, N; Ferrières, J; Dallongeville, J; Simon, C; Ducimetière, P; Amouyel, P; Arveiler, D; Ruidavets, J-B
2009-04-01
Diet is considered an important modifiable factor in overweight. The role of macronutrients in obesity has been examined in general in selected populations, but the results of these studies are mixed, depending on the potential confounders and adjustments for other macronutrients. For this reason, we examined the association between macronutrient intake patterns and being overweight in a population-based representative sample of middle-aged (55.1+/-6.1 years) men (n=966), using various adjustment modalities. The study subjects kept 3-day food-intake records, and the standard cardiovascular risk factors were assessed. Weight, height and waist circumference (WC) were also measured. Carbohydrate intake was negatively associated and fat intake was positively associated with body mass index (BMI) and WC in regression models adjusted for energy intake and other factors, including age, smoking and physical activity. However, with mutual adjustments for other energy-yielding nutrients, the negative association of carbohydrate intake with WC remained significant, whereas the associations between fat intake and measures of obesity did not. Adjusted odds ratios (95% confidence interval) comparing the highest and lowest quartiles of carbohydrate intake were 0.50 (0.25-0.97) for obesity (BMI>29.9) and 0.41 (0.23-0.73) for abdominal obesity (WC>101.9 cm). Consistent negative associations between carbohydrate intake and BMI and WC were seen in this random representative sample of the general male population. The associations between fat intake and these measures of being overweight were attenuated on adjusting for carbohydrate intake. Thus, the balance of carbohydrate-to-fat intake is an important element in obesity in a general male population, and should be highlighted in dietary guidelines.
Pennaforte, Thomas; Moussa, Ahmed; Loye, Nathalie; Charlin, Bernard; Audétat, Marie-Claude
2016-02-17
Helping trainees develop appropriate clinical reasoning abilities is a challenging goal in an environment where clinical situations are marked by high levels of complexity and unpredictability. The benefit of simulation-based education for assessing clinical reasoning skills has rarely been reported. More specifically, it is unclear whether clinical reasoning is better acquired if the instructor's input occurs entirely after the scenario or is integrated during it. Based on educational principles of the dual-process theory of clinical reasoning, a new simulation approach called simulation with iterative discussions (SID) is introduced. The instructor interrupts the flow of the scenario at three key moments of the reasoning process (data gathering, integration, and confirmation). After each stop, the scenario is continued where it was interrupted. Finally, a brief general debriefing ends the session. The System-1 process of clinical reasoning is assessed by verbalization during management of the case, and System-2 during the iterative discussions, without providing feedback. The aim of this study is to evaluate the effectiveness of simulation with iterative discussions versus the classical approach to simulation in developing the reasoning skills of General Pediatrics and Neonatal-Perinatal Medicine residents. This will be a prospective, exploratory, randomized study conducted at Sainte-Justine hospital in Montreal, Qc, between January and March 2016. All post-graduate year (PGY) 1 to 6 residents will be invited to complete either one SID or one classical simulation session: a 30-minute, audio/video-recorded, complex high-fidelity simulation covering a similar neonatology topic. Pre- and post-simulation questionnaires will be completed and a semistructured interview will be conducted after each simulation. Data analyses will use SPSS and NVivo software. This study is in its preliminary stages and the results are expected to be made available by April, 2016. This will be the first study to explore a new
Gage, S H; Jones, H J; Burgess, S; Bowden, J; Davey Smith, G; Zammit, S; Munafò, M R
2017-04-01
Observational associations between cannabis and schizophrenia are well documented, but ascertaining causation is more challenging. We used Mendelian randomization (MR), utilizing publicly available data, as a method for ascertaining causation from observational data. We performed bi-directional two-sample MR using summary-level genome-wide data from the International Cannabis Consortium (ICC) and the Psychiatric Genomics Consortium (PGC2). Single nucleotide polymorphisms (SNPs) associated with cannabis initiation and with schizophrenia were selected as instruments. There was some evidence consistent with a causal effect of cannabis initiation on risk of schizophrenia [odds ratio (OR) 1.04 per doubling odds of cannabis initiation, 95% confidence interval (CI) 1.01-1.07, p = 0.019]. There was strong evidence consistent with a causal effect of schizophrenia risk on likelihood of cannabis initiation (OR 1.10 per doubling of the odds of schizophrenia, 95% CI 1.05-1.14, p = 2.64 × 10⁻⁵). Findings were as predicted for the negative control (height: OR 1.00, 95% CI 0.99-1.01, p = 0.90) but weaker than predicted for the positive control (years in education: OR 0.99, 95% CI 0.97-1.00, p = 0.066) analyses. Our results provide some evidence that cannabis initiation increases the risk of schizophrenia, although the size of the causal estimate is small. We find stronger evidence that schizophrenia risk predicts cannabis initiation, possibly because genetic instruments for schizophrenia are stronger than those for cannabis initiation.
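The two-sample MR estimate described above can be sketched with the standard inverse-variance-weighted (IVW) combination of per-SNP Wald ratios. The summary statistics below are illustrative placeholders, not values from the ICC or PGC2 consortia.

```python
# Sketch of a two-sample Mendelian randomization estimate using
# inverse-variance-weighted (IVW) Wald ratios from GWAS summary statistics.
# All effect sizes below are hypothetical, for illustration only.

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Combine per-SNP Wald ratios (beta_outcome / beta_exposure),
    weighted by the first-order inverse variance of each ratio."""
    ratios, weights = [], []
    for bx, by, se in zip(beta_exposure, beta_outcome, se_outcome):
        ratios.append(by / bx)
        # First-order variance of the Wald ratio is (se_outcome / bx)^2,
        # so the inverse-variance weight is (bx / se_outcome)^2.
        weights.append((bx / se) ** 2)
    return sum(w * r for w, r in zip(weights, ratios)) / sum(weights)

# Hypothetical summary statistics for three instruments:
bx = [0.10, 0.15, 0.08]      # SNP -> exposure effects (log odds)
by = [0.004, 0.006, 0.0032]  # SNP -> outcome effects (log odds)
se = [0.001, 0.002, 0.0015]  # standard errors of the outcome effects

print(round(ivw_estimate(bx, by, se), 3))  # → 0.04
```

The returned value is a causal effect on the log-odds scale; exponentiating it gives an odds ratio comparable to those quoted in the abstract.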
Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi
2015-04-10
A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.
Messiah, Antoine; Acuna, Juan M; Castro, Grettel; de la Vega, Pura Rodríguez; Vaiva, Guillaume; Shultz, James; Neria, Yuval; De La Rosa, Mario
2014-07-01
This study examined the mental health consequences of the January 2010 Haiti earthquake on Haitians living in Miami-Dade County, Florida, 2-3 years following the event. A random-sample household survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants (N = 421) were assessed for their earthquake exposure and its impact on family, friends, and household finances; and for symptoms of posttraumatic stress disorder (PTSD), anxiety, and major depression; using standardized screening measures and thresholds. Exposure was considered as "direct" if the interviewee was in Haiti during the earthquake. Exposure was classified as "indirect" if the interviewee was not in Haiti during the earthquake but (1) family members or close friends were victims of the earthquake, and/or (2) family members were hosted in the respondent's household, and/or (3) assets or jobs were lost because of the earthquake. Interviewees who did not qualify for either direct or indirect exposure were designated as "lower" exposure. Eight percent of respondents qualified for direct exposure, and 63% qualified for indirect exposure. Among those with direct exposure, 19% exceeded threshold for PTSD, 36% for anxiety, and 45% for depression. Corresponding percentages were 9%, 22% and 24% for respondents with indirect exposure, and 6%, 14%, and 10% for those with lower exposure. A majority of Miami Haitians were directly or indirectly exposed to the earthquake. Mental health distress among them remains considerable two to three years post-earthquake.
Messiah, Antoine; Lacoste, Jérôme; Gokalsing, Erick; Shultz, James M; Rodríguez de la Vega, Pura; Castro, Grettel; Acuna, Juan M
2016-08-01
Studies on the mental health of families hosting disaster refugees are lacking. This study compares participants in households that hosted 2010 Haitian earthquake disaster refugees with their nonhost counterparts. A random sample survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants were assessed regarding their 2010 earthquake exposure and impact on family and friends and whether they hosted earthquake refugees. Using standardized scores and thresholds, they were evaluated for symptoms of three common mental disorders (CMDs): posttraumatic stress disorder, generalized anxiety disorder, and major depressive disorder (MDD). Participants who hosted refugees (n = 51) had significantly higher percentages of scores beyond thresholds for MDD than those who did not host refugees (n = 365) and for at least one CMD, after adjusting for participants' earthquake exposures and effects on family and friends. Hosting refugees from a natural disaster appears to elevate the risk for MDD and possibly other CMDs, independent of risks posed by exposure to the disaster itself. Families hosting refugees deserve special attention.
Directory of Open Access Journals (Sweden)
Troy David Querec
Detection of multiple human papillomavirus (HPV) types in the genital tract is common. Associations among HPV types may impact HPV vaccination modeling and type replacement. The objectives were to determine the distribution of concurrent HPV type infections in cervicovaginal samples and examine type-specific associations. We analyzed HPV genotyping results from 32,245 cervicovaginal specimens collected from women aged 11 to 83 years in the United States from 2001 through 2011. Statistical power was enhanced by combining 6 separate studies. Expected concurrent infection frequencies from a series of permutation models, each with increasing fidelity to the real data, were compared with the observed data. Statistics were computed based on the distributional properties of the randomized data. Concurrent detection occurred more than expected with 0 or ≥3 HPV types and less than expected with 1 and 2 types. Some women bear a disproportionate burden of the HPV type prevalence. Type associations were observed that exceeded multiple-hypothesis-corrected significance. Multiple HPV types were detected more frequently than expected by chance, and associations among particular HPV types were detected. However, vaccine-targeted types were not specifically affected, supporting the expectation that current bivalent/quadrivalent HPV vaccination will not result in type replacement with other high-risk types.
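A minimal version of the permutation-model idea can be sketched as follows: each type's positives are shuffled independently across specimens, preserving every type's marginal prevalence while breaking associations between types, and the resulting co-infection counts give the null expectation. The toy specimens below are invented, not the study's data.

```python
import random
from collections import Counter

# Sketch of a permutation null model for concurrent HPV detection.
# Each type's presence/absence column is shuffled independently, so
# marginal prevalences are preserved but type associations are destroyed.

def expected_coinfection_counts(samples, n_perm=200, seed=42):
    """Average, over permutations, of how many specimens carry
    0, 1, 2, ... types under independence."""
    rng = random.Random(seed)
    n = len(samples)
    types = sorted({t for s in samples for t in s})
    totals = Counter()
    for _ in range(n_perm):
        cols = {}
        for t in types:
            col = [t in s for s in samples]
            rng.shuffle(col)  # break associations, keep prevalence
            cols[t] = col
        counts = Counter(sum(cols[t][i] for t in types) for i in range(n))
        totals.update({k: v / n_perm for k, v in counts.items()})
    return dict(totals)

# Hypothetical genotyping results for eight specimens:
samples = [{"HPV16"}, {"HPV16", "HPV18"}, set(), {"HPV31"},
           {"HPV16", "HPV31", "HPV52"}, set(), {"HPV18"}, set()]
observed = Counter(len(s) for s in samples)
print(dict(observed), expected_coinfection_counts(samples))
```

Comparing the observed distribution of co-infection counts with this null expectation is the same contrast the study draws at much larger scale.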
Multivariate stratified sampling by stochastic multiobjective optimisation
Diaz-Garcia, Jose A.; Ramos-Quiroga, Rogelio
2011-01-01
This work considers the allocation problem for multivariate stratified random sampling as a problem of integer non-linear stochastic multiobjective mathematical programming. With this goal in mind the asymptotic distribution of the vector of sample variances is studied. Two alternative approaches are suggested for solving the allocation problem for multivariate stratified random sampling. An example is presented by applying the different proposed techniques.
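The univariate building block behind the multiobjective allocation problem is Neyman allocation, which assigns sample sizes proportional to each stratum's size times its standard deviation. The stratum sizes and standard deviations below are invented for illustration.

```python
# Sketch of Neyman allocation for stratified random sampling:
# n_h proportional to N_h * S_h. This is the classical single-response
# case; the multivariate problem treats several responses at once.

def neyman_allocation(N, S, n_total):
    """Allocate n_total sample units across strata with population
    sizes N and standard deviations S, proportional to N_h * S_h."""
    products = [n * s for n, s in zip(N, S)]
    total = sum(products)
    return [round(n_total * p / total) for p in products]

N = [400, 300, 300]    # stratum population sizes (hypothetical)
S = [10.0, 20.0, 5.0]  # stratum standard deviations (hypothetical)
print(neyman_allocation(N, S, 100))  # → [35, 52, 13]
```

The heterogeneous middle stratum receives the most units despite not being the largest, which is exactly the variance-minimizing behavior the multiobjective formulations generalize.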
Bitter, Neis A; Roeg, Diana P K; van Nieuwenhuizen, Chijs; van Weeghel, Jaap
2015-07-22
There is an increasing amount of evidence for the effectiveness of rehabilitation interventions for people with severe mental illness (SMI). In the Netherlands, a rehabilitation methodology that is well known and often applied is the Comprehensive Approach to Rehabilitation (CARe) methodology. The overall goal of the CARe methodology is to improve the client's quality of life by supporting the client in realizing his/her goals and wishes, handling his/her vulnerability and improving the quality of his/her social environment. The methodology is strongly influenced by the concept of 'personal recovery' and the 'strengths case management model'. No controlled effect studies have been conducted hitherto regarding the CARe methodology. This study is a two-armed cluster randomized controlled trial (RCT) that will be executed in teams from three organizations for sheltered and supported housing, which provide services to people with long-term severe mental illness. Teams in the intervention group will receive the multiple-day CARe methodology training from a specialized institute and start working according to the CARe methodology guideline. Teams in the control group will continue working in their usual way. Standardized questionnaires will be completed at baseline (T0), and 10 (T1) and 20 months (T2) post baseline. Primary outcomes are recovery, social functioning and quality of life. The model fidelity of the CARe methodology will be assessed at T1 and T2. This study is the first controlled effect study on the CARe methodology and one of the few RCTs on a broad rehabilitation method or strengths-based approach. This study is relevant because mental health care organizations have become increasingly interested in recovery and rehabilitation-oriented care. The trial registration number is ISRCTN77355880.
Randomized trial of a warfarin communication protocol for nursing homes: an SBAR-based approach.
Field, Terry S; Tjia, Jennifer; Mazor, Kathleen M; Donovan, Jennifer L; Kanaan, Abir O; Harrold, Leslie R; Reed, George; Doherty, Peter; Spenard, Ann; Gurwitz, Jerry H
2011-02-01
More than 1.6 million Americans currently reside in nursing homes. As many as 12% of them receive long-term anticoagulant therapy with warfarin. Prior research has demonstrated compelling evidence of safety problems with warfarin therapy in this setting, often associated with suboptimal communication between nursing home staff and prescribing physicians. We conducted a randomized trial of a warfarin management protocol using facilitated telephone communication between nurses and physicians in 26 nursing homes in Connecticut in 2007-2008. Intervention facilities received a warfarin management communication protocol using the approach "Situation, Background, Assessment, and Recommendation" (SBAR). The protocol included an SBAR template to standardize telephone communication about residents on warfarin by requiring information about the situation triggering the call, the background, the nurse's assessment, and recommendations. There were 435 residents who received warfarin therapy during the study period for 55,167 resident days in the intervention homes and 53,601 in control homes. In intervention homes, residents' international normalized ratio (INR) values were in the therapeutic range a statistically significant 4.50% more time than in control homes (95% confidence interval [CI], 0.31%-8.69%). There was no difference in obtaining a follow-up INR within 3 days after an INR value ≥4.5 (odds ratio 1.02; 95% CI, 0.44-2.4). Rates of preventable adverse warfarin-related events were lower in intervention homes, although this result was not statistically significant: the incident rate ratio for any preventable adverse warfarin-related event was .87 (95% CI, .54-1.4). Facilitated telephone communication between nurses and physicians using the SBAR approach modestly improves the quality of warfarin management for nursing home residents. Copyright © 2011 Elsevier Inc. All rights reserved.
A novel approach to assess the treatment response using Gaussian random field in PET
Energy Technology Data Exchange (ETDEWEB)
Wang, Mengdie [Department of Biomedical Engineering, Tsinghua University, Beijing 100084, China and Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Guo, Ning [Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 (United States); Hu, Guangshu; Zhang, Hui, E-mail: hzhang@mail.tsinghua.edu.cn, E-mail: li.quanzheng@mgh.harvard.edu [Department of Biomedical Engineering, Tsinghua University, Beijing 100084 (China); El Fakhri, Georges; Li, Quanzheng, E-mail: hzhang@mail.tsinghua.edu.cn, E-mail: li.quanzheng@mgh.harvard.edu [Center for Advanced Medical Imaging Science, Division of Nuclear Medicine and Molecular Imaging, Massachusetts General Hospital, Boston, Massachusetts 02114 and Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115 (United States)
2016-02-15
Purpose: The assessment of early therapeutic response to anticancer therapy is vital for treatment planning and patient management in the clinic. With the development of personalized treatment plans, assessing early treatment response, especially before any anatomically apparent changes after treatment, has become an urgent clinical need. Positron emission tomography (PET) imaging serves an important role in clinical oncology for tumor detection, staging, and therapy response assessment. Many studies on therapy response involve interpretation of differences between two PET images, usually in terms of standardized uptake values (SUVs). However, the quantitative accuracy of this measurement is limited. This work proposes a statistically robust approach for therapy response assessment based on Gaussian random field (GRF) to provide a statistically more meaningful scale to evaluate therapy effects. Methods: The authors propose a new criterion for therapeutic assessment by incorporating image noise into the traditional SUV method. An analytical method based on approximate expressions of the Fisher information matrix was applied to model the variance of individual pixels in reconstructed images. A zero-mean, unit-variance GRF under the null hypothesis (no response to therapy) was obtained by normalizing each pixel of the post-therapy image with the mean and standard deviation of the pretherapy image. The performance of the proposed method was evaluated by Monte Carlo simulation, where XCAT phantoms (128 × 128 pixels) with lesions of various diameters (2-6 mm), multiple tumor-to-background contrasts (3-10), and different changes in intensity (6.25%-30%) were used. The receiver operating characteristic curves and the corresponding areas under the curve were computed for both the proposed method and the traditional methods whose figure of merit is the percentage change of SUVs. The formula for the false positive rate (FPR) estimation was developed for the proposed therapy response
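The core normalization step can be sketched in a few lines: each post-therapy pixel is converted to a z-score using the pre-therapy mean and an estimated pixel noise standard deviation, so that under the null hypothesis the map approximates a zero-mean, unit-variance Gaussian random field. The pixel values and noise estimates below are invented, not outputs of the paper's Fisher-information model.

```python
# Minimal sketch of the GRF normalization: z = (post - pre_mean) / pre_sd
# per pixel. In the paper, pre_sd comes from an analytical variance model
# based on the Fisher information matrix; here it is simply given.

def z_map(post, pre_mean, pre_sd):
    """Pixelwise z-scores of the post-therapy image under the null."""
    return [[(post[i][j] - pre_mean[i][j]) / pre_sd[i][j]
             for j in range(len(post[0]))]
            for i in range(len(post))]

pre_mean = [[10.0, 10.0], [10.0, 50.0]]  # pre-therapy means (hypothetical)
pre_sd   = [[2.0, 2.0], [2.0, 5.0]]      # estimated pixel noise SDs
post     = [[10.0, 12.0], [8.0, 35.0]]   # post-therapy image

z = z_map(post, pre_mean, pre_sd)
print(z)  # the responding lesion pixel (50 -> 35) stands out at z = -3.0
```

Thresholding such a z-map (with appropriate GRF-based multiplicity control) is what turns a raw SUV difference into a statistically interpretable response detection.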
Global financial indices and twitter sentiment: A random matrix theory approach
García, A.
2016-11-01
We use a Random Matrix Theory (RMT) approach to analyze the correlation matrix structure of a collection of public tweets and the corresponding return time series associated with 20 global financial indices over 7 trading months of 2014. In order to quantify the collection of tweets, we constructed daily polarity time series from public tweets via sentiment analysis. The results from the RMT analysis support the existence of true correlations between financial indices, polarities, and the mixture of them. Moreover, we found good agreement between the temporal behavior of the extreme eigenvalues of both empirical datasets, and similar results were found when computing the inverse participation ratio, which provides evidence of the emergence of common factors in global financial information whether we use the return or the polarity data as a source. In addition, we found a very strong presumption that polarity Granger-causes returns of an Indonesian index over a long range of lag trading days, whereas for Israel, South Korea, Australia, and Japan, the predictive information of returns is also present but with weaker support. Our results suggest that incorporating polarity as a financial indicator may open up new insights into understanding the collective and even individual behavior of global financial indices.
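The inverse participation ratio mentioned above has a simple definition: for a normalized eigenvector v of the correlation matrix, IPR = Σᵢ vᵢ⁴. A vector spread evenly over N components gives IPR ≈ 1/N (a market-wide "common factor"), while a vector concentrated on one component gives IPR ≈ 1. The vectors below are toy examples, not eigenvectors from the study.

```python
# Sketch of the inverse participation ratio (IPR) used in RMT analyses
# of correlation matrices. Low IPR = delocalized (common-factor) mode;
# high IPR = localized mode.

def ipr(v):
    """IPR of a vector after normalizing it to unit length."""
    norm = sum(x * x for x in v) ** 0.5
    return sum((x / norm) ** 4 for x in v)

n = 4
delocalized = [1.0] * n           # equal weight on every index
localized = [1.0, 0.0, 0.0, 0.0]  # all weight on a single index

print(ipr(delocalized), ipr(localized))  # → 0.25 1.0
```

In the RMT workflow, the IPR of the eigenvector attached to the largest eigenvalue is what signals whether that mode reflects a genuine collective factor across indices and polarities.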
Ahmed, Oumer S.; Franklin, Steven E.; Wulder, Michael A.; White, Joanne C.
2015-03-01
Many forest management activities, including the development of forest inventories, require spatially detailed forest canopy cover and height data. Among the various remote sensing technologies, LiDAR (Light Detection and Ranging) offers the most accurate and consistent means for obtaining reliable canopy structure measurements. A potential solution to reduce the cost of LiDAR data, is to integrate transects (samples) of LiDAR data with frequently acquired and spatially comprehensive optical remotely sensed data. Although multiple regression is commonly used for such modeling, often it does not fully capture the complex relationships between forest structure variables. This study investigates the potential of Random Forest (RF), a machine learning technique, to estimate LiDAR measured canopy structure using a time series of Landsat imagery. The study is implemented over a 2600 ha area of industrially managed coastal temperate forests on Vancouver Island, British Columbia, Canada. We implemented a trajectory-based approach to time series analysis that generates time since disturbance (TSD) and disturbance intensity information for each pixel and we used this information to stratify the forest land base into two strata: mature forests and young forests. Canopy cover and height for three forest classes (i.e. mature, young and mature and young (combined)) were modeled separately using multiple regression and Random Forest (RF) techniques. For all forest classes, the RF models provided improved estimates relative to the multiple regression models. The lowest validation error was obtained for the mature forest strata in a RF model (R2 = 0.88, RMSE = 2.39 m and bias = -0.16 for canopy height; R2 = 0.72, RMSE = 0.068% and bias = -0.0049 for canopy cover). This study demonstrates the value of using disturbance and successional history to inform estimates of canopy structure and obtain improved estimates of forest canopy cover and height using the RF algorithm.
Stockhaus, C; Van Den Ingh, T; Rothuizen, J; Teske, E
2004-09-01
Cytologic criteria were evaluated for their diagnostic value in liver disease in dogs. Therefore, histopathologic and cytologic examination was performed on liver biopsy samples of 73 dogs with liver diseases and 28 healthy dogs. Logistic regression analysis was used to select the measured parameters to be included in a multistep approach. With the logistic regression method, different characteristic cytologic parameters could be defined for each histopathologic diagnosis. In malignant lymphoma of the liver, the presence of large numbers of lymphoblasts with a minimum of 5% of all cells was found. Clusters of epithelial cells with several cytologic characteristics of malignancy intermixed with normal hepatocytes were indicative of metastatic carcinoma or cholangiocellular carcinoma. Liver cells in hepatocellular carcinoma were characterized by a high nucleus/cytoplasm ratio, large cell diameters, increased numbers of nucleoli per nuclei, small numbers of cytoplasmic vacuoles, and frequently, small numbers of lymphocytes. Extrahepatic cholestasis was characterized by excessive extracellular bile pigment in the form of biliary casts, an increased number of nucleoli within hepatocytes, decreased hepatic cell size, and low numbers of lymphocytes. In destructive cholangiolitis, increased numbers of neutrophils and a small mean nuclear size within hepatocytes were seen. Acute and nonspecific reactive hepatitis are diagnosed based on the presence of moderate reactive nuclear patterns, including more pronounced chromatin, prominent nucleoli, increased numbers of inflammatory cells, excluding lymphocytes, and the absence of increased numbers of bile duct cell clusters. Increased number of mast cells also was indicative of nonspecific reactive hepatitis. Important cytologic criteria for the diagnosis of liver cirrhosis, in addition to chronic hepatitis, are intracellular bile accumulation and increased numbers of bile duct cell clusters. In summary, the stepwise approach
Li, Ying; Li, Yan; Liu, Li-an; Zhao, Ling; Hu, Ka-ming; Wu, Xi; Chen, Xiao-qin; Li, Gui-ping; Mang, Ling-ling; Qi, Qi-hua
2011-04-01
To explore the best intervention time of acupuncture and moxibustion for peripheral facial palsy (Bell's palsy) and the clinically advantageous program of selective treatment with acupuncture and moxibustion. A multi-center, large-sample randomized controlled trial was carried out. Nine hundred cases of Bell's palsy were randomized into 5 treatment groups, named selective filiform needle group (group A), selective acupuncture + moxibustion group (group B), selective acupuncture + electroacupuncture group (group C), selective acupuncture + line-up needling on muscle region of meridian group (group D) and non-selective filiform needle group (group E). Four sessions of treatment were required in each group. Separately, during enrollment, after 4 sessions of treatment, and at 1 month and 3 months of follow-up after treatment, the House-Brackmann Scale, Facial Disability Index Scale and Degree of Facial Nerve Paralysis (NFNP) were adopted for efficacy assessment. Efficacy was systematically analyzed in view of the intervention time and nerve localization of disease separately. The curative rates of intervention in the acute stage and resting stage were 50.1% (223/445) and 52.1% (162/311) respectively, both superior to the recovery stage (25.9%, 35/135). There were no statistically significant differences in efficacy among the 5 treatment programs at the same stage (all P > 0.05). The efficacy of intervention in group A and group E in the acute stage was superior to that in the recovery stage (both P < 0.01). The difference was statistically significant between the efficacy on the localization above the chorda tympani nerve and that on the localization below the nerve in group D (P < 0.01). The efficacy on the localization below the chorda tympani nerve was superior to that on the localization above the nerve. The best intervention time for the treatment of Bell's palsy is in the acute stage and resting stage, meaning 1 to 3 weeks after occurrence. All of the 5 treatment programs are advantageous
Directory of Open Access Journals (Sweden)
Romain Guignard
OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study has been carried out to assess the reliability of a monitoring system based on regular quota sampling surveys for smoking prevalence. DESIGN / OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey of 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of the telephone equipment and of the survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% among 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample and 28.9% and 24.0% in the quota survey). The required number of calls was only slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.
Haron, Zaiton; Bakar, Suhaimi Abu; Dimon, Mohamad Ngasri
2015-01-01
Strategic noise mapping provides important information for noise impact assessment and noise abatement. However, producing reliable strategic noise mapping in a dynamic, complex working environment is difficult. This study proposes the implementation of the random walk approach as a new stochastic technique to simulate noise mapping and to predict the noise exposure level in a workplace. A stochastic simulation framework and software, namely RW-eNMS, were developed to facilitate the random walk approach in noise mapping prediction. This framework considers the randomness and complexity of machinery operation and noise emission levels. Also, it assesses the impact of noise on the workers and the surrounding environment. For data validation, three case studies were conducted to check the accuracy of the prediction data and to determine the efficiency and effectiveness of this approach. The results showed high accuracy of prediction results together with a majority of absolute differences of less than 2 dBA; also, the predicted noise doses were mostly in the range of measurement. Therefore, the random walk approach was effective in dealing with environmental noises. It could predict strategic noise mapping to facilitate noise monitoring and noise control in the workplaces. PMID:25875019
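A toy version of the random-walk idea can be sketched as follows: many walkers released from a source wander over a grid, and the resulting visit density serves as a crude proxy for how acoustic energy spreads from a machine. The grid size, walker counts, and interpretation are illustrative assumptions only, not the RW-eNMS implementation.

```python
import random

# Toy sketch of a random-walk approach to stochastic noise mapping:
# walkers start at a source cell and take unit steps in random
# directions; accumulated visits approximate an exposure field.

def visit_density(width, height, n_walkers=2000, n_steps=50, seed=1):
    """Visit counts on a grid for walkers released from the center."""
    rng = random.Random(seed)
    grid = [[0] * width for _ in range(height)]
    for _ in range(n_walkers):
        x, y = width // 2, height // 2  # noise source at the grid center
        for _ in range(n_steps):
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            x = min(max(x + dx, 0), width - 1)   # reflect at boundaries
            y = min(max(y + dy, 0), height - 1)
            grid[y][x] += 1
    return grid

grid = visit_density(21, 21)
print(grid[10][10] > grid[0][0])  # density decays away from the source
```

Mapping such a density field onto a dB scale (and superposing several sources with different operating schedules) is the kind of stochastic aggregation the proposed framework performs.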
Bao, Yuequan; Li, Hui; Zhang, Fujian; Ou, Jinping
2013-04-01
A moving load distribution identification method for cable-stayed bridges based on the compressive sampling (CS) technique is proposed. CS is a technique for obtaining sparse signal representations from underdetermined linear measurement equations. In this paper, CS is employed to localize moving loads on cable-stayed bridges from limited cable force measurements. First, a vehicle-bridge model for cable-stayed bridges is presented. Then the relationship between the cable forces and the moving loads is constructed based on the influence lines. Under the hypothesis of a sparse distribution of vehicles on the bridge deck (which is practical for long-span bridges), the moving loads are identified by minimizing the ℓ2-norm of the difference between the observed and simulated cable forces caused by moving vehicles, penalized by the ℓ1-norm of the moving load vector. The resultant minimization problem is convex and can be solved efficiently. A numerical example of a real cable-stayed bridge is carried out to verify the proposed method. The robustness and accuracy of the identification approach with limited cable force measurements for multi-vehicle spatial localization are validated.
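The sparse-recovery step described above is a standard ℓ1-penalized least-squares (LASSO-type) problem. As an illustrative sketch only (the measurement matrix, its dimensions, the load values, and the regularization weight below are invented for the example, not taken from the paper), it can be solved with a few lines of iterative soft-thresholding:

```python
import numpy as np

def ista(A, y, lam=0.01, n_iter=3000):
    """Minimize ||y - A x||_2^2 + lam * ||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the l2 term
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - step * A.T @ (A @ x - y)                          # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(15, 40))          # 15 cable-force readings, 40 deck positions
x_true = np.zeros(40)
x_true[[5, 22]] = [3.0, 1.5]           # two vehicles on the deck (sparse loads)
y = A @ x_true                         # simulated cable-force observations
x_hat = ista(A, y)
print(np.argsort(-np.abs(x_hat))[:2])  # indices of the two largest recovered loads
```

With far fewer measurements than candidate positions, the ℓ1 penalty is what makes the underdetermined system identifiable: it drives most entries of the load vector to exactly zero.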
Radu E. SESTRAŞ; Lorentz JÄNTSCHI; Sorana D. BOLBOACĂ
2009-01-01
Background: The choices of experimental design as well as of statistical analysis are of huge importance in field experiments. These must be made correctly in order to obtain the best possible precision of the results. The random arrangements, randomized blocks and Latin square designs were reviewed and analyzed from the statistical perspective of error analysis. Material and Method: Random arrangements, randomized block and Latin square experimental designs were used as field experiments. ...
Fractional calculus approach to the statistical characterization of random variables and vectors
Cottone, D.; Paola, M.D.
2015-01-01
Fractional moments have been investigated by many authors to represent the density of univariate and bivariate random variables in different contexts. Fractional moments are indeed important when the density of the random variable has inverse power-law tails and, consequently, lacks integer-order moments. In this paper, starting from the Mellin transform of the characteristic function and using fractional calculus methods, we present a new perspective on the statistics of random variables. Intr...
2014-01-01
Background As health care has increased in complexity and health care teams have been offered as a solution, so too is there an increased need for stronger interprofessional collaboration. However the intraprofessional factions that exist within every profession challenge interprofessional communication through contrary paradigms. As a contender in the conservative spinal health care market, factions within chiropractic that result in unorthodox practice behaviours may compromise interprofessional relations and that profession’s progress toward institutionalization. The purpose of this investigation was to quantify the professional stratification among Canadian chiropractic practitioners and evaluate the practice perceptions of those factions. Methods A stratified random sample of 740 Canadian chiropractors was surveyed to determine faction membership and how professional stratification could be related to views that could be considered unorthodox to current evidence-based care and guidelines. Stratification in practice behaviours is a stated concern of mainstream medicine when considering interprofessional referrals. Results Of 740 deliverable questionnaires, 503 were returned for a response rate of 68%. Less than 20% of chiropractors (18.8%) were aligned with a predefined unorthodox perspective of the conditions they treat. Prediction models suggest that unorthodox perceptions of health practice related to treatment choices, x-ray use and vaccinations were strongly associated with unorthodox group membership (χ2 = 13.4, p = 0.0002). Conclusion Chiropractors holding unorthodox views may be identified based on response to specific beliefs that appear to align with unorthodox health practices. Despite continued concerns by mainstream medicine, only a minority of the profession has retained a perspective in contrast to current scientific paradigms. Understanding the profession’s factions is important to the anticipation of care delivery when considering
Saccone, Gabriele; Caissutti, Claudia; Khalifeh, Adeeb; Meltzer, Sara; Scifres, Christina; Simhan, Hyagriv N; Kelekci, Sefa; Sevket, Osman; Berghella, Vincenzo
2017-12-03
To compare both the prevalence of gestational diabetes mellitus (GDM) and maternal and neonatal outcomes under either the one-step or the two-step approach. Electronic databases were searched from their inception until June 2017. We included all randomized controlled trials (RCTs) comparing the one-step with the two-step approach for the screening and diagnosis of GDM. The primary outcome was the incidence of GDM. Three RCTs (n = 2333 participants) were included in the meta-analysis. 910 were randomized to the one-step approach (75 g, 2 hrs), and 1423 to the two-step approach. No significant difference in the incidence of GDM was found comparing the one-step versus the two-step approach (8.4 versus 4.3%; relative risk (RR) 1.64, 95%CI 0.77-3.48). Women screened with the one-step approach had a significantly lower risk of preterm birth (PTB) (3.7 versus 7.6%; RR 0.49, 95%CI 0.27-0.88), cesarean delivery (16.3 versus 22.0%; RR 0.74, 95%CI 0.56-0.99), macrosomia (2.9 versus 6.9%; RR 0.43, 95%CI 0.22-0.82), neonatal hypoglycemia (1.7 versus 4.5%; RR 0.38, 95%CI 0.16-0.90), and admission to the neonatal intensive care unit (NICU) (4.4 versus 9.0%; RR 0.49, 95%CI 0.29-0.84), compared with those randomized to screening with the two-step approach. The one-step and two-step approaches were not associated with a significant difference in the incidence of GDM. However, the one-step approach was associated with better maternal and perinatal outcomes.
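Fixed-effect pooling of risk ratios across a handful of trials, as in a meta-analysis like this one, is commonly done with the Mantel-Haenszel estimator. A minimal sketch with made-up 2×2 counts (the numbers below are illustrative only and are not the trial data summarized above):

```python
def pooled_rr_mh(trials):
    """Mantel-Haenszel pooled relative risk.

    Each trial is a tuple (events_1, n_1, events_2, n_2): event counts and
    arm sizes for the two screening approaches being compared.
    """
    num = sum(a * n2 / (n1 + n2) for a, n1, c, n2 in trials)
    den = sum(c * n1 / (n1 + n2) for a, n1, c, n2 in trials)
    return num / den

# Hypothetical counts for three trials: (events, n) per arm
trials = [(30, 300, 15, 450), (25, 310, 20, 470), (22, 300, 26, 503)]
print(round(pooled_rr_mh(trials), 2))  # → 1.98
```

Weighting each trial's contribution by the opposite arm's share of the trial keeps a large trial from being swamped by small ones, without needing per-trial variance estimates.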
Kikuchi, Takashi; Gittins, John
2011-08-01
The behavioural Bayes approach to sample size determination for clinical trials assumes that the number of subsequent patients switching to a new drug from the current drug depends on the strength of the evidence for efficacy and safety that was observed in the clinical trials. The optimal sample size is the one which maximises the expected net benefit of the trial. The approach has been developed in a series of papers by Pezeshk and the present authors (Gittins JC, Pezeshk H. A behavioral Bayes method for determining the size of a clinical trial. Drug Information Journal 2000; 34: 355-63; Gittins JC, Pezeshk H. How Large should a clinical trial be? The Statistician 2000; 49(2): 177-87; Gittins JC, Pezeshk H. A decision theoretic approach to sample size determination in clinical trials. Journal of Biopharmaceutical Statistics 2002; 12(4): 535-51; Gittins JC, Pezeshk H. A fully Bayesian approach to calculating sample sizes for clinical trials with binary responses. Drug Information Journal 2002; 36: 143-50; Kikuchi T, Pezeshk H, Gittins J. A Bayesian cost-benefit approach to the determination of sample size in clinical trials. Statistics in Medicine 2008; 27(1): 68-82; Kikuchi T, Gittins J. A behavioral Bayes method to determine the sample size of a clinical trial considering efficacy and safety. Statistics in Medicine 2009; 28(18): 2293-306; Kikuchi T, Gittins J. A Bayesian procedure for cost-benefit evaluation of a new drug in multi-national clinical trials. Statistics in Medicine 2009 (Submitted)). The purpose of this article is to provide a rationale for experimental designs which allocate more patients to the new treatment than to the control group. The model uses a logistic weight function, including an interaction term linking efficacy and safety, which determines the number of patients choosing the new drug, and hence the resulting benefit. A Monte Carlo simulation is employed for the calculation. Having a larger group of patients on the new drug in general
A Bayesian meta-analytic approach for safety signal detection in randomized clinical trials.
Odani, Motoi; Fukimbara, Satoru; Sato, Tosiya
2017-04-01
Meta-analyses are frequently performed on adverse event data and are primarily used for improving statistical power to detect safety signals. However, in the evaluation of drug safety for New Drug Applications, simple pooling of adverse event data from multiple clinical trials is still commonly used. We sought to propose a new Bayesian hierarchical meta-analytic approach based on consideration of a hierarchical structure of reported individual adverse event data from multiple randomized clinical trials. To develop our meta-analysis model, we extended an existing three-stage Bayesian hierarchical model by including an additional stage of the clinical trial level in the hierarchical model; this generated a four-stage Bayesian hierarchical model. We applied the proposed Bayesian meta-analysis models to published adverse event data from three premarketing randomized clinical trials of tadalafil and to a simulation study motivated by the case example to evaluate the characteristics of three alternative models. Comparison of the results from the Bayesian meta-analysis model with those from Fisher's exact test after simple pooling showed that 6 out of 10 adverse events were the same within a top 10 ranking of individual adverse events with regard to association with treatment. However, more individual adverse events were detected in the Bayesian meta-analysis model than in Fisher's exact test under the body system "Musculoskeletal and connective tissue disorders." Moreover, comparison of the overall trend of estimates between the Bayesian model and the standard approach (odds ratios after simple pooling methods) revealed that the posterior median odds ratios for the Bayesian model for most adverse events shrank toward values for no association. Based on the simulation results, the Bayesian meta-analysis model could balance the false detection rate and power to a better extent than Fisher's exact test. For example, when the threshold value of the posterior probability for
Fiero, Mallorie H; Hsu, Chiu-Hsieh; Bell, Melanie L
2017-11-20
We extend the pattern-mixture approach to handle missing continuous outcome data in longitudinal cluster randomized trials, which randomize groups of individuals to treatment arms, rather than the individuals themselves. Individuals who drop out at the same time point are grouped into the same dropout pattern. We approach extrapolation of the pattern-mixture model by applying multilevel multiple imputation, which imputes missing values while appropriately accounting for the hierarchical data structure found in cluster randomized trials. To assess parameters of interest under various missing data assumptions, imputed values are multiplied by a sensitivity parameter, k, which increases or decreases imputed values. Using simulated data, we show that estimates of parameters of interest can vary widely under differing missing data assumptions. We conduct a sensitivity analysis using real data from a cluster randomized trial by increasing k until the treatment effect inference changes. By performing a sensitivity analysis for missing data, researchers can assess whether certain missing data assumptions are reasonable for their cluster randomized trial. Copyright © 2017 John Wiley & Sons, Ltd.
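The delta-adjustment idea described here (multiplying imputed values by a sensitivity parameter k and re-estimating the treatment effect) can be sketched in a few lines. The data, dropout rate, and single-mean imputation below are simplifications invented for illustration; they stand in for, but do not reproduce, the paper's multilevel multiple-imputation procedure:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical trial: continuous outcome, 20% dropout in the treated arm
control = rng.normal(10.0, 2.0, size=100)
treated = rng.normal(11.0, 2.0, size=100)
missing = rng.random(100) < 0.2          # dropout indicator for treated arm
observed = treated[~missing]

def effect_under_k(k):
    """Impute missing treated outcomes from the observed mean, scale the
    imputed values by the sensitivity parameter k (k = 1 ~ MAR-like), and
    return the estimated treatment effect."""
    imputed = np.full(missing.sum(), observed.mean()) * k
    treated_full = np.concatenate([observed, imputed])
    return treated_full.mean() - control.mean()

for k in (0.8, 0.9, 1.0, 1.1):
    print(f"k={k:.1f}  effect={effect_under_k(k):+.2f}")
```

Sweeping k until the treatment-effect inference flips is exactly the tipping-point style of sensitivity analysis the abstract describes, here reduced to its simplest single-level form.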
A random generation approach to pattern library creation for full chip lithographic simulation
Zou, Elain; Hong, Sid; Liu, Limei; Huang, Lucas; Yang, Legender; Kabeel, Aliaa; Madkour, Kareem; ElManhawy, Wael; Kwan, Joe; Du, Chunshan; Hu, Xinyi; Wan, Qijian; Zhang, Recoo
2017-04-01
As technology advances, the need for running lithographic (litho) checking for early detection of hotspots before tapeout has become essential. This process is important at all levels—from designing standard cells and small blocks to large intellectual property (IP) and full chip layouts. Litho simulation provides high accuracy for detecting printability issues due to problematic geometries, but it has the disadvantage of slow performance on large designs and blocks [1]. Foundries have found a good compromise solution for running litho simulation on full chips by filtering out potential candidate hotspot patterns using pattern matching (PM), and then performing simulation on the matched locations. The challenge has always been how to easily create a PM library of candidate patterns that provides both comprehensive coverage for litho problems and fast runtime performance. This paper presents a new strategy for generating candidate real design patterns through a random generation approach using a layout schema generator (LSG) utility. The output patterns from the LSG are simulated, and then classified by a scoring mechanism that categorizes patterns according to the severity of the hotspots, probability of their presence in the design, and the likelihood of the pattern causing a hotspot. The scoring output helps to filter out the yield problematic patterns that should be removed from any standard cell design, and also to define potential problematic patterns that must be simulated within a bigger context to decide whether or not they represent an actual hotspot. This flow is demonstrated on SMIC 14nm technology, creating a candidate hotspot pattern library that can be used in full chip simulation with very high coverage and robust performance.
2015-08-24
Squared Error (MSE) tracking performance for direction of arrival estimation in the presence of noise and missing data; see Fig. 5. 6) We have...scatter in random directions, thereby hindering its passage. As the thickness of a slab of highly scattering random medium increases, this effect
Random-matrix-theory approach to mesoscopic fluctuations of heat current
Schmidt, Martin; Kottos, Tsampikos; Shapiro, Boris
2013-08-01
We consider an ensemble of fully connected networks of N oscillators coupled harmonically with random springs and show, using random-matrix-theory considerations, that both the average phonon heat current and its variance are scale invariant and take universal values in the large N limit. These anomalous mesoscopic fluctuations are the hallmark of strong correlations between normal modes.
DEFF Research Database (Denmark)
Ruban, Andrei; Simak, S.I.; Shallcross, S.
2003-01-01
We present a simple effective tetrahedron model for local lattice relaxation effects in random metallic alloys on simple primitive lattices. A comparison with direct ab initio calculations for supercells representing random Ni0.50Pt0.50 and Cu0.25Au0.75 alloys as well as the dilute limit of Au-ri...
On the nature of rainfall intermittency as revealed by different metrics and sampling approaches
Directory of Open Access Journals (Sweden)
G. Mascaro
2013-01-01
island and, thus, can be associated with the corresponding synoptic circulation patterns. Last but not least, we demonstrate how the methodology adopted to sample the rainfall signal from the records of the tipping instants can significantly affect the intermittency analysis, especially at smaller scales. The multifractal scale invariance analysis is the only tool that is insensitive to the sampling approach. Results of this work may be useful to improve the calibration of stochastic algorithms used to downscale coarse rainfall predictions of climate and weather forecasting models, as well as the parameterization of intensity-duration-frequency curves, adopted for land planning and design of civil infrastructures.
Apollo Lunar Sample Integration into Google Moon: A New Approach to Digitization
Dawson, Melissa D.; Todd, Nancy S.; Lofgren, Gary E.
2011-01-01
The Google Moon Apollo Lunar Sample Data Integration project is part of a larger, LASER-funded 4-year lunar rock photo restoration project by NASA's Acquisition and Curation Office [1]. The objective of this project is to enhance the Apollo mission data already available on Google Moon with information about the lunar samples collected during the Apollo missions. To this end, we have combined rock sample data from various sources, including Curation databases, mission documentation and lunar sample catalogs, with newly available digital photography of rock samples to create a user-friendly, interactive tool for learning about the Apollo Moon samples.
Directory of Open Access Journals (Sweden)
Ayman A Ghoneim
2014-01-01
Full Text Available Context: The classic posterior approach to superior hypogastric plexus block (SHPB) is sometimes hindered by the iliac crest or a prominent transverse process of L5. The computed tomography (CT)-guided anterior approach might overcome these difficulties. Aims: This prospective, comparative, randomized study aimed to compare the CT-guided anterior approach with the classic posterior approach. Settings and Design: Controlled randomized study. Materials and Methods: A total of 30 patients with chronic pelvic cancer pain were randomized into either the classic or the CT group, where the classic posterior approach or the CT-guided anterior approach was performed, respectively. Visual analog score, daily analgesic morphine consumption and patient satisfaction were assessed just before the procedure, then after 24 h, 1 week and monthly for 2 months after the procedure. The duration of the procedure was also recorded. Adverse effects associated with the procedure were closely observed and recorded. Statistical Analysis Used: Student's t-test was used for comparison between groups. Results: Visual analog scale and morphine consumption decreased significantly in both groups at the measured times after the block compared with the baseline in the same group, with no significant difference between the groups. The procedure was carried out in a significantly shorter duration in the CT group than in the classic group. The mean patient satisfaction scale increased significantly in both groups at the measured times after the block compared with the baseline in the same group. The patients in the CT group were significantly more satisfied than those in the classic group from day one after the procedure until the end of the study. Conclusions: The CT-guided approach for SHPB is easier, faster, safer and more effective, with fewer side-effects, than the classic approach.
National Research Council Canada - National Science Library
Wing, Rena R; Tate, Deborah; Espeland, Mark; Gorin, Amy; LaRose, Jessica Gokee; Robichaud, Erica Ferguson; Erickson, Karen; Perdue, Letitia; Bahnson, Judy; Lewis, Cora E
2013-01-01
... (Study of Novel Approaches to Weight Gain Prevention) is an NIH-funded randomized clinical trial examining the efficacy of two novel self-regulation approaches to weight gain prevention in young adults compared to a minimal treatment control...
Boezen, H M; Schouten, J. P.; Postma, D S; Rijcken, B
1994-01-01
Peak expiratory flow (PEF) variability can be considered as an index of bronchial lability. Population studies on PEF variability are few. The purpose of the current paper is to describe the distribution of PEF variability in a random population sample of adults with a wide age range (20-70 yrs),
Spybrook, Jessaca; Puente, Anne Cullen; Lininger, Monica
2013-01-01
This article examines changes in the research design, sample size, and precision between the planning phase and implementation phase of group randomized trials (GRTs) funded by the Institute of Education Sciences. Thirty-eight GRTs funded between 2002 and 2006 were examined. Three studies revealed changes in the experimental design. Ten studies…
Elizabeth A. Freeman; Gretchen G. Moisen; Tracy S. Frescino
2012-01-01
Random Forests is frequently used to model species distributions over large geographic areas. Complications arise when data used to train the models have been collected in stratified designs that involve different sampling intensity per stratum. The modeling process is further complicated if some of the target species are relatively rare on the landscape leading to an...
Qi, Shengqi; Hou, Deyi; Luo, Jian
2017-09-01
This study presents a numerical model based on field data to simulate groundwater flow in both the aquifer and the well-bore for the low-flow sampling method and the well-volume sampling method. The numerical model was calibrated to match well with field drawdown, and calculated flow regime in the well was used to predict the variation of dissolved oxygen (DO) concentration during the purging period. The model was then used to analyze sampling representativeness and sampling time. Site characteristics, such as aquifer hydraulic conductivity, and sampling choices, such as purging rate and screen length, were found to be significant determinants of sampling representativeness and required sampling time. Results demonstrated that: (1) DO was the most useful water quality indicator in ensuring groundwater sampling representativeness in comparison with turbidity, pH, specific conductance, oxidation reduction potential (ORP) and temperature; (2) it is not necessary to maintain a drawdown of less than 0.1 m when conducting low flow purging. However, a high purging rate in a low permeability aquifer may result in a dramatic decrease in sampling representativeness after an initial peak; (3) the presence of a short screen length may result in greater drawdown and a longer sampling time for low-flow purging. Overall, the present study suggests that this new numerical model is suitable for describing groundwater flow during the sampling process, and can be used to optimize sampling strategies under various hydrogeological conditions.
Generalized Mittag-Leffler relaxation: clustering-jump continuous-time random walk approach.
Jurlewicz, Agnieszka; Weron, Karina; Teuerle, Marek
2008-07-01
A stochastic generalization of renormalization-group transformation for continuous-time random walk processes is proposed. The renormalization consists in replacing the jump events from a randomly sized cluster by a single renormalized (i.e., overall) jump. The clustering of the jumps, followed by the corresponding transformation of the interjump time intervals, yields a new class of coupled continuous-time random walks which, applied to modeling of relaxation, lead to the general power-law properties usually fitted with the empirical Havriliak-Negami function.
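A toy version of the clustering-jump construction can make the renormalization concrete. Assuming Pareto interjump times, Gaussian jumps, and geometric cluster sizes (all three distributional choices are illustrative, not the paper's specification), jumps are grouped into random-size clusters and each cluster is replaced by one renormalized jump carrying the summed waiting time:

```python
import numpy as np

rng = np.random.default_rng(2)

def clustered_ctrw(n_jumps=10000, alpha=1.5):
    """Continuous-time random walk whose jumps are grouped into random-size
    clusters; each cluster becomes a single renormalized jump (the sum of
    its members) with a waiting time equal to the summed interjump intervals."""
    waits = rng.pareto(alpha, n_jumps)        # heavy-tailed interjump times
    jumps = rng.normal(size=n_jumps)          # individual jump lengths
    sizes = rng.geometric(0.2, size=n_jumps)  # random cluster sizes (mean 5)
    edges = np.cumsum(sizes)
    edges = edges[edges <= n_jumps]           # cut points within the jump array
    groups = np.split(np.arange(n_jumps), edges)
    t = np.array([waits[g].sum() for g in groups if len(g)])
    x = np.array([jumps[g].sum() for g in groups if len(g)])
    return np.cumsum(t), np.cumsum(x)         # renormalized walk (time, position)

t, x = clustered_ctrw()
print(len(t))   # number of renormalized jumps, far fewer than 10000
```

The coupling the abstract emphasizes is visible here: a cluster's jump length and its waiting time are built from the same group of events, so the renormalized walk has jump sizes and interjump times that are no longer independent.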
Winter, Joanne R; Kaler, Jasmeet; Ferguson, Eamonn; KilBride, Amy L; Green, Laura E
2015-11-01
The aims of this study were to update the prevalence of lameness in sheep in England and identify novel risk factors. A total of 1260 sheep farmers responded to a postal survey. The survey captured detailed information on the period prevalence of lameness from May 2012-April 2013 and the prevalence and farmer naming of lesions attributable to interdigital dermatitis (ID), severe footrot (SFR), contagious ovine digital dermatitis (CODD) and shelly hoof (SH), management and treatment of lameness, and farm and flock details. The global mean period prevalence of lameness fell between 2004 and 2013 from 10.6% to 4.9% and the geometric mean period prevalence of lameness fell from 5.4% (95% CI: 4.7%-6.0%) to 3.5% (95% CI: 3.3%-3.7%). In 2013, more farmers were using vaccination and antibiotic treatment for ID and SFR and fewer farmers were using foot trimming as a routine or therapeutic treatment than in 2004. Two over-dispersed Poisson regression models were developed with the outcome the period prevalence of lameness; one investigated associations with farmer estimates of prevalence of the four foot lesions and one investigated associations with management practices to control and treat lameness and footrot. A prevalence of ID>10%, SFR>2.5% and CODD>2.5% were associated with a higher prevalence of lameness compared with those lesions being absent; however, the prevalence of SH was not associated with a change in risk of lameness. A key novel management risk factor associated with a higher prevalence of lameness was the rate of feet bleeding/100 ewes trimmed/year. In addition, vaccination of ewes once per year and selecting breeding replacements from never-lame ewes were associated with a decreased risk of lameness. Other factors associated with a lower risk of lameness for the first time in a random sample of farmers and a full risk model were: recognising lameness in sheep at locomotion score 1 compared with higher scores, treatment of the first lame sheep in a group compared
Chien, Ming-Hung; Guo, How-Ran
2014-01-01
Falls are common in older people and may lead to functional decline, disability, and death. Many risk factors have been identified, but studies evaluating effects of nutritional status are limited. To determine whether nutritional status is a predictor of falls in older people living in the community, we analyzed data collected through the Survey of Health and Living Status of the Elderly in Taiwan (SHLSET). SHLSET includes a series of interview surveys conducted by the government on a random sample of people living in community dwellings in the nation. We included participants who received nutritional status assessment using the Mini Nutritional Assessment Taiwan Version 2 (MNA-T2) in the 1999 survey when they were 53 years or older and followed up on the cumulative incidence of falls in the one-year period before the interview in the 2003 survey. At the beginning of follow-up, the 4440 participants had a mean age of 69.5 (standard deviation = 9.1) years, and 467 participants were "not well-nourished," which was defined as having an MNA-T2 score of 23 or less. In the one-year study period, 659 participants reported having at least one fall. After adjusting for other risk factors, we found the associated odds ratio for falls was 1.73 (95% confidence interval, 1.23, 2.42) for "not well-nourished," 1.57 (1.30, 1.90) for female gender, 1.03 (1.02, 1.04) for each additional year of age, 1.55 (1.22, 1.98) for history of falls, 1.34 (1.05, 1.72) for hospital stay during the past 12 months, 1.66 (1.07, 2.58) for difficulties in activities of daily living, and 1.53 (1.23, 1.91) for difficulties in instrumental activities of daily living. Nutritional status is an independent predictor of falls in older people living in the community. Further studies are warranted to identify nutritional interventions that can help prevent falls in the elderly.
Nagamoto-Combs, Kumi; Manocha, Gunjan D; Puig, Kendra; Combs, Colin K
2016-03-01
Preparation and processing of free-floating histological sections involve a series of steps. The amount of labor, particularly sectioning and mounting, quickly multiplies as the number of samples increases. Embedding tissue samples in a flexible matrix allows simultaneous handling of multiple samples and preserves the integrity of the tissue during histological processing. However, aligning multiple asymmetrical samples, for example small-animal brains, in a particular orientation requires skillful arrangement and securing of the samples by pinning onto a solid surface. Consequently, costly technical services offered by contract research organizations are often sought. An improved approach to align and embed multiple whole or half rodent brain samples into a gelatin-based matrix is described. Using a template specifically designed to form arrayed mouse brain-shaped cavities, a "receiving matrix" is prepared. Inserting brain samples directly into the cavities allows the samples to be effortlessly positioned into a uniform orientation and embedded in a block of matrix. Multiple mouse brains were arrayed in a uniform orientation in a gelatin matrix block with ease using the receiving matrix. The gelatin-embedded brains were simultaneously sectioned and stained, and effortlessly mounted onto glass slides. The improved approach allowed multiple whole or half mouse brains to be easily arrayed without pinning the samples onto a solid surface and prevented damages or shifting of the samples during embedding. The new approach to array multiple brain samples provides a simple way to prepare gelatin-embedded whole or half brain arrays of commercial quality. Copyright © 2016 Elsevier B.V. All rights reserved.
A nonparametric approach to the sample selection problem in survey data
Vazquez-Alvarez, R.
2001-01-01
Responses to economic surveys are usually noisy. Item non-response, as a particular type of censored data, is a common problem for key economic variables such as income and earnings, consumption or accumulated assets. If such non-response is non-random, the consequence can be a bias in the results
Quantifying the sensitivity of camera traps:an adapted distance sampling approach
Rowcliffe, M.; Carbone, C.; Jansen, P.A.; Kays, R.W.; Kranstauber, B.
2011-01-01
1. Abundance estimation is a pervasive goal in ecology. The rate of detection by motion-sensitive camera traps can, in principle, provide information on the abundance of many species of terrestrial vertebrates that are otherwise difficult to survey. The random encounter model (REM, Rowcliffe et al.
A random optimization approach for inherent optic properties of nearshore waters
Zhou, Aijun; Hao, Yongshuai; Xu, Kuo; Zhou, Heng
2016-10-01
The traditional method of water quality sampling is time-consuming and costly, and cannot meet the needs of social development. Hyperspectral remote sensing technology offers good temporal resolution, spatial coverage and rich spectral information, and has good potential for water quality supervision. Via a semi-analytical method, remote sensing information can be related to water quality. The inherent optical properties are used to quantify the water quality, and an optical model of the water column is established to analyze the features of the water. Using the stochastic optimization algorithm Threshold Accepting, a global optimization of the unknown model parameters can be performed to obtain the distribution of chlorophyll, dissolved organic matter and suspended particles in the water. By improving the search step of the optimization algorithm, the processing time is markedly reduced, creating more room to increase the number of parameters. With a refined definition of the optimization steps and acceptance criterion, the whole inversion process becomes more targeted, thus improving the accuracy of the inversion. Based on application results for simulated data provided by IOCCG and field data provided by NASA, the retrieval model was continuously improved and enhanced. Ultimately, a low-cost, effective model for retrieving water quality from hyperspectral remote sensing can be achieved.
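The Threshold Accepting heuristic named here is simple to sketch: like simulated annealing, but a candidate is accepted whenever its loss worsens by less than a deterministic, shrinking threshold. The toy forward model and its two parameters below are invented for illustration and are not the paper's optical model:

```python
import numpy as np

rng = np.random.default_rng(3)

def threshold_accepting(loss, x0, thresholds, n_steps=200, scale=0.1):
    """Threshold Accepting: accept any candidate whose loss worsens by less
    than the current threshold; thresholds shrink to 0 (greedy at the end)."""
    x = np.array(x0, float)
    best = x.copy()
    for thr in thresholds:
        for _ in range(n_steps):
            cand = x + rng.normal(scale=scale, size=x.shape)
            if loss(cand) - loss(x) < thr:      # deterministic acceptance rule
                x = cand
                if loss(x) < loss(best):
                    best = x.copy()
    return best

# Hypothetical inversion: fit two parameters of a toy exponential forward
# model so that simulated "reflectance" matches the observations
true = np.array([0.3, 1.2])
wavelengths = np.linspace(0.4, 0.7, 20)
forward = lambda p: p[0] * np.exp(-p[1] * wavelengths)
obs = forward(true)
loss = lambda p: np.sum((forward(p) - obs) ** 2)
est = threshold_accepting(loss, [1.0, 1.0], thresholds=[0.1, 0.01, 0.001, 0.0])
print(est)
```

Unlike simulated annealing, acceptance involves no random coin flip, which makes runs cheaper and easier to tune: only the threshold schedule matters.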
Randomized trial of two swallowing assessment approaches in patients with acquired brain injury
DEFF Research Database (Denmark)
Kjaersgaard, Annette; Nielsen, Lars Hedemann; Sjölund, Bengt H.
2014-01-01
OBJECTIVE: To examine whether patients assessed for initiation of oral intake only by Facial-Oral Tract Therapy had a greater risk of developing aspiration pneumonia during neurorehabilitation than patients assessed by Fibreoptic Endoscopic Evaluation of Swallowing. DESIGN: Randomized controlled ...
Diaz, Francisco J; Berg, Michel J; Krebill, Ron; Welty, Timothy; Gidal, Barry E; Alloway, Rita; Privitera, Michael
2013-12-01
Due to concern and debate in the epilepsy medical community and to the current interest of the US Food and Drug Administration (FDA) in revising approaches to the approval of generic drugs, the FDA is currently supporting ongoing bioequivalence studies of antiepileptic drugs, the EQUIGEN studies. During the design of these crossover studies, the researchers could not find commercial or non-commercial statistical software that quickly allowed computation of sample sizes for their designs, particularly software implementing the FDA requirement of using random-effects linear models for the analyses of bioequivalence studies. This article presents tables for sample-size evaluations of average bioequivalence studies based on the two crossover designs used in the EQUIGEN studies: the four-period, two-sequence, two-formulation design, and the six-period, three-sequence, three-formulation design. Sample-size computations assume that random-effects linear models are used in bioequivalence analyses with crossover designs. Random-effects linear models have been traditionally viewed by many pharmacologists and clinical researchers as just mathematical devices to analyze repeated-measures data. In contrast, a modern view of these models attributes an important mathematical role in theoretical formulations in personalized medicine to them, because these models not only have parameters that represent average patients, but also have parameters that represent individual patients. Moreover, the notation and language of random-effects linear models have evolved over the years. Thus, another goal of this article is to provide a presentation of the statistical modeling of data from bioequivalence studies that highlights the modern view of these models, with special emphasis on power analyses and sample-size computations.
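As a rough companion to the tables discussed here, the total sample size for average bioequivalence in a standard 2×2 crossover can be approximated with the normal-approximation TOST formula. This is a textbook shortcut, not the random-effects models or the replicate (four- and six-period) designs of the EQUIGEN studies:

```python
import math
from statistics import NormalDist

def tost_sample_size(cv, gmr=0.95, alpha=0.05, power=0.80, theta=(0.80, 1.25)):
    """Approximate total subjects for a 2x2 crossover average-bioequivalence
    study via the normal approximation to the two one-sided tests (TOST)."""
    sigma_w = math.sqrt(math.log(cv ** 2 + 1))   # within-subject SD, log scale
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)            # valid when gmr != 1
    # distance from the assumed ratio to the nearer bioequivalence limit
    delta = min(math.log(theta[1] / gmr), math.log(gmr / theta[0]))
    n = 2 * ((z_a + z_b) * sigma_w / delta) ** 2  # total across both sequences
    return math.ceil(n)

print(tost_sample_size(cv=0.25))   # ≈26 subjects for CV 25%, GMR 0.95, 80% power
```

Exact t-based calculations give slightly larger numbers (the normal approximation ignores the degrees-of-freedom penalty), so this serves only as a first-pass estimate before a proper random-effects power analysis.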
Directory of Open Access Journals (Sweden)
M. Renée Umstattd Meyer
2016-09-01
Time spent sitting has been associated with an increased risk of diabetes, cancer, obesity, and mental health impairments. However, 75% of Americans spend most of their days sitting, with work-sitting accounting for 63% of total daily sitting time. Little research examining theory-based antecedents of standing or sitting has been conducted. This lack of solid groundwork makes it difficult to design effective intervention strategies to decrease sitting behaviors. Using the Theory of Planned Behavior (TPB) as our theoretical lens to better understand factors related to beneficial standing behaviors already being practiced, we examined relationships between TPB constructs and time spent standing at work among "positive deviants" (those successful in behavior change). Experience sampling methodology (ESM), four times a day (midmorning, before lunch, afternoon, and before leaving work) for 5 consecutive workdays (Monday to Friday), was used to assess employees' standing time. TPB scales assessing attitude (α = 0.81–0.84), norms (α = 0.83), perceived behavioral control (α = 0.77), and intention (α = 0.78) were developed using recommended methods and collected once on the Friday before the ESM surveys started. ESM data are hierarchically nested; we therefore tested our hypotheses using multilevel structural equation modeling with Mplus. Hourly full-time university employees (n = 50; 70.6% female, 84.3% white, mean age = 44 (SD = 11), 88.2% in full-time staff positions) with sedentary occupation types (time at desk while working ≥6 hours/day) participated. A total of 871 daily surveys were completed. Only perceived behavioral control (β = 0.45, p < 0.05) was related to work-standing at the event level (model fit: just fit); mediation through intention was not supported. This is the first study to examine theoretical antecedents of real-time work-standing in a naturalistic field setting among positive deviants. These relationships should be further
Matzke, Melissa M; Allan, Sarah E; Anderson, Kim A; Waters, Katrina M
2012-12-01
The use of passive sampling devices (PSDs) for monitoring hydrophobic organic contaminants in aquatic environments can entail logistical constraints that often limit a comprehensive statistical sampling plan, thus resulting in a restricted number of samples. The present study demonstrates an approach for using the results of a pilot study designed to estimate sampling variability, which in turn can be used as variance estimates for confidence intervals for future n = 1 PSD samples of the same aquatic system. Sets of three to five PSDs were deployed in the Portland Harbor Superfund site for three sampling periods over the course of two years. The PSD filters were extracted and, as a composite sample, analyzed for 33 polycyclic aromatic hydrocarbon compounds. The between-sample and within-sample variances were calculated to characterize sources of variability in the environment and sampling methodology. A method for calculating a statistically reliable and defensible confidence interval for the mean of a single aquatic passive sampler observation (i.e., n = 1) using an estimate of sample variance derived from a pilot study is presented. Coverage probabilities are explored over a range of variance values using a Monte Carlo simulation. Copyright © 2012 SETAC.
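The paper's core idea, using a pilot-study variance estimate to put a confidence interval around a single future n = 1 observation, can be sketched as follows. The concentrations, the z-based (rather than t-based) interval, and the 95% level are illustrative assumptions, not values from the study.

```python
from statistics import NormalDist, stdev

# Hypothetical pilot-study PAH concentrations (ng/L) from replicate PSDs
# deployed in the same aquatic system:
pilot = [12.1, 9.8, 11.4, 10.6, 11.9]
s = stdev(pilot)                 # sampling variability estimated from the pilot

# A later single (n = 1) PSD deployment:
x_new = 10.9

# 95% interval for the mean, using the pilot variance as a plug-in estimate;
# a t quantile with pilot degrees of freedom would be more conservative.
z = NormalDist().inv_cdf(0.975)
ci = (x_new - z * s, x_new + z * s)
print(ci)
```

The paper's Monte Carlo coverage study effectively asks how often such an interval captures the true mean across plausible variance values.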
Connor, Thomas H.; Smith, Jerome P.
2017-01-01
Purpose: At the present time, the method of choice to determine surface contamination of the workplace with antineoplastic and other hazardous drugs is surface wipe sampling and subsequent sample analysis with a variety of analytical techniques. The purpose of this article is to review current methodology for determining the level of surface contamination with hazardous drugs in healthcare settings and to discuss recent advances in this area. In addition, it provides some guidance for conducting surface wipe sampling and sample analysis for these drugs in healthcare settings. Methods: Published studies on the use of wipe sampling to measure hazardous drugs on surfaces in healthcare settings were reviewed. These studies include the use of well-documented chromatographic techniques for sample analysis in addition to newly evolving technology that provides rapid analysis of specific antineoplastic drugs. Results: Methodology for the analysis of surface wipe samples for hazardous drugs is reviewed, including the purposes, technical factors, sampling strategy, materials required, and limitations. The use of lateral flow immunoassay (LFIA) and fluorescence covalent microbead immunosorbent assay (FCMIA) for surface wipe sample evaluation is also discussed. Conclusions: Current recommendations are that all healthcare settings where antineoplastic and other hazardous drugs are handled include surface wipe sampling as part of a comprehensive hazardous drug-safe handling program. Surface wipe sampling may be used as a method to characterize potential occupational dermal exposure risk and to evaluate the effectiveness of implemented controls and the overall safety program. New technology, although currently limited in scope, may make wipe sampling for hazardous drugs more routine, less costly, and provide a shorter response time than classical analytical techniques now in use. PMID:28459100
Chandrasekar, A; Rakkiyappan, R; Cao, Jinde
2015-10-01
This paper studies the impulsive synchronization of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities via a multiple integral approach. The array of neural networks is coupled in a random fashion governed by a Bernoulli random variable. The aim of this paper is to obtain synchronization criteria that are suitable for both exactly known and partly unknown transition probabilities, such that the coupled neural network is synchronized with mixed time-delay. The considered impulsive effects can be synchronized at partly unknown transition probabilities. In addition, a multiple integral approach is proposed to strengthen the analysis of Markovian jumping randomly coupled neural networks with partly unknown transition probabilities. By making use of the Kronecker product and some useful integral inequalities, a novel Lyapunov-Krasovskii functional is designed for handling the coupled neural network with mixed delay, and the impulsive synchronization criteria are then expressed as a solvable set of linear matrix inequalities. Finally, numerical examples are presented to illustrate the effectiveness and advantages of the theoretical results. Copyright © 2015 Elsevier Ltd. All rights reserved.
Wang, Jian-Xun; Xiao, Heng
2016-01-01
Numerical models based on Reynolds-Averaged Navier-Stokes (RANS) equations are widely used in engineering turbulence modeling. However, RANS predictions carry large model-form uncertainties for many complex flows. Quantification of these large uncertainties originating from the modeled Reynolds stresses has attracted attention in the turbulence modeling community. Recently, a physics-based Bayesian framework for quantifying model-form uncertainties has been proposed, with successful applications to several flows. Nonetheless, how to specify proper priors without introducing unwarranted, artificial information remains challenging for the current form of the physics-based approach. Another recently proposed method, based on random matrix theory, provides maximum-entropy prior distributions and is an alternative for model-form uncertainty quantification in RANS simulations. In this work, we utilize the random matrix theoretic approach to assess and possibly improve the specification of priors used in ...
Cheng, Jun; Park, Ju H; Karimi, Hamid Reza; Shen, Hao
2017-08-02
This paper investigates the problem of sampled-data (SD) exponential synchronization for a class of Markovian neural networks with time-varying delayed signals. Based on a tunable parameter and a convex combination computational method, a new approach, named the flexible terminal approach, is proposed to reduce the conservatism of delay-dependent synchronization criteria. Sampled data subject to a stochastic sampling period are introduced to reflect phenomena encountered in practice. Novel exponential synchronization criteria are derived by utilizing a uniform Lyapunov-Krasovskii functional and a suitable integral inequality. Finally, numerical examples are provided to show the usefulness and advantages of the proposed design procedure.
Barkstrom, Bruce R.; Direskeneli, Haldun; Halyo, Nesim
1992-01-01
An information theory approach to examining the temporal nonuniform sampling characteristics of shortwave (SW) flux for earth radiation budget (ERB) measurements is suggested. The information gain is obtained by comparing the information content before and after the measurements. A stochastic diurnal model for the SW flux is developed, and measurements for different orbital parameters are examined. The methodology is applied to specific NASA Polar platform and Tropical Rainfall Measuring Mission (TRMM) orbital parameters. The information theory approach, coupled with the developed SW diurnal model, is found to be promising for measurements involving nonuniform orbital sampling characteristics.
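The "information content before and after the measurements" idea can be made concrete for the Gaussian case, where the entropy change reduces to a variance ratio. This generic sketch is not the authors' SW diurnal model; the variances are arbitrary illustrations:

```python
from math import log2

def info_gain_bits(var_prior, var_post):
    """Information gain (in bits) when a measurement shrinks the variance
    of a Gaussian-distributed quantity from var_prior to var_post.

    For a Gaussian, differential entropy is 0.5*log2(2*pi*e*var), so the
    entropy difference collapses to half the log of the variance ratio.
    """
    return 0.5 * log2(var_prior / var_post)

# A measurement that cuts the variance by a factor of four gains one bit:
print(info_gain_bits(4.0, 1.0))
```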
Zavodna, Monika; Grueber, Catherine E; Gemmell, Neil J
2013-01-01
Next-generation sequencing (NGS) on pooled samples has already been broadly applied in human medical diagnostics and plant and animal breeding. However, thus far it has been only sparingly employed in ecology and conservation, where it may serve as a useful diagnostic tool for rapid assessment of species genetic diversity and structure at the population level. Here we undertake a comprehensive evaluation of the accuracy, practicality and limitations of parallel tagged amplicon NGS on pooled population samples for estimating species population diversity and structure. We obtained 16S and Cyt b data from 20 populations of Leiopelma hochstetteri, a frog species of conservation concern in New Zealand, using two approaches - parallel tagged NGS on pooled population samples and individual Sanger sequenced samples. Data from each approach were then used to estimate two standard population genetic parameters, nucleotide diversity (π) and population differentiation (FST), that enable population genetic inference in a species conservation context. We found a positive correlation between our two approaches for population genetic estimates, showing that the pooled population NGS approach is a reliable, rapid and appropriate method for population genetic inference in an ecological and conservation context. Our experimental design also allowed us to identify both the strengths and weaknesses of the pooled population NGS approach and outline some guidelines and suggestions that might be considered when planning future projects.
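For reference, nucleotide diversity (π) is the average number of pairwise differences per site. A minimal sketch on individual aligned sequences (the pooled-NGS approach instead estimates π from pooled allele frequencies) might look like this; the sequences are hypothetical toy fragments:

```python
from itertools import combinations

def nucleotide_diversity(seqs):
    """Average pairwise differences per site (pi) for aligned sequences."""
    assert len({len(s) for s in seqs}) == 1, "sequences must be aligned"
    pairs = list(combinations(seqs, 2))
    # Count mismatched positions over every pair of sequences:
    diffs = sum(sum(a != b for a, b in zip(s1, s2)) for s1, s2 in pairs)
    return diffs / (len(pairs) * len(seqs[0]))

# Three aligned toy fragments:
print(nucleotide_diversity(["ACGTACGT", "ACGTACGA", "ACGAACGA"]))
```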
Galuschka, Katharina; Ise, Elena; Krick, Kathrin; Schulte-Körne, Gerd
2014-01-01
Children and adolescents with reading disabilities experience a significant impairment in the acquisition of reading and spelling skills. Given the emotional and academic consequences for children with persistent reading disorders, evidence-based interventions are critically needed. The present meta-analysis extracts the results of all available randomized controlled trials. The aims were to determine the effectiveness of different treatment approaches and the impact of various factors on the efficacy of interventions. The literature search for published randomized controlled trials comprised an electronic search in the databases ERIC, PsycINFO, PubMed, and Cochrane, and an examination of bibliographical references. To check for unpublished trials, we searched the websites clinicaltrials.com and ProQuest, and contacted experts in the field. Twenty-two randomized controlled trials with a total of 49 comparisons of experimental and control groups could be included. The comparisons evaluated five reading fluency trainings, three phonemic awareness instructions, three reading comprehension trainings, 29 phonics instructions, three auditory trainings, two medical treatments, and four interventions with coloured overlays or lenses. One trial evaluated the effectiveness of sunflower therapy and another investigated the effectiveness of motor exercises. The results revealed that phonics instruction is not only the most frequently investigated treatment approach, but also the only approach whose efficacy on reading and spelling performance in children and adolescents with reading disabilities is statistically confirmed. The mean effect sizes of the remaining treatment approaches did not reach statistical significance. The present meta-analysis demonstrates that severe reading and spelling difficulties can be ameliorated with appropriate treatment. In order to be better able to provide evidence-based interventions to children and adolescents with reading disabilities
Directory of Open Access Journals (Sweden)
Serge Clotaire Billong
2016-11-01
Background: Retention on lifelong antiretroviral therapy (ART) is essential in sustaining treatment success while preventing HIV drug resistance (HIVDR), especially in resource-limited settings (RLS). In an era of rising numbers of patients on ART, mastering patients in care is becoming more strategic for programmatic interventions. Due to lapses and uncertainty with the current WHO sampling approach in Cameroon, we aimed to ascertain the national performance of, and determinants in, retention on ART at 12 months. Methods: Using systematic random sampling, a survey was conducted in the ten regions (56 sites) of Cameroon within the reporting period of October 2013–November 2014, enrolling 5005 eligible adults and children. Performance in retention on ART at 12 months was interpreted following the definition of the HIVDR early warning indicator: excellent (>85%), fair (75–85%), poor (<75%); factors with p-value < 0.01 were considered statistically significant. Results: The majority (74.4%) of patients were in urban settings, and 50.9% were managed in reference treatment centres. Nationwide, retention on ART at 12 months was 60.4% (2023/3349); only six sites and one region achieved acceptable performances. Retention performance varied between reference treatment centres (54.2%) and management units (66.8%, p < 0.0001); men (57.1%) and women (62.0%, p = 0.007); and WHO clinical stage I (63.3%) and other stages (55.6%, p = 0.007); but differed neither by age (adults [60.3%] vs. children [58.8%], p = 0.730) nor by immune status (CD4 351–500 [65.9%] vs. other CD4 staging [59.86%], p = 0.077). Conclusions: Poor retention in care within 12 months of ART initiation urges active search for patients lost to follow-up, targeting preferentially male and symptomatic patients, especially within reference ART clinics. Such a sampling strategy could be further strengthened for informed ART monitoring and HIVDR prevention perspectives.
It is generally accepted that monitoring wells must be purged to access formation water to obtain “representative” ground water quality samples. Historically anywhere from 3 to 5 well casing volumes have been removed prior to sample collection to evacuate the standing well water...
Hsieh, Yu-Wei; Wu, Ching-Yi; Wang, Wei-En; Lin, Keh-Chung; Chang, Ku-Chou; Chen, Chih-Chi; Liu, Chien-Ting
2017-02-01
To investigate the treatment effects of bilateral robotic priming combined with the task-oriented approach on motor impairment, disability, daily function, and quality of life in patients with subacute stroke. A randomized controlled trial. Occupational therapy clinics in medical centers. Thirty-one subacute stroke patients were recruited. Participants were randomly assigned to receive bilateral priming combined with the task-oriented approach (primed group) or the task-oriented approach alone (unprimed group) for 90 minutes/day, 5 days/week for 4 weeks. The primed group began with the bilateral priming technique by using a bimanual robot-aided device. Motor impairments were assessed by the Fugl-Meyer Assessment, grip strength, and the Box and Block Test. Disability and daily function were measured by the modified Rankin Scale, the Functional Independence Measure, and actigraphy. Quality of life was examined by the Stroke Impact Scale. The primed and unprimed groups improved significantly on most outcomes over time. The primed group demonstrated significantly better improvement on the Stroke Impact Scale strength subscale (p = 0.012) and a trend for greater improvement on the modified Rankin Scale (p = 0.065) than the unprimed group. Bilateral priming combined with the task-oriented approach elicited more improvements in self-reported strength and disability degrees than the task-oriented approach by itself. Further large-scale research with at least 31 participants in each intervention group is suggested to confirm the study findings.
Not too big, not too small: a goldilocks approach to sample size selection.
Broglio, Kristine R; Connor, Jason T; Berry, Scott M
2014-01-01
We present a Bayesian adaptive design for a confirmatory trial to select a trial's sample size based on accumulating data. During accrual, frequent sample size selection analyses are made and predictive probabilities are used to determine whether the current sample size is sufficient or whether continuing accrual would be futile. The algorithm explicitly accounts for complete follow-up of all patients before the primary analysis is conducted. We refer to this as a Goldilocks trial design, as it is constantly asking the question, "Is the sample size too big, too small, or just right?" We describe the adaptive sample size algorithm, describe how the design parameters should be chosen, and show examples for dichotomous and time-to-event endpoints.
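The predictive-probability machinery of such a design can be sketched with a Beta-Binomial model for a dichotomous endpoint. The final decision rule assumed below (at least `s_min` responders at full enrollment) and the uniform Beta(1, 1) prior are simplifying assumptions for illustration, not the paper's actual algorithm:

```python
from math import comb, lgamma, exp

def beta_binom_pmf(k, n, a, b):
    """P(k successes among n future patients | Beta(a, b) posterior)."""
    def lbeta(x, y):
        return lgamma(x) + lgamma(y) - lgamma(x + y)
    return comb(n, k) * exp(lbeta(a + k, b + n - k) - lbeta(a, b))

def predictive_prob(successes, enrolled, n_max, s_min, a0=1, b0=1):
    """Predictive probability that the trial ends with at least s_min
    responders out of n_max patients, given the data so far and a
    Beta(a0, b0) prior on the response rate."""
    a, b = a0 + successes, b0 + enrolled - successes
    m = n_max - enrolled                  # patients still to be enrolled
    need = max(0, s_min - successes)      # responders still required
    return sum(beta_binom_pmf(k, m, a, b) for k in range(need, m + 1))

# e.g. 12 responders among 20 enrolled; max 40; final rule: >= 24 responders
print(predictive_prob(12, 20, 40, 24))
```

At an interim look, a very high predictive probability suggests the current sample size is "just right" (stop accrual), while a very low one flags futility.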
Vandeven, Mark; Whitaker, Thomas; Slate, Andy
2002-01-01
Processed food manufacturers often use acceptance sampling plans to screen out lots with unacceptable levels of contamination from incoming raw material streams. Sampling plan designs are determined by specifying sample sizes, sample preparation methods, analytical test methods, and accept/reject criteria. Sampling plan performance can be indicated by plotting acceptance probability versus contamination level as an operating characteristic (OC) curve. In practice, actual plan performance depends on the level of contamination in the incoming lot stream. This level can vary considerably over time, among different crop varieties, and among locales. To better gauge plan performance, a method of coupling an OC curve and crop distributions is proposed. The method provides a precise probabilistic statement about risk and can be easily performed with commercial spreadsheet software.
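An OC curve of the kind described is straightforward to compute: under a binomial model, the acceptance probability is the chance of seeing at most c positive samples out of n. The plan parameters below are hypothetical:

```python
from math import comb

def acceptance_prob(n, c, p):
    """One point on an OC curve: probability of accepting a lot when each
    of n samples independently tests positive with probability p and at
    most c positives are allowed by the accept/reject criterion."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(c + 1))

# Sketch the OC curve for an n = 10, c = 1 plan:
for p in (0.0, 0.05, 0.1, 0.2, 0.4):
    print(p, round(acceptance_prob(10, 1, p), 3))
```

Coupling this curve with a distribution of contamination levels, as the paper proposes, amounts to averaging `acceptance_prob` over that distribution.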
Directory of Open Access Journals (Sweden)
L Sahin
2017-01-01
Conclusion: Both of the different anatomical approaches have equally high success rates. Although the DSB was found to be significantly longer in the subsartorial approach, this is clinically unimportant, and the medial infracondylar approach is still a viable alternative technique during saphenous nerve blockage.
Daoud, Nihaya; Hayek, Samah; Sheikh Muhammad, Ahmad; Abu-Saad, Kathleen; Osman, Amira; Thrasher, James F; Kalter-Leibovici, Ofra
2015-07-16
Despite advanced smoking prevention and cessation policies in many countries, the prevalence of cigarette smoking among indigenous and some ethnic minorities continues to be high. This study examined the stages of change (SOC) of readiness to quit smoking among Arab men in Israel shortly after new regulations of free-of-charge smoking cessation workshops and subsidized medications were introduced through primary health care clinics. We conducted a countrywide study in Israel between 2012 and 2013. Participants (735 current smokers, 18–64 years old) were recruited from a stratified random sample and interviewed face-to-face using a structured questionnaire in Arabic. We used ordered regression to examine the contribution of socio-economic position (SEP), health status, psychosocial attributes, smoking-related factors, and physician advice to the SOC of readiness to quit smoking (pre-contemplation, contemplation, and preparation). Of the current smokers, 61.8% were at the pre-contemplation stage, 23.8% were at the contemplation stage, and only 14.4% were at the preparation stage. In the multinomial analysis, factors significantly associated with the contemplation stage compared to the pre-contemplation stage included [odds ratio (OR), 95% confidence interval (CI)]: chronic morbidity [0.52, (0.31-0.88)], social support [1.35, (1.07-1.70)], duration of smoking for 11-21 years [1.94, (1.07-3.50)], three or more previous attempts to quit [2.27, (1.26-4.01)], knowledge about smoking hazards [1.75, (1.29-2.35)], positive attitudes toward smoking prevention [1.44, (1.14-1.82)], and physician advice to quit smoking [1.88, (1.19-2.97)]. The factors significantly associated with the preparation stage compared to the pre-contemplation stage were [OR, (95% CI)]: chronic morbidity [0.36, (0.20-0.67)], anxiety [1.07, (1.01-1.13)], social support [1.34, (1.01-1.78)], duration of smoking of 5 years or less [2.93, (1.14-7.52)], three or more previous attempts to quit [3.16, (1.60-6.26)], knowledge about smoking hazards [1.57, (1.10-2.21)], and
The Accuracy of Pass/Fail Decisions in Random and Difficulty-Balanced Domain-Sampling Tests.
Schnipke, Deborah L.
A common practice in some certification fields (e.g., information technology) is to draw items from an item pool randomly and apply a common passing score, regardless of the items administered. Because these tests are commonly used, it is important to determine how accurate the pass/fail decisions are for such tests and whether fairly small,…
DEFF Research Database (Denmark)
Vega, Mabel V Martínez; Sharifzadeh, Sara; Wulfsohn, Dvoralai
2013-01-01
representative samples of an early and a late season apple cultivar to evaluate model robustness (in terms of prediction ability and error) for soluble solids content (SSC) and acidity prediction, in the wavelength range 400–1100 nm. RESULTS: A total of 196 middle–early season and 219 late season apple (Malus domestica Borkh.) cvs 'Aroma' and 'Holsteiner Cox' samples were used to construct spectral models for SSC and acidity. Partial least squares (PLS), ridge regression (RR) and elastic net (EN) models were used to build prediction models. Furthermore, we compared three sub-sample arrangements for forming…
A Simplified Approach for Two-Dimensional Optimal Controlled Sampling Designs
Directory of Open Access Journals (Sweden)
Neeraj Tiwari
2014-01-01
Controlled sampling is a unique method of sample selection that minimizes the probability of selecting non-desirable combinations of units. Extending the concept of linear programming with an effective distance measure, we propose a simple method for two-dimensional optimal controlled selection that assigns zero probability to non-desired samples. Alternative estimators for the population total and its variance have also been suggested. Some numerical examples are considered to demonstrate the utility of the proposed procedure in comparison with existing procedures.
Dynamic flow-through approaches for metal fractionation in environmentally relevant solid samples
DEFF Research Database (Denmark)
Miró, Manuel; Hansen, Elo Harald; Chomchoei, Roongrat
2005-01-01
In the recent decades, batchwise equilibrium-based single or sequential extraction schemes have been consolidated as analytical tools for fractionation analyses to assess the ecotoxicological significance of metal ions in solid environmental samples. However, taking into account that naturally...
Poppe, Stephan; Benner, Philipp; Elze, Tobias
2012-06-01
We present a predictive account of adaptive sequential sampling of stimulus-response relations in psychophysical experiments. Our discussion applies to experimental situations with ordinal stimuli when only weak structural knowledge is available, such that parametric modeling is not an option. By introducing a certain form of partial exchangeability, we successively develop a hierarchical Bayesian model based on a mixture of Pólya urn processes. Suitable utility measures permit us to optimize the overall experimental sampling process. We provide several measures that are either based on simple count statistics or on more elaborate information theoretic quantities. The actual computation of information theoretic utilities often turns out to be infeasible. This is not the case with our sampling method, which relies on an efficient algorithm to compute exact solutions of our posterior predictions and utility measures. Finally, we demonstrate the advantages of our framework on a hypothetical sampling problem.
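A single two-color Pólya urn, the building block of the mixture model described above, can be simulated in a few lines. This toy sketch (the colors, reinforcement scheme, and seed are arbitrary) only illustrates the urn's self-reinforcing dynamics, not the full hierarchical model:

```python
import random

def polya_urn_draws(n, reinforcement=1, seed=0):
    """Simulate n draws from a two-color Polya urn starting with one ball
    of each color; after each draw the drawn color gains `reinforcement`
    extra balls, so early draws are self-reinforcing."""
    rng = random.Random(seed)
    urn = {"A": 1, "B": 1}
    draws = []
    for _ in range(n):
        total = sum(urn.values())
        r = rng.uniform(0, total)
        color = "A" if r < urn["A"] else "B"
        urn[color] += reinforcement   # reinforce the drawn color
        draws.append(color)
    return draws

print(polya_urn_draws(10))
```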
Choi, Michael K.
2017-01-01
An innovative thermal design concept to maintain comet surface samples cold (for example, 263 degrees Kelvin, 243 degrees Kelvin or 223 degrees Kelvin) from Earth approach through retrieval is presented. It uses paraffin phase change material (PCM), Cryogel insulation and thermoelectric cooler (TEC), which are commercially available.
Sample size bounding and context ranking as approaches to the HRA data problem
Energy Technology Data Exchange (ETDEWEB)
Reer, Bernhard
2004-02-01
This paper presents a technique denoted as sub-sample-size bounding (SSSB), usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasized in the presentation of the technique. Exemplified by a sample of 180 abnormal event sequences, it is outlined how SSSB can provide viable input for the quantification of errors of commission (EOCs)
Sample Size Bounding and Context Ranking as Approaches to the Human Error Quantification Problem
Energy Technology Data Exchange (ETDEWEB)
Reer, B
2004-03-01
The paper describes a technique denoted as Sub-Sample-Size Bounding (SSSB), which is usable for the statistical derivation of context-specific probabilities from data available in existing reports on operating experience. Applications to human reliability analysis (HRA) are emphasised in the presentation of this technique. Exemplified by a sample of 180 abnormal event sequences, the manner in which SSSB can provide viable input for the quantification of errors of commission (EOCs) is outlined. (author)
Random Qualitative Validation: A Mixed-Methods Approach to Survey Validation
Van Duzer, Eric
2012-01-01
The purpose of this paper is to introduce the process and value of Random Qualitative Validation (RQV) in the development and interpretation of survey data. RQV is a method of gathering clarifying qualitative data that improves the validity of the quantitative analysis. This paper is concerned with validity in relation to the participants'…
Oomens, W.; Maes, J.H.R.; Hasselman, F.W.; Egger, J.I.M.
2015-01-01
The concept of executive functions plays a prominent role in contemporary experimental and clinical studies on cognition. One paradigm used in this framework is the random number generation (RNG) task, the execution of which demands aspects of executive functioning, specifically inhibition and
Low-sampling-rate ultra-wideband channel estimation using a bounded-data-uncertainty approach
Ballal, Tarig
2014-01-01
This paper proposes a low-sampling-rate scheme for ultra-wideband channel estimation. In the proposed scheme, P pulses are transmitted to produce P observations. These observations are exploited to produce channel impulse response estimates at a desired sampling rate, while the ADC operates at a rate that is P times less. To avoid loss of fidelity, the interpulse interval, given in units of sampling periods of the desired rate, is restricted to be co-prime with P. This condition is affected when clock drift is present and the transmitted pulse locations change. To handle this situation and to achieve good performance without using prior information, we derive an improved estimator based on the bounded data uncertainty (BDU) model. This estimator is shown to be related to the Bayesian linear minimum mean squared error (LMMSE) estimator. The performance of the proposed sub-sampling scheme was tested in conjunction with the new estimator. It is shown that high reduction in sampling rate can be achieved. The proposed estimator outperforms the least squares estimator in most cases; while in the high SNR regime, it also outperforms the LMMSE estimator. © 2014 IEEE.
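The co-prime restriction can be checked with a toy example: if the interpulse interval (in units of the desired sampling period) is co-prime with P, the P low-rate observation grids interleave to cover every phase of the desired high-rate grid. The values of P and the interval below are arbitrary illustrations:

```python
from math import gcd

P = 5          # number of pulses / sub-sampling factor
interval = 3   # interpulse interval in units of the desired sampling period

# Co-primality guarantees the offsets k*interval mod P hit every residue,
# i.e. every phase of the desired sampling grid is observed exactly once:
assert gcd(P, interval) == 1
offsets = sorted(k * interval % P for k in range(P))
print(offsets)
```

If `gcd(P, interval) > 1`, some phases are never observed, which is exactly the fidelity loss the paper's condition avoids.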
Approaches for sampling the twospotted spider mite (Acari: Tetranychidae) on clementines in Spain.
Martínez-Ferrer, M T; Jacas, J A; Ripollés-Moles, J L; Aucejo-Romero, S
2006-08-01
Tetranychus urticae Koch (Acari: Tetranychidae) is an important pest of clementine mandarins, Citrus reticulata Blanco, in Spain. As a first step toward the development of an integrated crop management program for clementines, dispersion patterns of T. urticae females were determined for different types of leaves and fruit. The study was carried out between 2001 and 2003 in different commercial clementine orchards in the provinces of Castelló and Tarragona (northeastern Spain). We found that symptomatic leaves (those exhibiting typical chlorotic spots) harbored 57.1% of the total mite counts. Furthermore, these leaves were representative of mite dynamics on other leaf types. Therefore, symptomatic leaves were selected as the sampling unit. Dispersion patterns generated by Taylor's power law demonstrated the occurrence of aggregated spatial distributions (b > 1.21) on both leaves and fruit. Based on these results, the relationship between incidence (proportion of infested samples) and mean density was developed. We found that optimal binomial sample sizes for estimating low populations of T. urticae on leaves (up to 0.2 female per leaf) were very large. Therefore, enumerative sampling would be more reliable within this range of T. urticae densities. However, binomial sampling was the only valid method for estimating mite density on fruit.
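Taylor's power law fits log-variance against log-mean, with a slope b > 1 indicating aggregation. A minimal log-log least-squares fit on hypothetical (mean, variance) pairs of mite counts, not the paper's data, might look like:

```python
from math import exp, log

# Hypothetical (mean, variance) pairs of mites per symptomatic leaf:
data = [(0.1, 0.15), (0.5, 1.1), (1.0, 2.8), (2.0, 7.5), (4.0, 21.0)]

# Taylor's power law: s^2 = a * m^b, i.e. log s^2 = log a + b * log m.
xs = [log(m) for m, _ in data]
ys = [log(v) for _, v in data]
n = len(data)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
    sum((x - xbar) ** 2 for x in xs)
a = exp(ybar - b * xbar)
print(f"s^2 = {a:.2f} * m^{b:.2f}")   # slope b > 1 indicates aggregation
```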
NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel
2016-01-01
Cluster-level dynamic treatment regimens (DTRs) can be used to guide sequential intervention or treatment decision-making at the cluster level in order to improve outcomes at the individual or patient level. In a cluster-level DTR, the intervention or treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that comprise it. Cluster-randomized sequentia...
Brus, D.J.; Saby, N.P.A.
2016-01-01
In France, as in many other countries, the soil is monitored at the locations of a regular, square grid, thus forming a systematic sample (SY). This sampling design leads to good spatial coverage, enhancing the precision of design-based estimates of spatial means and totals. Design-based
Felsing, Stefanie; Kochleus, Christian; Buchinger, Sebastian; Brennholt, Nicole; Stock, Friederike; Reifferscheid, Georg
2017-11-16
Numerous studies on microplastics (MPs; Ø …) … The recovery achieved with this method was as high as nearly 100% for each type of material. The method was then tested on plastic particles of different shapes and types isolated from the Rhine River. These were successfully electroseparated from the four materials, which demonstrated the utility of this method. Its advantages include the simplified handling and preparation of different field samples as well as a much shorter processing time, because after the last separation step there is hardly any biological material remaining in the sample fraction. Copyright © 2017 Elsevier Ltd. All rights reserved.
Catallo, Cristina; Jack, Susan M.; Ciliska, Donna; MacMillan, Harriet L.
2013-01-01
Little is known about how to systematically integrate complex qualitative studies within the context of randomized controlled trials. A two-phase sequential explanatory mixed methods study was conducted in Canada to understand how women decide to disclose intimate partner violence in emergency department settings. Mixing an RCT (with a subanalysis of data) with a grounded theory approach required methodological modifications to maintain the overall rigour of this mixed methods study. Modifications were made to the following areas of the grounded theory approach to support the overall integrity of the mixed methods study design: recruitment of participants, maximum variation and negative case sampling, data collection, and analysis methods. Recommendations for future studies include: (1) planning at the outset to incorporate a qualitative approach with an RCT and to determine logical points during the RCT to integrate the qualitative component and (2) consideration of the time needed to carry out an RCT and a grounded theory approach, especially to support recruitment, data collection, and analysis. Data mixing strategies should be considered during early stages of the study, so that appropriate measures can be developed and used in the RCT to support initial coding structures and data analysis needs of the grounded theory phase. PMID:23577245
Gad, Mohamed Z; Abdel Rahman, Mohamed F; Hashad, Ingy M; Abdel-Maksoud, Sahar M; Farag, Nabil M; Abou-Aisha, Khaled
2012-07-01
The aim of this study was to detect endothelial nitric oxide synthase (eNOS) Glu298Asp gene variants in a random sample of the Egyptian population, compare them with those from other populations, and attempt to correlate these variants with serum levels of nitric oxide (NO). The association of eNOS genotypes or serum NO levels with the incidence of acute myocardial infarction (AMI) was also examined. One hundred one unrelated healthy subjects and 104 unrelated AMI patients were recruited randomly from the 57357 Hospital and intensive care units of El Demerdash Hospital and National Heart Institute, Cairo, Egypt. eNOS genotypes were determined by polymerase chain reaction-restriction fragment length polymorphism. Serum NO was determined spectrophotometrically. The genotype distribution of the eNOS Glu298Asp polymorphism determined for our sample was 58.42% GG (wild type), 33.66% GT, and 7.92% TT, while allele frequencies were 75.25% and 24.75% for the G and T alleles, respectively. No significant association between serum NO and a specific eNOS genotype could be detected. No significant correlation between eNOS genotype distribution or allele frequencies and the incidence of AMI was observed. The present study demonstrated the predominance of the homozygous genotype GG over the heterozygous GT and homozygous TT in random samples of the Egyptian population. It also showed the lack of association between eNOS genotypes and mean serum levels of NO, as well as the incidence of AMI.
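The reported allele frequencies follow directly from the genotype counts by gene counting; a quick sketch (counts reconstructed from the reported percentages of the 101 healthy subjects):

```python
def allele_freqs(n_gg, n_gt, n_tt):
    """Gene counting for a biallelic G/T site:
    freq(G) = (2*GG + GT) / (2*N)."""
    total = 2 * (n_gg + n_gt + n_tt)
    freq_g = (2 * n_gg + n_gt) / total
    return freq_g, 1 - freq_g

# 59 GG, 34 GT, 8 TT among 101 subjects (from the reported percentages)
g, t = allele_freqs(59, 34, 8)  # ~0.7525 and ~0.2475, matching the abstract
```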
A single weighting approach to analyze respondent-driven sampling data
Directory of Open Access Journals (Sweden)
Vadivoo Selvaraj
2016-01-01
Interpretation & conclusions: The proposed weight was comparable to the different weights generated by RDSAT. The estimates were comparable to those obtained by the RDS-II approach. RDS-MOD provided an efficient and easy-to-use method of estimation and regression that accounts for inter-individual dependence among recruits.
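The RDS-II estimator mentioned above weights each respondent inversely to their reported network size (degree); a minimal sketch:

```python
def rds_ii_estimate(outcomes, degrees):
    """Volz-Heckathorn RDS-II estimator: a degree-weighted mean of the
    outcome, with weights proportional to 1/degree."""
    weights = [1.0 / d for d in degrees]
    num = sum(w * y for w, y in zip(weights, outcomes))
    return num / sum(weights)

# Three recruits: binary outcome indicator and self-reported degree
p_hat = rds_ii_estimate([1, 0, 1], [2, 4, 8])  # -> 5/7
```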
Sampling Practices and Social Spaces: Exploring a Hip-Hop Approach to Higher Education
Petchauer, Emery
2010-01-01
Much more than a musical genre, hip-hop culture exists as an animating force in the lives of many young adults. This article looks beyond the moral concerns often associated with rap music to explore how hip-hop as a larger set of expressions and practices implicates the educational experiences, activities, and approaches for students. The article…
Bieg, Madeleine; Goetz, Thomas; Sticca, Fabio; Brunner, Esther; Becker, Eva; Morger, Vinzenz; Hubbard, Kyle
2017-01-01
Various theoretical approaches propose that emotions in the classroom are elicited by appraisal antecedents, with subjective experiences of control playing a crucial role in this context. Perceptions of control, in turn, are expected to be influenced by the classroom social environment, which can include the teaching methods being employed (e.g.,…
DEFF Research Database (Denmark)
Vahdatirad, Mohammadjavad; Bayat, Mehdi; Andersen, Lars Vabbersgaard
2012-01-01
In this study, a stochastic approach is used to obtain the horizontal and rotational stiffness of an offshore monopile foundation. A nonlinear stochastic p-y curve is integrated into a finite element scheme for calculation of the monopile response in over-consolidated clay having spatial undr...
The 4-vessel Sampling Approach to Integrative Studies of Human Placental Physiology In Vivo.
Holme, Ane M; Holm, Maia B; Roland, Marie C P; Horne, Hildegunn; Michelsen, Trond M; Haugen, Guttorm; Henriksen, Tore
2017-08-02
The human placenta is highly inaccessible for research while still in utero. The current understanding of human placental physiology in vivo is therefore largely based on animal studies, despite the high diversity among species in placental anatomy, hemodynamics and duration of the pregnancy. The vast majority of human placenta studies are ex vivo perfusion studies or in vitro trophoblast studies. Although in vitro studies and animal models are essential, extrapolation of the results from such studies to the human placenta in vivo is uncertain. We aimed to study human placenta physiology in vivo at term, and present a detailed protocol of the method. Exploiting the intraabdominal access to the uterine vein just before the uterine incision during planned cesarean section, we collect blood samples from the incoming and outgoing vessels on the maternal and fetal sides of the placenta. When combining concentration measurements from blood samples with volume blood flow measurements, we are able to quantify placental and fetal uptake and release of any compound. Furthermore, placental tissue samples from the same mother-fetus pairs can provide measurements of transporter density and activity and other aspects of placental functions in vivo. Through this integrative use of the 4-vessel sampling method we are able to test some of the current concepts of placental nutrient transfer and metabolism in vivo, both in normal and pathological pregnancies. Furthermore, this method enables the identification of substances secreted by the placenta to the maternal circulation, which could be an important contribution to the search for biomarkers of placenta dysfunction.
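Quantifying uptake or release from paired vessel concentrations and volume blood flow is an application of the Fick principle; a minimal sketch (values and units are hypothetical):

```python
def fick_uptake(flow, c_in, c_out):
    """Fick principle: uptake = volume blood flow x (incoming - outgoing
    concentration). Positive values indicate net uptake, negative values
    net release by the organ."""
    return flow * (c_in - c_out)

# e.g. fetal side: umbilical vein (incoming) vs umbilical artery (outgoing)
uptake = fick_uptake(0.25, 4.5, 4.1)  # L/min x mmol/L -> 0.1 mmol/min
```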
Gender Wage Gap : A Semi-Parametric Approach With Sample Selection Correction
Picchio, M.; Mussida, C.
2010-01-01
Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates
Functional approximations to posterior densities: a neural network approach to efficient sampling
L.F. Hoogerheide (Lennart); J.F. Kaashoek (Johan); H.K. van Dijk (Herman)
2002-01-01
The performance of Monte Carlo integration methods like importance sampling or Markov Chain Monte Carlo procedures greatly depends on the choice of the importance or candidate density. Usually, such a density has to be "close" to the target density in order to yield numerically accurate
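The dependence on the candidate density can be illustrated with a self-normalized importance-sampling sketch (standard normal target, wider normal candidate; unnormalized log-densities suffice because the constants cancel in the ratio):

```python
import numpy as np

rng = np.random.default_rng(1)

def is_mean(f, log_p, log_q, draws):
    """Self-normalized importance sampling estimate of E_p[f(X)],
    using draws from the candidate density q."""
    w = np.exp(log_p(draws) - log_q(draws))
    return np.sum(w * f(draws)) / np.sum(w)

x = rng.normal(0.0, 2.0, size=200_000)  # candidate: N(0, sd=2)
est = is_mean(lambda t: t**2,
              lambda t: -0.5 * t**2,    # target N(0, 1), unnormalized
              lambda t: -t**2 / 8.0,    # candidate, unnormalized
              x)                        # estimates Var(X) = 1
```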
Ertefaie, Ashkan; Asgharian, Masoud; Stephens, David
2014-01-01
The pervasive use of prevalent cohort studies on disease duration increasingly calls for an appropriate methodology to account for the biases that invariably accompany samples formed by such data. It is well known, for example, that subjects with shorter lifetimes are less likely to be present in such studies. Moreover, certain covariate values could be preferentially selected into the sample, being linked to the long-term survivors. The existing methodology for estimating the propensity score using data collected on prevalent cases requires the correct conditional survival/hazard function given the treatment and covariates. This requirement can be alleviated if the disease under study has stationary incidence, the so-called stationarity assumption. We propose a nonparametric adjustment technique based on a weighted estimating equation for estimating the propensity score which does not require modeling the conditional survival/hazard function when the stationarity assumption holds. The estimator's large-sample properties are established and its small-sample behavior is studied via simulation. The estimated propensity score is utilized to estimate the survival curves.
Brzeski, P; Wojewoda, J; Kapitaniak, T; Kurths, J; Perlikowski, P
2017-07-21
In this paper we show the first broad experimental confirmation of the basin stability approach. Basin stability is one of the sample-based methods for the analysis of complex, multidimensional dynamical systems. We show that the investigated method is a reliable tool for the analysis of dynamical systems, and we demonstrate that it has significant advantages which make it appropriate for many applications in which classical analysis methods are difficult to apply. We study theoretically and experimentally the dynamics of a forced double pendulum. We examine the ranges of stability for nine different solutions of the system in a two-parameter space, namely the amplitude and the frequency of excitation. We apply the path-following and the extended basin stability methods (Brzeski et al., Meccanica 51(11), 2016) and we verify the obtained theoretical results in experimental investigations. Comparison of the presented results shows that the sample-based approach offers precision comparable to the classical method of analysis. However, it is much simpler to apply and can be used regardless of the type of dynamical system and its dimension. Moreover, the sample-based approach has some unique advantages and can be applied without precise knowledge of parameter values.
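Basin stability is estimated by Monte Carlo sampling of initial conditions and counting the fraction that converge to a given attractor. A toy sketch for the bistable system x' = x − x³ (not the forced double pendulum of the paper):

```python
import numpy as np

rng = np.random.default_rng(42)

def basin_stability(n_samples=1000, t_end=20.0, dt=0.01):
    """Fraction of uniform random initial conditions on [-2, 2] that
    settle near the attractor x = +1 of x' = x - x**3 (Euler scheme)."""
    x = rng.uniform(-2.0, 2.0, size=n_samples)
    for _ in range(int(t_end / dt)):
        x += dt * (x - x**3)
    return float(np.mean(x > 0.5))

bs = basin_stability()  # close to 0.5: the two basins split [-2, 2] evenly
```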
Zhang, Jie; Wei, Shimin; Ayres, David W; Smith, Harold T; Tse, Francis L S
2011-09-01
Although it is well known that automation can provide significant improvement in the efficiency of biological sample preparation in quantitative LC-MS/MS analysis, it has not been widely implemented in bioanalytical laboratories throughout the industry. This can be attributed to the lack of a sound strategy and practical procedures in working with robotic liquid-handling systems. Several comprehensive automation assisted procedures for biological sample preparation and method validation were developed and qualified using two types of Hamilton Microlab liquid-handling robots. The procedures developed were generic, user-friendly and covered the majority of steps involved in routine sample preparation and method validation. Generic automation procedures were established as a practical approach to widely implement automation into the routine bioanalysis of samples in support of drug-development programs.
Kashdan, Todd B; Farmer, Antonina S
2014-06-01
The ability to recognize and label emotional experiences has been associated with well-being and adaptive functioning. This skill is particularly important in social situations, as emotions provide information about the state of relationships and help guide interpersonal decisions, such as whether to disclose personal information. Given the interpersonal difficulties linked to social anxiety disorder (SAD), deficient negative emotion differentiation may contribute to impairment in this population. We hypothesized that people with SAD would exhibit less negative emotion differentiation in daily life, and these differences would translate to impairment in social functioning. We recruited 43 people diagnosed with generalized SAD and 43 healthy adults to describe the emotions they experienced over 14 days. Participants received palmtop computers for responding to random prompts and describing naturalistic social interactions; to complete end-of-day diary entries, they used a secure online website. We calculated intraclass correlation coefficients to capture the degree of differentiation of negative and positive emotions for each context (random moments, face-to-face social interactions, and end-of-day reflections). Compared to healthy controls, the SAD group exhibited less negative (but not positive) emotion differentiation during random prompts, social interactions, and (at trend level) end-of-day assessments. These differences could not be explained by emotion intensity or variability over the 14 days, or to comorbid depression or anxiety disorders. Our findings suggest that people with generalized SAD have deficits in clarifying specific negative emotions felt at a given point of time. These deficits may contribute to difficulties with effective emotion regulation and healthy social relationship functioning.
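Emotion differentiation is typically indexed by an intraclass correlation across emotion items over repeated moments: a high ICC means emotions are rated in lockstep (low differentiation). A hedged sketch of a consistency ICC (Shrout-Fleiss ICC(3,1); the authors' exact variant is not specified in the abstract):

```python
import numpy as np

def icc_consistency(x):
    """ICC(3,1) for an (n moments x k emotion items) matrix:
    (MS_rows - MS_error) / (MS_rows + (k - 1) * MS_error)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)
    resid = x - x.mean(axis=1, keepdims=True) - x.mean(axis=0) + grand
    ms_err = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Two emotion items that move in lockstep over four moments -> ICC = 1
icc = icc_consistency([[1, 2], [2, 3], [3, 4], [4, 5]])
```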
Han, L. F; Plummer, Niel
2016-01-01
Numerous methods have been proposed to estimate the pre-nuclear-detonation 14C content of dissolved inorganic carbon (DIC) recharged to groundwater that has been corrected/adjusted for geochemical processes in the absence of radioactive decay (14C0) - a quantity that is essential for estimation of the radiocarbon age of DIC in groundwater. The models/approaches most commonly used are grouped as follows: (1) single-sample-based models, (2) a statistical approach based on the observed (curved) relationship between 14C and δ13C data for the aquifer, and (3) the geochemical mass-balance approach that constructs adjustment models accounting for all the geochemical reactions known to occur along a groundwater flow path. This review discusses first the geochemical processes behind each of the single-sample-based models, followed by discussions of the statistical approach and the geochemical mass-balance approach. Finally, the applications, advantages and limitations of the three groups of models/approaches are discussed. The single-sample-based models constitute the prevailing use of 14C data in hydrogeology and hydrological studies. This is in part because the models are applied to an individual water sample to estimate the 14C age; therefore, the measurement data are easily available. These models have been shown to provide realistic radiocarbon ages in many studies. However, they usually are limited to simple carbonate aquifers, and selection of model may have significant effects on 14C0, often resulting in a wide range of estimates of 14C ages. Of the single-sample-based models, four are recommended for the estimation of 14C0 of DIC in groundwater: Pearson's model (Ingerson and Pearson, 1964; Pearson and White, 1967), Han & Plummer's model (Han and Plummer, 2013), the IAEA model (Gonfiantini, 1972; Salem et al., 1980), and Oeschger's model (Geyh, 2000). These four models include all processes considered in single-sample-based models, and can be used in different ranges of
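As an illustration of the single-sample-based approach, Pearson's δ13C mixing correction and the conventional decay equation can be sketched as below; this is a commonly stated form, and the end-member δ13C values used in the example are assumptions, not values from the review:

```python
from math import log

def pearson_q(d13c_dic, d13c_soil, d13c_carb):
    """Pearson mixing factor: fraction of DIC carbon derived from
    soil-gas CO2, so that 14C0 = q * 14C_soil (commonly stated form)."""
    return (d13c_dic - d13c_carb) / (d13c_soil - d13c_carb)

def radiocarbon_age(a14c, a14c0):
    """Conventional radiocarbon age, t = -8033 * ln(A / A0) years."""
    return -8033.0 * log(a14c / a14c0)

q = pearson_q(-12.0, -23.0, 0.0)  # ~0.52 with these assumed end-members
t = radiocarbon_age(50.0, 100.0)  # one Libby half-life, ~5568 years
```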
Equilibrium sampling of hydrophobic organic chemicals in sediments: challenges and new approaches
DEFF Research Database (Denmark)
Schaefer, S.; Mayer, Philipp; Becker, B.
2015-01-01
concentrations in thicker silicone coating for more hydrophobic PCBs can be explained by non-equilibrium. Equilibrium concentrations in silicone were then determined by non-linear least square regression of analyte concentrations in polymer as a function of silicone mass using a first order kinetic model...... that microbial degradation can play a significant role during equilibrium sampling of biodegradable compounds even during short incubation times and despite confirmation of equilibrium partitioning....
Gottlieb, Jacqueline
2017-08-24
In natural behavior we actively gather information using attention and active sensing behaviors (such as shifts of gaze) to sample relevant cues. However, while attention and decision making are naturally coordinated, in the laboratory they have been dissociated. Attention is studied independently of the actions it serves. Conversely, decision theories make the simplifying assumption that the relevant information is given, and do not attempt to describe how the decision maker may learn and implement active sampling policies. In this paper I review recent studies that address questions of attentional learning, cue validity and information seeking in humans and non-human primates. These studies suggest that learning a sampling policy involves large scale interactions between networks of attention and valuation, which implement these policies based on reward maximization, uncertainty reduction and the intrinsic utility of cognitive states. I discuss the importance of using such paradigms for formalizing the role of attention, as well as devising more realistic theories of decision making that capture a broader range of empirical observations. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Anne H Berman
The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11-16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. A random population sample consisting of 600 children aged 11-16, 100 per age group, and one of their parents (N = 1200), were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being, using the PABAK, and total, continuous scores were evaluated using Bland-Altman plots. Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences occurred and parent ratings
Berman, Anne H.; Liu, Bojing; Ullman, Sara; Jadbäck, Isabel; Engström, Karin
2016-01-01
Background The KIDSCREEN-27 is a measure of child and adolescent quality of life (QoL), with excellent psychometric properties, available in child-report and parent-rating versions in 38 languages. This study provides child-reported and parent-rated norms for the KIDSCREEN-27 among Swedish 11–16 year-olds, as well as child-parent agreement. Sociodemographic correlates of self-reported wellbeing and parent-rated wellbeing were also measured. Methods A random population sample consisting of 600 children aged 11–16, 100 per age group and one of their parents (N = 1200), were approached for response to self-reported and parent-rated versions of the KIDSCREEN-27. Parents were also asked about their education, employment status and their own QoL based on the 26-item WHOQOL-Bref. Based on the final sampling pool of 1158 persons, a 34.8% response rate of 403 individuals was obtained, including 175 child-parent pairs, 27 child singleton responders and 26 parent singletons. Gender and age differences for parent ratings and child-reported data were analyzed using t-tests and the Mann-Whitney U-test. Post-hoc Dunn tests were conducted for pairwise comparisons when the p-value for specific subscales was 0.05 or lower. Child-parent agreement was tested item-by-item, using the Prevalence- and Bias-Adjusted Kappa (PABAK) coefficient for ordinal data (PABAK-OS); dimensional and total score agreement was evaluated based on dichotomous cut-offs for lower well-being, using the PABAK and total, continuous scores were evaluated using Bland-Altman plots. Results Compared to European norms, Swedish children in this sample scored lower on Physical wellbeing (48.8 SE/49.94 EU) but higher on the other KIDSCREEN-27 dimensions: Psychological wellbeing (53.4/49.77), Parent relations and autonomy (55.1/49.99), Social Support and peers (54.1/49.94) and School (55.8/50.01). Older children self-reported lower wellbeing than younger children. No significant self-reported gender differences
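The prevalence- and bias-adjusted kappa used above has a simple closed form: for k categories, PABAK = (k·Po − 1)/(k − 1), where Po is the observed proportion of agreement. A minimal sketch for the dichotomous cut-off case:

```python
def pabak(ratings_a, ratings_b, k=2):
    """Prevalence- and bias-adjusted kappa: (k * Po - 1) / (k - 1),
    with Po the observed agreement proportion."""
    agree = sum(a == b for a, b in zip(ratings_a, ratings_b))
    po = agree / len(ratings_a)
    return (k * po - 1) / (k - 1)

# Child vs parent dichotomized ratings over four items: Po = 0.75
kappa = pabak([1, 1, 0, 0], [1, 1, 0, 1])  # -> 0.5
```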
Weinhold, Jan; Hunger, Christina; Bornhäuser, Annette; Link, Leoni; Rochon, Justine; Wild, Beate; Schweitzer, Jochen
2013-10-01
The study examined the efficacy of nonrecurring family constellation seminars on psychological health. We conducted a monocentric, single-blind, stratified, and balanced randomized controlled trial (RCT). After choosing their roles for participating in a family constellation seminar as either active participant (AP) or observing participant (OP), 208 adults (M = 48 years, SD = 10; 79% women) from the general population were randomly allocated to the intervention group (IG; 3-day family constellation seminar; 64 AP, 40 OP) or a wait-list control group (WLG; 64 AP, 40 OP). It was predicted that family constellation seminars would improve psychological functioning (Outcome Questionnaire OQ-45.2) at 2-week and 4-month follow-ups. In addition, we assessed the impact of family constellation seminars on psychological distress and motivational incongruence. The IG showed significantly improved psychological functioning (d = 0.45 at 2-week follow-up, p = .003; d = 0.46 at 4-month follow-up, p = .003). Results were confirmed for psychological distress and motivational incongruence. No adverse events were reported. This RCT provides evidence for the efficacy of family constellation in a nonclinical population. The implications of the findings are discussed.
Bachschmid-Romano, L.; Battistin, C.; Opper, M.; Roudi, Y.
2016-10-01
We describe and analyze some novel approaches for studying the dynamics of Ising spin glass models. We first briefly consider the variational approach based on minimizing the Kullback-Leibler divergence between independent trajectories and the real ones, and note that this approach only coincides with the mean field equations from the saddle point approximation to the generating functional when the dynamics is defined through a logistic link function, which is the case for the kinetic Ising model with parallel update. We then spend the rest of the paper developing two ways of going beyond the saddle point approximation to the generating functional. In the first one, we develop a variational perturbative approximation to the generating functional by expanding the action around a quadratic function of the local fields and conjugate local fields whose parameters are optimized. We derive analytical expressions for the optimal parameters and show that when the optimization is suitably restricted, we recover the mean field equations that are exact for the fully asymmetric random couplings (Mézard and Sakellariou 2011 J. Stat. Mech. 2011 L07001). However, without this restriction the results are different. We also describe an extended Plefka expansion in which, in addition to the magnetization, we also fix the correlation and response functions. Finally, we numerically study the performance of these approximations for Sherrington-Kirkpatrick-type couplings for various coupling strengths and degrees of coupling symmetry, for external fields that are either temporally constant but random or time-varying. We show that the dynamical equations derived from the extended Plefka expansion outperform the others in all regimes, although they are computationally more demanding. The unconstrained variational approach does not perform well in the small coupling regime, while it approaches the dynamical TAP equations of Roudi and Hertz (2011 J. Stat. Mech. 2011 P03031) for strong couplings.
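The simplest of the approximations discussed, the naive mean-field equations for the parallel-update kinetic Ising model, iterate m(t+1) = tanh(J m(t) + h); a sketch (this is the baseline approximation, not the extended Plefka expansion):

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_mf_dynamics(J, h, m0, steps=100):
    """Iterate the naive mean-field map m <- tanh(J @ m + h) for a
    kinetic Ising model with parallel update; returns the final
    magnetization vector."""
    m = np.asarray(m0, dtype=float)
    for _ in range(steps):
        m = np.tanh(J @ m + h)
    return m

n = 20
J = rng.normal(0.0, 0.1 / np.sqrt(n), size=(n, n))  # weak random couplings
h = rng.normal(0.0, 0.3, size=n)                    # constant random fields
m = naive_mf_dynamics(J, h, np.zeros(n))
```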
Andersson, Ola; Hellström-Westas, Lena; Andersson, Dan; Clausen, Jesper; Domellöf, Magnus
2013-05-01
To investigate the effect of delayed cord clamping (DCC) compared with early cord clamping (ECC) on maternal postpartum hemorrhage (PPH) and umbilical cord blood gas sampling. Secondary analysis of a parallel-group, single-center, randomized controlled trial. Swedish county hospital. 382 term deliveries after a low-risk pregnancy. Deliveries were randomized to DCC (≥180 seconds, n = 193) or ECC (≤10 seconds, n = 189). Maternal blood loss was estimated by the midwife. Samples for blood gas analysis were taken from one umbilical artery and the umbilical vein, from the pulsating unclamped cord in the DCC group and from the double-clamped cord in the ECC group. Samples were classified as valid when the arterial-venous difference was -0.02 or less for pH and 0.5 kPa or more for pCO2. Main outcome measures: PPH and proportion of valid blood gas samples. The differences between the DCC and ECC groups with regard to PPH (1.2%, p = 0.8) and severe PPH (-2.7%, p = 0.3) were small and non-significant. The proportion of valid blood gas samples was similar between the DCC (67%, n = 130) and ECC (74%, n = 139) groups, with 6% (95% confidence interval: -4%-16%, p = 0.2) fewer valid samples after DCC. Delayed cord clamping, compared with early, did not have a significant effect on maternal postpartum hemorrhage or on the proportion of valid blood gas samples. We conclude that delayed cord clamping is a feasible method from an obstetric perspective. © 2012 The Authors. Acta Obstetricia et Gynecologica Scandinavica © 2012 Nordic Federation of Societies of Obstetrics and Gynecology.
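The validity rule for paired cord samples stated in the abstract is easily expressed as a predicate (pCO2 in kPa; the example values are hypothetical):

```python
def valid_cord_gas_pair(ph_art, ph_ven, pco2_art, pco2_ven):
    """Valid when the arterial-venous pH difference is -0.02 or less
    and the arterial-venous pCO2 difference is 0.5 kPa or more."""
    return (ph_art - ph_ven) <= -0.02 and (pco2_art - pco2_ven) >= 0.5

ok = valid_cord_gas_pair(7.25, 7.32, 7.5, 6.2)   # True: -0.07 and +1.3
bad = valid_cord_gas_pair(7.30, 7.31, 6.0, 6.0)  # False: differences too small
```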
Lui, Kung-Jong; Chang, Kuang-Chao
2008-01-15
When a generic drug is developed, it is important to assess the equivalence of therapeutic efficacy between the new and the standard drugs. Although the number of publications on testing equivalence and its relevant sample size determination is numerous, the discussion on sample size determination for a desired power of detecting equivalence under a randomized clinical trial (RCT) with non-compliance and missing outcomes is limited. In this paper, we derive under the compound exclusion restriction model the maximum likelihood estimator (MLE) for the ratio of probabilities of response among compliers between two treatments in an RCT with both non-compliance and missing outcomes. Using the MLE with the logarithmic transformation, we develop an asymptotic test procedure for assessing equivalence and find that this test procedure can perform well with respect to type I error based on Monte Carlo simulation. We further develop a sample size calculation formula for a desired power of detecting equivalence at a nominal alpha-level. To evaluate the accuracy of the sample size calculation formula, we apply Monte Carlo simulation again to calculate the simulated power of the proposed test procedure corresponding to the resulting sample size for a desired power of 80 per cent at the 0.05 level in a variety of situations. We also include a discussion on determining the optimal ratio of sample size allocation subject to a desired power to minimize a linear cost function and provide a sensitivity analysis of the sample size formula developed here under an alternative model with data missing at random. Copyright (c) 2007 John Wiley & Sons, Ltd.
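For intuition, the textbook TOST-style sample size for equivalence of two response probabilities on the log-ratio scale can be sketched as below; this uses a delta-method variance and is not the authors' compound-exclusion-restriction formula, which additionally accounts for non-compliance and missing outcomes:

```python
from math import ceil, log
from statistics import NormalDist

def n_per_arm_equivalence(p1, p2, delta=1.25, alpha=0.05, power=0.80):
    """Per-arm n to show the ratio p1/p2 lies within (1/delta, delta),
    assuming the true ratio is well inside the margin (TOST, delta
    method on log(p1_hat / p2_hat))."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(1 - (1 - power) / 2)
    unit_var = (1 - p1) / p1 + (1 - p2) / p2  # n * Var(log ratio)
    margin = log(delta) - abs(log(p1 / p2))
    return ceil((z_a + z_b) ** 2 * unit_var / margin ** 2)

n = n_per_arm_equivalence(0.8, 0.8)  # 86 per arm under these assumptions
```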
Energy Technology Data Exchange (ETDEWEB)
Burr, T. [Statistical Sciences Group, Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)], E-mail: tburr@lanl.gov; Butterfield, K. [Advanced Nuclear Technology Group, Los Alamos National Laboratory, Mail Stop F600, Los Alamos, NM 87545 (United States)
2008-09-01
Neutron multiplicity counting is an established method to estimate the spontaneous fission rate, and therefore also, for example, the plutonium mass, in a sample that includes other neutron sources. The extent to which the sample and detector obey the 'point model' assumptions impacts the estimate's total measurement error, but, in nearly all cases, it is useful to evaluate the random error contribution via the variances of the second and third reduced sample moments of the neutron source strength. Therefore, this paper derives exact expressions for the variances and covariances of the second and third reduced sample moments for either randomly triggered or signal-triggered non-overlapping counting gates, and compares them to the corresponding variances in simulated data. Approximate expressions are also provided for the case of overlapping counting gates. These variances and covariances are useful in figure-of-merit calculations to predict assay performance prior to data collection. In addition, whenever real data are available, a bootstrap method is presented as an alternative but effective way to estimate these variances.
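The bootstrap alternative mentioned above can be sketched as follows. The definitions of the reduced moments here (means of C(n,2) and C(n,3) over the gate counts) are illustrative, and the paper's exact variance expressions are not reproduced:

```python
import random

def reduced_moments(counts):
    """Second and third reduced sample moments of a list of gate counts n_i
    (illustrative definitions: means of C(n,2) and C(n,3))."""
    n = len(counts)
    m2 = sum(c * (c - 1) / 2 for c in counts) / n
    m3 = sum(c * (c - 1) * (c - 2) / 6 for c in counts) / n
    return m2, m3

def bootstrap_var(counts, stat, n_boot=2000, seed=1):
    """Bootstrap variance of a statistic: resample the gate counts with
    replacement and take the sample variance of the recomputed statistic."""
    rng = random.Random(seed)
    vals = []
    for _ in range(n_boot):
        resample = [rng.choice(counts) for _ in counts]
        vals.append(stat(resample))
    mean = sum(vals) / n_boot
    return sum((v - mean) ** 2 for v in vals) / (n_boot - 1)
```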
Analyzing crash frequency in freeway tunnels: A correlated random parameters approach.
Hou, Qinzhong; Tarko, Andrew P; Meng, Xianghai
2018-02-01
The majority of past road safety studies have focused on open road segments, while only a few have focused on tunnels. Moreover, the past tunnel studies produced some inconsistent results about the safety effects of traffic patterns, tunnel design, and pavement conditions. The effects of these conditions therefore remain unknown, especially for freeway tunnels in China. The study presented in this paper investigated the safety effects of these various factors utilizing four years (2009-2012) of data as well as three models: 1) a random effects negative binomial model (RENB), 2) an uncorrelated random parameters negative binomial model (URPNB), and 3) a correlated random parameters negative binomial model (CRPNB). Of these three, the CRPNB model provided better goodness-of-fit and offered more insights into the factors that contribute to tunnel safety. The CRPNB was not only able to allocate part of the otherwise unobserved heterogeneity to individual model parameters but also to estimate the cross-correlations between these parameters. Furthermore, the study results showed that traffic volume, tunnel length, proportion of heavy trucks, curvature, and pavement rutting were associated with higher frequencies of traffic crashes, while the distance to the tunnel wall, distance to the adjacent tunnel, distress ratio, International Roughness Index (IRI), and friction coefficient were associated with lower crash frequencies. In addition, the effects of the heterogeneity of the proportion of heavy trucks, the curvature, the rutting depth, and the friction coefficient were identified and their inter-correlations were analyzed. Copyright © 2017 Elsevier Ltd. All rights reserved.
A Random Trimming Approach for Obtaining High-Precision Embedded Resistors
2008-12-01
[Figure residue: frequency histogram comparing single-dive/random trimming (67 points) against L-cut trimming (66 points), higher versus lower precision relative to target resistance.] For a resistor of a given value, the total power dissipated (P) by the resistor is P = I²R = V²/R (9), where I is the current flowing through the resistor, R is the resistance value of the resistor, and V is the voltage across the resistor. The total power dissipated by the
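Equation (9) as a trivial helper for quick trim-target checks (supply any two of V, I, R; the function name is illustrative):

```python
def resistor_power(V=None, I=None, R=None):
    """Equation (9): P = I^2 * R = V^2 / R. Supply any two of V, I, R."""
    if V is None:
        return I ** 2 * R
    if I is None:
        return V ** 2 / R
    return V * I
```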
Reconstruction of large, irregularly sampled multidimensional images. A tensor-based approach.
Morozov, Oleksii Vyacheslav; Unser, Michael; Hunziker, Patrick
2011-02-01
Many practical applications require the reconstruction of images from irregularly sampled data. The spline formalism offers an attractive framework for solving this problem; the currently available methods, however, are hard to deploy for large-scale interpolation problems in dimensions greater than two (3-D, 3-D+time) because of an exponential increase of their computational cost (curse of dimensionality). Here, we revisit the standard regularized least-squares formulation of the interpolation problem, and propose to perform the reconstruction in a uniform tensor-product B-spline basis as an alternative to the classical solution involving radial basis functions. Our analysis reveals that the underlying multilinear system of equations admits a tensor decomposition with an extreme sparsity of its one-dimensional components. We exploit this property for implementing a parallel, memory-efficient system solver. We show that the computational complexity of the proposed algorithm is essentially linear in the number of measurements and that its dependency on the number of dimensions is significantly less than that of the original sparse matrix-based implementation. The net benefit is a substantial reduction in memory requirement and operation count when compared to standard matrix-based algorithms, so that even 4-D problems with millions of samples become computationally feasible on desktop PCs in reasonable time. After validating the proposed algorithm in 3-D and 4-D, we apply it to a concrete imaging problem: the reconstruction of medical ultrasound images (3-D+time) from a large set of irregularly sampled measurements, acquired by a fast rotating ultrasound transducer.
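A one-dimensional, degree-1 miniature of the idea: least-squares reconstruction of irregularly sampled data in a uniform (hat-function) spline basis. The paper's tensor-product B-splines and tensor solver are not reproduced; this only illustrates the basis-fitting step:

```python
import numpy as np

def hat_basis(x, knots, h):
    """Uniform linear B-spline (hat function) basis evaluated at positions x."""
    return np.maximum(0.0, 1.0 - np.abs(x[:, None] - knots[None, :]) / h)

def fit_irregular(x, y, n_knots=12):
    """Least-squares reconstruction of irregularly sampled 1-D data on a
    uniform spline grid (a degree-1 stand-in for tensor-product B-splines)."""
    knots = np.linspace(x.min(), x.max(), n_knots)
    h = knots[1] - knots[0]
    A = hat_basis(x, knots, h)              # design matrix: one column per knot
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return knots, h, coef

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 1.0, 200))     # irregular sample positions
y = np.sin(2 * np.pi * x)                   # signal to reconstruct
knots, h, coef = fit_irregular(x, y)
y_hat = hat_basis(x, knots, h) @ coef       # reconstruction at the sample sites
```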
A novel pretreatment approach for fast determination of organochlorine pesticides in biotic samples.
Yang, Dai B; Wang, Ya Q; Liu, Wen X; Tao, Shu
2008-05-01
Recent studies have focused on enantiomeric behaviors of chiral organochlorine pesticides (OCPs) in biotic matrices because they provide insights into the biotransformation processes of chiral OCPs. In the present paper, a double in-line column chromatographic method was developed to effectively remove lipid impurities from different biotic samples for clean-up of OCPs. After an initial Soxhlet extraction of OCPs from the biotic samples by a mixture of acetone and dichloromethane (DCM), dimethyl sulfoxide (DMSO) was directly added to the extract, and the low-boiling-point solvents (acetone and DCM) were then evaporated. The OCPs remaining in DMSO were eluted via column 1, filled with silica gel, and subsequently passed through column 2, packed with 15% deactivated Florisil. This novel method was characterized by significant time and solvent savings. The recovery rates of alpha-HCH (hexachlorocyclohexane), beta-HCH, gamma-HCH and delta-HCH were 78.5+/-3.1%, 72.4+/-7.7%, 72+/-4.0% and 70.0+/-8.7%, respectively, and 92.5+/-3.8%, 79.7+/-6.7% and 83.4+/-6.5% for 1,1-dichloro-2-(2-chlorophenyl)-2-(4-chlorophenyl)ethylene (o,p'-DDE), 1,1-dichloro-2-(2-chlorophenyl)-2-(4-chlorophenyl)ethane (o,p'-DDD) and 1,1,1-trichloro-2-(2-chlorophenyl)-2-(4-chlorophenyl)ethane (o,p'-DDT), respectively. In addition, the separation efficiencies of the target compounds on both achiral and chiral gas chromatographic columns were satisfactory using the established method. Therefore, the double in-line column chromatography is a useful alternative method for pretreatment of OCPs in different biotic samples.
Ghetti, Claire M
2013-01-01
Individuals undergoing cardiac catheterization are likely to experience elevated anxiety periprocedurally, with highest anxiety levels occurring immediately prior to the procedure. Elevated anxiety has the potential to negatively impact these individuals psychologically and physiologically in ways that may influence the subsequent procedure. This study evaluated the use of music therapy, with a specific emphasis on emotional-approach coping, immediately prior to cardiac catheterization to impact periprocedural outcomes. The randomized, pretest/posttest control group design consisted of two experimental groups--the Music Therapy with Emotional-Approach Coping group [MT/EAC] (n = 13), and a talk-based Emotional-Approach Coping group (n = 14), compared with a standard care Control group (n = 10). MT/EAC led to improved positive affective states in adults awaiting elective cardiac catheterization, whereas a talk-based emphasis on emotional-approach coping or standard care did not. All groups demonstrated a significant overall decrease in negative affect. The MT/EAC group demonstrated a statistically significant, but not clinically significant, increase in systolic blood pressure most likely due to active engagement in music making. The MT/EAC group trended toward shortest procedure length and least amount of anxiolytic required during the procedure, while the EAC group trended toward least amount of analgesic required during the procedure, but these differences were not statistically significant. Actively engaging in a session of music therapy with an emphasis on emotional-approach coping can improve the well-being of adults awaiting cardiac catheterization procedures.
Sex estimation from the tarsal bones in a Portuguese sample: a machine learning approach.
Navega, David; Vicente, Ricardo; Vieira, Duarte N; Ross, Ann H; Cunha, Eugénia
2015-05-01
Sex estimation is extremely important in the analysis of human remains as many of the subsequent biological parameters are sex specific (e.g., age at death, stature, and ancestry). When dealing with incomplete or fragmented remains, metric analysis of the tarsal bones of the feet has proven valuable. In this study, the utility of 18 width, length, and height tarsal measurements were assessed for sex-related variation in a Portuguese sample. A total of 300 males and females from the Coimbra Identified Skeletal Collection were used to develop sex prediction models based on statistical and machine learning algorithms such as discriminant function analysis, logistic regression, classification trees, and artificial neural networks. All models were evaluated using 10-fold cross-validation and an independent test sample composed of 60 males and females from the Identified Skeletal Collection of the 21st Century. Results showed that tarsal bone sex-related variation can be easily captured with a high degree of repeatability. A simple tree-based multivariate algorithm involving measurements from the calcaneus, talus, first and third cuneiforms, and cuboid resulted in 88.3% correct sex estimation on both the training and independent test sets. Traditional statistical classifiers such as the discriminant function analysis were outperformed by machine learning techniques. The results show that machine learning algorithms are an important tool that forensic practitioners should consider when developing new standards for sex estimation.
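A toy version of the tree-based classifier idea: a single-split "stump" on one synthetic tarsal measurement. The means, SDs, and threshold below are invented for illustration and are not the Coimbra collection values:

```python
import random

def train_stump(lengths, sexes):
    """One-split 'classification tree': pick the threshold on a single tarsal
    measurement that maximizes training accuracy (males assumed larger)."""
    best_acc, best_t = 0.0, None
    for t in sorted(set(lengths)):
        acc = sum((l >= t) == (s == "M") for l, s in zip(lengths, sexes)) / len(sexes)
        if acc > best_acc:
            best_acc, best_t = acc, t
    return best_t

# Synthetic calcaneus lengths in mm; means/SDs are invented for illustration.
rng = random.Random(42)
males = [rng.gauss(82.0, 4.0) for _ in range(150)]
females = [rng.gauss(74.0, 4.0) for _ in range(150)]
lengths = males + females
sexes = ["M"] * 150 + ["F"] * 150
t = train_stump(lengths, sexes)
accuracy = sum((l >= t) == (s == "M") for l, s in zip(lengths, sexes)) / 300.0
```

A real classification tree would combine several such splits across different measurements, as the paper's calcaneus-talus-cuneiform-cuboid model does.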
Proteomics of hydrophobic samples: Fast, robust and low-cost workflows for clinical approaches.
Pasing, Yvonne; Colnoe, Sayda; Hansen, Terkel
2017-03-01
In a comparative study, we investigated the influence of nine sample preparation workflows and seven different lysis buffers for qualitative and quantitative analysis of the human adipose tissue proteome. Adipose tissue is not just a fat depot but also an endocrine organ, which cross-talks with other tissue types and organs throughout the body, such as the liver, muscle, pancreas, and brain. Its secreted molecules have an influence on the nervous, immune, and vascular systems; thus adipose tissue plays an important role in the regulation of whole-body homeostasis. Proteomic analysis of adipose tissue is challenging due to the extremely high lipid content and the variety of cell types it contains. We investigated the influence of different detergents in the lysis buffer and compared commonly used methods such as protein precipitation and filter-aided sample preparation (FASP) with workflows involving acid-labile or precipitable surfactants. The results indicate that a sodium deoxycholate (SDC) based workflow had the highest efficiency and reproducibility for quantitative proteomic analysis. In total, 2564 proteins from the adipose tissue of a single person were identified. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
A Permutation-Randomization Approach to Test the Spatial Distribution of Plant Diseases.
Lione, G; Gonthier, P
2016-01-01
The analysis of the spatial distribution of plant diseases requires the availability of trustworthy geostatistical methods. The mean distance tests (MDT) are here proposed as a series of permutation and randomization tests to assess the spatial distribution of plant diseases when the variable of phytopathological interest is categorical. User-friendly software to perform the tests is provided. Estimates of power and type I error, obtained with Monte Carlo simulations, showed the reliability of the MDT (power > 0.80; type I error below the nominal level). A validation on pathogens causing root rot on conifers was successfully performed by verifying the consistency between the MDT responses and previously published data. An application of the MDT was carried out to analyze the relation between plantation density and the distribution of the infection of Gnomoniopsis castanea, an emerging fungal pathogen causing nut rot on sweet chestnut. Trees carrying nuts infected by the pathogen were randomly distributed in areas with different plantation densities, suggesting that the distribution of G. castanea was not related to plantation density. The MDT could be used to analyze the spatial distribution of plant diseases in both agricultural and natural ecosystems.
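The permutation logic behind tests of this kind can be sketched as follows. This is a generic mean-intra-distance permutation test on labelled point data, not the authors' MDT implementation:

```python
import math
import random

def mean_intra_distance(points, labels, target):
    """Mean pairwise Euclidean distance among points carrying the target label."""
    pts = [p for p, l in zip(points, labels) if l == target]
    total, n = 0.0, 0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            total += math.dist(pts[i], pts[j])
            n += 1
    return total / n

def permutation_p(points, labels, target, n_perm=999, seed=7):
    """One-sided permutation test: is the target class more spatially
    clustered (smaller mean intra-distance) than random labelling predicts?"""
    obs = mean_intra_distance(points, labels, target)
    rng = random.Random(seed)
    lab = list(labels)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(lab)
        if mean_intra_distance(points, lab, target) <= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)    # add-one permutation p-value
```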
Sadhukhan, B.; Nayak, A.; Mookerjee, A.
2017-12-01
In this communication we present together four distinct techniques for the study of the electronic structure of solids: the tight-binding linear muffin-tin orbitals, the real space and augmented space recursions, and the modified exchange-correlation. Using these we investigate the effect of random vacancies on the electronic properties of the carbon hexagonal allotrope, graphene, and the non-hexagonal allotrope, planar T graphene. We have inserted random vacancies at different concentrations to simulate disorder in pristine graphene and planar T graphene sheets. The resulting disorder, both on-site (diagonal disorder) and in the hopping integrals (off-diagonal disorder), introduces sharp peaks in the vicinity of the Dirac point built up from localized states for both hexagonal and non-hexagonal structures. These peaks become resonances with increasing vacancy concentration. We find that in the presence of vacancies, graphene-like linear dispersion appears in planar T graphene and the cross points form a loop in the first Brillouin zone, similar to buckled T graphene, that originates from π and π* bands without regular hexagonal symmetry. We also calculate the single-particle relaxation time τ(q) of the q-labeled quantum electronic states, which originates from scattering due to the presence of vacancies, causing quantum level broadening.
Directory of Open Access Journals (Sweden)
Simon Boitard
2016-03-01
Full Text Available Inferring the ancestral dynamics of effective population size is a long-standing question in population genetics, which can now be tackled much more accurately thanks to the massive genomic data available in many species. Several promising methods that take advantage of whole-genome sequences have been recently developed in this context. However, they can only be applied to rather small samples, which limits their ability to estimate recent population size history. Besides, they can be very sensitive to sequencing or phasing errors. Here we introduce a new approximate Bayesian computation approach named PopSizeABC that allows estimating the evolution of the effective population size through time, using a large sample of complete genomes. This sample is summarized using the folded allele frequency spectrum and the average zygotic linkage disequilibrium at different bins of physical distance, two classes of statistics that are widely used in population genetics and can be easily computed from unphased and unpolarized SNP data. Our approach provides accurate estimations of past population sizes, from the very first generations before present back to the expected time to the most recent common ancestor of the sample, as shown by simulations under a wide range of demographic scenarios. When applied to samples of 15 or 25 complete genomes in four cattle breeds (Angus, Fleckvieh, Holstein and Jersey), PopSizeABC revealed a series of population declines, related to historical events such as domestication or modern breed creation. We further highlight that our approach is robust to sequencing errors, provided summary statistics are computed from SNPs with common alleles.
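ABC rejection sampling, the principle behind PopSizeABC, in miniature: draw parameters from a prior, simulate a summary statistic, and keep draws whose statistic lands near the observed one. The "simulator" here is a toy stand-in for coalescent simulation, and the prior and tolerance are invented:

```python
import random
import statistics

def simulate_het(pop_size, rng, n_loci=200):
    """Toy simulator: mean expected heterozygosity rises with effective size
    (a stand-in for the coalescent simulations used by ABC methods)."""
    theta = pop_size / (pop_size + 5000.0)
    return statistics.fmean(min(1.0, max(0.0, rng.gauss(theta, 0.05)))
                            for _ in range(n_loci))

def abc_reject(observed, n_sims=2000, tol=0.01, seed=3):
    """ABC rejection sampling: draw population sizes from a uniform prior and
    keep draws whose simulated summary statistic lands within tol of the
    observed one; the kept draws approximate the posterior."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_sims):
        n = rng.uniform(100.0, 20000.0)   # flat prior on effective size
        if abs(simulate_het(n, rng) - observed) < tol:
            kept.append(n)
    return kept
```

PopSizeABC uses richer summaries (the folded AFS and binned linkage disequilibrium) and infers a size trajectory rather than a single value.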
French, Michael D; Churcher, Thomas S; Basáñez, María-Gloria; Norton, Alice J; Lwambo, Nicholas J S; Webster, Joanne P
2013-11-01
Detecting potential changes in genetic diversity in schistosome populations following chemotherapy with praziquantel (PZQ) is crucial if we are to fully understand the impact of such chemotherapy with respect to the potential emergence of resistance and/or other evolutionary outcomes of interventions. Doing so by implementing effective and cost-efficient sampling protocols will help to optimise time and financial resources, which is particularly relevant for a disease such as schistosomiasis, currently reliant on a single available drug. Here we explore the effect on measures of parasite genetic diversity of applying various field sampling approaches, both in terms of the number of (human) hosts sampled and the number of transmission stages (miracidia) sampled per host, for a Schistosoma mansoni population in Tanzania pre- and post-treatment with PZQ. In addition, we explore population structuring within and between hosts by comparing the estimates of genetic diversity obtained assuming a 'component population' approach with those using an 'infrapopulation' approach. We found that increasing the number of hosts sampled, rather than the number of miracidia per host, gives more robust estimates of genetic diversity. We also found statistically significant population structuring (using Wright's F-statistics) and significant differences in the measures of genetic diversity depending on the parasite population definition. The relative advantages, disadvantages and, hence, subsequent reliability of these metrics for parasites with complex life-cycles are discussed, both for the specific epidemiological and ecological scenario under study here and for their future application to other areas and schistosome species. Copyright © 2012 Elsevier B.V. All rights reserved.
Yeh, Mary S L; Mari, Jair Jesus; Costa, Mariana Caddrobi Pupo; Andreoli, Sergio Baxter; Bressan, Rodrigo Affonseca; Mello, Marcelo Feijó
2011-10-01
To evaluate the efficacy and tolerability of topiramate in patients with posttraumatic stress disorder (PTSD). We conducted a 12-week double-blind, randomized, placebo-controlled study comparing topiramate to placebo. Men and women aged 18-62 years with diagnosis of PTSD according to DSM-IV were recruited from the outpatient clinic of the violence program of Federal University of São Paulo Hospital (Prove-UNIFESP), São Paulo City, between April 2006 and December 2009. Subjects were assessed for the Clinician-Administered Posttraumatic Stress Scale (CAPS), Clinical Global Impression, and Beck Depression Inventory (BDI). After 1-week period of washout, 35 patients were randomized to either group. The primary outcome measure was the CAPS total score changes from baseline to the endpoint. 82.35% of patients in the topiramate group exhibited improvements in PTSD symptoms. The efficacy analysis demonstrated that patients in the topiramate group exhibited significant improvements in reexperiencing symptoms: flashbacks, intrusive memories, and nightmares of the trauma (CAPS-B; P= 0.04) and in avoidance/numbing symptoms associated with the trauma, social isolation, and emotional numbing (CAPS-C; P= 0.0001). Furthermore, the experimental group demonstrated a significant difference in decrease in CAPS total score (topiramate -57.78; placebo -32.41; P= 0.0076). Mean topiramate dose was 102.94 mg/d. Topiramate was generally well tolerated. Topiramate was effective in improving reexperiencing and avoidance/numbing symptom clusters in patients with PTSD. This study supports the use of anticonvulsants for the improvement of symptoms of PTSD. © 2010 Blackwell Publishing Ltd.
Knotters, M.; Brus, D.J.
2013-01-01
The quality of ecotope maps of five districts of main water courses in the Netherlands was assessed on the basis of independent validation samples of field observations. The overall proportion of area correctly classified, and user's and producer's accuracy for each map unit were estimated. In four
Huynh, Huynh; Feldt, Leonard S.
1976-01-01
When the variance assumptions of a repeated measures ANOVA are not met, the F distribution of the mean square ratio should be adjusted by the sample estimate of the Box correction factor. An alternative is proposed which is shown by Monte Carlo methods to be less biased for a moderately large factor. (RC)
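The Box correction factor mentioned above is commonly estimated by the Greenhouse-Geisser formula. A sketch of that standard estimate from a covariance matrix of the repeated measures (the generic formula, not specific to this paper's less-biased alternative):

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser estimate of the Box correction factor epsilon
    from the k x k covariance matrix S of the repeated measures."""
    k = S.shape[0]
    C = np.eye(k) - np.ones((k, k)) / k       # double-centering matrix
    A = C @ S @ C
    return np.trace(A) ** 2 / ((k - 1) * np.trace(A @ A))

# Under compound symmetry (sphericity holds), epsilon is exactly 1, so no
# correction of the F degrees of freedom is needed.
S_cs = 0.5 * np.eye(4) + 0.5 * np.ones((4, 4))
```

The F test's degrees of freedom are multiplied by epsilon, which lies between 1/(k-1) and 1.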
DEFF Research Database (Denmark)
Puri, Rajesh; Vilmann, Peter; Saftoiu, Adrian
2009-01-01
). The samples were characterized for cellularity and bloodiness, with a final cytology diagnosis established blindly. The final diagnosis was reached either by EUS-FNA if malignancy was definite, or by surgery and/or clinical follow-up of a minimum of 6 months in the cases of non-specific benign lesions...
A unified approach for performance analysis of randomly generated robotic morphologies
Directory of Open Access Journals (Sweden)
Sameer Gupta
2016-09-01
Full Text Available An attempt is presented towards a unified approach to compute the kinematic performance of planar manipulators with links connected in series, in loops, and in combinations of both. The motivation of this work lies in the fact that serially connected links are normally selected for large manipulability while parallel manipulators are utilized for better stiffness. To acquire a topology suitable for a given task, the topological parameters can be treated as variables in manipulator design problems. However, since the complete kinematic model changes with each small change in the basic structure, a unified approach is important. A five-bar mechanism, a 2-degrees-of-freedom (dof) system, is considered to demonstrate the proposed approach for performance analysis.
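Kinematic performance of a serial chain is often summarized by Yoshikawa's manipulability index; a sketch for a planar 2R arm (link lengths are illustrative, and this is not the authors' unified serial/parallel formulation):

```python
import math

def jacobian_2r(t1, t2, l1=1.0, l2=1.0):
    """Geometric Jacobian of a planar 2R serial arm (illustrative link lengths)."""
    s1, c1 = math.sin(t1), math.cos(t1)
    s12, c12 = math.sin(t1 + t2), math.cos(t1 + t2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def manipulability(J):
    """Yoshikawa index w = sqrt(det(J J^T)); for a square J this is |det J|.
    For the 2R arm, det J = l1 * l2 * sin(t2), vanishing at singularities."""
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return abs(det)
```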
National Research Council Canada - National Science Library
KAKU, Akiko; NISHINOUE, Nao; TAKANO, Tomoki; ETO, Risa; KATO, Noritada; ONO, Yutaka; TANAKA, Katsutoshi
2012-01-01
To evaluate the effects of a combined sleep hygiene education and behavioral approach program on sleep quality in workers with insomnia, we conducted a randomized controlled trial at a design engineering unit in Japan...
Energy Technology Data Exchange (ETDEWEB)
Shi, Cindy
2015-07-17
The interactions among different microbial populations in a community could play more important roles in determining ecosystem functioning than species numbers and their abundances, but very little is known about such network interactions at a community level. The goal of this project is to develop novel framework approaches and associated software tools to characterize the network interactions in microbial communities based on large-scale, high-throughput metagenomics data, and to apply these approaches to understand the impacts of environmental changes (e.g., climate change, contamination) on network interactions among different nitrifying populations and associated microbial communities.
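Network interactions of this kind are often approximated by co-occurrence analysis. A naive sketch that links populations whose abundance profiles correlate strongly across samples (the threshold and the association measure are illustrative, not the project's actual framework):

```python
import math

def pearson(x, y):
    """Plain Pearson correlation between two abundance profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def cooccurrence_edges(abundance, names, r_min=0.8):
    """Naive co-occurrence network: connect two populations whenever their
    abundance profiles across samples correlate strongly (|r| >= r_min)."""
    edges = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if abs(pearson(abundance[i], abundance[j])) >= r_min:
                edges.append((names[i], names[j]))
    return edges
```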
Directory of Open Access Journals (Sweden)
Toly Chen
2012-05-01
Full Text Available Predicting the price of a dynamic random access memory (DRAM product is a critical task to the manufacturer. However, it is not easy to contend with the uncertainty of the price. In order to effectively predict the price of a DRAM product, an agent-based fuzzy collaborative intelligence approach is proposed in this study. In the agent-based fuzzy collaborative intelligence approach, each agent uses a fuzzy neural network to predict the DRAM price based on its view. The agent then communicates its view and forecasting results to other agents with the aid of an automatic collaboration mechanism. According to the experimental results, the overall performance was improved through the agents’ collaboration.
Wang, X F; Yang, Qi; Fan, Zhaozhi; Sun, Chang-Kai; Yue, Guang H
2009-02-15
This study investigates time-dependent associations between source strength estimated from high-density scalp electroencephalogram (EEG) and force of voluntary handgrip contraction at different intensity levels. We first estimate source strength from raw EEG signals collected during voluntary muscle contractions at different levels and then propose a functional random-effects model approach in which both functional fixed effects and functional random effects are considered for the data. Two estimation procedures for the functional model are discussed. The first estimation procedure is a two-step method which involves no iterations. It can flexibly use different smoothing methods and smoothing parameters. The second estimation procedure benefits from the connection between linear mixed models and regression splines and can be fitted using existing software. Functional ANOVA is then suggested to assess the experimental effects from the functional point of view. The statistical analysis shows that the time-dependent source strength function exhibits a nonlinear feature, where a bump is detected around the force onset time. However, there is a lack of significant variation in source strength across different force levels and different cortical areas. The proposed functional random-effects model procedure can be applied to other types of functional data in neuroscience.
Shih, Weichung Joe; Li, Gang; Wang, Yining
2016-03-01
Sample size plays a crucial role in clinical trials. Flexible sample-size designs, as part of the more general category of adaptive designs that utilize interim data, have been a popular topic in recent years. In this paper, we give a comparative review of four related methods for such a design. The likelihood method uses the likelihood ratio test with an adjusted critical value. The weighted method adjusts the test statistic with given weights rather than the critical value. The dual test method requires both the likelihood ratio statistic and the weighted statistic to be greater than the unadjusted critical value. The promising zone approach uses the likelihood ratio statistic with the unadjusted value and other constraints. All four methods preserve the type-I error rate. In this paper we explore their properties and compare their relationships and merits. We show that the sample size rules for the dual test are in conflict with the rules of the promising zone approach. We delineate what is necessary to specify in the study protocol to ensure the validity of the statistical procedure and what can be kept implicit in the protocol so that more flexibility can be attained for confirmatory phase III trials in meeting regulatory requirements. We also prove that under mild conditions, the likelihood ratio test still preserves the type-I error rate when the actual sample size is larger than the re-calculated one. Copyright © 2015 Elsevier Inc. All rights reserved.
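The weighted method described above can be sketched with the usual fixed-weight combination of stage-wise z statistics (a generic CHW-type sketch, not the authors' exact notation):

```python
from math import sqrt
from statistics import NormalDist

def weighted_z(z1, z2, w1=0.5, w2=0.5):
    """Fixed-weight combination of stage-wise z statistics. Because the
    weights are fixed before the interim look, the combined statistic stays
    N(0,1) under H0 even if the stage-2 sample size is re-estimated from
    interim data, so the type-I error rate is preserved."""
    return (sqrt(w1) * z1 + sqrt(w2) * z2) / sqrt(w1 + w2)

# The combined statistic is compared with the unadjusted critical value.
crit = NormalDist().inv_cdf(0.975)   # two-sided 5% level
```

The dual test described in the review would additionally require the ordinary (unweighted) likelihood ratio statistic to clear the same critical value.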
Directory of Open Access Journals (Sweden)
Jin-E Zhang
2017-01-01
Full Text Available In this paper, the global O(t^{-α}) synchronization problem is investigated for a class of fractional-order neural networks with time delays. Taking into account both better control performance and energy saving, we make the first attempt to introduce a centralized data-sampling approach to characterize the O(t^{-α}) synchronization design strategy. A sufficient criterion is given under which the drive-response-based coupled neural networks can achieve global O(t^{-α}) synchronization. It is worth noting that, by using the centralized data-sampling principle, the fractional-order Lyapunov-like technique, and the fractional-order Leibniz rule, the designed controller performs very well. Two numerical examples are presented to illustrate the efficiency of the proposed centralized data-sampling scheme.
Zhao, Yadong; Zhang, Weidong
2017-03-01
To investigate the energy consumption involved in a sampled-data consensus process, the problem of guaranteed cost consensus for sampled-data linear multi-agent systems is considered. By using an input delay approach, an equivalent system is constructed to convert the guaranteed cost consensus problem to a guaranteed cost stabilization problem. A sufficient condition for guaranteed cost consensus is given in terms of linear matrix inequalities (LMIs), based on a refined time-dependent Lyapunov functional analysis. Reduced-order protocol design methodologies are proposed, with further discussions on determining sub-optimal protocol gain and enlarging allowable sampling interval bound made as a complement. Simulation results illustrate the effectiveness of the theoretical results. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
Improved prediction of MHC class I and class II epitopes using a novel Gibbs sampling approach
DEFF Research Database (Denmark)
Nielsen, Morten; Lundegaard, Claus; Worning, Peder
2004-01-01
, identifying the correct alignment is a crucial part of identifying the core of an MHC class II binding motif. In this context, we wish to describe a novel Gibbs motif sampler method ideally suited for recognizing such weak sequence motifs. The method is based on the Gibbs sampling method, and it incorporates...... novel features optimized for the task of recognizing the binding motif of MHC classes I and II. The method locates the binding motif in a set of sequences and characterizes the motif in terms of a weight-matrix. Subsequently, the weight-matrix can be applied to identifying effectively potential MHC...... binding peptides and to guiding the process of rational vaccine design. Results: We apply the motif sampler method to the complex problem of MHC class II binding. The input to the method is amino acid peptide sequences extracted from the public databases of SYFPEITHI and MHCPEP and known to bind...
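A minimal Gibbs motif sampler in the spirit described above (the alphabet, pseudocounts, and uniform background model are simplifications; the MHC-specific features of the method are not reproduced):

```python
import random

ALPHABET = "ACGT"

def profile(motifs, w, pseudo=1.0):
    """Position weight matrix with pseudocounts from the current motif picks."""
    cols = []
    for j in range(w):
        counts = {a: pseudo for a in ALPHABET}
        for m in motifs:
            counts[m[j]] += 1
        total = sum(counts.values())
        cols.append({a: counts[a] / total for a in ALPHABET})
    return cols

def gibbs_motif(seqs, w, n_iter=500, seed=0):
    """Gibbs motif sampler: hold one sequence out, build a profile from the
    others' current picks, and resample the held-out start position with
    probability proportional to the profile-to-background likelihood ratio."""
    rng = random.Random(seed)
    starts = [rng.randrange(len(s) - w + 1) for s in seqs]
    for _ in range(n_iter):
        i = rng.randrange(len(seqs))
        others = [seqs[k][starts[k]:starts[k] + w]
                  for k in range(len(seqs)) if k != i]
        pwm = profile(others, w)
        weights = []
        for p in range(len(seqs[i]) - w + 1):
            score = 1.0
            for j, a in enumerate(seqs[i][p:p + w]):
                score *= pwm[j][a] / 0.25     # uniform background model
            weights.append(score)
        starts[i] = rng.choices(range(len(weights)), weights=weights)[0]
    return starts
```

The final weight matrix can then be used to score and rank candidate binding peptides, as the abstract describes.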
Schmidt, Hannes; Seki, David; Woebken, Dagmar; Eickhorst, Thilo
2017-04-01
Fluorescence in situ hybridization (FISH) is routinely used for the phylogenetic identification, detection, and quantification of single microbial cells in environmental microbiology. Oligonucleotide probes that match the 16S rRNA sequence of target organisms are generally applied and the resulting signals are visualized via fluorescence microscopy. Consequently, the detection of the microbial cells of interest is limited by the resolution and the sensitivity of light microscopy, where objects smaller than 0.2 µm can hardly be represented. Visualizing microbial cells at magnifications beyond light microscopy, however, can provide information on the composition and potential complexity of microbial habitats - the actual sites of nutrient cycling in soil and sediments. We present a recently developed technique that combines (1) the phylogenetic identification and detection of individual microorganisms by epifluorescence microscopy, with (2) the in situ localization of gold-labelled target cells on an ultrastructural level by SEM. Based on 16S rRNA-targeted in situ hybridization combined with catalyzed reporter deposition, a streptavidin conjugate labeled with a fluorescent dye and nanogold particles is introduced into whole microbial cells. A two-step visualization process including an autometallographic enhancement of nanogold particles then allows for either fluorescence or electron microscopy, or a correlative application thereof. We will present applications of the Gold-FISH protocol to samples of marine sediments, agricultural soils, and plant roots. The detection and enumeration of bacterial cells in soil and sediment samples was comparable to CARD-FISH applications via fluorescence microscopy. Examples of microbe-surface interaction analysis will be presented on the basis of bacteria colonizing the rhizoplane of rice roots. In principle, Gold-FISH can be performed on any material to give a snapshot of microbe-surface interactions and provides a promising tool for
Directory of Open Access Journals (Sweden)
Amandine Gasc
New Caledonia is a Pacific island with a unique biodiversity showing extreme microendemism. Many species distributions observed on this island are extremely restricted, localized to mountains or rivers, making biodiversity evaluation and conservation a difficult task. A rapid biodiversity assessment method based on acoustics was recently proposed. This method could help to document the unique spatial structure observed in New Caledonia. Here, this method was applied in an attempt to reveal differences among three mountain sites (Mandjélia, Koghis and Aoupinié) with similar ecological features and species richness level, but with high beta diversity according to their different microendemic assemblages. In each site, several local acoustic communities were sampled with audio recorders. An automatic acoustic sampling was run on these three sites for a period of 82 successive days. Acoustic properties of animal communities were analysed without any species identification. A frequency spectral complexity index (NP) was used as an estimate of the level of acoustic activity, and a frequency spectral dissimilarity index (Df) assessed acoustic differences between pairs of recordings. As expected, the index NP did not reveal significant differences in the acoustic activity level between the three sites. However, the acoustic variability estimated by the index Df could first be explained by changes in the acoustic communities along the 24-hour cycle and second by acoustic dissimilarities between the three sites. The results support the hypothesis that global acoustic analyses can detect acoustic differences between sites with similar species richness and similar ecological context, but with different species assemblages. This study also demonstrates that global acoustic methods applied at broad spatial and temporal scales could help to assess local biodiversity in the challenging context of microendemism. The method could be deployed over large areas, and
Gasc, Amandine; Sueur, Jérôme; Pavoine, Sandrine; Pellens, Roseli; Grandcolas, Philippe
2013-01-01
Directory of Open Access Journals (Sweden)
H.-C. Chen
2012-07-01
How to effectively describe ecological patterns in nature over broad spatial scales and build an ecological modeling framework has become an important issue in ecological research. We tested four modeling methods (MAXENT, DOMAIN, GLM and ANN) for predicting the potential habitat of Schima superba (Chinese guger tree, CGT) at different spatial scales in the Huisun study area in Taiwan. We then created three sampling designs (from small to large scales) for model development and validation using different combinations of CGT samples from three sites (Tong-Feng watershed, Yo-Shan Mountain, and Kuan-Dau watershed). These models combine points of known occurrence and topographic variables to infer the potential spatial distribution of CGT. Our assessment revealed that the methods ranked, from highest to lowest performance, as MAXENT, DOMAIN, GLM and ANN on the small spatial scale. The MAXENT and DOMAIN models were the most capable of predicting the tree's potential habitat. However, the outcome clearly indicated that models based merely on topographic variables performed poorly on large spatial extrapolation from Tong-Feng to Kuan-Dau, because the humidity and sun illumination of the two watersheds are affected by their microterrains and are quite different from each other. Thus, models developed from topographic variables can only be applied within a limited geographical extent without significant error. Future studies will attempt to use variables involving spectral information associated with species, extracted from remotely sensed data of high spatial and spectral resolution, especially hyperspectral image data, to build a model that can be applied on a large spatial scale.
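The GLM variant among the four compared methods amounts to a logistic regression of presence/absence on topographic covariates. A minimal sketch on synthetic data follows; the covariates, coefficients, and the species' assumed elevation preference are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 500
elev = rng.uniform(500, 2000, n)   # elevation in m (assumed range)
slope = rng.uniform(0, 45, n)      # slope in degrees (assumed range)

# Assumed "true" response: the species prefers mid elevations and gentle slopes
logit = 1.0 - ((elev - 1200.0) / 300.0) ** 2 - 0.05 * slope
presence = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# GLM (logistic regression) on standardized topographic predictors,
# with a quadratic elevation term to capture the mid-elevation optimum
X = np.column_stack([elev, (elev - 1200.0) ** 2, slope])
X = (X - X.mean(axis=0)) / X.std(axis=0)
glm = LogisticRegression(max_iter=1000).fit(X, presence)
acc = glm.score(X, presence)
print(f"in-sample accuracy: {acc:.2f}")
```

Validating such a model on samples from a different watershed, as the study does, would replace the in-sample score with held-out predictions.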
Liu, Ying; Cao, Guofeng; Zhao, Naizhuo; Mulligan, Kevin; Ye, Xinyue
2018-01-04
Accurate measurements of ground-level PM2.5 (particulate matter with aerodynamic diameters equal to or less than 2.5 μm) concentrations are critically important to human and environmental health studies. In this regard, satellite-derived gridded PM2.5 datasets, particularly those derived from chemical transport models (CTM), have demonstrated unique attractiveness in terms of their geographic and temporal coverage. The CTM-based approaches, however, often yield results with a coarse spatial resolution (typically 0.1°) and tend to ignore or simplify the impact of geographic and socioeconomic factors on PM2.5 concentrations. In this study, with a focus on the long-term PM2.5 distribution in the contiguous United States, we adopt a random forests-based geostatistical (regression kriging) approach to improve one of the most commonly used satellite-derived, gridded PM2.5 datasets with a refined spatial resolution (0.01°) and enhanced accuracy. By combining the random forests machine learning method and the kriging family of methods, the geostatistical approach effectively integrates ground-based PM2.5 measurements and related geographic variables while accounting for non-linear interactions and complex spatial dependence. The accuracy and advantages of the proposed approach are demonstrated by comparing the results with existing PM2.5 datasets. This manuscript also highlights the effectiveness of the geographical variables in long-term PM2.5 mapping, including brightness of nighttime lights, normalized difference vegetation index and elevation, and discusses the contribution of each of these variables to the spatial distribution of PM2.5 concentrations. Copyright © 2017 Elsevier Ltd. All rights reserved.
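The two-step regression-kriging idea (a random-forest trend fitted to covariates, then kriging of the residuals) can be sketched on synthetic data. The station layout, covariates, and exponential covariance with its range parameter below are assumptions for illustration, not the study's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 200
coords = rng.uniform(0, 10, size=(n, 2))   # synthetic station locations
X = rng.normal(size=(n, 3))                # stand-ins for night lights, NDVI, elevation
y = 2 * X[:, 0] - X[:, 1] + np.sin(coords[:, 0]) + 0.1 * rng.normal(size=n)

# Step 1: random-forest trend on the covariates
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
resid = y - rf.predict(X)

# Step 2: simple kriging of the residuals with an assumed exponential covariance
def cov(d, sill=resid.var(), range_par=2.0):
    return sill * np.exp(-d / range_par)

D = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
K = cov(D) + 1e-6 * np.eye(n)              # small nugget for numerical stability

def predict(x_new, c_new):
    d0 = np.linalg.norm(coords - c_new, axis=1)
    w = np.linalg.solve(K, cov(d0))        # simple-kriging weights
    return rf.predict(x_new[None])[0] + w @ resid

print(predict(X[0], coords[0]))
```

In the study the kriged residual surface is what refines the coarse gridded product; here the same mechanics are shown at a single prediction point.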
Joshi, Aditya; Lindsey, Brooks D.; Dayton, Paul A.; Pinton, Gianmarco; Muller, Marie
2017-05-01
Ultrasound contrast agents (UCA), such as microbubbles, enhance the scattering properties of blood, which is otherwise hypoechoic. The multiple scattering interactions of the acoustic field with UCA are poorly understood due to the complexity of multiple scattering theories and the nonlinear microbubble response. The majority of bubble models describe the behavior of UCA as single, isolated microbubbles suspended in an infinite medium. Multiple scattering models such as the independent scattering approximation can approximate phase velocity and attenuation for low scatterer volume fractions. However, all current models and simulation approaches describe multiple scattering and nonlinear bubble dynamics only separately. Here we present an approach that combines two existing models: (1) a full-wave model that describes nonlinear propagation and scattering interactions in a heterogeneous attenuating medium, and (2) a Paul-Sarkar model that describes the nonlinear interactions between an acoustic field and microbubbles. These two models were solved numerically and combined with an iterative approach. The convergence of this combined model was explored in silico for 0.5 × 10⁶ microbubbles ml⁻¹, and 1% and 2% bubble concentrations by volume. The backscattering predicted by our modeling approach was verified experimentally with water-tank measurements performed with a 128-element linear array transducer. An excellent agreement in terms of the fundamental and harmonic acoustic fields is shown. Additionally, our model correctly predicts the phase velocity and attenuation measured using through-transmission and predicted by the independent scattering approximation.
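The independent scattering approximation mentioned in the abstract can be illustrated with a textbook linearized monopole bubble model and Foldy's effective wavenumber. The bubble radius, damping constant, driving frequency, and concentration below are assumed, generic values, not the paper's parameters or its nonlinear full-wave model.

```python
import numpy as np

# Assumed parameters (illustrative, not from the paper)
R = 1.5e-6                                 # bubble radius (m)
rho, c0, p0 = 1000.0, 1500.0, 101.325e3    # water density, sound speed, ambient pressure
gamma = 1.4                                # polytropic exponent of the gas
delta = 0.1                                # dimensionless damping constant (assumed)
n_density = 0.5e12                         # bubbles per m^3 (0.5 x 10^6 per ml)

# Minnaert resonance frequency of a single bubble
f_res = np.sqrt(3.0 * gamma * p0 / rho) / (2.0 * np.pi * R)

f = 3.0e6                                  # driving frequency (Hz), assumed
omega = 2.0 * np.pi * f
k = omega / c0
# Linearized monopole scattering function of one bubble
fs = R / ((f_res / f) ** 2 - 1.0 - 1j * delta)
# Foldy / independent-scattering effective wavenumber
k_eff = np.sqrt(k ** 2 + 4.0 * np.pi * n_density * fs + 0j)

phase_velocity = omega / k_eff.real              # m/s
attenuation_db_cm = 8.686 * k_eff.imag / 100.0   # Np/m converted to dB/cm
print(f"resonance {f_res / 1e6:.2f} MHz, c_phase {phase_velocity:.0f} m/s, "
      f"attenuation {attenuation_db_cm:.1f} dB/cm")
```

This linear estimate is what the combined nonlinear model is benchmarked against in the through-transmission comparison.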
A New Approach for Predicting the Variance of Random Decrement Functions
DEFF Research Database (Denmark)
Asmussen, J. C.; Brincker, Rune
1998-01-01
can be used as a basis for a decision about how many time lags from the RD functions should be used in the modal parameter extraction procedure. This paper suggests a new method for estimating the variance of the RD functions. The method is consistent in the sense that the accuracy of the approach...
Random walk in nonhomogeneous environments: A possible approach to human and animal mobility
Srokowski, Tomasz
2017-03-01
The random walk process in a nonhomogeneous medium, characterized by a Lévy stable distribution of jump length, is discussed. The jump width depends on position: either the position before the jump or the one after it. In the latter case, the density slope is affected by the variable width and the variance may be finite; then all kinds of anomalous diffusion are predicted. In the former case, only the time characteristics are sensitive to the variable width. The corresponding Langevin equation with different interpretations of the multiplicative noise is discussed. The dependence of the distribution width on the position after the jump is interpreted in terms of cognitive abilities and related to such problems as migration in a human population and the foraging habits of animals.
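A walk of this kind is easy to simulate in the "position before the jump" interpretation: each jump is a standard Lévy draw scaled by a position-dependent width. The Cauchy case (stability index 1) is used because it can be sampled in closed form, and the width function is an assumed example, not one from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def width(x):
    # Assumed example: jumps get narrower far from the origin
    return 1.0 / (1.0 + x ** 2)

def walk(steps=1000):
    """Levy (Cauchy) walk with width evaluated at the pre-jump position."""
    x = 0.0
    path = [x]
    for _ in range(steps):
        u = rng.uniform()
        xi = np.tan(np.pi * (u - 0.5))   # standard Cauchy jump via inverse CDF
        x = x + width(x) * xi
        path.append(x)
    return np.array(path)

p = walk()
print(p.min(), p.max())
```

The "position after the jump" interpretation would instead require solving x' = x + width(x')·ξ implicitly at every step, which is where the different multiplicative-noise interpretations diverge.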
Online games: a novel approach to explore how partial information influences human random searches
Martínez-García, Ricardo; Calabrese, Justin M.; López, Cristóbal
2017-01-01
Many natural processes rely on optimizing the success ratio of a search process. We use an experimental setup consisting of a simple online game, in which players have to find a target hidden on a board, to investigate how searches are influenced by the detection of cues. We focus on the search duration and the statistics of the trajectories traced on the board. The experimental data are explained by a family of random-walk-based models and probabilistic analytical approximations. If no initial information is given to the players, the search is optimized for cues that cover an intermediate spatial scale. In addition, initial information about the extension of the cues results, in general, in faster searches. Finally, the strategies used by informed players turn into non-stationary processes in which the length of each displacement evolves to show a well-defined characteristic scale that is not found in non-informed searches.
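The baseline non-informed case can be caricatured as a lattice random walker that succeeds once it comes within a cue radius of a hidden target. The board size, cue radius, and walk rule below are illustrative assumptions, not the game's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(2)
MOVES = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

def search_time(L=20, cue_radius=2.0, max_steps=20000):
    """Steps until a random walker first comes within cue_radius of the target."""
    target = rng.integers(0, L, size=2)
    pos = rng.integers(0, L, size=2)
    for t in range(1, max_steps + 1):
        pos = (pos + MOVES[rng.integers(4)]) % L   # lattice walk with wrap-around
        # Distance measured without wrap, a deliberate simplification
        if np.linalg.norm(pos - target) <= cue_radius:
            return t
    return max_steps

times = [search_time() for _ in range(200)]
print(f"mean search time: {np.mean(times):.0f} steps")
```

Sweeping `cue_radius` in this toy model reproduces the qualitative point that larger cues shorten searches; the paper's result that uninformed search is optimized at an intermediate cue scale involves the full model, not this sketch.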
A unified approach to equilibrium statistics in closed systems with random dynamics
Biró, Tamás S
2016-01-01
In a balanced version of decay and growth processes, a simple master equation arrives at a final state that includes the Poisson, Bernoulli, negative binomial and Pólya distributions. Such decay and growth rates incorporate a symmetry between the observed subsystem and the rest of a total system with a fixed total number of states, K, and occupation number N. We give both a complex-network and a particle-production-dynamics interpretation. For networks we follow the evolution of the degree distribution, P(n), in a directed network where a node can activate k fixed connections from K possible partnerships among all nodes, while n is a random variable counting the links per node, and N is the total number of connections, which is also fixed. For particle physics problems, P(n) is the probability of having n particles (or other quanta) distributed among k states (phase space cells) while altogether a fixed number of N particles reside on K states.
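For a one-step master equation of this decay/growth type, the stationary P(n) follows from detailed balance: P(n+1)/P(n) equals the growth rate at n over the decay rate at n+1. The particular rate choice below, g(n) = a + b·n and d(n) = n, is an assumption for illustration; it yields a negative binomial stationary law, one of the family named in the abstract.

```python
import numpy as np

a, b, nmax = 2.0, 0.5, 200      # assumed linear growth rate g(n) = a + b*n, decay d(n) = n
n = np.arange(nmax)

# Detailed balance: P(n+1)/P(n) = g(n) / d(n+1)
ratios = (a + b * n) / (n + 1.0)
P = np.concatenate([[1.0], np.cumprod(ratios)])
P /= P.sum()                    # normalize the (truncated) stationary distribution

mean = (np.arange(nmax + 1) * P).sum()
print(mean)                     # negative binomial mean a/(1 - b) = 4.0 here
```

Matching the ratio against the negative binomial pmf gives parameters r = a/b and p = b, so the Poisson, Bernoulli, and Pólya cases arise from other sign and limit choices of the same linear rates.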
Multidisciplinary Approach to Management of Maternal Asthma (MAMMA): a randomized controlled trial.
Lim, Angelina S; Stewart, Kay; Abramson, Michael J; Walker, Susan P; Smith, Catherine L; George, Johnson
2014-05-01
Uncontrolled asthma during pregnancy is associated with maternal and perinatal hazards. A pharmacist-led intervention directed at improving maternal asthma control, involving multidisciplinary care, education, and regular monitoring to help reduce these risks, was developed and evaluated. A randomized controlled trial was carried out in the antenatal clinics of two major Australian maternity hospitals. Sixty pregnant women < 20 weeks gestation who had used asthma medications in the previous year were recruited. Participants were randomized to either an intervention or a usual care group and followed prospectively throughout pregnancy. The primary outcome was Asthma Control Questionnaire (ACQ) score. Mean changes in ACQ scores from baseline were compared between groups at 3 and 6 months to evaluate intervention efficacy. The ACQ score in the intervention group (n = 29) decreased by a mean ± SD of 0.46 ± 1.05 at 3 months and 0.89 ± 0.98 at 6 months. The control group (n = 29) had a mean decrease of 0.15 ± 0.63 at 3 months and 0.18 ± 0.73 at 6 months. The difference between groups, adjusting for baseline, was -0.22 (95% CI, -0.54 to 0.10) at 3 months and -0.60 (95% CI, -0.85 to -0.36) at 6 months. The difference at 6 months was statistically significant (P < .001) and clinically significant (> 0.5). No asthma-related oral corticosteroid use, hospital admissions, emergency visits, or days off from work were reported during the trial. A multidisciplinary model of care for asthma management involving education and regular monitoring could potentially improve maternal asthma outcomes and be widely implemented in clinical practice. Australian and New Zealand Clinical Trials Registry; No.: ACTRN12612000681853; URL: www.anzctr.org.au.