WorldWideScience

Sample records for randomly selected sample

  1. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems, and, in particular, to environmental sampling where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.

  2. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    Science.gov (United States)

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  3. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    Science.gov (United States)

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests that uses the default mtry parameter to choose feature subspaces will select too many subspaces without informative SNPs. Exhaustively searching for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs, but it is too time-consuming and therefore unfavorable for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs while avoiding the very high computational cost of an exhaustive search for an optimal mtry, and it maintains the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate better random forests with higher accuracy and lower error bound than those built by Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and merit further biological investigation.
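
    The grouping-and-sampling step described above is simple to sketch. Below is a minimal, illustrative Python sketch (not the authors' code) of equal-width discretization of an informativeness score followed by per-group sampling; the score array, group count and subspace size are assumed inputs.

        import numpy as np

        def stratified_subspace(informativeness, n_groups=10, per_group=5, rng=None):
            # Equal-width bins over the informativeness score (e.g., a single-SNP
            # association statistic), then an equal number of SNPs drawn per bin.
            rng = np.random.default_rng() if rng is None else rng
            edges = np.linspace(informativeness.min(), informativeness.max(), n_groups + 1)
            groups = np.clip(np.digitize(informativeness, edges[1:-1]), 0, n_groups - 1)
            subspace = []
            for g in range(n_groups):
                members = np.flatnonzero(groups == g)
                if members.size > 0:
                    take = min(per_group, members.size)
                    subspace.extend(rng.choice(members, size=take, replace=False))
            return np.asarray(subspace)  # feature indices for one decision tree

    Each tree in the forest would be grown on a fresh call, so every subspace mixes SNPs from every informativeness stratum rather than being dominated by non-informative ones.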

  4. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection

    OpenAIRE

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-01-01

    Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, sleep disorders and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential fea...

  5. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    Science.gov (United States)

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical field. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, sleep disorders and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals. The LS_SVM classifier classifies the features extracted and selected by the SRS and SFS steps. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
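
    As a rough illustration of the three-step pipeline (not the authors' implementation), the sketch below substitutes scikit-learn's SVC for the paper's LS_SVM and treats SRS as random sampling of time-domain points; the data shapes and parameter values are invented for the example.

        import numpy as np
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 1024))   # stand-in: 200 EEG epochs x 1024 time points
        y = rng.integers(0, 2, size=200)   # stand-in labels (epileptic vs. normal)

        # Step 1, SRS: draw a random subset of time-domain points as raw features.
        srs_idx = rng.choice(X.shape[1], size=64, replace=False)
        X_srs = X[:, srs_idx]

        # Step 2, SFS: sequential forward selection of the key features.
        sfs = SequentialFeatureSelector(SVC(kernel="rbf"), n_features_to_select=8)
        X_sel = sfs.fit_transform(X_srs, y)

        # Step 3: classify; SVC stands in here for the least squares SVM (LS_SVM).
        clf = SVC(kernel="rbf").fit(X_sel, y)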

  6. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling

    Directory of Open Access Journals (Sweden)

    Fuqun Zhou

    2016-10-01

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.

  7. Optimal Subset Selection of Time-Series MODIS Images and Sample Data Transfer with Random Forests for Supervised Classification Modelling.

    Science.gov (United States)

    Zhou, Fuqun; Zhang, Aining

    2016-10-25

    Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to efficiently use these time-series MODIS datasets for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle the two important issues with a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized for transferring sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data.

  8. Strategic Sample Selection

    DEFF Research Database (Denmark)

    Di Tillio, Alfredo; Ottaviani, Marco; Sørensen, Peter Norman

    2017-01-01

    What is the impact of sample selection on the inference payoff of an evaluator testing a simple hypothesis based on the outcome of a location experiment? We show that anticipated selection locally reduces noise dispersion and thus increases informativeness if and only if the noise distribution is double logconvex, as with normal noise. The results are applied to the analysis of strategic sample selection by a biased researcher and extended to the case of uncertain and unanticipated selection. Our theoretical analysis offers applied research a new angle on the problem of selection in empirical...

  9. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
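
    A minimal sketch of the scheme in Python (illustrative, not from the paper): each block is balanced across arms, and the block length is drawn at random so the next assignment cannot be deduced even when earlier assignments are unblinded.

        import random

        def blocked_allocation(n_participants, block_sizes=(2, 4, 6), arms=("T", "C")):
            # Build the schedule from randomly sized, internally balanced blocks.
            schedule = []
            while len(schedule) < n_participants:
                size = random.choice(block_sizes)         # random block size
                block = list(arms) * (size // len(arms))  # equal count per arm
                random.shuffle(block)                     # random order within block
                schedule.extend(block)
            return schedule[:n_participants]

        print(blocked_allocation(12))  # e.g. ['T', 'C', 'C', 'T', ...]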

  10. Blocked randomization with randomly selected block sizes.

    Science.gov (United States)

    Efird, Jimmy

    2011-01-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.

  11. Randomized selection on the GPU

    Energy Technology Data Exchange (ETDEWEB)

    Monroe, Laura Marie [Los Alamos National Laboratory; Wendelberger, Joanne R [Los Alamos National Laboratory; Michalak, Sarah E [Los Alamos National Laboratory

    2011-01-13

    We implement here a fast and memory-sparing probabilistic top-N selection algorithm on the GPU. To our knowledge, this is the first direct selection algorithm in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required for the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
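
    The guess-and-check idea can be sketched on a CPU in a few lines (an illustrative sketch only; the paper's contribution is the GPU implementation, which this does not reproduce). A cutoff guessed from a random sample filters out most of the data; if the guess overshoots, the sketch loosens it and retries, so the returned result is always exact.

        import numpy as np

        def randomized_top_n(x, n, rng=None):
            # Las Vegas top-n selection: cheap probabilistic filter + exactness check.
            rng = np.random.default_rng() if rng is None else rng
            m = min(len(x), max(64, 4 * n))               # sample size for the guess
            while True:
                sample = rng.choice(x, size=m, replace=False)
                est = int(np.ceil(m * n / len(x)))        # expected in-sample rank
                guess_rank = min(m - 1, 2 * est)          # slack against undercutting
                cutoff = np.sort(sample)[-(guess_rank + 1)]
                survivors = x[x >= cutoff]                # heavy work on few items
                if len(survivors) >= n:                   # check: guess was safe
                    return np.sort(survivors)[-n:]
                m = min(len(x), 2 * m)                    # too tight; loosen and retry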

  12. Sample size estimation and sampling techniques for selecting a representative sample

    Directory of Open Access Journals (Sweden)

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect the outcome of the study. Important factors to consider for estimating the sample size include the size of the study population, the confidence level, the expected proportion of the outcome variable (for categorical variables) or the standard deviation of the outcome variable (for numerical variables), and the required precision (margin of accuracy) of the study. The more precision required, the greater the required sample size. Sampling Techniques: The probability sampling techniques applied in health-related research include simple random sampling, systematic random sampling, stratified random sampling, cluster sampling, and multistage sampling. These are recommended over nonprobability sampling techniques because the results of the study can be generalized to the target population.
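
    For the categorical-outcome case mentioned above, the usual single-proportion formula combines exactly these ingredients. A small stdlib-only Python sketch (the defaults are illustrative, with an optional finite-population correction for small study populations):

        from math import ceil
        from statistics import NormalDist

        def sample_size_proportion(p=0.5, confidence=0.95, margin=0.05, population=None):
            # n = z^2 * p * (1 - p) / d^2, the classic single-proportion formula
            z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
            n = z ** 2 * p * (1 - p) / margin ** 2
            if population is not None:                # finite population correction
                n = n / (1 + (n - 1) / population)
            return ceil(n)

        print(sample_size_proportion())  # 385 for p = 0.5, 95% confidence, +/-5% margin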

  13. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    k-Means: Random Sampling Procedure. The optimal 1-mean is approximated by the centroid of a random sample (Inaba et al.): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
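
    A quick numerical check of that claim (an illustrative sketch; ε and the point set are chosen arbitrarily):

        import numpy as np

        rng = np.random.default_rng(1)
        P = rng.normal(size=(100_000, 2))    # the point set P
        eps = 0.05
        S = P[rng.choice(len(P), size=int(np.ceil(1 / eps)), replace=False)]

        def cost(c):                         # 1-mean cost: sum of squared distances
            return np.sum(np.sum((P - c) ** 2, axis=1))

        # Ratio is at most 1 + eps with constant probability (Inaba et al.)
        print(cost(S.mean(axis=0)) / cost(P.mean(axis=0)))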

  14. Performance of Random Effects Model Estimators under Complex Sampling Designs

    Science.gov (United States)

    Jia, Yue; Stokes, Lynne; Harris, Ian; Wang, Yan

    2011-01-01

    In this article, we consider estimation of parameters of random effects models from samples collected via complex multistage designs. Incorporation of sampling weights is one way to reduce estimation bias due to unequal probabilities of selection. Several weighting methods have been proposed in the literature for estimating the parameters of…

  15. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.

  16. K-Median: Random Sampling Procedure

    Indian Academy of Sciences (India)

    K-Median: Random Sampling Procedure. Sample a set of 1/ε + 1 points from P. Let Q = the first 1/ε points and p = the last point. Let T = the average 1-median cost of P and c = the 1-median. Let B1 = B(c, T/2) and B2 = B(p, T). Let P′ = the points in B1.

  17. Random selection of Borel sets

    Directory of Open Access Journals (Sweden)

    Bernd Günther

    2010-10-01

    A theory of random Borel sets is presented, based on dyadic resolutions of compact metric spaces. The conditional expectation of the intersection of two independent random Borel sets is investigated. An example based on an embedding of Sierpinski’s universal curve into the space of Borel sets is given.

  18. A random spatial sampling method in a rural developing nation.

    Science.gov (United States)

    Kondo, Michelle C; Bream, Kent D W; Barg, Frances K; Branas, Charles C

    2014-04-10

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available.

  19. Sampling large random knots in a confined space

    Energy Technology Data Exchange (ETDEWEB)

    Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)

    2007-09-28

    DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (such as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^(n^2)). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.

  20. Are most samples of animals systematically biased? Consistent individual trait differences bias samples despite random sampling.

    Science.gov (United States)

    Biro, Peter A

    2013-02-01

    Sampling animals from the wild for study is something nearly every biologist has done, but despite our best efforts to obtain random samples of animals, 'hidden' trait biases may still exist. For example, consistent behavioral traits can affect trappability/catchability, independent of obvious factors such as size and gender, and these traits are often correlated with other repeatable physiological and/or life history traits. If so, systematic sampling bias may exist for any of these traits. The extent to which this is a problem, of course, depends on the magnitude of bias, which is presently unknown because the underlying trait distributions in populations are usually unknown, or unknowable. Indeed, our present knowledge about sampling bias comes from samples (not complete population censuses), which can possess bias to begin with. I had the unique opportunity to create naturalized populations of fish by seeding each of four small fishless lakes with equal densities of slow-, intermediate-, and fast-growing fish. Using sampling methods that are not size-selective, I observed that fast-growing fish were up to two times more likely to be sampled than slower-growing fish. This indicates substantial and systematic bias with respect to an important life history trait (growth rate). If correlations between behavioral, physiological and life-history traits are as widespread as the literature suggests, then many animal samples may be systematically biased with respect to these traits (e.g., when collecting animals for laboratory use), which would affect our inferences about population structure and abundance. I conclude with a discussion on ways to minimize sampling bias for particular physiological/behavioral/life-history types within animal populations.

  1. GSAMPLE: Stata module to draw a random sample

    OpenAIRE

    Jann, Ben

    2006-01-01

    gsample draws a random sample from the data in memory. Simple random sampling (SRS) is supported, as well as unequal probability sampling (UPS), of which sampling with probabilities proportional to size (PPS) is a special case. Both methods, SRS and UPS/PPS, provide sampling with replacement and sampling without replacement. Furthermore, stratified sampling and cluster sampling are supported.
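
    gsample itself is a Stata module, but the sampling families it implements are easy to mimic. A rough NumPy analogue (illustrative; the size measure and draw count are invented, and the without-replacement variant shown is systematic PPS, one of several possible schemes):

        import numpy as np

        rng = np.random.default_rng(7)
        size = np.array([10.0, 20.0, 25.0, 25.0, 20.0])  # size measure per unit
        p = size / size.sum()
        n = 3

        # PPS with replacement
        pps_wr = rng.choice(len(size), size=n, replace=True, p=p)

        # Systematic PPS without replacement: n equally spaced points on the
        # cumulative size scale, offset by one uniform random start.
        # (Assumes n * p_i <= 1 for every unit, so no unit can repeat.)
        points = rng.uniform(0, 1) + np.arange(n)
        pps_wor = np.searchsorted(np.cumsum(p) * n, points, side="right")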

  2. Sharp Bounds on Causal Effects under Sample Selection

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2015-01-01

    In many empirical problems, the evaluation of treatment effects is complicated by sample selection so that the outcome is only observed for a non-random subpopulation. In the absence of instruments and/or tight parametric assumptions, treatment effects are not point identified, but can be bounded...

  3. Generation and Analysis of Constrained Random Sampling Patterns

    DEFF Research Database (Denmark)

    Pierzchlewski, Jacek; Arildsen, Thomas

    2016-01-01

    Random sampling is a technique for signal acquisition which is gaining popularity in practical signal processing systems. Nowadays, event-driven analog-to-digital converters make random sampling feasible in practical applications. A process of random sampling is defined by a sampling pattern, whi...

  4. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non...

  5. Species selection and random drift in macroevolution.

    Science.gov (United States)

    Chevin, Luis-Miguel

    2016-03-01

    Species selection resulting from trait-dependent speciation and extinction is increasingly recognized as an important mechanism of phenotypic macroevolution. However, the recent bloom in statistical methods quantifying this process faces a scarcity of dynamical theory for their interpretation, notably regarding the relative contributions of deterministic versus stochastic evolutionary forces. I use simple diffusion approximations of birth-death processes to investigate how the expected and random components of macroevolutionary change depend on phenotype-dependent speciation and extinction rates, as can be estimated empirically. I show that the species selection coefficient for a binary trait, and selection differential for a quantitative trait, depend not only on differences in net diversification rates (speciation minus extinction), but also on differences in species turnover rates (speciation plus extinction), especially in small clades. The randomness in speciation and extinction events also produces a species-level equivalent to random genetic drift, which is stronger for higher turnover rates. I then show how microevolutionary processes including mutation, organismic selection, and random genetic drift cause state transitions at the species level, allowing comparison of evolutionary forces across levels. A key parameter that would be needed to apply this theory is the distribution and rate of origination of new optimum phenotypes along a phylogeny.

  6. Computer Corner: A Note on Pascal's Triangle and Simple Random Sampling.

    Science.gov (United States)

    Wright, Tommy

    1989-01-01

    Describes the algorithm used to select a simple random sample of a certain size without having to list all possible samples, and a justification based on Pascal's triangle. Provides test results from various computers. (YP)
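
    The note's algorithm is not reproduced in the abstract, but the classic one-pass selection-sampling procedure it evokes (Fan et al.; Knuth's Algorithm S) shows how an SRS can be drawn without enumerating the C(N, n) candidate samples; the inclusion probability at each step is a ratio of binomial coefficients, which is where Pascal's triangle enters. A hedged sketch:

        import random

        def selection_sampling(N, n, rng=random):
            # One pass over items 0..N-1; item t is kept with probability
            # (still needed) / (items left), a ratio of binomial coefficients:
            # C(N-t-1, needed-1) / C(N-t, needed) = needed / (N - t).
            sample, needed = [], n
            for t in range(N):
                if rng.random() * (N - t) < needed:
                    sample.append(t)
                    needed -= 1
                    if needed == 0:
                        break
            return sample

        print(selection_sampling(100, 5))  # five indices; every subset equally likely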

  7. Sample size estimation and sampling techniques for selecting a representative sample

    OpenAIRE

    Aamir Omair

    2014-01-01

    Introduction: The purpose of this article is to provide a general understanding of the concepts of sampling as applied to health-related research. Sample Size Estimation: It is important to select a representative sample in quantitative research in order to be able to generalize the results to the target population. The sample should be of the required sample size and must be selected using an appropriate probability sampling technique. There are many hidden biases which can adversely affect ...

  8. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  9. Improving randomness characterization through Bayesian model selection.

    Science.gov (United States)

    Díaz Hernández Rojas, Rafael; Solís, Aldo; Angulo Martínez, Alí M; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Pérez Castillo, Isaac

    2017-06-08

    Random number generation plays an essential role in technology, with important applications in areas ranging from cryptography to Monte Carlo methods and other probabilistic algorithms. All such applications require high-quality sources of random numbers, yet effective methods for assessing whether a source produces truly random sequences are still missing. Current methods either do not rely on a formal description of randomness (the NIST test suite), on the one hand, or are inapplicable in principle (the characterization derived from the Algorithmic Theory of Information), on the other, for they require testing all the possible computer programs that could produce the sequence to be analysed. Here we present a rigorous method that overcomes these problems based on Bayesian model selection. We derive analytic expressions for a model's likelihood, which is then used to compute its posterior distribution. Our method proves to be more rigorous than NIST's suite and the Borel normality criterion, and its implementation is straightforward. We applied our method to an experimental device based on the process of spontaneous parametric downconversion to confirm it behaves as a genuine quantum random number generator. As our approach relies on Bayesian inference, our scheme transcends individual sequence analysis, leading to a characterization of the source itself.

  10. 32 CFR 1624.1 - Random selection procedures for induction.

    Science.gov (United States)

    2010-07-01

    Selective Service System, Inductions. § 1624.1 Random selection procedures for induction. (a) The Director of Selective Service shall from time to time establish a random selection sequence for induction by a drawing to be...

  11. Local randomization in neighbor selection improves PRM roadmap quality

    KAUST Repository

    McMahon, Troy

    2012-10-01

    Probabilistic Roadmap Methods (PRMs) are one of the most used classes of motion planning methods. These sampling-based methods generate robot configurations (nodes) and then connect them to form a graph (roadmap) containing representative feasible pathways. A key step in PRM roadmap construction involves identifying a set of candidate neighbors for each node. Traditionally, these candidates are chosen to be the k-closest nodes based on a given distance metric. In this paper, we propose a new neighbor selection policy called LocalRand(k, K'), that first computes the K' closest nodes to a specified node and then selects k of those nodes at random. Intuitively, LocalRand attempts to benefit from random sampling while maintaining the higher levels of local planner success inherent to selecting more local neighbors. We provide a methodology for selecting the parameters k and K'. We perform an experimental comparison which shows that for both rigid and articulated robots, LocalRand results in roadmaps that are better connected than those of the traditional k-closest policy or a purely random neighbor selection policy. The cost required to achieve these results is shown to be comparable to k-closest.
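
    The policy itself is a two-liner. A hedged NumPy sketch (Euclidean distance stands in for the planner's configuration-space metric, and all names are illustrative; a real roadmap would also exclude the query node itself):

        import numpy as np

        def local_rand_neighbors(nodes, q, k, K_prime, rng=None):
            # LocalRand(k, K'): take the K' closest nodes to q, then keep a
            # uniformly random k of those candidates (assumes k <= K').
            rng = np.random.default_rng() if rng is None else rng
            dists = np.linalg.norm(nodes - q, axis=1)
            closest = np.argsort(dists)[:K_prime]
            return rng.choice(closest, size=k, replace=False)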

  12. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  13. Privacy problems in the small sample selection

    Directory of Open Access Journals (Sweden)

    Loredana Cerbara

    2013-05-01

    The side of social research that uses small samples for the production of micro data today encounters some operating difficulties due to the privacy law. The privacy code is a really important and necessary law because it guarantees the Italian citizen's rights, as already happens in other countries of the world. However, it does not seem appropriate to limit once more the possibilities of data production by the national centres of research. Those possibilities are, moreover, already compromised by insufficient funds, a problem that is becoming more and more frequent in the research field. It would therefore be necessary to include in the law the possibility of using telephone lists to select samples useful for activities directly of interest and importance to the citizen, such as the collection of data carried out on the basis of opinion polls by the research centres of the Italian CNR and some universities.

  14. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, Clara M.; Buchhave, Preben; K. George, William

    2014-01-01

    ...of alternative methods attempting to produce correct power spectra have been invented and tested. The objective of the current study is to create a simple computer generated signal for baseline testing of residence time weighting and some of the most commonly proposed algorithms (or algorithms which most modern algorithms ultimately are based on), sample-and-hold and the direct spectral estimator without residence time weighting, and to compare how they perform in relation to power spectra based on the equidistantly sampled reference signal. The computer generated signal is a Poisson process with a sample rate... The tests show that sample-and-hold and the free-running processor perform well only under very particular circumstances, with high data rate and low inherent bias, respectively, while residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from...

  15. Mineral Composition of Selected Serbian Propolis Samples

    Directory of Open Access Journals (Sweden)

    Tosic Snezana

    2017-06-01

    The aim of this work was to determine the content of 22 macro- and microelements in ten raw Serbian propolis samples, which differ in geographical and botanical origin as well as in pollutant content, by atomic emission spectrometry with inductively coupled plasma (ICP-OES). The macroelements were the most abundant; the Ca content was the highest, while the Na content was the lowest. Among the studied essential trace elements, Fe was the most common. The levels of the toxic elements Pb, Cd, As and Hg were also analyzed, since they are possible environmental contaminants that could be transferred into propolis products for human consumption. As and Hg were not detected in any of the analyzed samples, but a high level of Pb (2.0-9.7 mg/kg) was detected, and only selected portions of raw propolis could be used to produce natural medicines and dietary supplements for humans. The obtained results were statistically analyzed, and the examined samples showed a wide range of element contents.

  16. Stratified sampling using cluster analysis: a sample selection strategy for improved generalizations from experiments.

    Science.gov (United States)

    Tipton, Elizabeth

    2013-04-01

    An important question in the design of experiments is how to ensure that the findings from the experiment are generalizable to a larger population. This concern with generalizability is particularly important when treatment effects are heterogeneous and when selecting units into the experiment using random sampling is not possible, two conditions commonly met in large-scale educational experiments. This article introduces a model-based balanced-sampling framework for improving generalizations, with a focus on developing methods that are robust to model misspecification. Additionally, the article provides a new method for sample selection within this framework: First, units in an inference population are divided into relatively homogeneous strata using cluster analysis, and then the sample is selected using distance rankings. In order to demonstrate and evaluate the method, a reanalysis of a completed experiment is conducted. This example compares samples selected using the new method with the actual sample used in the experiment. Results indicate that even under high nonresponse, balance is better on most covariates and that fewer coverage errors result. The article concludes with a discussion of additional benefits and limitations of the method.
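
    As a rough sketch of the two-step selection idea under stated assumptions (scikit-learn's KMeans as the cluster analysis, a made-up covariate matrix, ten strata, and one recruited unit per stratum):

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        covariates = rng.normal(size=(5000, 6))   # population units x covariates

        # Step 1: divide the inference population into homogeneous strata.
        km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(covariates)

        # Step 2: rank units by distance to each stratum centre and recruit the
        # closest one per stratum (fall further down the ranking on nonresponse).
        dist = km.transform(covariates)           # (units x strata) distances
        targets = dist.argmin(axis=0)             # one closest unit per stratum
        print(targets)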

  17. Random constraint sampling and duality for convex optimization

    OpenAIRE

    Haskell, William B.; Pengqian, Yu

    2016-01-01

    We are interested in solving convex optimization problems with large numbers of constraints. Randomized algorithms, such as random constraint sampling, have been very successful in giving nearly optimal solutions to such problems. In this paper, we combine random constraint sampling with the classical primal-dual algorithm for convex optimization problems with large numbers of constraints, and we give a convergence rate analysis. We then report numerical experiments that verify the effectiven...

  18. Random number datasets generated from statistical analysis of randomly sampled GSM recharge cards.

    Science.gov (United States)

    Okagbue, Hilary I; Opanuga, Abiodun A; Oguntunde, Pelumi E; Ugwoke, Paulinus O

    2017-02-01

    In this article, random number datasets were generated from random samples of used GSM (Global Systems for Mobile Communications) recharge cards. Statistical analyses were performed to refine the raw data into random number datasets arranged in tables. A detailed description of the method and relevant tests of randomness are also discussed.

  19. Power Spectrum Estimation of Randomly Sampled Signals

    DEFF Research Database (Denmark)

    Velte, C. M.; Buchhave, P.; K. George, W.

    ...sine waves. The primary signal and the corresponding power spectrum are shown in Figure 1. The conventional spectrum shows multiple erroneous mixing frequencies and the peak values are too low. The residence time weighted spectrum is correct. The sample-and-hold spectrum has lower power than the correct spectrum, and the f^-2 filtering effect appearing for low data densities is evident (Adrian and Yao 1987). The remaining tests also show that sample-and-hold and the free-running processor perform well only under very particular circumstances, with high data rate and low inherent bias, respectively. Residence time weighting provides non-biased estimates regardless of setting. The free-running processor was also tested and compared to residence time weighting using actual LDA measurements in a turbulent round jet. Power spectra from measurements on the jet centerline and the outer part of the jet...

  20. Selection for altruism through random drift in variable size populations.

    Science.gov (United States)

    Houchmandzadeh, Bahram; Vallade, Marcel

    2012-05-10

    Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have 'hidden' advantages if the 'common good' produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency dependent fitness. Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of "selfish" alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists' advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.

  1. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

    Background: Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have ‘hidden’ advantages if the ‘common good’ produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency dependent fitness. Results: Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of “selfish” alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists’ advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions: The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.

  2. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2010-09-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic sampling (LSS) with two random starts, using appropriate superpopulation models, with the help of the R environment for statistical computing.

  3. Efficient sampling of complex network with modified random walk strategies

    Science.gov (United States)

    Xie, Yunya; Chang, Shuhua; Zhang, Zhipeng; Zhang, Mi; Yang, Lei

    2018-02-01

    We present two novel random walk strategies, the choosing seed node (CSN) random walk and the no-retracing (NR) random walk. Different from classical random walk sampling, the CSN and NR strategies focus on the influence of the seed node choice and of path overlap, respectively. The three random walk samplings are applied to the Erdős-Rényi (ER), Barabási-Albert (BA), Watts-Strogatz (WS), and weighted USAir networks. Then, the major properties of the sampled subnets, such as sampling efficiency, degree distributions, average degree and average clustering coefficient, are studied. Similar conclusions can be reached with these three random walk strategies. Firstly, networks with small scales and simple structures are conducive to sampling. Secondly, the average degree and the average clustering coefficient of the sampled subnet tend to the corresponding values of the original networks within a limited number of steps. And thirdly, all the degree distributions of the subnets are slightly biased to the high-degree side. However, the NR strategy performs better for the average clustering coefficient of the subnet. In the real weighted USAir network, some obvious characteristics, like the larger clustering coefficient and the fluctuation of the degree distribution, are reproduced well by these random walk strategies.
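
    A minimal sketch of the NR strategy on a toy graph (networkx and all parameters are illustrative; the CSN strategy would differ only in how the start node is picked):

        import random
        import networkx as nx

        def no_retracing_walk(G, start, steps, rng=random):
            # At each step pick a uniform neighbour, excluding the node we just
            # came from whenever any other neighbour exists.
            walk, prev = [start], None
            for _ in range(steps):
                here = walk[-1]
                options = [v for v in G.neighbors(here) if v != prev]
                if not options:                      # dead end: allow retracing
                    options = list(G.neighbors(here))
                prev = here
                walk.append(rng.choice(options))
            return walk

        G = nx.erdos_renyi_graph(200, 0.05, seed=3)
        print(no_retracing_walk(G, start=0, steps=10))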

  4. Selection of representative calibration sample sets for near-infrared reflectance spectroscopy to predict nitrogen concentration in grasses

    DEFF Research Database (Denmark)

    Shetty, Nisha; Rinnan, Åsmund; Gislum, René

    2012-01-01

    Partial least squares regression (PLSR), a chemometric method, has been applied on NIR spectroscopy data for the determination of the nitrogen (N) concentration in these grass samples. The sample selection method based on NIR spectral data proposed by Puchwein and the CADEX (computer aided design of experiments) algorithm were used and compared. Both the Puchwein and CADEX methods provide a calibration set equally distributed in space, and both methods require a minimum of prior knowledge. The samples were also selected randomly using complete random, cultivar random (year fixed), year random (cultivar fixed) and interaction (cultivar × year fixed) random procedures to see the influence of different factors on sample selection. Puchwein's method performed best with the lowest RMSEP, followed by CADEX, interaction random, year random, cultivar random and complete random. Out of 118 samples of the complete calibration set...

  5. CTEPP STANDARD OPERATING PROCEDURE FOR SAMPLE SELECTION (SOP-1.10)

    Science.gov (United States)

    The procedures for selecting CTEPP study subjects are described in the SOP. The primary, county-level stratification is by region and urbanicity. Six sample counties in each of the two states (North Carolina and Ohio) are selected using stratified random sampling and reflect ...

  6. Spatial Random Sampling: A Structure-Preserving Data Sketching Tool

    Science.gov (United States)

    Rahmani, Mostafa; Atia, George K.

    2017-09-01

    Random column sampling is not guaranteed to yield data sketches that preserve the underlying structures of the data and may not sample sufficiently from less-populated data clusters. Also, adaptive sampling can often provide accurate low rank approximations, yet may fall short of producing descriptive data sketches, especially when the cluster centers are linearly dependent. Motivated by this, this paper introduces a novel randomized column sampling tool dubbed Spatial Random Sampling (SRS), in which data points are sampled based on their proximity to randomly sampled points on the unit sphere. The most compelling feature of SRS is that the corresponding probability of sampling from a given data cluster is proportional to the surface area the cluster occupies on the unit sphere, independently of the size of the cluster population. Although it is fully randomized, SRS is shown to provide descriptive and balanced data representations. The proposed idea addresses a pressing need in data science and holds potential to inspire many novel approaches for analysis of big data.
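
    The core of SRS fits in a few lines. A hedged NumPy sketch (data points are the columns of X; the column count, dimension and number of random directions are invented for the example):

        import numpy as np

        def spatial_random_sample(X, m, rng=None):
            # Normalize columns onto the unit sphere, draw m random unit
            # directions, and keep the column closest to each direction
            # (largest inner product between unit vectors).
            rng = np.random.default_rng() if rng is None else rng
            Xn = X / np.linalg.norm(X, axis=0, keepdims=True)
            D = rng.normal(size=(X.shape[0], m))
            D /= np.linalg.norm(D, axis=0, keepdims=True)
            picks = np.unique(np.argmax(Xn.T @ D, axis=0))  # one column per direction
            return X[:, picks]

        X = np.random.default_rng(2).normal(size=(50, 10_000))
        print(spatial_random_sample(X, m=100).shape)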

  7. Methods for sample size determination in cluster randomized trials.

    Science.gov (United States)

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials.
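
    The "simplest approach" in the summary above is one line of arithmetic. A sketch (values illustrative):

        from math import ceil

        def crt_sample_size(n_individual, cluster_size, icc):
            # Inflate the individually randomized sample size by the design
            # effect 1 + (m - 1) * ICC for clusters of size m.
            design_effect = 1 + (cluster_size - 1) * icc
            return ceil(n_individual * design_effect)

        print(crt_sample_size(400, cluster_size=20, icc=0.05))  # 780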

  8. SOME SYSTEMATIC SAMPLING STRATEGIES USING MULTIPLE RANDOM STARTS

    OpenAIRE

    Sampath Sundaram; Ammani Sivaraman

    2010-01-01

    In this paper an attempt is made to extend linear systematic sampling using multiple random starts due to Gautschi (1957) to various types of systematic sampling schemes available in the literature, namely (i) Balanced Systematic Sampling (BSS) of Sethi (1965) and (ii) Modified Systematic Sampling (MSS) of Singh, Jindal, and Garg (1968). Further, the proposed methods were compared with the Yates corrected estimator developed with reference to Gautschi's linear systematic samplin...

  9. Sequential time interleaved random equivalent sampling for repetitive signal

    Science.gov (United States)

    Zhao, Yijiu; Liu, Jingjing

    2016-12-01

    Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using a Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC), whose ADC cores are time interleaved. A prototype realization of the proposed CS based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS based sequential random equivalent sampling exhibits high efficiency.

  10. Optimum allocation in multivariate stratified random sampling: Stochastic matrix optimisation

    OpenAIRE

    Diaz-Garcia, Jose A.; Ramos-Quiroga, Rogelio

    2011-01-01

    The allocation problem for multivariate stratified random sampling as a problem of stochastic matrix integer mathematical programming is considered. With these aims, the asymptotic normality of sample covariance matrices for each stratum is established. Some alternative approaches are suggested for its solution. An example is solved by applying the proposed techniques.

  11. Heterogeneous Causal Effects and Sample Selection Bias

    DEFF Research Database (Denmark)

    Breen, Richard; Choi, Seongsoo; Holm, Anders

    2015-01-01

    ...causal effects might vary over individuals or groups. In this paper we point out one of the under-appreciated hazards of seeking to estimate heterogeneous causal effects: conventional selection bias (that is, selection on baseline differences) can easily be mistaken for heterogeneity of causal effects. This might lead us to find heterogeneous effects when the true effect is homogenous, or to wrongly estimate not only the magnitude but also the sign of heterogeneous effects. We apply a test for the robustness of heterogeneous causal effects in the face of varying degrees and patterns of selection bias...

  12. Raman spectroscopy of selected carbonaceous samples

    Energy Technology Data Exchange (ETDEWEB)

    Kwiecinska, Barbara [University of Science and Technology-AGH, Faculty of Geology, Geophysics and Environmental Protection, Krakow (Poland); Suarez-Ruiz, Isabel [Instituto Nacional del Carbon, (INCAR-CSIC), Oviedo (Spain); Paluszkiewicz, Czeslawa [University of Science and Technology-AGH, Faculty of Materials Science and Technology, Krakow (Poland); Rodriques, Sandra [Universidade do Porto, Faculdade de Ciencias, Dept. de Geologia (Portugal)

    2010-12-01

    This paper presents the results of Raman spectra measured on carbonaceous materials ranging from greenschist-facies to granulite-facies graphite (anchimetamorphism and epimetamorphism zones). Raman spectroscopy has come to be regarded as a more appropriate tool than X-ray diffraction for the study of highly ordered carbon materials, including chondritic matter, soot, polycyclic aromatic hydrocarbons and evolved coal samples. This work demonstrates the usefulness of Raman spectroscopy analysis in determining internal crystallographic structure (disordered lattice, heterogeneity). Moreover, this methodology permits the detection of differences within the meta-anthracite rank, semi-graphite and graphite stages for the samples included in this study. In the first-order Raman spectra, the bands located near ca. 1350 cm⁻¹ (defect and disorder A1g mode) and 1580 cm⁻¹ (in-plane E2g zone-centre mode) contribute to the characterization and determination of the degree of structural evolution and graphitization of the carbonaceous samples. The data from Raman spectroscopy were compared with parameters obtained by means of structural, chemical and optical microscopic analyses carried out on the same carbonaceous samples. The results revealed some positive and significant relationships, although the use of reflectance as a parameter for following the increase in structural order in naturally graphitized samples was subject to limitations. (author)

  13. In-Place Randomized Slope Selection

    DEFF Research Database (Denmark)

    Blunck, Henrik; Vahrenhold, Jan

    2006-01-01

    Slope selection is a well-known algorithmic tool used in the context of computing robust estimators for fitting a line to a collection P of n points in the plane. We demonstrate that it is possible to perform slope selection in expected O(nlogn) time using only constant extra space in addition to...

  14. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random-effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest ...

  15. The signature of positive selection at randomly chosen loci.

    Science.gov (United States)

    Przeworski, Molly

    2002-03-01

    In Drosophila and humans, there are accumulating examples of loci with a significant excess of high-frequency-derived alleles or high levels of linkage disequilibrium, relative to a neutral model of a random-mating population of constant size. These are features expected after a recent selective sweep. Their prevalence suggests that positive directional selection may be widespread in both species. However, as I show here, these features do not persist long after the sweep ends: The high-frequency alleles drift to fixation and no longer contribute to polymorphism, while linkage disequilibrium is broken down by recombination. As a result, loci chosen without independent evidence of recent selection are not expected to exhibit either of these features, even if they have been affected by numerous sweeps in their genealogical history. How then can we explain the patterns in the data? One possibility is population structure, with unequal sampling from different subpopulations. Alternatively, positive selection may not operate as is commonly modeled. In particular, the rate of fixation of advantageous mutations may have increased in the recent past.

  16. 40 CFR 205.171-3 - Test motorcycle sample selection.

    Science.gov (United States)

    2010-07-01

    40 CFR 205.171-3, Protection of Environment, Vol. 24 (revised 2010-07-01): NOISE ABATEMENT PROGRAMS, TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS, Motorcycle Exhaust Systems, § 205.171-3 Test motorcycle sample selection. A test motorcycle to be used for selective enforcement audit testing...

  17. Thermal properties of selected cheeses samples

    Directory of Open Access Journals (Sweden)

    Monika BOŽIKOVÁ

    2016-02-01

    Full Text Available The thermophysical parameters of selected cheeses (processed cheese and half-hard cheese) are presented in this article. Cheese is a generic term for a diverse group of milk-based food products, produced throughout the world in a wide range of flavours, textures and forms. During processing, cheese undergoes thermal and mechanical manipulation, so its thermal properties are among the most important. Knowledge of the thermal parameters of cheeses can be used in quality evaluation. On this basis, the thermal properties of selected cheeses produced by Slovak producers were measured. The theoretical part of the article describes the cheeses and the plane-source method used to detect the thermal parameters. The thermophysical parameters thermal conductivity, thermal diffusivity and volume-specific heat were measured during temperature stabilisation. The results are presented as relations of the thermophysical parameters to temperature in the range from 13.5°C to 24°C. Every point of each graphical relation was obtained as the arithmetic average of the values measured at the same temperature. The results were statistically processed, and the graphical relations presented were chosen according to the statistical evaluation and the coefficient of determination of each relation. The measured thermal parameters are in good agreement with values reported by other authors for similar types of cheeses.

  18. Random sampling and validation of covariance matrices of resonance parameters

    Science.gov (United States)

    Plevnik, Lucijan; Zerovnik, Gašper

    2017-09-01

    Analytically exact methods for random sampling of arbitrarily correlated parameters are presented. Emphasis is placed, on the one hand, on possible inconsistencies in the covariance data, concentrating on positive semi-definiteness and on consistent sampling of correlated, inherently positive parameters, and, on the other hand, on optimizing the implementation of the methods themselves. The methods have been applied in the program ENDSAM, written in Fortran, which, from a nuclear data library file for a chosen isotope in ENDF-6 format, produces an arbitrary number of new ENDF-6 files in which the original resonance parameters are replaced by random samples (in accordance with the corresponding covariance matrices). The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: it reads resonance parameters and their covariance data from the nuclear data library, checks whether the covariance data are consistent, and produces random samples of the resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies, observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0, is presented. For now, the work has been limited to resonance parameters; however, the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.
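
    The consistency check and sampling step described above can be sketched generically: repair a covariance matrix that fails positive semi-definiteness by clipping negative eigenvalues, then draw correlated samples from a factorization. This is a minimal sketch of the general technique, not the ENDSAM code itself; the example matrix is made up.

        import numpy as np

        def sample_correlated(mean, cov, n_samples, rng=None):
            # Repair a covariance matrix that is not positive semi-definite
            # (e.g. due to rounded library data) by clipping negative
            # eigenvalues, then draw multivariate normal samples.
            rng = np.random.default_rng(rng)
            cov = 0.5 * (cov + cov.T)            # enforce symmetry
            w, v = np.linalg.eigh(cov)
            if w.min() < 0:                      # inconsistent covariance data
                w = np.clip(w, 0.0, None)        # nearest PSD reconstruction
            L = v * np.sqrt(w)                   # factor with L @ L.T == cov
            z = rng.standard_normal((n_samples, len(mean)))
            return mean + z @ L.T

        # Inherently positive parameters (e.g. resonance widths) can be kept
        # positive by sampling log-parameters with a transformed covariance.
        mean = np.array([1.0, 2.0])
        cov = np.array([[0.04, 0.05],
                        [0.05, 0.04]])           # not PSD: off-diagonal too large
        samples = sample_correlated(mean, cov, 10000, rng=1)
        print(np.cov(samples.T))                 # close to the repaired matrix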

  19. Generalized and synthetic regression estimators for randomized branch sampling

    Science.gov (United States)

    David L. R. Affleck; Timothy G. Gregoire

    2015-01-01

    In felled-tree studies, ratio and regression estimators are commonly used to convert more readily measured branch characteristics to dry crown mass estimates. In some cases, data from multiple trees are pooled to form these estimates. This research evaluates the utility of both tactics in the estimation of crown biomass following randomized branch sampling (...

  20. Effective sampling of random surfaces by baby universe surgery

    NARCIS (Netherlands)

    Ambjørn, J.; Białas, P.; Jurkiewicz, J.; Burda, Z.; Petersson, B.

    1994-01-01

    We propose a new, very efficient algorithm for sampling of random surfaces in the Monte Carlo simulations, based on so-called baby universe surgery, i.e. cutting and pasting of baby universe. It drastically reduces slowing down as compared to the standard local flip algorithm, thereby allowing

  1. UNLABELED SELECTED SAMPLES IN FEATURE EXTRACTION FOR CLASSIFICATION OF HYPERSPECTRAL IMAGES WITH LIMITED TRAINING SAMPLES

    Directory of Open Access Journals (Sweden)

    A. Kianisarkaleh

    2015-12-01

    Full Text Available Feature extraction plays a key role in the classification of hyperspectral images. Using unlabeled samples, which are often available in unlimited quantities, unsupervised and semisupervised feature extraction methods show better performance when only a limited number of training samples exists. This paper illustrates the importance of selecting appropriate unlabeled samples for use in feature extraction methods, and proposes a new method for unlabeled sample selection using spectral and spatial information. The proposed method has four parts: PCA, prior classification, posterior classification and sample selection. Once a hyperspectral image has passed through these parts, the selected unlabeled samples can be used in arbitrary feature extraction methods. The effectiveness of the proposed unlabeled sample selection in unsupervised and semisupervised feature extraction is demonstrated using two real hyperspectral datasets. Results show that by selecting appropriate unlabeled samples, the proposed method can improve the performance of feature extraction methods and increase classification accuracy.

  2. Sequential selection of random vectors under a sum constraint

    OpenAIRE

    Stanke, Mario

    2004-01-01

    We observe a sequence X1, X2, ..., Xn of independent and identically distributed, coordinatewise nonnegative d-dimensional random vectors. When a vector is observed it can either be selected or rejected, but once made, this decision is final. In each coordinate, the sum of the selected vectors must not exceed a given constant. The problem is to find a selection policy that maximizes the expected number of selected vectors. For a general absolutely continuous distribution of t...

  3. Random sampling and validation of covariance matrices of resonance parameters

    Directory of Open Access Journals (Sweden)

    Plevnik Lucijan

    2017-01-01

    Full Text Available Analytically exact methods for random sampling of arbitrary correlated parameters are presented. Emphasis is given on one hand on the possible inconsistencies in the covariance data, concentrating on the positive semi-definiteness and consistent sampling of correlated inherently positive parameters, and on the other hand on optimization of the implementation of the methods itself. The methods have been applied in the program ENDSAM, written in the Fortran language, which from a file from a nuclear data library of a chosen isotope in ENDF-6 format produces an arbitrary number of new files in ENDF-6 format which contain values of random samples of resonance parameters (in accordance with corresponding covariance matrices) in places of original values. The source code for the program ENDSAM is available from the OECD/NEA Data Bank. The program works in the following steps: reads resonance parameters and their covariance data from nuclear data library, checks whether the covariance data is consistent, and produces random samples of resonance parameters. The code has been validated with both realistic and artificial data to show that the produced samples are statistically consistent. Additionally, the code was used to validate covariance data in existing nuclear data libraries. A list of inconsistencies, observed in covariance data of resonance parameters in ENDF-VII.1, JEFF-3.2 and JENDL-4.0 is presented. For now, the work has been limited to resonance parameters, however the methods presented are general and can in principle be extended to sampling and validation of any nuclear data.

  4. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten Igel

    We evaluate the hypothesis that risk preferences are stable over time using a remarkable data set combining administrative information from the Danish registry with longitudinal experimental data we designed to allow better identification of joint selection and attrition effects with respect to risk attitudes. Our design builds in explicit randomization on the incentives for participation. We show that there are significant sample selection effects on inferences about the extent of risk aversion, but that the effects of subsequent sample attrition are minimal. Ignoring sample selection leads to inferences that subjects in the population are more risk averse than they actually are. Correcting for sample selection and attrition affects utility curvature, but does not affect inferences about probability weighting. Properly accounting for sample selection and attrition effects leads...

  5. Systematic sampling of discrete and continuous populations: sample selection and the choice of estimator

    Science.gov (United States)

    Harry T. Valentine; David L. R. Affleck; Timothy G. Gregoire

    2009-01-01

    Systematic sampling is easy, efficient, and widely used, though it is not generally recognized that a systematic sample may be drawn from the population of interest with or without restrictions on randomization. The restrictions or the lack of them determine which estimators are unbiased, when using the sampling design as the basis for inference. We describe the...

  6. Random Walks on Directed Networks: Inference and Respondent-driven Sampling

    CERN Document Server

    Malmros, Jens; Britton, Tom

    2013-01-01

    Respondent-driven sampling (RDS) is a method often used to estimate population properties (e.g. sexual risk behavior) in hard-to-reach populations. It combines an effective modified snowball sampling methodology with an estimation procedure that yields unbiased population estimates under the assumption that the sampling process behaves like a random walk on the social network of the population. Current RDS estimation methodology assumes that the social network is undirected, i.e. that all edges are reciprocal. However, empirical social networks in general also have non-reciprocated edges. To account for this fact, we develop a new estimation method for RDS in the presence of directed edges on the basis of random walks on directed networks. We distinguish directed and undirected edges and consider the possibility that the random walk returns to its current position in two steps through an undirected edge. We derive estimators of the selection probabilities of individuals as a function of the number of outgoing...

  7. Stratified random sampling plan for an irrigation customer telephone survey

    Energy Technology Data Exchange (ETDEWEB)

    Johnston, J.W.; Davis, L.J.

    1986-05-01

    This report describes the procedures used to design and select a sample for a telephone survey of individuals who use electricity in irrigating agricultural cropland in the Pacific Northwest. The survey is intended to gather information on the irrigated agricultural sector that will be useful for conservation assessment, load forecasting, rate design, and other regional power planning activities.

  8. Sampling Polymorphs of Ionic Solids using Random Superlattices.

    Science.gov (United States)

    Stevanović, Vladan

    2016-02-19

    Polymorphism offers a rich and virtually unexplored space for discovering novel functional materials. To harness this potential, approaches capable of both exploring the space of polymorphs and assessing their realizability are needed. One such approach, devised for partially ionic solids, is presented. The structure prediction part is carried out by performing local density functional theory relaxations on a large set of random superlattices (RSLs) with atoms distributed randomly over different planes in a way that favors cation-anion coordination. Applying RSL sampling to MgO, ZnO, and SnO2 reveals that the resulting probability of occurrence of a given structure offers a measure of its realizability, fully explaining the experimentally observed metastable polymorphs in these three systems.

  9. Selectivity and sparseness in randomly connected balanced networks.

    Directory of Open Access Journals (Sweden)

    Cengiz Pehlevan

    Full Text Available Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the mechanism behind and evaluate the effects of model parameters on population sparseness and stimulus selectivity. Network response to mixtures of stimuli is investigated. It is shown that a balanced state with unselective inhibition can be achieved with densely connected input to inhibitory population. Balanced networks exhibit the "paradoxical" effect: an increase in excitatory drive to inhibition leads to decreased inhibitory population firing rate. We compare and contrast selectivity and sparseness generated by the balanced network to randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.

  10. Analysis of a global random stratified sample of nurse legislation.

    Science.gov (United States)

    Benton, D C; Fernández-Fernández, M P; González-Jurado, M A; Beneit-Montesinos, J V

    2015-06-01

    To identify, compare and contrast the major component parts of a heterogeneous stratified sample of nursing legislation. Nursing legislation varies from one jurisdiction to another. Until now, no research has existed into whether the variations in such legislation are random or related to a set of key attributes. This mixed-methods study used a random stratified sample of legislation to map, through documentary analysis, the content of 14 nursing acts and then explored, using quantitative techniques, whether the material contained relates to a number of key attributes. These attributes include: the legal tradition of the jurisdiction; the model of regulation; the administrative approach; the area of the world; and the economic status of the jurisdiction. Twelve component parts of nursing legislation were identified. These were remarkably similar irrespective of the attributes of interest. However, not all component parts were specified at the same level of detail, and the manner in which the elements were addressed did vary. A number of potential relationships between the structure of the legislation and the key attributes of interest were identified. This study generated a comprehensive and integrated map of a global sample of nursing legislation. It provides a set of descriptors to be used in further quantitative work and an important policy tool to facilitate dialogue between regulatory bodies. At the individual nurse level, it offers insights that can help nurses pursue recognition of credentials across jurisdictions. © 2015 International Council of Nurses.

  11. Random sampling of elementary flux modes in large-scale metabolic networks.

    Science.gov (United States)

    Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel

    2012-09-15

    The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.

  12. Using maximum entropy modeling for optimal selection of sampling sites for monitoring networks

    Science.gov (United States)

    Stohlgren, Thomas J.; Kumar, Sunil; Barnett, David T.; Evangelista, Paul H.

    2011-01-01

    Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.
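
    The iterative selection of dissimilar sites can be approximated without the MaxEnt machinery by a greedy max-min distance rule in standardized factor space. The sketch below is that simplification, with synthetic candidate data; it is not the authors' MaxEnt procedure.

        import numpy as np

        def select_dissimilar_sites(env, k, start=0):
            # env: (n_sites, n_factors) environmental variables per candidate.
            # At each step, add the site farthest (max-min Euclidean distance
            # in standardized factor space) from the already-selected set.
            z = (env - env.mean(0)) / env.std(0)     # standardize the factors
            chosen = [start]
            dmin = np.linalg.norm(z - z[start], axis=1)
            for _ in range(k - 1):
                nxt = int(dmin.argmax())             # most dissimilar remaining site
                chosen.append(nxt)
                dmin = np.minimum(dmin, np.linalg.norm(z - z[nxt], axis=1))
            return chosen

        rng = np.random.default_rng(0)
        env = rng.normal(size=(500, 4))  # temperature, precipitation, elevation, veg
        print(select_dissimilar_sites(env, 8))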

  13. Using Maximum Entropy Modeling for Optimal Selection of Sampling Sites for Monitoring Networks

    Directory of Open Access Journals (Sweden)

    Paul H. Evangelista

    2011-05-01

    Full Text Available Environmental monitoring programs must efficiently describe state shifts. We propose using maximum entropy modeling to select dissimilar sampling sites to capture environmental variability at low cost, and demonstrate a specific application: sample site selection for the Central Plains domain (453,490 km2) of the National Ecological Observatory Network (NEON). We relied on four environmental factors: mean annual temperature and precipitation, elevation, and vegetation type. A “sample site” was defined as a 20 km × 20 km area (equal to NEON’s airborne observation platform [AOP] footprint), within which each 1 km2 cell was evaluated for each environmental factor. After each model run, the most environmentally dissimilar site was selected from all potential sample sites. The iterative selection of eight sites captured approximately 80% of the environmental envelope of the domain, an improvement over stratified random sampling and simple random designs for sample site selection. This approach can be widely used for cost-efficient selection of survey and monitoring sites.

  14. Comparison of kriging interpolation precision between grid sampling scheme and simple random sampling scheme for precision agriculture

    Directory of Open Access Journals (Sweden)

    Jiang Houlong

    2016-01-01

    Full Text Available Sampling methods are important factors that can potentially limit the accuracy of predictions of spatial distribution patterns. A 10 ha tobacco-planted field was selected to compare the accuracy of predicting the spatial distribution of soil properties using ordinary kriging and cross-validation between a grid sampling scheme and a simple random sampling scheme (SRS). To achieve this objective, we collected soil samples from the topsoil (0-20 cm) in March 2012. The grid sampling and SRS schemes each comprised 115 points. Accuracies of spatial interpolation using the two sampling schemes were then evaluated based on validation samples (36 points) and deviations of the estimates. The results suggested that soil pH and nitrate-N (NO3-N) had low variation, whereas all other soil properties exhibited medium variation. Soil pH, organic matter (OM), total nitrogen (TN), cation exchange capacity (CEC), total phosphorus (TP) and available phosphorus (AP) matched the spherical model, whereas the remaining variables fit an exponential model with both sampling methods. The interpolation error of soil pH, TP, and AP was lowest with SRS. The interpolation errors for OM, CEC, TN, available potassium (AK) and total potassium (TK) were lowest with grid sampling. The interpolation precision for soil NO3-N showed no significant difference between the two sampling schemes. Considering our data on interpolation precision and the importance of minerals for the cultivation of flue-cured tobacco, the grid sampling scheme should be used in tobacco-planted fields to determine the spatial distribution of soil properties. The grid sampling method can be applied in a practical and cost-effective manner to facilitate soil sampling in tobacco-planted fields.

  15. Fast, Randomized Join-Order Selection - Why Use Transformations?

    NARCIS (Netherlands)

    C.A. Galindo-Legaria; A.J. Pellenkoft (Jan); M.L. Kersten (Martin)

    1994-01-01

    textabstractWe study the effectiveness of probabilistic selection of join-query evaluation plans, without reliance on tree transformation rules. Instead, each candidate plan is chosen uniformly at random from the space of valid evaluation orders. This leads to a transformation-free strategy where a

  16. The reliability of randomly selected final year pharmacy students in ...

    African Journals Online (AJOL)

    Employing ANOVA, factorial experimental analysis, and the theory of error, reliability studies were conducted on the assessment of the drug product chloroquine phosphate tablets. The G–Study employed equal numbers of the factors for uniform control, and involved three analysts (randomly selected final year Pharmacy ...

  17. Randomly Sampled-Data Control Systems. Ph.D. Thesis

    Science.gov (United States)

    Han, Kuoruey

    1990-01-01

    The purpose is to solve the Linear Quadratic Regulator (LQR) problem with random time sampling. Such a sampling scheme may arise from imperfect instrumentation, as in the case of sampling jitter; it can also model stochastic information exchange among decentralized controllers, to name just a few examples. A practical suboptimal controller is proposed with the nice property of mean-square stability. The proposed controller is suboptimal in the sense that the control structure is limited to be linear. Because of the i.i.d. assumption, this does not seem unreasonable. Once the control structure is fixed, the stochastic discrete optimal control problem is transformed into an equivalent deterministic optimal control problem with dynamics described by a matrix difference equation. The N-horizon control problem is solved using the Lagrange multiplier method. The infinite-horizon control problem is formulated as a classical minimization problem. Assuming existence of a solution to the minimization problem, the total system is shown to be mean-square stable under certain observability conditions. Computer simulations are performed to illustrate these conditions.
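
    A minimal sketch in the spirit of such a fixed linear controller, assuming a double-integrator plant: the gain is designed by a discrete Riccati solve at the mean sampling period, and mean-square behavior under i.i.d. sampling jitter is then checked empirically. This is an illustration of the setting, not the thesis's actual derivation.

        import numpy as np
        from scipy.linalg import solve_discrete_are, expm

        # Double integrator dx/dt = A x + B u, sampled at random periods h_k.
        A = np.array([[0.0, 1.0], [0.0, 0.0]])
        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.array([[0.1]])

        def discretize(h):
            # Exact zero-order-hold discretization via the augmented exponential.
            M = expm(np.block([[A * h, B * h], [np.zeros((1, 3))]]))
            return M[:2, :2], M[:2, 2:]

        # Fixed linear gain designed at the mean period (a suboptimal linear
        # control structure, as in the abstract).
        Ad, Bd = discretize(0.1)
        P = solve_discrete_are(Ad, Bd, Q, R)
        K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)

        # Empirical mean-square check under i.i.d. uniform sampling jitter.
        rng = np.random.default_rng(0)
        ms = 0.0
        for _ in range(200):
            x = np.array([[1.0], [0.0]])
            for _ in range(400):
                Ak, Bk = discretize(rng.uniform(0.05, 0.15))   # random h_k
                x = (Ak - Bk @ K) @ x
            ms += (x.T @ x).item() / 200
        print("mean-square terminal state:", ms)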

  18. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    Full Text Available As a popular simulation of photon propagation in turbid media, the main problem of the Monte Carlo (MC) method is its cumbersome computation. In this work, a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to simplify multiple scattering steps into a single-step process through random table querying, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. The TBRS simulation is a fast version of the conventional MC simulation of photon propagation. It retains the merits of flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of a fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
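
    The random table query at the heart of TBRS can be illustrated with a precomputed inverse-CDF lookup table for photon step lengths; the actual method tabulates multi-step scattering, so this sketch shows only the basic single-step idea, with an illustrative attenuation coefficient.

        import numpy as np

        # Precompute the inverse-CDF table once, then sample step lengths by
        # a single random table query instead of evaluating -log(u)/mu_t at
        # every scattering event.
        mu_t = 10.0                                   # attenuation coefficient, 1/cm
        N = 4096                                      # table resolution
        u_grid = (np.arange(N) + 0.5) / N
        step_table = -np.log(1.0 - u_grid) / mu_t     # inverse CDF of Exp(mu_t)

        rng = np.random.default_rng(0)
        idx = rng.integers(0, N, size=1_000_000)      # random table queries
        steps = step_table[idx]
        print(steps.mean(), 1.0 / mu_t)               # both near the mean free path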

  19. Selecting a phoneme-to-grapheme mapping: Random or weighted selection?

    Directory of Open Access Journals (Sweden)

    Binna Lee

    2015-05-01

    Our findings demonstrate that random selection underestimates MOA's PG correspondences, whereas weighted selection predicts higher PG correspondences than he produces. To explain his intermediate spelling performance on PPEs, we will test additional approaches to weighting the relative probability of PG mappings, including using log frequencies, separating consonant and vowel status, and considering the number of grapheme options for each phoneme.

  20. Selection of sampling rate for digital control of aircrafts

    Science.gov (United States)

    Katz, P.; Powell, J. D.

    1974-01-01

    The considerations in selecting the sample rates for the digital control of aircraft are identified and evaluated using the optimal discrete method. A high-performance aircraft model which includes a bending mode and wind gusts was studied. The following factors which influence the selection of the sampling rates were identified: (1) the time and roughness of the response to control inputs; (2) the response to external disturbances; and (3) the sensitivity to parameter variations. It was found that the time response to a control input and the response to external disturbances limit the selection of the sampling rate. The optimal discrete regulator, the steady-state Kalman filter, and the mean response to external disturbances are calculated.

  1. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    Science.gov (United States)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely a landslide inventory and the factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides and the nature of the triggers and LIF, the accuracy of QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods has been investigated extensively in the literature, the challenge of training set construction has not been adequately investigated for QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies, along with the original data set, are used to test the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which uses a weighted selection of landslide occurrences in the sample set. The second method, non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances, to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood; a randomly selected group of landslide sites and their neighborhood are considered in the analyses, similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-whole-data set-ups, in that order, yield the best fits among the alternatives. The results indicate that in QLSM based on regression models, avoiding spatial correlation in the data set is critical for the model's performance.
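
    A minimal version of the PRS idea, balancing occurrences and non-occurrences by random draws from each class (the paper's weighting of occurrences is omitted, and the data here are synthetic):

        import numpy as np

        def proportional_random_sample(X, y, n_per_class, rng=None):
            # Draw the same number of cells from each class (0: non-occurrence,
            # 1: occurrence) at random, without replacement.
            rng = np.random.default_rng(rng)
            idx = np.concatenate([
                rng.choice(np.flatnonzero(y == c), size=n_per_class, replace=False)
                for c in (0, 1)
            ])
            rng.shuffle(idx)
            return X[idx], y[idx]

        rng = np.random.default_rng(0)
        X = rng.normal(size=(10000, 5))               # landslide influencing factors
        y = (rng.random(10000) < 0.03).astype(int)    # rare occurrences
        Xb, yb = proportional_random_sample(X, y, n_per_class=200, rng=1)
        print(yb.mean())                              # 0.5 by construction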

  2. A comparison of methods for representing sparsely sampled random quantities.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to aim at accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative, so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that it minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical tolerance intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
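
    For intuition on why bounding a percentile range from sparse samples is hard, a distribution-free check can be done with order statistics: the coverage of the sample range [min, max] follows a Beta(n-1, 2) law for any continuous distribution, so the confidence of covering 95% of the PDF is computable directly. This illustrates the problem; it is not one of the five methods characterized in the report.

        from scipy import stats

        def confidence_of_coverage(n, p=0.95):
            # Confidence that [min, max] of an i.i.d. sample of size n covers
            # at least a proportion p of any continuous distribution.
            return 1.0 - stats.beta.cdf(p, n - 1, 2)

        for n in (5, 10, 20, 50):
            print(n, round(confidence_of_coverage(n), 3))   # small n: low confidence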

  3. Trivariate Probit with Double Sample Selection: Theory and Application

    OpenAIRE

    Víctor Gerardo Carreón Rodríguez; Jorge L. García García-Menéndez

    2011-01-01

    We develop the trivariate probit model in which the sample is incidentally truncated twice, i.e. in the first and in the second equations, a case not solved in the literature. The model is analogous to the so-called Bivariate Probit with Sample Selection (also referred to as Bivariate Probit with Partial Observability, Censored Probit or Heckman Probit), but in this case there are three equations and two truncations. We also present an application that shows the estimation biases when the i...

  4. Personal name in Igbo Culture: A dataset on randomly selected personal names and their statistical analysis.

    Science.gov (United States)

    Okagbue, Hilary I; Opanuga, Abiodun A; Adamu, Muminu O; Ugwoke, Paulinus O; Obasi, Emmanuela C M; Eze, Grace A

    2017-12-01

    This data article contains a statistical analysis of Igbo personal names and a sample of randomly selected names. It is presented as follows: (1) a simple random sample of Igbo personal names and the gender associated with each name; (2) the distribution of the vowels, consonants and letters of the alphabet in the personal names; (3) the distribution of name length; (4) the distribution of the initial and terminal letters of Igbo personal names. The significance of the data is discussed.

  5. Does self-selection affect samples' representativeness in online surveys? An investigation in online video game research.

    Science.gov (United States)

    Khazaal, Yasser; van Singer, Mathias; Chatton, Anne; Achab, Sophia; Zullino, Daniele; Rothen, Stephane; Khan, Riaz; Billieux, Joel; Thorens, Gabriel

    2014-07-07

    The number of medical studies performed through online surveys has increased dramatically in recent years. Despite their numerous advantages (eg, sample size, facilitated access to individuals presenting stigmatizing issues), selection bias may exist in online surveys. However, evidence on the representativeness of self-selected samples in online studies is patchy. Our objective was to explore the representativeness of a self-selected sample of online gamers using online players' virtual characters (avatars). All avatars belonged to individuals playing World of Warcraft (WoW), currently the most widely used online game. Avatars' characteristics were defined using various games' scores, reported on the WoW's official website, and two self-selected samples from previous studies were compared with a randomly selected sample of avatars. We used scores linked to 1240 avatars (762 from the self-selected samples and 478 from the random sample). The two self-selected samples of avatars had higher scores on most of the assessed variables (except for guild membership and exploration). Furthermore, some guilds were overrepresented in the self-selected samples. Our results suggest that more proficient players or players more involved in the game may be more likely to participate in online surveys. Caution is needed in the interpretation of studies based on online surveys that used a self-selection recruitment procedure. Epidemiological evidence on the reduced representativeness of sample of online surveys is warranted.

  6. Delay line length selection in generating fast random numbers with a chaotic laser.

    Science.gov (United States)

    Zhang, Jianzhong; Wang, Yuncai; Xue, Lugang; Hou, Jiayin; Zhang, Beibei; Wang, Anbang; Zhang, Mingjiang

    2012-04-10

    Chaotic light signals generated by an external-cavity semiconductor laser have been experimentally demonstrated as a source for extracting fast random numbers. However, the photon round-trip time in the external cavity can cause periodicity in the random sequences. To overcome this, an exclusive-or operation on corresponding random bits in samples of the chaotic signal and its time-delayed counterpart is required. In this scheme, the proper selection of the delay length is a key issue. From a large number of experiments, and by theoretically analyzing the interplay between the runs test and the threshold value of the autocorrelation function, we find that when the delay time is chosen where the autocorrelation trace has a correlation coefficient of less than 0.007, streams of random numbers with verified randomness can be generated.
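
    The delay-selection rule plus XOR derandomization might be sketched as follows, with an AR(1) surrogate standing in for the digitized chaotic trace (the 0.007 threshold is the paper's; everything else is illustrative):

        import numpy as np

        def choose_delay(x, threshold=0.007, max_lag=2000):
            # Smallest lag at which the autocorrelation magnitude of the
            # trace drops below the threshold.
            x = (x - x.mean()) / x.std()
            n = len(x)
            for lag in range(1, max_lag):
                r = np.dot(x[:-lag], x[lag:]) / (n - lag)
                if abs(r) < threshold:
                    return lag
            raise ValueError("no suitable delay found")

        rng = np.random.default_rng(0)
        # Surrogate with slowly decaying correlation; a real setup would use
        # the sampled laser intensity.
        x = np.zeros(100_000)
        for i in range(1, len(x)):
            x[i] = 0.99 * x[i - 1] + rng.standard_normal()

        d = choose_delay(x)
        bits = (x > np.median(x)).astype(np.uint8)   # 1-bit quantization
        random_bits = bits[:-d] ^ bits[d:]           # XOR with the delayed stream
        print(d, random_bits[:16])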

  7. 40 CFR 205.57-2 - Test vehicle sample selection.

    Science.gov (United States)

    2010-07-01

    40 CFR 205.57-2, Protection of Environment, Vol. 24 (revised 2010-07-01): ENVIRONMENTAL PROTECTION AGENCY (CONTINUED), NOISE ABATEMENT PROGRAMS, TRANSPORTATION EQUIPMENT NOISE EMISSION CONTROLS, Medium and Heavy Trucks, § 205.57-2 Test...

  8. Relativistic effects on galaxy redshift samples due to target selection

    Science.gov (United States)

    Alam, Shadab; Croft, Rupert A. C.; Ho, Shirley; Zhu, Hongyu; Giusarma, Elena

    2017-10-01

    In a galaxy redshift survey, the objects to be targeted for spectra are selected from a photometrically observed sample. The observed magnitudes and colours of galaxies in this parent sample will be affected by their peculiar velocities, through relativistic Doppler and relativistic beaming effects. In this paper, we compute the resulting expected changes in galaxy photometry. The magnitudes of the relativistic effects are a function of redshift, stellar mass, galaxy velocity and velocity direction. We focus on the CMASS sample from the Sloan Digital Sky Survey (SDSS) and Baryon Oscillation Spectroscopic Survey (BOSS), which is selected on the basis of colour and magnitude. We find that 0.10 per cent of the sample (∼585 galaxies) has been scattered into the targeted region of colour-magnitude space by relativistic effects, and conversely 0.09 per cent of the sample (∼532 galaxies) has been scattered out. Observational consequences of these effects include an asymmetry in clustering statistics, which we explore in a companion paper. Here, we compute a set of weights that can be used to remove the effect of modulations introduced into the density field inferred from a galaxy sample. We conclude by investigating the possible effects of these relativistic modulation on large-scale clustering of the galaxy sample.

  9. The effect of curriculum sample selection for medical school.

    Science.gov (United States)

    de Visser, Marieke; Fluit, Cornelia; Fransen, Jaap; Latijnhouwers, Mieke; Cohen-Schotanus, Janke; Laan, Roland

    2017-03-01

    In the Netherlands, students are admitted to medical school through (1) selection, (2) direct access by high pre-university Grade Point Average (pu-GPA), (3) lottery after being rejected in the selection procedure, or (4) lottery. At Radboud University Medical Center, 2010 was the first year we selected applicants. We designed a procedure based on tasks mimicking the reality of early medical school. Applicants took an online course followed by an on-site exam, resembling courses and exams in early medical school. Based on the exam scores, applicants were selected or rejected. The aim of our study is to determine whether curriculum sample selection explains performance in medical school and is preferable compared to selection based on performance in secondary school. We gathered data on the performance of students of three consecutive cohorts (2010-2012, N = 954). We compared medical school performance (course credits and grade points) of selected students to the three groups admitted in other ways, especially lottery admissions. In regression analyses, we controlled for out of context cognitive performance by adjusting for pu-GPA. Selection-admitted students outperformed lottery-admitted students on most outcome measures, unadjusted as well as adjusted for pu-GPA (p ≤ 0.05). They had higher grade points than non-selected lottery students, both unadjusted and adjusted for pu-GPA (p ≤ 0.025). Adjusted for pu-GPA, selection-admitted students and high-pu-GPA students performed equally. We recommend this selection procedure as it adds to secondary school cognitive performance for the general population of students, is efficient for large numbers of applicants and not labour-intensive.

  10. Improvements in Sample Selection Methods for Image Classification

    Directory of Open Access Journals (Sweden)

    Thales Sehn Körting

    2014-08-01

    Full Text Available Traditional image classification algorithms are divided mainly into unsupervised and supervised paradigms. In the first paradigm, algorithms are designed to estimate the class distributions in feature space automatically. The second paradigm depends on the knowledge of a domain expert to identify representative examples from the image to be used for estimating the classification model. Recent improvements in human-computer interaction (HCI) enable the construction of more intuitive graphical user interfaces (GUIs) to help users obtain the desired results. In remote sensing image classification, GUIs still need advancement. In this work, we describe our efforts to develop an improved GUI for selecting the representative samples needed to estimate the classification model. The idea is to modify the common strategies for sample selection to create a user-driven sample selection, which focuses on different views of each sample, and to help domain experts identify explicit classification rules, a well-established technique in geographic object-based image analysis (GEOBIA). We also propose the use of the well-known nearest-neighbour algorithm to identify similar samples and accelerate the classification.
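
    The nearest-neighbour acceleration mentioned above amounts to suggesting the unlabeled samples closest to an expert-picked one in feature space. A sketch with scikit-learn, using synthetic feature values:

        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        rng = np.random.default_rng(0)
        features = rng.normal(size=(5000, 8))   # e.g. spectral/texture features
        labeled_idx = 42                        # the expert-picked sample

        # Offer the user the most similar unlabeled samples as candidates.
        nn = NearestNeighbors(n_neighbors=10).fit(features)
        dist, idx = nn.kneighbors(features[labeled_idx:labeled_idx + 1])
        print(idx[0][1:])                       # candidates (self excluded)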

  11. Risk Attitudes, Sample Selection and Attrition in a Longitudinal Field Experiment

    DEFF Research Database (Denmark)

    Harrison, Glenn W.; Lau, Morten; Yoo, Hong Il

    Longitudinal experiments allow one to evaluate the temporal stability of latent preferences, but raise concerns about sample selection and attrition that may confound inferences about temporal stability. We evaluate the hypothesis of temporal stability in risk preferences using a remarkable data set that combines socio-demographic information from the Danish Civil Registry with information on risk attitudes from a longitudinal field experiment. Our experimental design builds in explicit randomization on the incentives for participation. The results show that the use of different participation incentives can affect sample response rates and help one identify the effects of selection. Correcting for endogenous sample selection and panel attrition changes inferences about risk preferences in an economically and statistically significant manner. We draw mixed conclusions on the temporal stability of risk...

  12. Patch-based visual tracking with online representative sample selection

    Science.gov (United States)

    Ou, Weihua; Yuan, Di; Li, Donghao; Liu, Bin; Xia, Daoxun; Zeng, Wu

    2017-05-01

    Occlusion is one of the most challenging problems in visual object tracking. Recently, many discriminative methods have been proposed to deal with this problem. For discriminative methods, it is difficult to select representative samples for updating the target template. In general, the holistic bounding boxes that contain the tracked results are selected as positive samples. However, when the object is occluded, this simple strategy easily introduces noise into the training data set and the target template, leading the tracker to drift away from the target. To address this problem, we propose a robust patch-based visual tracker with online representative sample selection. Unlike previous works, we divide the object and the candidates into several patches uniformly and propose a score function to calculate the score of each patch independently. Then, the average score is adopted to determine the optimal candidate. Finally, we utilize the non-negative least squares method to find the representative samples, which are used to update the target template. Experimental results on the Object Tracking Benchmark 2013 and on 13 challenging sequences show that the proposed method is robust to occlusion and achieves promising results.
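
    The representative-sample step can be sketched with SciPy's non-negative least squares: approximate the current target appearance as a non-negative combination of candidate samples and keep those with non-negligible weight. Dimensions and the weight threshold below are illustrative, not from the paper.

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)
        A = rng.random((256, 20))        # 20 candidate samples, 256-dim features
        t = A[:, [3, 7]] @ np.array([0.6, 0.4])   # target spanned by two of them

        # Non-negative weights over the candidates; near-zero ones are dropped.
        weights, residual = nnls(A, t)
        representative = np.flatnonzero(weights > 1e-3)
        print(representative, residual)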

  13. Interference-aware random beam selection for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.

    2012-09-01

    Spectrum sharing systems have been introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this paper, we develop interference-aware random beam selection schemes that provide enhanced throughput for the secondary link under the condition that the interference observed at the primary link is within a predetermined acceptable value. For a secondary transmitter equipped with multiple antennas, our schemes select a random beam, among a set of power-optimized orthogonal random beams, that maximizes the capacity of the secondary link while satisfying the interference constraint at the primary receiver, for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis of the signal-to-noise and interference ratio (SINR) statistics as well as the capacity of the secondary link. Finally, we present numerical results that study the effect of system parameters, including the number of beams and the maximum transmission power, on the capacity of the secondary link attained using the proposed schemes. © 2012 IEEE.
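
    A simplified sketch of the selection rule, assuming full knowledge of the interference each beam causes at the primary receiver (the paper also treats coarser feedback levels); channel values and the interference cap are synthetic:

        import numpy as np

        def select_beam(h_sec, h_prim, beams, power, noise=1.0, i_max=2.0):
            # Keep only beams whose interference at the primary receiver stays
            # below i_max; return the feasible beam maximizing the secondary
            # rate. (None, -inf) means no beam is feasible: stay silent.
            best, best_rate = None, -np.inf
            for k, w in enumerate(beams):
                interference = power * abs(h_prim @ w) ** 2
                if interference > i_max:
                    continue
                rate = np.log2(1 + power * abs(h_sec @ w) ** 2 / noise)
                if rate > best_rate:
                    best, best_rate = k, rate
            return best, best_rate

        rng = np.random.default_rng(0)
        nt = 4
        # Orthonormal random beams via QR of a complex Gaussian matrix.
        q, _ = np.linalg.qr(rng.normal(size=(nt, nt)) + 1j * rng.normal(size=(nt, nt)))
        beams = q.T
        h_sec = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
        h_prim = (rng.normal(size=nt) + 1j * rng.normal(size=nt)) / np.sqrt(2)
        print(select_beam(h_sec, h_prim, beams, power=1.0))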

  14. Biochemical and nutritional components of selected honey samples.

    Science.gov (United States)

    Chua, Lee Suan; Adnan, Nur Ardawati

    2014-01-01

    The purpose of this study was to investigate the relationship between the biochemical (enzyme) and nutritional components of selected honey samples from Malaysia. The relationship is important for estimating the quality of honey based on the concentrations of these nutritious components. Such studies are scarce for honey samples from tropical countries with heavy rainfall throughout the year. Six honey samples commonly consumed by local people were collected for the study. Both the biochemical and nutritional components were analysed using standard methods from the Association of Official Analytical Chemists (AOAC). Individual monosaccharides, disaccharides and 17 amino acids in honey were determined by liquid chromatography. The results showed that peroxide activity was positively correlated with moisture content (r = 0.8264), but negatively correlated with carbohydrate content (r = 0.7755). The chromatographic sugar and free amino acid profiles showed that the honey samples could be clustered based on the type and maturity of the honey. Proline explained 64.9% of the total variance in principal component analysis (PCA). The correlation between honey components and honey quality was established for the selected honey samples based on their biochemical and nutritional concentrations. PCA results revealed that the ratio of sucrose to maltose could be used to measure honey maturity, whereas proline was the marker compound used to distinguish honey as either floral or honeydew.

  15. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    ...many issues that deserve attention. This thesis investigates how sample selection can affect estimation of discrete choice models and how taste correlation should be incorporated into applied mixed logit estimation. Sampling in transport modelling is often based on an observed trip. This may cause... explain counterintuitive results in value-of-travel-time estimation. However, the results also point to the difficulty of finding suitable instruments for the selection mechanism. Taste heterogeneity is another important aspect of discrete choice modelling. Mixed logit models are designed to capture observed as well as unobserved heterogeneity in tastes. But just as there are many reasons to expect unobserved heterogeneity, there is no reason to expect these tastes for different things to be independent. This is rarely accounted for in transportation research. Here three separate investigations...

  16. Selecting samples for Mars sample return: Triage by pyrolysis-FTIR

    Science.gov (United States)

    Sephton, Mark A.; Court, Richard W.; Lewis, James M.; Wright, Miriam C.; Gordon, Peter R.

    2013-04-01

    A future Mars Sample Return mission will deliver samples of the red planet to Earth laboratories for detailed analysis. A successful mission will require selection of the best samples that can be used to address the highest priority science objectives including assessment of past habitability and evidence of life. Pyrolysis is a commonly used method for extracting organic information from rocks but is most often coupled with complex analytical steps such as gas chromatography and mass spectrometry. Pyrolysis-Fourier transform infrared spectroscopy is a less resource demanding method that still allows sample characterisation. Here we demonstrate how pyrolysis-Fourier transform infrared spectroscopy could be used to triage samples destined to return to Earth, thereby maximising the scientific return from future sample return missions.

  17. Multiwavelength studies of X-ray selected extragalactic sample

    OpenAIRE

    Mickaelian, A. M.; Paronyan, G. M.; Harutyunyan, G. S.; Abrahamyan, H. V.; Gyulzadyan, M. V.

    2015-01-01

    The joint catalogue of Active Galactic Nuclei selected from optical identifications of X-ray sources was created as a combination of two samples: Hamburg-ROSAT Catalogue (HRC) and Byurakan-Hamburg-ROSAT Catalogue (BHRC). Both are based on optical identifications of X-ray sources from ROSAT catalogues using low-dispersion spectra of Hamburg Quasar Survey (HQS). However, HRC and BHRC contain a number of misidentifications and using the recent optical and multiwavelength (MW) catalogues we have ...

  18. An efficient sampling strategy for selection of biobank samples using risk scores.

    Science.gov (United States)

    Björk, Jonas; Malmqvist, Ebba; Rylander, Lars; Rignell-Hydbom, Anna

    2017-07-01

    The aim of this study was to suggest a new sample-selection strategy based on risk scores in case-control studies with biobank data. An ongoing Swedish case-control study on fetal exposure to endocrine disruptors and overweight in early childhood was used as the empirical example. Cases were defined as children with a body mass index (BMI) ≥ 18 kg/m² (n = 545) at four years of age, and controls as children with a BMI ≤ 17 kg/m² (n = 4472 available). The risk of being overweight was modelled using logistic regression based on covariates available from the health examination, prior to selecting samples from the biobank. A risk score was estimated for each child and categorised as low (0-5%), medium (6-13%) or high (≥ 14%) risk of being overweight. The final risk-score model, with smoking during pregnancy (p = 0.001) and birth weight among the predictors, had an area under the receiver operating characteristic curve of 67% (n = 3945 with complete data). The case group (n = 416) had the following risk-score profile: low (12%), medium (46%) and high risk (43%). Twice as many controls were selected from each risk group, with further matching on sex. Computer simulations showed that the proposed selection strategy with stratification on risk scores yielded consistent improvements in statistical precision. Using risk scores based on available survey or register data as a basis for sample selection may improve the possibilities to study heterogeneity of exposure effects in biobank-based studies.
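
    The strategy can be sketched end-to-end on synthetic data: fit a logistic risk model on pre-biobank covariates, bin subjects into the paper's risk-score categories, and draw twice as many controls as cases per stratum. Covariate names and data are illustrative.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n = 4000
        X = rng.normal(size=(n, 3))     # e.g. smoking, birth weight, ...
        y = (rng.random(n) < 1 / (1 + np.exp(-X[:, 0]))).astype(int)

        # Risk score per child, then the low/medium/high strata (0-5%, 6-13%, >=14%).
        risk = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]
        stratum = np.digitize(risk, [0.06, 0.14])

        # Twice as many controls as cases from each risk stratum.
        controls = []
        for s in range(3):
            cases_s = np.flatnonzero((y == 1) & (stratum == s))
            pool_s = np.flatnonzero((y == 0) & (stratum == s))
            take = min(2 * len(cases_s), len(pool_s))
            controls.append(rng.choice(pool_s, size=take, replace=False))
        controls = np.concatenate(controls)
        print(len(controls), "controls selected for", int(y.sum()), "cases")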

  19. Selection of patient samples and genes for outcome prediction.

    Science.gov (United States)

    Liu, Huiqing; Li, Jinyan; Wong, Limsoon

    2004-01-01

    Gene expression profiles with clinical outcome data enable monitoring of disease progression and prediction of patient survival at the molecular level. We present a new computational method for outcome prediction. Our idea is to use an informative subset of the original training samples. This subset consists only of short-term survivors, who died within a short period, and long-term survivors, who were still alive after a long follow-up time. These extreme training samples yield a clear platform for identifying genes whose expression is related to survival. To find relevant genes, we combine two feature selection methods, an entropy measure and the Wilcoxon rank sum test, so that a set of sharply discriminating features is identified. The selected training samples and genes are then integrated by a support vector machine to build a prediction model, by which each validation sample is assigned a survival/relapse risk score for drawing Kaplan-Meier survival curves. We apply this method to two data sets: diffuse large-B-cell lymphoma (DLBCL) and primary lung adenocarcinoma. In both cases, patients in the high- and low-risk groups stratified by our risk scores are clearly distinguishable. We also compare our risk scores to clinical factors, such as the International Prognostic Index score for the DLBCL analysis and tumor stage information for lung adenocarcinoma. Our results indicate that gene expression profiles combined with carefully chosen learning algorithms can predict patient survival for certain diseases.
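
    A sketch of the two-filter gene selection followed by an SVM risk score, on synthetic expression data; mutual information stands in here for the paper's specific entropy measure, so treat this as an assumption-laden illustration rather than the authors' pipeline.

        import numpy as np
        from scipy.stats import ranksums
        from sklearn.feature_selection import mutual_info_classif
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))            # 60 extreme samples, 500 genes
        y = np.repeat([0, 1], 30)                 # 0: short-term, 1: long-term
        X[y == 1, :10] += 1.0                     # 10 informative genes

        # Keep genes that rank highly on both filters.
        pvals = np.array([ranksums(X[y == 0, j], X[y == 1, j]).pvalue
                          for j in range(X.shape[1])])
        mi = mutual_info_classif(X, y, random_state=0)
        top = np.flatnonzero((pvals < 0.01) & (mi > np.quantile(mi, 0.95)))

        # SVM risk scores for the selected genes.
        risk_model = SVC(probability=True).fit(X[:, top], y)
        risk_scores = risk_model.predict_proba(X[:, top])[:, 1]
        print(top, risk_scores[:5].round(2))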

  20. The frequency of drugs in randomly selected drivers in Denmark

    DEFF Research Database (Denmark)

    Simonsen, Kirsten Wiese; Steentoft, Anni; Hels, Tove

    Introduction Driving under the influence of alcohol and drugs is a global problem. In Denmark, as in other countries, there is an increasing focus on impaired driving. Little is known about the occurrence of psychoactive drugs in general traffic. Therefore the European Commission...... initiated the DRUID project. This roadside study is the Danish part of the EU project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Methods Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme...... stratified by time, season, and road type. The oral fluid samples were screened for 29 illegal and legal psychoactive substances and metabolites as well as ethanol. Results Fourteen (0.5%) drivers were positive for ethanol (alone or in combination with drugs) at concentrations above 0.53 g/l, which...

  1. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    Science.gov (United States)

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives, if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  2. An Optically Selected Sample of BL Lac Objects

    Science.gov (United States)

    Londish, D. M.; Croom, S. M.; Boyle, B. J.; Sadler, E. M.

    2004-08-01

    BL Lac objects are thought to be the beamed counterparts of FRI/FRII radio galaxies (cf. the review by Urry & Padovani 1995, PASP, 107, 803); at optical wavelengths these objects are dominated by Doppler-boosted synchrotron radiation, the resultant featureless continuum making BL Lacs all but impossible to target in optical surveys. To date, therefore, all BL Lac samples have been initially identified in radio and/or X-ray surveys, and thus these objects are naturally found to be emitters at those frequencies. The first optically selected sample of BL Lac objects was identified from scrutiny of spectra in two redshift surveys (2QZ and 6QZ) using the 2dF and 6dF instruments at Siding Spring, NSW, Australia (Croom et al. 2001, MNRAS, 325, 483; 2004, MNRAS, 349, 1397). Of 52 featureless continuum objects identified, only 14 have radio flux densities > 0.15 mJy. Five of these 14 also have detectable X-ray emission. With optical bJ magnitudes in the range 16.4 ... , the sample can be compared with the X-ray-selected sample of Rector et al. (2000, AJ, 120, 1626) and the radio-selected 1 Jy BL Lac sample (z = 0.6; Stickel et al. 1991, ApJ, 374, 431; Rector & Stocke 2001, AJ, 122, 565). This raises the question as to why such X-ray-quiet, radio-quiet BL Lacs (lineless QSOs?) have not been found at lower redshifts, and what mechanisms are responsible for the lack of detectable X-ray and radio emission in these optically selected BL Lacs. DL acknowledges support from the Science Foundation, University of Sydney.

  3. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil

    2016-12-14

    This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$ dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution for the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in reaching the performance of channel-aware algorithms.
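
    A sketch of the greedy, channel-aware flavor of this idea under simple assumptions: the rows of a known matrix H are candidate measurements, and the error measure is the MSE proxy trace((H_S^T H_S)^{-1}); the paper's closed-form asymptotic (blind) approximations are not reproduced here:

        import numpy as np

        def greedy_select(H, k, ridge=1e-6):
            """Greedily pick k of the n rows of H to minimize trace((H_S^T H_S)^-1)."""
            n, m = H.shape
            chosen = []
            for _ in range(k):
                best, best_cost = None, np.inf
                for i in range(n):
                    if i in chosen:
                        continue
                    S = H[chosen + [i]]
                    G = S.T @ S + ridge * np.eye(m)   # regularized until S reaches rank m
                    cost = np.trace(np.linalg.inv(G))
                    if cost < best_cost:
                        best, best_cost = i, cost
                chosen.append(best)
            return chosen

        H = np.random.default_rng(1).normal(size=(40, 4))
        print(greedy_select(H, k=6))                  # indices of selected measurements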

  4. Sample size calculations for 3-level cluster randomized trials

    NARCIS (Netherlands)

    Teerenstra, S.; Moerbeek, M.; Achterberg, T. van; Pelzer, B.J.; Borm, G.F.

    2008-01-01

    BACKGROUND: The first applications of cluster randomized trials with three instead of two levels are beginning to appear in health research, for instance, in trials where different strategies to implement best-practice guidelines are compared. In such trials, the strategy is implemented in health

  6. Improved estimator of finite population mean using auxiliary attribute in stratified random sampling

    OpenAIRE

    Verma, Hemant K.; Sharma, Prayas; Singh, Rajesh

    2014-01-01

    The present study discusses the problem of estimating the finite population mean using an auxiliary attribute in stratified random sampling. In this paper, taking advantage of the point bi-serial correlation between the study variable and the auxiliary attribute, we have improved the estimation of the population mean in stratified random sampling. The expressions for bias and mean square error have been derived under stratified random sampling. In addition, an empirical study has been carried out to examin...

  7. Pediatric selective mutism therapy: a randomized controlled trial.

    Science.gov (United States)

    Esposito, Maria; Gimigliano, Francesca; Barillari, Maria R; Precenzano, Francesco; Ruberto, Maria; Sepe, Joseph; Barillari, Umberto; Gimigliano, Raffaele; Militerni, Roberto; Messina, Giovanni; Carotenuto, Marco

    2017-10-01

    Selective mutism (SM) is a rare disease in children, coded in DSM-5 as an anxiety disorder. Despite the disabling nature of the disease, there is still no specific treatment. The aims of this study were to verify the efficacy of a six-month standard psychomotor treatment and the positive changes in lifestyle in a population of children affected by SM. Randomized controlled trial registered in the European Clinical Trials Registry (EudraCT 2015-001161-36). University third-level Centre (Child and Adolescent Neuropsychiatry Clinic). The study population was composed of 67 children in group A (psychomotricity treatment) (35 M, mean age 7.84±1.15) and 71 children in group B (behavioral and educational counseling) (37 M, mean age 7.75±1.36). Psychomotor treatment was administered by trained child therapists in residential settings three times per week. Each child was treated for the whole period by the same therapist, and all the therapists shared the same protocol. The standard psychomotor session length is 45 minutes. At T0 and after 6 months of treatment (T1), patients underwent a behavioral and SM severity assessment. To verify the effects of the psychomotor management, the Child Behavior Checklist questionnaire (CBCL) and the Selective Mutism Questionnaire (SMQ) were administered to the parents. After 6 months of psychomotor treatment, SM children showed a significant reduction in CBCL scores such as social relations, anxious/depressed, social problems and total problems, indicating that psychomotricity is a safe and effective therapy for pediatric selective mutism.

  8. A Family of Estimators of a Sensitive Variable Using Auxiliary Information in Stratified Random Sampling

    Directory of Open Access Journals (Sweden)

    Nadia Mushtaq

    2017-03-01

    Full Text Available In this article, a combined general family of estimators is proposed for estimating the finite population mean of a sensitive variable in stratified random sampling with a non-sensitive auxiliary variable, based on a randomized response technique. Under a stratified random sampling without replacement scheme, the expressions for bias and mean square error (MSE), up to first-order approximation, are derived. Theoretical and empirical results through a simulation study show that the proposed class of estimators is more efficient than the existing estimators, i.e., the usual stratified random sample mean estimator and the Sousa et al. (2014) ratio and regression estimators of the sensitive variable in stratified sampling.

  9. Optimizing Event Selection with the Random Grid Search

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C. [Fermilab; Prosper, Harrison B. [Florida State U.; Sekmen, Sezen [Kyungpook Natl. U.; Stewart, Chip [Broad Inst., Cambridge

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
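
    The core of RGS is easy to sketch: instead of scanning a fixed grid, candidate cut points are taken from the signal events themselves, and the cut with the best figure of merit is kept. A minimal sketch with one-sided rectangular cuts and s/sqrt(s+b) as an assumed figure of merit (the enhanced cut types mentioned above are not implemented):

        import numpy as np

        def random_grid_search(signal, background, n_trials=1000, rng=None):
            """Each randomly drawn signal event defines a candidate cut 'x > cut'
            in every variable; return the cut maximizing s / sqrt(s + b)."""
            rng = rng or np.random.default_rng(0)
            best_cut, best_fom = None, -np.inf
            for cuts in signal[rng.integers(len(signal), size=n_trials)]:
                s = np.sum(np.all(signal > cuts, axis=1))
                b = np.sum(np.all(background > cuts, axis=1))
                fom = s / np.sqrt(s + b) if s + b > 0 else -np.inf
                if fom > best_fom:
                    best_cut, best_fom = cuts, fom
            return best_cut, best_fom

        rng = np.random.default_rng(2)
        sig = rng.normal(1.0, 1.0, size=(5000, 2))    # toy signal sample
        bkg = rng.normal(0.0, 1.0, size=(50000, 2))   # toy background sample
        print(random_grid_search(sig, bkg, rng=rng))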

  10. Sample selection and preservation techniques for the Mars sample return mission

    Science.gov (United States)

    Tsay, Fun-Dow

    1988-01-01

    It is proposed that a miniaturized electron spin resonance (ESR) spectrometer be developed as an effective, nondestructive sample selection and characterization instrument for the Mars Rover Sample Return mission. The ESR instrument can meet rover science payload requirements and yet has the capability and versatility to perform the following in situ Martian sample analyses: (1) detection of active oxygen species, and characterization of Martian surface chemistry and photocatalytic oxidation processes; (2) determination of paramagnetic Fe(3+) in clay silicate minerals, Mn(2+) in carbonates, and ferromagnetic centers of magnetite, maghemite and hematite; (3) search for organic compounds in the form of free radicals in subsoil, and detection of Martian fossil organic matter likely to be associated with carbonate and other sedimentary deposits. The proposed instrument is further detailed.

  11. [An optimal selection method of samples of calibration set and validation set for spectral multivariate analysis].

    Science.gov (United States)

    Liu, Wei; Zhao, Zhong; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu

    2014-04-01

    The side effects in spectral multivariate modeling caused by the uneven distribution of sample numbers across the regions of the calibration set and validation set were analyzed, and the "average" phenomenon -- that samples with small property values are predicted with larger values, and those with large property values are predicted with smaller values in spectral multivariate calibration -- is shown in this paper. Considering the distribution features of the spectral space and the property space simultaneously, a new method of optimal sample selection named Rank-KS is proposed. Rank-KS aims at improving the uniformity of the calibration set and validation set. Y-space was divided into several regions uniformly; samples of the calibration set and validation set were extracted by the Kennard-Stone (KS) and Random-Select (RS) algorithms, respectively, in every region, so that the calibration set was distributed evenly and was strongly representative. The proposed method was applied to the prediction of dimethylcarbonate (DMC) content in gasoline with infrared spectra and of dimethylsulfoxide in its aqueous solution with near-infrared spectra. The "average" phenomenon shown in the prediction of the multiple linear regression (MLR) model of dimethylsulfoxide was weakened effectively by Rank-KS. For comparison, the MLR models and PLS1 models of DMC and dimethylsulfoxide were constructed by using the RS, KS, Rank-Select, sample set partitioning based on joint X- and Y-blocks (SPXY) and proposed Rank-KS algorithms to select the calibration set, respectively. Application results verified that the best prediction was achieved by using Rank-KS. Especially for sample sets distributed with more samples in the middle and fewer on the boundaries, or none in some local regions, predictions of models constructed with a Rank-KS-selected calibration set can be improved obviously.
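
    A simplified sketch of the Rank-KS idea as described above: bin the property (y) values uniformly, then within each bin select the calibration subset with a basic max-min Kennard-Stone and leave the remainder for validation (the abstract draws the validation set by random selection; the split below is simplified):

        import numpy as np

        def kennard_stone(X, n_pick):
            """Basic max-min Kennard-Stone: seed with the most distant pair,
            then repeatedly add the sample farthest from those already picked."""
            if n_pick >= len(X):
                return list(range(len(X)))
            d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
            picked = list(np.unravel_index(d.argmax(), d.shape))
            while len(picked) < n_pick:
                rest = [i for i in range(len(X)) if i not in picked]
                far = int(np.argmax(d[np.ix_(rest, picked)].min(axis=1)))
                picked.append(rest[far])
            return picked[:n_pick]

        def rank_ks(X, y, n_bins=5, cal_frac=0.7):
            """Bin y uniformly; KS picks the calibration samples inside each bin,
            the remaining samples in the bin go to the validation set."""
            inner = np.linspace(y.min(), y.max(), n_bins + 1)[1:-1]
            bins = np.digitize(y, inner)
            cal, val = [], []
            for b in range(n_bins):
                members = np.flatnonzero(bins == b)
                if members.size == 0:
                    continue
                n_cal = max(1, int(round(cal_frac * members.size)))
                chosen = members[kennard_stone(X[members], n_cal)]
                cal.extend(chosen)
                val.extend(np.setdiff1d(members, chosen))
            return np.array(cal), np.array(val)

        rng = np.random.default_rng(7)
        X, y = rng.normal(size=(100, 6)), rng.normal(size=100)
        cal_idx, val_idx = rank_ks(X, y)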

  12. Sampling dynamics: an alternative to payoff-monotone selection dynamics

    DEFF Research Database (Denmark)

    Berkemer, Rainer

    payoff-monotone nor payoff-positive, which has interesting consequences. This can be demonstrated by application to the traveler's dilemma, a deliberately constructed social dilemma. The game has just one symmetric Nash equilibrium, which is Pareto inefficient. Especially when the travelers have many...... derive a threshold k(m) such that above k(m) the Jacobian of the dynamical system, evaluated at the Nash equilibrium, can only have eigenvalues with negative real parts. One might well argue that for biological systems payoff-monotonicity of selection dynamics should be better preserved. For social......'' of the standard game theory result. Both analytical tools and agent-based simulation are used to investigate the dynamic stability of sampling equilibria in a generalized traveler's dilemma. Two parameters are of interest: the number of strategy options (m) available to each traveler and an experience parameter...

  13. RANdom SAmple Consensus (RANSAC) algorithm for material-informatics: application to photovoltaic solar cells.

    Science.gov (United States)

    Kaspi, Omer; Yosipof, Abraham; Senderowitz, Hanoch

    2017-06-06

    An important aspect of chemoinformatics and material-informatics is the usage of machine learning algorithms to build Quantitative Structure Activity Relationship (QSAR) models. The RANdom SAmple Consensus (RANSAC) algorithm is a predictive modeling tool widely used in the image processing field for cleaning datasets from noise. RANSAC could be used as a "one stop shop" algorithm for developing and validating QSAR models, performing outlier removal, descriptor selection, model development and predictions for test set samples using an applicability domain. For "future" predictions (i.e., for samples not included in the original test set) RANSAC provides a statistical estimate for the probability of obtaining reliable predictions, i.e., predictions within a pre-defined number of standard deviations from the true values. In this work we describe the first application of RANSAC in material informatics, focusing on the analysis of solar cells. We demonstrate that for three datasets representing different metal oxide (MO) based solar cell libraries, RANSAC-derived models select descriptors previously shown to correlate with key photovoltaic properties and lead to good predictive statistics for these properties. These models were subsequently used to predict the properties of virtual solar cell libraries, highlighting interesting dependencies of PV properties on MO compositions.
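
    The RANSAC core is compact; a generic sketch for a linear model is shown below (the standard consensus loop only, not the paper's full outlier-removal, descriptor-selection and applicability-domain pipeline):

        import numpy as np

        def ransac_linear(X, y, n_iter=500, min_pts=5, tol=1.0, rng=None):
            """Fit y ~ X by least squares on random minimal subsets and keep the
            model with the largest inlier consensus set."""
            rng = rng or np.random.default_rng(0)
            A = np.column_stack([X, np.ones(len(X))])        # add intercept column
            best_inliers, best_coef = np.zeros(len(X), bool), None
            for _ in range(n_iter):
                subset = rng.choice(len(X), size=min_pts, replace=False)
                coef, *_ = np.linalg.lstsq(A[subset], y[subset], rcond=None)
                inliers = np.abs(A @ coef - y) < tol
                if inliers.sum() > best_inliers.sum():
                    best_inliers, best_coef = inliers, coef
            # refit on the full consensus set
            coef, *_ = np.linalg.lstsq(A[best_inliers], y[best_inliers], rcond=None)
            return coef, best_inliers

        rng = np.random.default_rng(8)
        X = rng.uniform(0, 10, size=(100, 1))
        y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.3, 100)
        y[::10] += 20                                        # inject gross outliers
        print(ransac_linear(X, y)[0])                        # ~ [2.0, 1.0]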

  14. Sample size and power for a stratified doubly randomized preference design.

    Science.gov (United States)

    Cameron, Briana; Esserman, Denise A

    2016-11-21

    The two-stage (or doubly) randomized preference trial design is an important tool for researchers seeking to disentangle the role of patient treatment preference on treatment response through estimation of selection and preference effects. Up until now, these designs have been limited by their assumption of equal preference rates and effect sizes across the entire study population. We propose a stratified two-stage randomized trial design that addresses this limitation. We begin by deriving stratified test statistics for the treatment, preference, and selection effects. Next, we develop a sample size formula for the number of patients required to detect each effect. The properties of the model and the efficiency of the design are established using a series of simulation studies. We demonstrate the applicability of the design using a study of Hepatitis C treatment modality, specialty clinic versus mobile medical clinic. In this example, a stratified preference design (stratified by alcohol/drug use) may more closely capture the true distribution of patient preferences and allow for a more efficient design than a design which ignores these differences (unstratified version). © The Author(s) 2016.

  15. Exponential ratio-product type estimators under second order approximation in stratified random sampling

    OpenAIRE

    Singh, Rajesh; Sharma, Prayas; Smarandache, Florentin

    2014-01-01

    Singh et al. (2009) introduced a family of exponential ratio and product type estimators in stratified random sampling. Under a stratified random sampling without replacement scheme, the expressions for bias and mean square error (MSE) of the Singh et al. (2009) and some other estimators, up to the first- and second-order approximations, are derived. Also, the theoretical findings are supported by a numerical example.

  16. Query-Based Sampling: Can we do Better than Random?

    NARCIS (Netherlands)

    Tigelaar, A.S.; Hiemstra, Djoerd

    2010-01-01

    Many servers on the web offer content that is only accessible via a search interface. These are part of the deep web. Using conventional crawling to index the content of these remote servers is impossible without some form of cooperation. Query-based sampling provides an alternative to crawling

  17. Selection Component Analysis of Natural Polymorphisms using Population Samples Including Mother-Offspring Combinations, II

    DEFF Research Database (Denmark)

    Jarmer, Hanne Østergaard; Christiansen, Freddy Bugge

    1981-01-01

    Population samples including mother-offspring combinations provide information on the selection components: zygotic selection, sexual selection, gametic selection and fecundity selection, on the mating pattern, and on the deviation from linkage equilibrium among the loci studied. The theory...

  18. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster

    DEFF Research Database (Denmark)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column and diminishes...... and to obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.

  19. A descriptive analysis of a representative sample of pediatric randomized controlled trials published in 2007

    Directory of Open Access Journals (Sweden)

    Thomson Denise

    2010-12-01

    Full Text Available Abstract Background Randomized controlled trials (RCTs) are the gold standard for trials assessing the effects of therapeutic interventions; therefore it is important to understand how they are conducted. Our objectives were to provide an overview of a representative sample of pediatric RCTs published in 2007 and assess the validity of their results. Methods We searched the Cochrane Central Register of Controlled Trials using a pediatric filter and randomly selected 300 RCTs published in 2007. We extracted data on trial characteristics; outcomes; methodological quality; reporting; and registration and protocol characteristics. Trial registration and protocol availability were determined for each study based on the publication, an Internet search and an author survey. Results Most studies (83%) were efficacy trials, 40% evaluated drugs, and 30% were placebo-controlled. Primary outcomes were specified in 41%; 43% reported on adverse events. At least one statistically significant outcome was reported in 77% of trials; 63% favored the treatment group. Trial registration was declared in 12% of publications and 23% were found through an Internet search. Risk of bias (ROB) was high in 59% of trials, unclear in 33%, and low in 8%. Registered trials were more likely to have low ROB than non-registered trials (16% vs. 5%; p = 0.008). Effect sizes tended to be larger for trials at high vs. low ROB (0.28, 95% CI 0.21 to 0.35, vs. 0.16, 95% CI 0.07 to 0.25). Among survey respondents (50% response rate), the most common reason for trial registration was a publication requirement and, for non-registration, a lack of familiarity with the process. Conclusions More than half of this random sample of pediatric RCTs published in 2007 was at high ROB and three quarters of trials were not registered. There is an urgent need to improve the design, conduct, and reporting of child health research.

  20. Progressive sample processing of band selection for hyperspectral imagery

    Science.gov (United States)

    Liu, Keng-Hao; Chien, Hung-Chang; Chen, Shih-Yu

    2017-10-01

    Band selection (BS) is one of the most important topics in hyperspectral image (HSI) processing. The objective of BS is to find a set of representative bands that can represent the whole image with low inter-band redundancy. Many types of BS algorithms have been proposed in the past. However, most of them can only be carried out in an off-line manner, meaning they can only be applied to pre-collected data. Such off-line methods are of little use for time-critical applications, particularly in disaster prevention and target detection. To tackle this issue, a new concept, called progressive sample processing (PSP), was proposed recently. PSP is an "on-line" framework in which a specific type of algorithm can process the currently collected data during data transmission under the band-interleaved-by-sample/pixel (BIS/BIP) protocol. This paper proposes an online BS method that integrates a sparse-based BS into the PSP framework, called PSP-BS. In PSP-BS, BS can be carried out by updating the BS result recursively pixel by pixel, in the same way that a Kalman filter updates data information in a recursive fashion. The sparse regression is solved by the orthogonal matching pursuit (OMP) algorithm, and the recursive equations of PSP-BS are derived by using matrix decomposition. Experiments conducted on a real hyperspectral image show that PSP-BS can progressively output the BS status with very low computing time. Convergence of the BS results during transmission can be achieved quickly by using a rearranged pixel transmission sequence. This significant advantage allows BS to be implemented in real time as the HSI data are transmitted pixel by pixel.

  1. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters termed hyper-parameters must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state of the art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  2. Detecting and Quantifying Changing Selection Intensities from Time-Sampled Polymorphism Data

    Directory of Open Access Journals (Sweden)

    Hyunjin Shim

    2016-04-01

    Full Text Available During his well-known debate with Fisher regarding the phenotypic dataset of Panaxia dominula, Wright suggested fluctuating selection as a potential explanation for the observed change in allele frequencies. This model has since been invoked in a number of analyses, with the focus of discussion centering mainly on random or oscillatory fluctuations of selection intensities. Here, we present a novel method to consider nonrandom changes in selection intensities using Wright-Fisher approximate Bayesian computation (ABC)-based approaches, in order to detect and evaluate a change in selection strength from time-sampled data. This novel method jointly estimates the position of a change point as well as the strength of both corresponding selection coefficients (and dominance for diploid cases) from the allele trajectory. Simulation studies of this method reveal the combinations of parameter ranges and input values that optimize performance, thus indicating optimal experimental design strategies. We apply this approach both to the historical dataset of P. dominula, in order to shed light on this historical debate, and to whole-genome time-serial data from influenza virus, in order to identify sites with changing selection intensities in response to drug treatment.
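
    A toy version of such an inference loop, assuming a haploid Wright-Fisher simulator and plain ABC rejection with a naive distance (the method above is more elaborate, e.g. it handles dominance and uses better summaries):

        import numpy as np

        rng = np.random.default_rng(3)

        def wf_trajectory(x0, N, s1, s2, t_change, n_gen):
            """Haploid Wright-Fisher: selection s1 before t_change, s2 after."""
            x, traj = x0, [x0]
            for t in range(1, n_gen + 1):
                s = s1 if t < t_change else s2
                p = x * (1 + s) / (x * (1 + s) + (1 - x))   # deterministic selection
                x = rng.binomial(N, p) / N                   # binomial drift
                traj.append(x)
            return np.array(traj)

        def abc_change_point(observed, N, n_sims=20000, eps=0.05):
            """ABC rejection: keep (s1, s2, t_change) whose simulated trajectory
            is within eps of the observed one (mean absolute difference)."""
            kept, n_gen = [], len(observed) - 1
            for _ in range(n_sims):
                s1, s2 = rng.uniform(-0.2, 0.2, size=2)
                tc = rng.integers(1, n_gen)
                sim = wf_trajectory(observed[0], N, s1, s2, tc, n_gen)
                if np.mean(np.abs(sim - observed)) < eps:
                    kept.append((s1, s2, tc))
            return np.array(kept)

        obs = wf_trajectory(0.2, N=1000, s1=0.02, s2=0.15, t_change=25, n_gen=50)
        post = abc_change_point(obs, N=1000)
        print(post.mean(axis=0) if len(post) else "no acceptances; loosen eps")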

  3. Cosmology and astrophysics from relaxed galaxy clusters - I. Sample selection

    Science.gov (United States)

    Mantz, Adam B.; Allen, Steven W.; Morris, R. Glenn; Schmidt, Robert W.; von der Linden, Anja; Urban, Ondrej

    2015-05-01

    This is the first in a series of papers studying the astrophysics and cosmology of massive, dynamically relaxed galaxy clusters. Here we present a new, automated method for identifying relaxed clusters based on their morphologies in X-ray imaging data. While broadly similar to others in the literature, the morphological quantities that we measure are specifically designed to provide a fair basis for comparison across a range of data quality and cluster redshifts, to be robust against missing data due to point source masks and gaps between detectors, and to avoid strong assumptions about the cosmological background and cluster masses. Based on three morphological indicators - symmetry, peakiness, and alignment - we develop the symmetry-peakiness-alignment (SPA) criterion for relaxation. This analysis was applied to a large sample of cluster observations from the Chandra and ROSAT archives. Of the 361 clusters which received the SPA treatment, 57 (16 per cent) were subsequently found to be relaxed according to our criterion. We compare our measurements to similar estimators in the literature, as well as projected ellipticity and other image measures, and comment on trends in the relaxed cluster fraction with redshift, temperature, and survey selection method. Code implementing our morphological analysis will be made available on the web (http://www.slac.stanford.edu/amantz/work/morph14/).

  4. A Manual for Selecting Sampling Techniques in Research

    OpenAIRE

    Alvi, Mohsin

    2016-01-01

    The Manual for Sampling Techniques used in Social Sciences is an effort to describe various types of sampling methodologies that are used in researches of social sciences in an easy and understandable way. Characteristics, benefits, crucial issues/ draw backs, and examples of each sampling type are provided separately. The manual begins by describing What is Sampling and its Purposes then it moves forward discussing the two broader types: probability sampling and non-probability sampling. Lat...

  5. Autonomous site selection and instrument positioning for sample acquisition

    Science.gov (United States)

    Shaw, A.; Barnes, D.; Pugh, S.

    The European Space Agency Aurora Exploration Program aims to establish a European long-term programme for the exploration of space, culminating in a human mission to space in the 2030 timeframe. Two flagship missions, namely Mars Sample Return and ExoMars, have been proposed as recognised steps along the way. The ExoMars Rover is the first of these flagship missions and includes a rover carrying the Pasteur Payload, a mobile exobiology instrumentation package, and the Beagle 2 arm. The primary objective is the search for evidence of past or present life on Mars, but the payload will also study the evolution of the planet and the atmosphere, look for evidence of seismological activity and survey the environment in preparation for future missions. The operation of rovers in unknown environments is complicated and requires large resources, not only on the planet but also in ground-based operations. Currently, this can be very labour-intensive and costly if large teams of scientists and engineers are required to assess mission progress, plan mission scenarios, and construct a sequence of events or goals for uplink. Furthermore, the communication constraints imposed by the time delay over such large distances, and the line of sight required, make autonomy paramount to mission success, affording the ability to operate in the event of communications outages and to be opportunistic with respect to scientific discovery. As part of this drive to reduce mission costs and increase autonomy, the Space Robotics group at the University of Wales, Aberystwyth is researching methods of autonomous site selection and instrument positioning, directly applicable to the ExoMars mission. The site selection technique used builds on the geometric reasoning algorithms used previously for localisation and navigation [Shaw 03]. It is proposed that a digital elevation model (DEM) of the local surface, generated during traverse and without interaction from ground-based operators, can be

  6. The Effect of Curriculum Sample Selection for Medical School

    Science.gov (United States)

    de Visser, Marieke; Fluit, Cornelia; Fransen, Jaap; Latijnhouwers, Mieke; Cohen-Schotanus, Janke; Laan, Roland

    2017-01-01

    In the Netherlands, students are admitted to medical school through (1) selection, (2) direct access by high pre-university Grade Point Average (pu-GPA), (3) lottery after being rejected in the selection procedure, or (4) lottery. At Radboud University Medical Center, 2010 was the first year we selected applicants. We designed a procedure based on…

  7. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Science.gov (United States)

    Pu, Xiangke; Gao, Ge; Fan, Yubo; Wang, Mian

    2016-01-01

    Randomized response is a research method to get accurate answers to sensitive questions in structured sample surveys. Simple random sampling is widely used in surveys of sensitive questions but is hard to apply to large targeted populations. On the other hand, more sophisticated sampling regimes and corresponding formulas are seldom employed in sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models by using classic sampling theories and total probability formulas. The performance of the sampling methods and formulas in a survey of premarital sex and cheating on exams at Soochow University is also provided. The reliability of the survey methods and formulas for sensitive question surveys was found to be high.
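
    For the simplest randomized response design (Warner's model), the stratum-level estimator is standard and easy to sketch; combining strata with population weights W_h then gives a stratified estimate. The formulas below are the textbook Warner estimator, not the cluster-sampling formulas developed in the paper:

        def warner_estimate(yes, n, p):
            """Warner's randomized-response estimator of a sensitive proportion.
            p is the probability the device directs a respondent to the sensitive
            question (its complement otherwise); requires p != 0.5."""
            lam = yes / n                                   # observed 'yes' rate
            pi_hat = (lam - (1 - p)) / (2 * p - 1)
            var = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
            return pi_hat, var

        def stratified_warner(strata, weights, p):
            """Combine per-stratum Warner estimates with population weights W_h."""
            est = [warner_estimate(y, n, p) for y, n in strata]
            pi = sum(w * e for w, (e, _) in zip(weights, est))
            var = sum(w ** 2 * v for w, (_, v) in zip(weights, est))
            return pi, var

        # two strata: (yes answers, sample size), with design probability p = 0.7
        print(stratified_warner([(120, 300), (90, 300)], [0.6, 0.4], p=0.7))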

  8. Parameter Estimation in Stratified Cluster Sampling under Randomized Response Models for Sensitive Question Survey.

    Directory of Open Access Journals (Sweden)

    Xiangke Pu

    Full Text Available Randomized response is a research method to get accurate answers to sensitive questions in structured sample survey. Simple random sampling is widely used in surveys of sensitive questions but hard to apply on large targeted populations. On the other side, more sophisticated sampling regimes and corresponding formulas are seldom employed to sensitive question surveys. In this work, we developed a series of formulas for parameter estimation in cluster sampling and stratified cluster sampling under two kinds of randomized response models by using classic sampling theories and total probability formulas. The performances of the sampling methods and formulas in the survey of premarital sex and cheating on exams at Soochow University were also provided. The reliability of the survey methods and formulas for sensitive question survey was found to be high.

  9. A Unified Approach to Power Calculation and Sample Size Determination for Random Regression Models

    Science.gov (United States)

    Shieh, Gwowen

    2007-01-01

    The underlying statistical models for multiple regression analysis are typically attributed to two types of modeling: fixed and random. The procedures for calculating power and sample size under the fixed regression models are well known. However, the literature on random regression models is limited and has been confined to the case of all…

  10. Event selection with a Random Forest in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Ruhe, Tim [TU, Dortmund (Germany); Collaboration: IceCube-Collaboration

    2011-07-01

    The Random Forest method is a multivariate algorithm that can be used for classification and regression respectively. The Random Forest implemented in the RapidMiner learning environment has been used for training and validation on data and Monte Carlo simulations of the IceCube neutrino telescope. Latest results are presented.

  11. A Family of Estimators of a Sensitive Variable Using Auxiliary Information in Stratified Random Sampling

    National Research Council Canada - National Science Library

    Nadia Mushtaq; Noor Ul Amin; Muhammad Hanif

    2017-01-01

    In this article, a combined general family of estimators is proposed for estimating finite population mean of a sensitive variable in stratified random sampling with non-sensitive auxiliary variable...

  12. The effect of curriculum sample selection for medical school

    NARCIS (Netherlands)

    de Visser, Marieke; Fluit, Cornelia; Fransen, Jaap; Latijnhouwers, Mieke; Cohen-Schotanus, Janke; Laan, Roland F. J.

    In the Netherlands, students are admitted to medical school through (1) selection, (2) direct access by high pre-university Grade Point Average (pu-GPA), (3) lottery after being rejected in the selection procedure, or (4) lottery. At Radboud University Medical Center, 2010 was the first year we

  13. A New Estimator For Population Mean Using Two Auxiliary Variables in Stratified random Sampling

    OpenAIRE

    Singh, Rajesh; Malik, Sachin

    2014-01-01

    In this paper, we suggest an estimator using two auxiliary variables in stratified random sampling. The proposed estimator improves on the mean per unit estimator as well as on some other considered estimators. Expressions for the bias and MSE of the estimator are derived up to first degree of approximation. Moreover, these theoretical findings are supported by a numerical example with original data. Key words: Study variable, auxiliary variable, stratified random sampling, bias and mean squa...

  14. Conflict-cost based random sampling design for parallel MRI with low rank constraints

    Science.gov (United States)

    Kim, Wan; Zhou, Yihang; Lyu, Jingyuan; Ying, Leslie

    2015-05-01

    In compressed sensing MRI, it is very important to design the sampling pattern for random sampling. For example, SAKE (simultaneous auto-calibrating and k-space estimation) is a parallel MRI reconstruction method using random undersampling. It formulates image reconstruction as a structured low-rank matrix completion problem. Variable density (VD) Poisson discs are typically adopted for 2D random sampling. The basic concept of Poisson disc generation is to guarantee that samples are neither too close to nor too far away from each other. However, it is difficult to meet such a condition, especially in the high-density region, and the sampling therefore becomes inefficient. In this paper, we present an improved random sampling pattern for SAKE reconstruction. The pattern is generated based on a conflict cost with a probability model. The conflict cost measures how many dense samples already assigned are around a target location, while the probability model adopts the generalized Gaussian distribution, which includes uniform and Gaussian-like distributions as special cases. Our method preferentially assigns a sample to a k-space location with the least conflict cost on the circle of the highest probability. To evaluate the effectiveness of the proposed random pattern, we compare the performance of SAKE using both VD Poisson discs and the proposed pattern. Experimental results for brain data show that the proposed pattern yields lower normalized mean square error (NMSE) than VD Poisson discs.
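
    A simplified sketch of the selection rule described above: candidate k-space locations are drawn from a generalized-Gaussian radial density, and within each batch of candidates the one with the lowest conflict cost (fewest already-chosen samples within a radius) is kept; all parameter values are illustrative:

        import numpy as np

        def conflict_cost_pattern(shape, n_samples, beta=2.0, sigma=0.3,
                                  radius=2.0, batch=16, rng=None):
            """Greedy variable-density random sampling: candidates come from a
            generalized-Gaussian density over k-space, and the candidate with
            the fewest already-selected neighbours within `radius` is kept."""
            rng = rng or np.random.default_rng(0)
            ny, nx = shape
            yy, xx = np.mgrid[0:ny, 0:nx]
            r = np.hypot((yy - ny / 2) / (ny / 2), (xx - nx / 2) / (nx / 2))
            prob = np.exp(-(r / sigma) ** beta).ravel()   # generalized Gaussian
            prob /= prob.sum()
            chosen = []
            for _ in range(n_samples):
                cand = rng.choice(prob.size, size=batch, p=prob, replace=False)
                pts = np.column_stack(np.unravel_index(cand, shape))
                if chosen:
                    d = np.linalg.norm(pts[:, None] - np.array(chosen)[None, :], axis=-1)
                    cost = (d < radius).sum(axis=1)       # conflict cost per candidate
                else:
                    cost = np.zeros(batch)
                k = int(cost.argmin())
                chosen.append(pts[k])
                prob[cand[k]] = 0.0                        # never pick the same point twice
                prob /= prob.sum()
            return np.array(chosen)

        mask_points = conflict_cost_pattern((64, 64), n_samples=600)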

  15. Understanding Sample Surveys: Selective Learning about Social Science Research Methods

    Science.gov (United States)

    Currin-Percival, Mary; Johnson, Martin

    2010-01-01

    We investigate differences in what students learn about survey methodology in a class on public opinion presented in two critically different ways: with the inclusion or exclusion of an original research project using a random-digit-dial telephone survey. Using a quasi-experimental design and data obtained from pretests and posttests in two public…

  16. Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing.

    Science.gov (United States)

    Williamson, Graham R

    2003-11-01

    This paper discusses the theoretical limitations of the use of random sampling and probability theory in the production of a significance level (or P-value) in nursing research. Potential alternatives, in the form of randomization tests, are proposed. Research papers in nursing, medicine and psychology frequently misrepresent their statistical findings, as the P-values reported assume random sampling. In this systematic review of studies published between January 1995 and June 2002 in the Journal of Advanced Nursing, 89 (68%) studies broke this assumption because they used convenience samples or entire populations. As a result, some of the findings may be questionable. The key ideas of random sampling and probability theory for statistical testing (for generating a P-value) are outlined. The result of a systematic review of research papers published in the Journal of Advanced Nursing is then presented, showing how frequently random sampling appears to have been misrepresented. Useful alternative techniques that might overcome these limitations are then discussed. REVIEW LIMITATIONS: This review is limited in scope because it is applied to one journal, and so the findings cannot be generalized to other nursing journals or to nursing research in general. However, it is possible that other nursing journals are also publishing research articles based on the misrepresentation of random sampling. The review is also limited because in several of the articles the sampling method was not completely clearly stated, and in this circumstance a judgment has been made as to the sampling method employed, based on the indications given by author(s). Quantitative researchers in nursing should be very careful that the statistical techniques they use are appropriate for the design and sampling methods of their studies. If the techniques they employ are not appropriate, they run the risk of misinterpreting findings by using inappropriate, unrepresentative and biased samples.
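
    The randomization test proposed as an alternative is straightforward to implement; a sketch for a two-group comparison of means, where the P-value comes from reshuffling group labels rather than from random-sampling theory:

        import numpy as np

        def permutation_test(a, b, n_perm=10000, rng=None):
            """Two-sided randomization test for a difference in means: reference
            distribution is built by permuting group labels, so no random
            sampling from a wider population is assumed."""
            rng = rng or np.random.default_rng(0)
            pooled = np.concatenate([a, b])
            observed = abs(a.mean() - b.mean())
            count = 0
            for _ in range(n_perm):
                rng.shuffle(pooled)
                count += abs(pooled[:len(a)].mean() - pooled[len(a):].mean()) >= observed
            return (count + 1) / (n_perm + 1)   # add-one correction

        rng = np.random.default_rng(4)
        print(permutation_test(rng.normal(0.5, 1, 30), rng.normal(0.0, 1, 30)))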

  17. Study on MAX-MIN Ant System with Random Selection in Quadratic Assignment Problem

    Science.gov (United States)

    Iimura, Ichiro; Yoshida, Kenji; Ishibashi, Ken; Nakayama, Shigeru

    Ant Colony Optimization (ACO), which is a type of swarm intelligence inspired by ants' foraging behavior, has been studied extensively and its effectiveness has been shown by many researchers. Previous studies have reported that MAX-MIN Ant System (MMAS) is one of the most effective ACO algorithms. The MMAS maintains the balance of intensification and diversification concerning pheromone by limiting the quantity of pheromone to the range between minimum and maximum values. In this paper, we propose MAX-MIN Ant System with Random Selection (MMASRS) for improving the search performance even further. MMASRS is a new ACO algorithm, namely MMAS into which random selection is newly introduced. The random selection is one of the edge-choosing methods used by agents (ants). In our experimental evaluation using ten quadratic assignment problems, we have shown that the proposed MMASRS with random selection is superior to the conventional MMAS without random selection from the viewpoint of search performance.
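
    The modification relative to standard MMAS is small: with some probability an ant ignores the pheromone-weighted rule and picks a feasible move uniformly at random. A sketch of the edge-choice step only (alpha, beta and the random-selection rate are illustrative, and the MMAS pheromone bounds are assumed to be managed elsewhere):

        import numpy as np

        def choose_next(current, unvisited, tau, eta, alpha=1.0, beta=2.0,
                        p_random=0.1, rng=None):
            """MMAS edge choice with random selection: with probability p_random
            the next node is drawn uniformly from the feasible set, otherwise by
            the usual pheromone^alpha * heuristic^beta roulette wheel."""
            rng = rng or np.random.default_rng(0)
            unvisited = np.asarray(unvisited)
            if rng.random() < p_random:                      # random selection step
                return int(rng.choice(unvisited))
            w = tau[current, unvisited] ** alpha * eta[current, unvisited] ** beta
            return int(rng.choice(unvisited, p=w / w.sum()))

        n = 6
        rng = np.random.default_rng(5)
        d = rng.uniform(1, 10, (n, n))
        np.fill_diagonal(d, np.inf)                          # no self-loops
        print(choose_next(0, [1, 2, 3, 4, 5], tau=np.ones((n, n)), eta=1 / d))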

  18. Enhanced Sampling and Analysis, Selection of Technology for Testing

    Energy Technology Data Exchange (ETDEWEB)

    Svoboda, John; Meikrantz, David

    2010-02-01

    The focus of this study includes the investigation of sampling technologies used in industry and their potential application to nuclear fuel processing. The goal is to identify innovative sampling methods using state-of-the-art techniques that could evolve into the next-generation sampling and analysis system for metallic elements. This report details the progress made in the first half of FY 2010 and includes a further consideration of the research focus and goals for this year. Our sampling options and focus for the next-generation sampling method are presented along with the criteria used for choosing our path forward. We have decided to pursue the option of evaluating the feasibility of microcapillary-based chips to remotely collect, transfer, track and supply microliters of sample solutions to analytical equipment in support of aqueous processes for used nuclear fuel cycles. Microchip vendors have been screened and a choice made for the development of a suitable microchip design, followed by production of samples for evaluation by ANL, LANL, and INL on an independent basis.

  19. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response, and a 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.
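
    One simple member of this class of methods is the normal-theory one-sided tolerance bound, which conservatively covers a population percentile with stated confidence from few samples. The sketch below is that standard construction, offered as an example of the kind of estimator evaluated, not as the report's specific method:

        import numpy as np
        from scipy import stats

        def upper_tolerance_bound(x, coverage=0.95, confidence=0.90):
            """Upper bound that, with `confidence`, exceeds the `coverage`
            quantile of the population, assuming normality (the k factor comes
            from the noncentral t distribution)."""
            n = len(x)
            z = stats.norm.ppf(coverage)
            k = stats.nct.ppf(confidence, df=n - 1, nc=z * np.sqrt(n)) / np.sqrt(n)
            return x.mean() + k * x.std(ddof=1)

        rng = np.random.default_rng(6)
        x = rng.normal(10.0, 2.0, size=8)        # sparse sample
        print(upper_tolerance_bound(x))          # conservatively bounds the 95th pct.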

  20. Calculating sample sizes for cluster randomized trials: we can keep it simple and efficient !

    NARCIS (Netherlands)

    van Breukelen, Gerard J.P.; Candel, Math J.J.M.

    2012-01-01

    Objective: Simple guidelines for efficient sample sizes in cluster randomized trials with unknown intraclass correlation and varying cluster sizes. Methods: A simple equation is given for the optimal number of clusters and sample size per cluster. Here, optimal means maximizing power for a given
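
    A sketch of the standard design-effect calculation that such guidelines build on (two-arm comparison of means with a known intraclass correlation; the paper's specific optimality equation is not reproduced):

        from math import ceil
        from scipy import stats

        def cluster_trial_size(delta, sd, icc, m, alpha=0.05, power=0.80):
            """Individuals per arm for a cluster randomized trial: the unadjusted
            two-sample size inflated by the design effect 1 + (m - 1) * icc."""
            z = stats.norm.ppf(1 - alpha / 2) + stats.norm.ppf(power)
            n_ind = 2 * (z * sd / delta) ** 2      # per arm, individual randomization
            n = n_ind * (1 + (m - 1) * icc)        # variance inflation for clustering
            return ceil(n / m), ceil(n)            # clusters per arm, subjects per arm

        print(cluster_trial_size(delta=0.3, sd=1.0, icc=0.05, m=20))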

  1. Aggregate impact testing of selected granite samples from sites in ...

    African Journals Online (AJOL)

    The Aggregate Impact Testing machine was used to measure the resistance to fa ilure of Rocks from five (5) selected granite quarries to a suddenly applied force using S ingapore standard. The results obtained show that brittleness (S20) value of the rocks were between 2 - 10. These values are less than the stated ...

  2. Survey research with a random digit dial national mobile phone sample in Ghana: Methods and sample quality

    Science.gov (United States)

    Sefa, Eunice; Adimazoya, Edward Akolgo; Yartey, Emmanuel; Lenzi, Rachel; Tarpo, Cindy; Heward-Mills, Nii Lante; Lew, Katherine; Ampeh, Yvonne

    2018-01-01

    Introduction Generating a nationally representative sample in low and middle income countries typically requires resource-intensive household level sampling with door-to-door data collection. High mobile phone penetration rates in developing countries provide new opportunities for alternative sampling and data collection methods, but there is limited information about response rates and sample biases in coverage and nonresponse using these methods. We utilized data from an interactive voice response, random-digit dial, national mobile phone survey in Ghana to calculate standardized response rates and assess representativeness of the obtained sample. Materials and methods The survey methodology was piloted in two rounds of data collection. The final survey included 18 demographic, media exposure, and health behavior questions. Call outcomes and response rates were calculated according to the American Association of Public Opinion Research guidelines. Sample characteristics, productivity, and costs per interview were calculated. Representativeness was assessed by comparing data to the Ghana Demographic and Health Survey and the National Population and Housing Census. Results The survey was fielded during a 27-day period in February-March 2017. There were 9,469 completed interviews and 3,547 partial interviews. Response, cooperation, refusal, and contact rates were 31%, 81%, 7%, and 39% respectively. Twenty-three calls were dialed to produce an eligible contact: nonresponse was substantial due to the automated calling system and dialing of many unassigned or non-working numbers. Younger, urban, better educated, and male respondents were overrepresented in the sample. Conclusions The innovative mobile phone data collection methodology yielded a large sample in a relatively short period. Response rates were comparable to other surveys, although substantial coverage bias resulted from fewer women, rural, and older residents completing the mobile phone survey in
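
    The AAPOR outcome rates referenced above reduce to simple ratios over call dispositions; a sketch with illustrative counts (only the completes and partials below are taken from the abstract; the remaining dispositions and the eligibility fraction e among unknown-eligibility numbers are assumed for the example):

        def aapor_rates(I, P, R, NC, O, UNK, e=1.0):
            """AAPOR-style outcome rates. I: complete interviews, P: partials,
            R: refusals, NC: non-contacts, O: other, UNK: unknown eligibility,
            e: estimated fraction of eligible cases among the unknowns."""
            eligible = I + P + R + NC + O + e * UNK
            return {
                "response":    I / eligible,                # ~RR3 (RR1 when e = 1)
                "cooperation": I / (I + P + R + O),         # ~COOP1
                "refusal":     R / eligible,                # ~REF
                "contact":     (I + P + R + O) / eligible,  # ~CON
            }

        # completes and partials from the abstract; other counts are illustrative
        print(aapor_rates(I=9469, P=3547, R=2100, NC=14000, O=500, UNK=25000, e=0.5))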

  3. Random sample community-based health surveys: does the effort to reach participants matter?

    Science.gov (United States)

    Messiah, Antoine; Castro, Grettel; Rodríguez de la Vega, Pura; Acuna, Juan M

    2014-12-15

    Conducting health surveys with community-based random samples is essential to capture an otherwise unreachable population, but these surveys can be biased if the effort to reach participants is insufficient. This study determines the desirable amount of effort to minimise such bias. A household-based health survey was conducted with random sampling and face-to-face interviews. Up to 11 visits, organised by canvassing rounds, were made to obtain an interview. Single-family homes were surveyed in an underserved and understudied population in North Miami-Dade County, Florida, USA. Of a probabilistic sample of 2200 household addresses, 30 corresponded to empty lots, 74 were abandoned houses, 625 households declined to participate and 265 could not be reached and interviewed within 11 attempts. Analyses were performed on the 1206 remaining households. Each household was asked if any of their members had been told by a doctor that they had high blood pressure, heart disease including heart attack, cancer, diabetes, anxiety/depression, obesity or asthma. Responses to these questions were analysed by the number of visit attempts needed to obtain the interview. Return per visit fell below 10% after four attempts, below 5% after six attempts and below 2% after eight attempts. As the effort increased, household size decreased, while household income and the percentage of interviewees active and employed increased; the proportion of the seven health conditions decreased, four of them significantly: heart disease from 20.4% to 9.2%, high blood pressure from 63.5% to 58.1%, anxiety/depression from 24.4% to 9.2% and obesity from 21.8% to 12.6%. Beyond the fifth attempt, however, cumulative percentages varied by less than 1% and precision varied by less than 0.1%. In spite of the early and steep drop, sustaining at least five attempts to reach participants is necessary to reduce selection bias. Published by the BMJ Publishing Group Limited.

  4. Stratified random sampling for estimating billing accuracy in health care systems.

    Science.gov (United States)

    Buddhakulsomsiri, Jirachai; Parthanadee, Parthana

    2008-03-01

    This paper presents a stratified random sampling plan for estimating accuracy of bill processing performance for the health care bills submitted to third party payers in health care systems. Bill processing accuracy is estimated with two measures: percent accuracy and total dollar accuracy. Difficulties in constructing a sampling plan arise when the population strata structure is unknown, and when the two measures require different sampling schemes. To efficiently utilize sample resource, the sampling plan is designed to effectively estimate both measures from the same sample. The sampling plan features a simple but efficient strata construction method, called rectangular method, and two accuracy estimation methods, one for each measure. The sampling plan is tested on actual populations from an insurance company. Accuracy estimates obtained are then used to compare the rectangular method to other potential clustering methods for strata construction, and compare the accuracy estimation methods to other eligible methods. Computational study results show effectiveness of the proposed sampling plan.

  5. Accelerating Markov chain Monte Carlo simulation by differential evolution with self-adaptive randomized subspace sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Hyman, James M [Los Alamos National Laboratory; Robinson, Bruce A [Los Alamos National Laboratory; Higdon, Dave [Los Alamos National Laboratory; Ter Braak, Cajo J F [NETHERLANDS; Diks, Cees G H [UNIV OF AMSTERDAM

    2008-01-01

    Markov chain Monte Carlo (MCMC) methods have found widespread use in many fields of study to estimate the average properties of complex systems, and for posterior inference in a Bayesian framework. Existing theory and experiments prove convergence of well-constructed MCMC schemes to the appropriate limiting distribution under a variety of different conditions. In practice, however, this convergence is often observed to be disturbingly slow. This is frequently caused by an inappropriate selection of the proposal distribution used to generate trial moves in the Markov chain. Here we show that significant improvements to the efficiency of MCMC simulation can be made by using a self-adaptive Differential Evolution learning strategy within a population-based evolutionary framework. This scheme, entitled DiffeRential Evolution Adaptive Metropolis or DREAM, runs multiple different chains simultaneously for global exploration, and automatically tunes the scale and orientation of the proposal distribution in randomized subspaces during the search. Ergodicity of the algorithm is proved, and various examples involving nonlinearity, high-dimensionality, and multimodality show that DREAM is generally superior to other adaptive MCMC sampling approaches. The DREAM scheme significantly enhances the applicability of MCMC simulation to complex, multi-modal search problems.
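
    The differential-evolution proposal at the heart of this family of samplers is compact; a minimal DE-MC sketch in the spirit of DREAM, for a generic log-density, without DREAM's randomized subspace sampling and adaptive crossover:

        import numpy as np

        def de_mc(log_post, n_chains, d, n_steps, gamma=None, eps=1e-4, rng=None):
            """Differential-evolution MCMC: each chain jumps along the difference
            of two other randomly chosen chains, plus a small perturbation."""
            rng = rng or np.random.default_rng(0)
            gamma = gamma or 2.38 / np.sqrt(2 * d)           # classic DE-MC scale
            X = rng.normal(size=(n_chains, d))
            lp = np.array([log_post(x) for x in X])
            out = []
            for _ in range(n_steps):
                for i in range(n_chains):
                    a, b = rng.choice([j for j in range(n_chains) if j != i],
                                      size=2, replace=False)
                    prop = X[i] + gamma * (X[a] - X[b]) + eps * rng.normal(size=d)
                    lp_prop = log_post(prop)
                    if np.log(rng.random()) < lp_prop - lp[i]:   # Metropolis accept
                        X[i], lp[i] = prop, lp_prop
                out.append(X.copy())
            return np.array(out)

        samples = de_mc(lambda x: -0.5 * np.sum(x ** 2), n_chains=8, d=3, n_steps=500)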

  6. Mate selection preferences: gender differences examined in a national sample.

    Science.gov (United States)

    Sprecher, S; Sullivan, Q; Hatfield, E

    1994-06-01

    Social psychologists have devoted considerable theoretical and empirical attention to studying gender differences in traits desired in a mate. Most of the studies on mate preferences, however, have been conducted with small, nonrepresentative samples. In this study, we analyzed data collected from single adults in a national probability sample, the National Survey of Families and Households. Respondents were asked to consider 12 possible assets or liabilities in a marriage partner and to indicate their willingness to marry someone possessing each of these traits. These data extended previous research by comparing men's and women's mate preferences in a heterogeneous sample of the national population and by comparing gender differences across sociodemographic groups. The gender differences found in this study were consistent with those reported in previous research (e.g., youth and physical attractiveness were found to be more important for men than for women; earning potential was found to be less important for men than for women) and were quite consistent across age groups and races. However, the various sociodemographic groups differed slightly in the magnitude of gender differences for some of the mate preferences.

  7. Analysis of Selected Legacy 85Kr Samples

    Energy Technology Data Exchange (ETDEWEB)

    Jubin, Robert Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Bruffey, Stephanie H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-09-02

    Legacy samples composed of 85Kr encapsulated in solid zeolite 5A material and five small metal tubes containing a mixture of the zeolite combined with a glass matrix resulting from hot isostatic pressing have been preserved. The samples resulted from krypton R&D encapsulation efforts performed in the late 1970s at the Idaho Chemical Processing Plant. These samples were shipped to Oak Ridge National Laboratory (ORNL) in mid-FY 2014. Upon receipt, the outer shipping package was opened, and the inner package was removed and placed in a radiological hood. The individual capsules were double bagged as they were removed from the inner shipping pig and placed into individual glass sample bottles for further analysis. The five capsules were then X-ray imaged. Capsules 1 and 4 appear intact and contain an amorphous mass. Capsule 2 clearly shows saw marks on the capsule and a quantity of loose pellet- or bead-like material remaining in the capsule. Capsule 3 shows similar bead-like material within the intact capsule. Capsule 5 had been opened at an undetermined time in the past. The end of this capsule appears to have been cut off, and there are additional saw marks on the side of the capsule. X-ray tomography allowed the capsules to be viewed along the three axes. Of most interest was determining whether there was any residual material in the closed end of Capsule 5. The images confirmed the presence of residual material within this capsule. The material appears to be compacted but still retains some of the bead-like morphology. Based on the nondestructive analysis (NDA) results, a proposed path forward was formulated to advance this effort toward the original goals of understanding the effects of extended storage on the waste form and package. Based on the initial NDA and the fact that there are at least two breached samples, it was proposed that exploratory tests be conducted with the breached specimens before opening the three intact capsules.

  8. Sample Selected Averaging Method for Analyzing the Event Related Potential

    Science.gov (United States)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

    The event related potential (ERP) is often measured through the oddball task, in which subjects are given a “rare stimulus” and a “frequent stimulus”. Measured ERPs are analyzed by the averaging technique; the amplitude of the ERP P300 becomes large when the “rare stimulus” is given. However, measured ERPs may include samples that lack the original features of the ERP. Thus, it is necessary to reject unsuitable measured ERPs when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.

  9. A strategy for sampling on a sphere applied to 3D selective RF pulse design.

    Science.gov (United States)

    Wong, S T; Roos, M S

    1994-12-01

    Conventional constant angular velocity sampling of the surface of a sphere results in a higher sampling density near the two poles relative to the equatorial region. More samples, and hence longer sampling time, are required to achieve a given sampling density in the equatorial region when compared with uniform sampling. This paper presents a simple expression for a continuous sample path through a nearly uniform distribution of points on the surface of a sphere. Sampling of concentric spherical shells in k-space with the new strategy is used to design 3D selective inversion and spin-echo pulses. These new 3D selective pulses have been implemented and verified experimentally.
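
    The abstract does not reproduce the paper's expression, but a common generalized-spiral construction of the same flavour, a single continuous path through nearly uniform points on the sphere, can be sketched as follows (the spacing constant 3.6 is an empirical choice, not taken from the paper).

```python
import numpy as np

def spiral_sphere(n):
    """Continuous spiral path through n nearly uniform points on the
    unit sphere: z is stepped uniformly (equal-area bands) and the
    azimuth advances in inverse proportion to the circle radius."""
    k = np.arange(1, n + 1)
    z = -1 + (2 * k - 1) / n          # uniform in [-1, 1] -> equal area
    r = np.sqrt(1 - z**2)
    phi = np.zeros(n)
    c = 3.6 / np.sqrt(n)              # empirical spacing constant
    for i in range(1, n - 1):
        phi[i] = phi[i - 1] + c / r[i]
    return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

pts = spiral_sphere(400)
print(pts.shape, np.allclose(np.linalg.norm(pts, axis=1), 1.0))
```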

  10. In vivo selection of randomly mutated retroviral genomes

    NARCIS (Netherlands)

    Berkhout, B.; Klaver, B.

    1993-01-01

    Darwinian evolution, that is the outgrowth of the fittest variants in a population, usually applies to living organisms over long periods of time. Recently, in vitro selection/amplification techniques have been developed that allow for the rapid evolution of functionally active nucleic acids from a

  11. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    Science.gov (United States)

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm has been proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) data error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (hybrid genetic-random forests, hybrid particle swarm-random forests and hybrid fish swarm-random forests) can achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores surpass those of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
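
    For orientation, the interpolation step shared by all SMOTE variants can be sketched as below; this is plain SMOTE on synthetic data, omitting the CURE clustering stage that CURE-SMOTE uses to remove noise and outliers before interpolating.

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=3):
    """Generate n_new synthetic minority samples by linear interpolation
    between a chosen sample and one of its k nearest minority
    neighbours (plain SMOTE; CURE-SMOTE first clusters with CURE to
    drop noise and outliers before this step)."""
    rng = np.random.default_rng(seed)
    d = np.linalg.norm(X_min[:, None] - X_min[None], axis=2)
    nn = np.argsort(d, axis=1)[:, 1:k + 1]          # skip self
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(k)]
        lam = rng.random()
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_min = np.random.default_rng(0).normal(size=(20, 4))
print(smote(X_min, 10).shape)   # (10, 4)
```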

  12. Sampling methodology and site selection in the National Eye Health Survey: an Australian population-based prevalence study.

    Science.gov (United States)

    Foreman, Joshua; Keel, Stuart; Dunn, Ross; van Wijngaarden, Peter; Taylor, Hugh R; Dirani, Mohamed

    2017-05-01

    This paper presents the sampling methodology of the National Eye Health Survey, which aimed to determine the prevalence of vision impairment and blindness in Australia. The National Eye Health Survey is a cross-sectional population-based survey of Indigenous Australians aged 40 years and older and non-Indigenous Australians aged 50 years and older residing in all levels of geographic remoteness in Australia. Using multistage random-cluster sampling, 30 geographic areas were selected to provide samples of 3000 non-Indigenous Australians and 1400 Indigenous Australians. Sampling involved (i) selecting Statistical Area Level 2 sites, stratified by remoteness; (ii) selecting Statistical Area Level 1 sites within Statistical Area Level 2 sites to provide targeted samples; and (iii) grouping contiguous Statistical Area Level 1 sites or replacing Statistical Area Level 1 sites to provide sufficient samples. The main outcome measures were the sites selected and the participants sampled in the survey. Thirty sites were generated, including 12 Major City sites, 6 Inner Regional sites, 6 Outer Regional sites, 4 Remote sites and 2 Very Remote sites. Three thousand and ninety-eight non-Indigenous participants and 1738 Indigenous participants were recruited. Selection of Statistical Area Level 1 sites overestimated the number of eligible residents in all sites. About 20% (6/30) of Statistical Area Level 1 sites were situated in non-residential bushland, and 26.67% (8/30) of Statistical Area Level 1 populations had low eligibility or accessibility, requiring replacement. Representative samples of Indigenous and non-Indigenous Australians were selected, recruited and tested, providing the first national data on the prevalence of vision impairment and blindness in Australia. © 2016 Royal Australian and New Zealand College of Ophthalmologists.

  13. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  14. Estimation of Sensitive Proportion by Randomized Response Data in Successive Sampling

    Directory of Open Access Journals (Sweden)

    Bo Yu

    2015-01-01

    This paper considers the problem of estimating binomial proportions of sensitive or stigmatizing attributes in a population of interest. Randomized response techniques are suggested for protecting the privacy of respondents and reducing response bias while eliciting information on sensitive attributes. In many sensitive-question surveys, the same population is often sampled repeatedly on each occasion. In this paper, we apply a successive sampling scheme to improve the estimation of the sensitive proportion on the current occasion.
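
    The abstract does not specify the randomizing device, so as an illustration here is the classic single-occasion Warner (1965) estimator that successive-sampling schemes build on; the counts below are hypothetical.

```python
import numpy as np

def warner_estimate(yes, n, p):
    """Warner (1965) randomized response estimator: with probability p
    a respondent answers about the sensitive statement, else about its
    negation. The classic single-occasion version, not the paper's
    successive-sampling estimator."""
    lam = yes / n                              # observed 'yes' proportion
    pi_hat = (lam - (1 - p)) / (2 * p - 1)
    var = lam * (1 - lam) / (n * (2 * p - 1) ** 2)
    return pi_hat, np.sqrt(var)

pi_hat, se = warner_estimate(yes=380, n=1000, p=0.7)
print(f"estimated sensitive proportion: {pi_hat:.3f} +/- {1.96 * se:.3f}")
```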

  15. The relevance sample-feature machine: a sparse Bayesian learning approach to joint feature-sample selection.

    Science.gov (United States)

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid; Reza, Ali M; Bathaee, Najmehsadat; Kalayeh, Mahdi M

    2013-12-01

    This paper introduces a novel sparse Bayesian machine-learning algorithm for embedded feature selection in classification tasks. Our proposed algorithm, called the relevance sample feature machine (RSFM), is able to simultaneously choose the relevance samples and also the relevance features for regression or classification problems. We propose a separable model in feature and sample domains. Adopting a Bayesian approach and using Gaussian priors, the learned model by RSFM is sparse in both sample and feature domains. The proposed algorithm is an extension of the standard RVM algorithm, which only opts for sparsity in the sample domain. Experimental comparisons on synthetic as well as benchmark data sets show that RSFM is successful in both feature selection (eliminating the irrelevant features) and accurate classification. The main advantages of our proposed algorithm are: less system complexity, better generalization and avoiding overfitting, and less computational cost during the testing stage.

  16. Changes in selected biochemical indices resulting from various pre-sampling handling techniques in broilers

    National Research Council Canada - National Science Library

    Chloupek, Petr; Bedanova, Iveta; Chloupek, Jan; Vecerek, Vladimir

    2011-01-01

    This study focused on detection of changes in the corticosterone level and concentrations of other selected biochemical parameters in broilers handled in two different manners during blood sampling...

  17. Pseudo cluster randomization dealt with selection bias and contamination in clinical trials

    NARCIS (Netherlands)

    Teerenstra, S.; Melis, R.J.F.; Peer, P.G.M.; Borm, G.F.

    2006-01-01

    BACKGROUND AND OBJECTIVES: When contamination is present, randomization on a patient level leads to dilution of the treatment effect. The usual solution is to randomize on a cluster level, but at the cost of efficiency and more importantly, this may introduce selection bias. Furthermore, it may slow

  18. Selective outcome reporting and sponsorship in randomized controlled trials in IVF and ICSI.

    Science.gov (United States)

    Braakhekke, M; Scholten, I; Mol, F; Limpens, J; Mol, B W; van der Veen, F

    2017-10-01

    Are randomized controlled trials (RCTs) on IVF and ICSI subject to selective outcome reporting, and is this related to sponsorship? There are inconsistencies, independent of sponsorship, in the reporting of primary outcome measures in the majority of IVF and ICSI trials, indicating selective outcome reporting. RCTs are subject to bias at various levels. Of these biases, selective outcome reporting is particularly relevant to IVF and ICSI trials since there is a wide variety of outcome measures to choose from. An established cause of reporting bias is sponsorship. It is, at present, unknown whether RCTs in IVF/ICSI are subject to selective outcome reporting and whether this is related to sponsorship. We systematically searched RCTs on IVF and ICSI published between January 2009 and March 2016 in MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials and the publisher subset of PubMed. We analysed 415 RCTs. Per included RCT, we extracted data on the journal's impact factor, sample size, power calculation, and trial registration, and thereafter data on the primary outcome measure, the direction of trial results and sponsorship. Of the 415 identified RCTs, 235 were excluded from our primary analysis because sponsorship was not reported. Of the 180 RCTs included in our analysis, 7 trials did not report on any primary outcome measure, and 107 of the remaining 173 trials (62%) reported on surrogate primary outcome measures. Of the 114 registered trials, 21 trials (18%) provided primary outcomes in their manuscript that were different from those in the trial registry. This indicates selective outcome reporting. We found no association between selective outcome reporting and sponsorship. We ran additional analyses to include the trials that had not reported sponsorship and found no outcomes that differed from our primary analysis. Since the majority of the trials did not report on sponsorship, there is a risk of sampling bias. IVF and ICSI trials are subject to

  19. Power and sample size calculations for Mendelian randomization studies using one genetic instrument.

    Science.gov (United States)

    Freeman, Guy; Cowling, Benjamin J; Schooling, C Mary

    2013-08-01

    Mendelian randomization, which is instrumental variable analysis using genetic variants as instruments, is an increasingly popular method of making causal inferences from observational studies. In order to design efficient Mendelian randomization studies, it is essential to calculate the sample sizes required. We present formulas for calculating the power of a Mendelian randomization study using one genetic instrument to detect an effect of a given size, and the minimum sample size required to detect effects for given levels of significance and power, using asymptotic statistical theory. We apply the formulas to example data and compare the results with those from simulation methods. Power and sample size calculations using these formulas should be more straightforward to carry out than simulation approaches. These formulas make explicit that the sample size needed for a Mendelian randomization study is inversely proportional to the square of the correlation between the genetic instrument and the exposure, proportional to the residual variance of the outcome after removing the effect of the exposure, and inversely proportional to the square of the effect size.
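
    A sketch consistent with the stated proportionalities (a generic asymptotic formula, not a transcription of the paper's exact expression): the required n scales with the residual outcome variance and inversely with the squared effect size and the squared instrument-exposure correlation. The example numbers are arbitrary.

```python
from statistics import NormalDist

def mr_sample_size(beta_xy, rho_gx_sq, var_x, var_y_resid,
                   alpha=0.05, power=0.8):
    """Approximate sample size for a one-instrument Mendelian
    randomization study, following the proportionalities stated in
    the abstract (an assumed generic form, not the paper's formula)."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    # n ~ residual outcome variance / (effect^2 * instrument strength)
    return z**2 * var_y_resid / (beta_xy**2 * rho_gx_sq * var_x)

# e.g. effect of 0.2 SD per SD of exposure, instrument explaining
# 2% of the exposure variance -> roughly 10,000 subjects needed
print(round(mr_sample_size(beta_xy=0.2, rho_gx_sq=0.02,
                           var_x=1.0, var_y_resid=1.0)))
```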

  20. Sampling versus Random Binning for Multiple Descriptions of a Bandlimited Source

    DEFF Research Database (Denmark)

    Mashiach, Adam; Østergaard, Jan; Zamir, Ram

    2013-01-01

    Random binning is an efficient, yet complex, coding technique for the symmetric L-description source coding problem. We propose an alternative approach, that uses the quantized samples of a bandlimited source as "descriptions". By the Nyquist condition, the source can be reconstructed if enough s...

  1. Recidivism among Child Sexual Abusers: Initial Results of a 13-Year Longitudinal Random Sample

    Science.gov (United States)

    Patrick, Steven; Marsh, Robert

    2009-01-01

    In the initial analysis of data from a random sample of all those charged with child sexual abuse in Idaho over a 13-year period, only one predictive variable was found that related to recidivism of those convicted. Variables such as ethnicity, relationship, gender, and age differences did not show a significant or even large association with…

  2. HABITAT ASSESSMENT USING A RANDOM PROBABILITY BASED SAMPLING DESIGN: ESCAMBIA RIVER DELTA, FLORIDA

    Science.gov (United States)

    Smith, Lisa M., Darrin D. Dantin and Steve Jordan. In press. Habitat Assessment Using a Random Probability Based Sampling Design: Escambia River Delta, Florida (Abstract). To be presented at the SWS/GERS Fall Joint Society Meeting: Communication and Collaboration: Coastal Systems...

  3. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…

  4. Flexible sampling large-scale social networks by self-adjustable random walk

    Science.gov (United States)

    Xu, Xiao-Ke; Zhu, Jonathan J. H.

    2016-12-01

    Online social networks (OSNs) have become an increasingly attractive gold mine for academic and commercial researchers. However, research on OSNs faces a number of difficult challenges. One bottleneck lies in the massive quantity, and often unavailability, of OSN population data. Sampling is perhaps the only feasible solution to these problems. How to draw samples that represent the underlying OSNs has remained a formidable task, for a number of conceptual and methodological reasons. In particular, most empirically driven studies on network sampling are confined to simulated data or sub-graph data, which are fundamentally different from real, complete-graph OSNs. In the current study, we propose a flexible sampling method, called Self-Adjustable Random Walk (SARW), and test it against the population data of a real large-scale OSN. We evaluate the strengths of the sampling method in comparison with four prevailing methods: uniform, breadth-first search (BFS), random walk (RW), and revised RW (i.e., MHRW) sampling. We mix both induced-edge and external-edge information of sampled nodes in the same sampling process. Our results show that the SARW sampling method is able to generate unbiased samples of OSNs with maximal precision and minimal cost. The study is helpful for the practice of OSN research by providing a highly needed sampling tool, for the methodological development of large-scale network sampling through comparative evaluation of existing sampling methods, and for the theoretical understanding of human networks by highlighting discrepancies and contradictions between existing knowledge and assumptions about large-scale real OSN data.
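
    SARW itself is not specified in the abstract, but two of the baselines it is compared against, a plain random walk and the degree-corrected MHRW, can be sketched on a synthetic graph as follows.

```python
import random
import networkx as nx

def random_walk(G, start, steps, mh_correct=False, seed=4):
    """Sample nodes by random walk. With mh_correct=True, apply the
    Metropolis-Hastings degree correction (the 'MHRW' baseline) so the
    stationary distribution is uniform over nodes rather than
    proportional to degree."""
    rng = random.Random(seed)
    v, sample = start, []
    for _ in range(steps):
        sample.append(v)
        w = rng.choice(list(G.neighbors(v)))
        if mh_correct and rng.random() >= G.degree(v) / G.degree(w):
            continue          # reject the move, stay at v
        v = w
    return sample

G = nx.barabasi_albert_graph(1000, 3, seed=0)   # stand-in for an OSN
s = random_walk(G, start=0, steps=5000, mh_correct=True)
print(len(set(s)), "distinct nodes visited")
```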

  5. A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization.

    Science.gov (United States)

    Xiong, Jian; Tian, Shulin; Yang, Chenglin; Liu, Cheng

    2016-01-01

    The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally omits some failures during selection, and this omission can create serious risks in use because the omitted failures may lead to severe propagation failures. This paper proposes a new failure sample selection method to solve this problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model; we then propose a new failure sample selection method on the basis of the number of SFPS. Compared with the traditional sampling plan, this method improves the coverage of tested failure samples, increases diagnostic capacity, and decreases the risk in use.

  6. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti

    Directory of Open Access Journals (Sweden)

    Wampler Peter J

    2013-01-01

    Background: A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. Methods: The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel, which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. Results: A total of 537 homes were initially mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. Conclusions: The method used to generate and field-locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent, and only rarely was local knowledge required to identify and locate households.

  7. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti.

    Science.gov (United States)

    Wampler, Peter J; Rediske, Richard R; Molla, Azizur R

    2013-01-18

    A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel, which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes were initially mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field-locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent, and only rarely was local knowledge required to identify and locate households.

  8. Sample size calculations for micro-randomized trials in mHealth.

    Science.gov (United States)

    Liao, Peng; Klasnja, Predrag; Tewari, Ambuj; Murphy, Susan A

    2016-05-30

    The use and development of mobile interventions are experiencing rapid growth. In "just-in-time" mobile interventions, treatments are provided via a mobile device, and they are intended to help an individual make healthy decisions 'in the moment,' and thus have a proximal, near-future impact. Currently, the development of mobile interventions is proceeding at a much faster pace than that of associated data science methods. A first step toward developing data-based methods is to provide an experimental design for testing the proximal effects of these just-in-time treatments. In this paper, we propose a 'micro-randomized' trial design for this purpose. In a micro-randomized trial, treatments are sequentially randomized throughout the conduct of the study, with the result that each participant may be randomized on hundreds or thousands of occasions at which a treatment might be provided. Further, we develop a test statistic for assessing the proximal effect of a treatment as well as an associated sample size calculator. We conduct simulation evaluations of the sample size calculator in various settings. Rules of thumb that might be used in designing a micro-randomized trial are discussed. This work is motivated by our collaboration on the HeartSteps mobile application designed to increase physical activity. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Occupational position and its relation to mental distress in a random sample of Danish residents

    DEFF Research Database (Denmark)

    Rugulies, Reiner Ernst; Madsen, Ida E H; Nielsen, Maj Britt D

    2010-01-01

    PURPOSE: To analyze the distribution of depressive, anxiety, and somatization symptoms across different occupational positions in a random sample of Danish residents. METHODS: The study sample consisted of 591 Danish residents (50% women), aged 20-65, drawn from an age- and gender-stratified random sample of the Danish population. Participants filled out a survey that included the 92-item version of the Hopkins Symptom Checklist (SCL-92). We categorized occupational position into seven groups: high- and low-grade non-manual workers, skilled and unskilled manual workers, high- and low-grade self-employed ... somatization symptoms (OR = 6.28, 95% CI = 1.39-28.46). CONCLUSIONS: Unskilled manual workers, the unemployed, and, to a lesser extent, the low-grade self-employed showed an increased level of mental distress. Activities to promote mental health in the Danish population should be directed toward these groups.

  10. A Comparison of Dietary Habits between Recreational Runners and a Randomly Selected Adult Population in Slovenia.

    Science.gov (United States)

    Škof, Branko; Rotovnik Kozjek, Nada

    2015-09-01

    The aim of the study was to compare the dietary habits of recreational runners with those of a random sample of the general population. We also wanted to determine the influence of gender, age and sports performance of recreational runners on their basic diet and compliance with recommendations in sports nutrition. The study population consisted of 1,212 adult Slovenian recreational runners and 774 randomly selected residents of Slovenia between the ages of 18 and 65 years. The data on the dietary habits of our subjects were gathered by means of two questionnaires. The following parameters were evaluated: the type of diet, food patterns, the frequency of consumption of individual food groups, the use of dietary supplements, fluid intake, and alcohol consumption. Recreational runners had better compliance with recommendations for healthy nutrition than the general population. This pattern increased with the runner's age and performance level. Compared to male runners, female runners ate more regularly and consumed food groups associated with a healthy diet (fruit, vegetables, whole grain foods, and low-fat dairy products) more frequently. The consumption of simple sugars and the use of nutritional supplements by well-trained runners did not match the values recommended for physically active individuals. Recreational runners are an exemplary population group that actively seeks to adopt a healthier lifestyle.

  11. Use of protein: creatinine ratio in a random spot urine sample for predicting significant proteinuria in diabetes mellitus.

    Science.gov (United States)

    Yadav, B K; Adhikari, S; Gyawali, P; Shrestha, R; Poudel, B; Khanal, M

    2010-06-01

    The present study was undertaken over a period of 6 months (September 2008-February 2009) to examine the correlation of 24-hour urine protein estimation with the random spot protein:creatinine (P:C) ratio among diabetic patients. The study comprised 144 patients aged 30-70 years, recruited from Kantipur hospital, Kathmandu. A 24-hr urine sample was collected, followed by a spot random urine sample. Both samples were analyzed for protein and creatinine excretion. Informed consent was taken from all participants. Sixteen inadequately collected urine samples, as defined by (predicted creatinine - measured creatinine)/predicted creatinine > 0.2, were excluded from analysis. The Spearman's rank correlation between the spot urine P:C ratio and 24-hr total protein was computed with the Statistical Package for the Social Sciences. At the P:C ratio cutoff of 0.15 and the reference method (24-hr urine protein) cutoff of 150 mg/day, the correlation coefficient was found to be 0.892 (p < 0.001). The random spot P:C ratio can thus substitute for 24-hr urine collection, but the cutoff should be carefully selected for different patient groups under different laboratory procedures and settings.

  12. [Selection of sentinel sites for death surveillance, using cluster or unequal probability sampling].

    Science.gov (United States)

    Lian, Heng-li; Xu, Yong-yong; Guo, Ling-xia; Tan, Zhi-jun; Liu, Dan-hong; Rao, Ke-qin

    2010-04-01

    To compare the sampling errors of cluster and unequal probability sampling designs and to adopt an unequal probability sampling method for death surveillance. Taking 107 county-level areas in Shaanxi province as the sampling frame, sets of samples were drawn by equal probability cluster sampling and by unequal probability designs. The sampling error and design effect of each design were estimated according to its complex sample plan. Sampling errors depend on the sampling plan, and the errors of equal probability stratified cluster sampling appear to be smaller than those of simple cluster sampling. The design effects of unequal probability stratified cluster sampling, such as the πPS design, are slightly lower than those of equal probability stratified cluster sampling, but unequal probability stratified cluster sampling can cover a wider scope of the monitored population. Analysis of sampling data cannot be conducted without consideration of the sampling plan when the sampling frame is finite and a given sampling plan and parameters, such as sampling proportion and population weights, are assigned in advance. Unequal probability cluster sampling designs seem more appropriate for selecting the national death surveillance sites, since more usable monitoring data can be obtained, carrying more weight in estimating mortality for the whole province or municipality.

  13. Random sampling for a mental health survey in a deprived multi-ethnic area of Berlin.

    Science.gov (United States)

    Mundt, Adrian P; Aichberger, Marion C; Kliewe, Thomas; Ignatyev, Yuriy; Yayla, Seda; Heimann, Hannah; Schouler-Ocak, Meryam; Busch, Markus; Rapp, Michael; Heinz, Andreas; Ströhle, Andreas

    2012-12-01

    The aim of the study was to assess the response to random sampling for a mental health survey in a deprived multi-ethnic area of Berlin, Germany, with a large Turkish-speaking population. A random list from the registration office with 1,000 persons stratified by age and gender was retrieved from the population registry, and these persons were contacted using a three-stage design including written information, telephone calls and personal contact at home. A female bilingual interviewer contacted persons with Turkish names. Of the persons on the list, 202 were not living in the area, one was deceased and 502 did not respond. Of the 295 responders, 152 (51.5%) explicitly refused to participate. We retained a sample of 143 participants (48.5%) representing the rate of multi-ethnicity in the area (52.1% migrants in the sample vs. 53.5% in the population). Turkish migrants were over-represented (28.9% in the sample vs. 18.6% in the population). Polish migrants (2.1 vs. 5.3% in the population) and persons from the former Yugoslavia (1.4 vs. 4.8% in the population) were under-represented. Bilingual contact procedures can improve the response rates of the most common migrant populations to random sampling if migrants of the same origin gate the contact. High non-contact and non-response rates for migrant and non-migrant populations in deprived urban areas remain a challenge for obtaining representative random samples.

  14. Assessment of proteinuria by using protein: creatinine index in random urine sample.

    Science.gov (United States)

    Khan, Dilshad Ahmed; Ahmad, Tariq Mahmood; Qureshil, Ayaz Hussain; Halim, Abdul; Ahmad, Mumtaz; Afzal, Saeed

    2005-10-01

    To assess the quantitative measurement of proteinuria using the random urine protein:creatinine index/ratio in comparison with 24-hour urinary protein excretion in patients with renal diseases having normal glomerular filtration rate. One hundred and thirty patients, 94 males and 36 females, with an age range of 5 to 60 years and proteinuria of more than 150 mg/day, were included in this study. Qualitative urinary protein estimation was done on a random urine specimen by dipstick. Quantitative measurement of protein in the random and 24-hour urine specimens was carried out by a method based on the formation of a red complex of protein with pyrogallol red in acid medium on a Microlab 200 (Merck). Estimation of creatinine was done on a Selectra-2 (Merck) by Jaffe's reaction. The urine protein:creatinine index and ratio were calculated by dividing the urine protein concentration (mg/L) by the urine creatinine concentration (mmol/L), multiplied by 10 and expressed in mg/mg, respectively. A protein:creatinine index of more than 140, or a ratio of more than 0.18, in a random urine sample indicated pathological proteinuria. An excellent correlation (r = 0.96) was found between the random urine protein:creatinine index/ratio and standard 24-hour urinary protein excretion in these patients (p < 0.001). The protein:creatinine index in random urine is a convenient, quick and reliable method of estimating proteinuria, compared with 24-hour urinary protein excretion, for the diagnosis and monitoring of renal diseases in our medical setup.
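
    The index definition given above translates directly into code; the sample values below are hypothetical.

```python
def protein_creatinine_index(urine_protein_mg_per_L,
                             urine_creatinine_mmol_per_L):
    """Protein:creatinine index as defined in the abstract:
    protein (mg/L) / creatinine (mmol/L) * 10; values above 140
    indicate pathological proteinuria."""
    return urine_protein_mg_per_L / urine_creatinine_mmol_per_L * 10

index = protein_creatinine_index(180.0, 9.5)   # illustrative values
print(f"P:C index = {index:.0f} -> "
      f"{'pathological' if index > 140 else 'normal'}")
```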

  15. Generalized essential energy space random walks to more effectively accelerate solute sampling in aqueous environment.

    Science.gov (United States)

    Lv, Chao; Zheng, Lianqing; Yang, Wei

    2012-01-28

    Molecular dynamics sampling can be enhanced by promoting potential energy fluctuations, for instance based on a Hamiltonian modified with the addition of a potential-energy-dependent biasing term. To overcome the diffusion sampling issue, namely that enlarging event-irrelevant energy fluctuations can abolish sampling efficiency, the essential energy space random walk (EESRW) approach was proposed earlier. To more effectively accelerate the sampling of solute conformations in aqueous environments, in the current work we generalized the EESRW method to a two-dimensional EESRW (2D-EESRW) strategy. Specifically, the essential internal energy component of a focused region and the essential interaction energy component between the focused region and the environmental region are employed to define the two-dimensional essential energy space. This proposal is motivated by the general observation that in different conformational events, the two essential energy components have distinctive interplays. Model studies on the alanine dipeptide and the aspartate-arginine peptide demonstrate sampling improvement over the original one-dimensional EESRW strategy; at the same biasing level, the present generalization allows more effective acceleration of the sampling of conformational transitions in aqueous solution. The 2D-EESRW generalization is readily extended to higher-dimension schemes and can be employed in more advanced enhanced-sampling schemes, such as the recent orthogonal space random walk method. © 2012 American Institute of Physics.

  16. Multilayer pixel super-resolution lensless in-line holographic microscope with random sample movement.

    Science.gov (United States)

    Wang, Mingjun; Feng, Shaodong; Wu, Jigang

    2017-10-06

    We report a multilayer lensless in-line holographic microscope (LIHM) with improved imaging resolution obtained by using the pixel super-resolution technique with random sample movement. In our imaging system, a laser beam illuminated the sample and a CMOS imaging sensor located behind the sample recorded the in-line hologram for image reconstruction. During the imaging process, the sample was moved by hand randomly and the in-line holograms were acquired sequentially. The sample image was then reconstructed from an enhanced-resolution hologram obtained from multiple low-resolution in-line holograms by applying the pixel super-resolution (PSR) technique. We studied the resolution enhancement by imaging a U.S. Air Force (USAF) resolution target in numerical simulation and experiment. We also showed that multilayer pixel super-resolution images can be obtained by imaging a triple-layer sample made with filamentous algae on the middle layer and microspheres with a diameter of 2 μm on the top and bottom layers. Our pixel super-resolution LIHM provides a compact and low-cost solution for microscopic imaging and is promising for many biomedical applications.

  17. RANDOM FORESTS-BASED FEATURE SELECTION FOR LAND-USE CLASSIFICATION USING LIDAR DATA AND ORTHOIMAGERY

    Directory of Open Access Journals (Sweden)

    H. Guan

    2012-07-01

    The development of lidar systems, especially those incorporating high-resolution camera components, has shown great potential for urban classification. However, automatically selecting the best features for land-use classification is challenging. Random Forests, a relatively new machine learning algorithm, is receiving considerable attention in the field of image classification and pattern recognition, in particular because it can provide a measure of variable importance. In this study, the performance of Random Forests-based feature selection for urban areas was therefore explored. First, we extract features from lidar data, including height-based and intensity-based GLCM measures; other spectral features, such as the red, green and blue bands and GLCM-based measures, can be obtained from imagery. Finally, Random Forests is used to automatically select the optimal and uncorrelated features for land-use classification. Lidar data at 0.5 m resolution and aerial imagery are used to assess the feature selection performance of Random Forests in the study area, located in Mannheim, Germany. The results clearly demonstrate that the use of Random Forests-based feature selection can improve classification performance.
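
    A minimal sketch of the variable-importance ranking idea with scikit-learn on synthetic stand-in features (not the authors' lidar/imagery pipeline); the number of retained features is an arbitrary choice here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for the height-, intensity- and spectrum-derived
# features described above.
X, y = make_classification(n_samples=500, n_features=12,
                           n_informative=5, random_state=0)

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
ranking = np.argsort(rf.feature_importances_)[::-1]
keep = ranking[:5]          # retain the top-ranked features
print("selected feature indices:", keep)
```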

  18. Randomized controlled trials 5: Determining the sample size and power for clinical trials and cohort studies.

    Science.gov (United States)

    Greene, Tom

    2015-01-01

    Performing well-powered randomized controlled trials is of fundamental importance in clinical research. The goal of sample size calculations is to assure that statistical power is acceptable while maintaining a small probability of a type I error. This chapter overviews the fundamentals of sample size calculation for standard types of outcomes for two-group studies. It considers (1) the problems of determining the size of the treatment effect that the studies will be designed to detect, (2) the modifications to sample size calculations to account for loss to follow-up and nonadherence, (3) the options when initial calculations indicate that the feasible sample size is insufficient to provide adequate power, and (4) the implication of using multiple primary endpoints. Sample size estimates for longitudinal cohort studies must take account of confounding by baseline factors.

  19. Characterization of Electron Microscopes with Binary Pseudo-random Multilayer Test Samples

    Energy Technology Data Exchange (ETDEWEB)

    V Yashchuk; R Conley; E Anderson; S Barber; N Bouet; W McKinney; P Takacs; D Voronov

    2011-12-31

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested [1,2] and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  20. Characterization of electron microscopes with binary pseudo-random multilayer test samples

    Energy Technology Data Exchange (ETDEWEB)

    Yashchuk, Valeriy V., E-mail: VVYashchuk@lbl.gov [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Conley, Raymond [NSLS-II, Brookhaven National Laboratory, Upton, NY 11973 (United States); Anderson, Erik H. [Center for X-ray Optics, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Barber, Samuel K. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Bouet, Nathalie [NSLS-II, Brookhaven National Laboratory, Upton, NY 11973 (United States); McKinney, Wayne R. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Takacs, Peter Z. [Brookhaven National Laboratory, Upton, NY 11973 (United States); Voronov, Dmitriy L. [Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2011-09-01

    Verification of the reliability of metrology data from high quality X-ray optics requires that adequate methods for test and calibration of the instruments be developed. For such verification for optical surface profilometers in the spatial frequency domain, a modulation transfer function (MTF) calibration method based on binary pseudo-random (BPR) gratings and arrays has been suggested and proven to be an effective calibration method for a number of interferometric microscopes, a phase shifting Fizeau interferometer, and a scatterometer [5]. Here we describe the details of development of binary pseudo-random multilayer (BPRML) test samples suitable for characterization of scanning (SEM) and transmission (TEM) electron microscopes. We discuss the results of TEM measurements with the BPRML test samples fabricated from a WiSi2/Si multilayer coating with pseudo-randomly distributed layers. In particular, we demonstrate that significant information about the metrological reliability of the TEM measurements can be extracted even when the fundamental frequency of the BPRML sample is smaller than the Nyquist frequency of the measurements. The measurements demonstrate a number of problems related to the interpretation of the SEM and TEM data. Note that similar BPRML test samples can be used to characterize X-ray microscopes. Corresponding work with X-ray microscopes is in progress.

  1. Generation of Aptamers from A Primer-Free Randomized ssDNA Library Using Magnetic-Assisted Rapid Aptamer Selection

    Science.gov (United States)

    Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih

    2017-04-01

    Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra sequences may cause non-specific bindings, which potentially interfere with good binding for random sequences. The Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-stranded DNA aptamers. No repeat selection cycle is required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive protein (CRP) from a randomized ssDNA library containing no fixed sequences at the 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced and its binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, which indicated that a primer-free aptamer has high specificity toward targets. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.

  2. Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.

    Science.gov (United States)

    Sztepanacz, Jacqueline L; Blows, Mark W

    2017-07-01

    The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward, and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine if the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
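
    The Tracy-Widom centering and scaling can be checked empirically in the simplest case, the largest eigenvalue of a white Wishart matrix, using Johnstone's constants; this is a generic illustration of the scaling idea, not the genetic REML setting of the paper.

```python
import numpy as np

# Scaled largest eigenvalue of an n x p white Wishart matrix should
# approach the Tracy-Widom (TW1) law under Johnstone's centering
# mu and scaling sigma.
rng = np.random.default_rng(5)
n, p, reps = 200, 50, 2000
mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
sigma = (np.sqrt(n - 1) + np.sqrt(p)) * \
        (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)

stats = []
for _ in range(reps):
    X = rng.normal(size=(n, p))
    l1 = np.linalg.eigvalsh(X.T @ X)[-1]      # largest eigenvalue
    stats.append((l1 - mu) / sigma)

# TW1 has mean ~ -1.21 and standard deviation ~ 1.27.
print(np.mean(stats), np.std(stats))
```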

  3. On analysis-based two-step interpolation methods for randomly sampled seismic data

    Science.gov (United States)

    Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao

    2013-02-01

    Interpolating the missing traces of regularly or irregularly sampled seismic records is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform-domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms: the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm, derived from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm respectively. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
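
    The basic, unaccelerated POCS iteration that both of the paper's algorithms build on alternates a hard threshold in a transform domain with re-insertion of the observed traces. Below is a minimal Fourier-domain sketch on synthetic data (the linear threshold decay is an arbitrary schedule, and the paper's MATLAB package is not reproduced here).

```python
import numpy as np

def pocs_interpolate(data, mask, n_iter=100):
    """Basic POCS interpolation for randomly decimated data: alternate
    a hard threshold in the 2-D Fourier domain with re-insertion of
    the observed samples (a plain sketch; FPOCS/FISTA accelerate
    this loop)."""
    x = data * mask
    tmax = np.abs(np.fft.fft2(x)).max()
    for k in range(n_iter):
        spec = np.fft.fft2(x)
        tau = tmax * (1 - k / n_iter) * 0.99    # decaying threshold
        spec[np.abs(spec) < tau] = 0            # enforce sparsity
        x = np.real(np.fft.ifft2(spec))
        x = data * mask + x * (1 - mask)        # keep observed traces
    return x

# Synthetic example: a plane wave with half the traces randomly zeroed.
t, h = np.meshgrid(np.arange(128), np.arange(64), indexing='ij')
clean = np.sin(0.2 * t + 0.5 * h)
mask = (np.random.default_rng(6).random(64) < 0.5).astype(float)[None, :]
rec = pocs_interpolate(clean, np.broadcast_to(mask, clean.shape))
print("relative error:",
      np.linalg.norm(rec - clean) / np.linalg.norm(clean))
```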

  4. Hemodynamic and glucometabolic factors fail to predict renal function in a random population sample

    DEFF Research Database (Denmark)

    Pareek, M.; Nielsen, M.; Olesen, Thomas Bastholm

    2015-01-01

    Objective: To determine whether baseline hemodynamic and/or glucometabolic risk factors could predict renal function at follow-up, independently of baseline serum creatinine, in survivors from a random population sample. Design and method: We examined associations between baseline serum creatinine ... indices of beta-cell function (HOMA-2B), insulin sensitivity (HOMA-2S), and insulin resistance (HOMA-2IR), traditional cardiovascular risk factors (age, sex, smoking status, body mass index, diabetes mellitus, total serum cholesterol), and later renal function determined as serum cystatin C in 238 men ... and 7 women aged 38 to 49 years at the time of inclusion, using multivariable linear regression analysis (p-entry 0.05, p-removal 0.20). Study subjects came from a random population-based sample and were included 1974-1992, whilst the follow-up with cystatin C measurement was performed 2002...

  5. An inversion method based on random sampling for real-time MEG neuroimaging

    CERN Document Server

    Pascarella, Annalisa

    2016-01-01

    Magnetoencephalography (MEG) has gained great interest in neurorehabilitation training due to its high temporal resolution. The challenge is to localize the active regions of the brain in a fast and accurate way. In this paper we use an inversion method based on random spatial sampling to solve the real-time MEG inverse problem. Several numerical tests on synthetic but realistic data show that the method takes just a few hundredths of a second on a laptop to produce an accurate map of the electric activity inside the brain. Moreover, it requires very little memory storage. For these reasons the random sampling method is particularly attractive in real-time MEG applications.

  6. Modified Exponential Type Estimator for Population Mean Using Auxiliary Variables in Stratified Random Sampling

    OpenAIRE

    Özel, Gamze

    2015-01-01

    In this paper, a new exponential-type estimator of the population mean using auxiliary variable information is developed for stratified random sampling. In order to evaluate the efficiency of the introduced estimator, we first review some estimators and study the optimum property of the suggested strategy. To judge the merits of the suggested class of estimators over others under the optimal condition, a simulation study and real data applications are conducted. The results show that the introduc...

  7. Effectiveness of hand hygiene education among a random sample of women from the community

    OpenAIRE

    Ubheeram, J.; Biranjia-Hurdoyal, S.D.

    2017-01-01

    Objective. The effectiveness of hand hygiene education was investigated by studying the hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. Methods. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. Results. A better hand hygiene awareness score was significantly associated with age, scarce bacterial gro...

  8. Control Capacity and A Random Sampling Method in Exploring Controllability of Complex Networks

    OpenAIRE

    Jia, Tao; Barabási, Albert-László

    2013-01-01

    Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participations. Here we introduce control capacity quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. Thi...

  9. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 × 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying that transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by the standard variance estimators used to compute confidence intervals. The true variance of the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators that corrected for inter-transect correlation (ν8 and νW) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν2 and ν3) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
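
    The headline comparison can be reproduced in miniature: simulate a clustered point population, survey it with one-start aligned systematic transects versus simple random transects, and compare the variances of the replicated survey means. This is a toy illustration on a wrapped domain, not the paper's six test populations; the ratio is typically below one for clustered populations.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy clustered population: organisms aggregated around patch centres,
# wrapped onto a 0-100 strip divided into 100 unit-width transects.
centers = rng.uniform(0, 100, size=(30, 2))
pts = np.vstack([c + rng.normal(scale=1.5, size=(rng.poisson(40), 2))
                 for c in centers])
cols = np.floor(pts[:, 0]).astype(int) % 100
counts = np.bincount(cols, minlength=100)     # count per transect

sys_means, ran_means = [], []
for _ in range(2000):
    start = rng.integers(5)                    # one-start aligned design
    sys_means.append(counts[start::5].mean())  # 20 systematic transects
    ran_means.append(counts[rng.choice(100, 20, replace=False)].mean())

# A ratio below 1 indicates the systematic design is more precise here.
print("var(systematic) / var(random) =",
      np.var(sys_means) / np.var(ran_means))
```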

  10. Determining optimal sample sizes for multi-stage randomized clinical trials using value of information methods.

    Science.gov (United States)

    Willan, Andrew; Kowgier, Matthew

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the value of the information gained from the results and the cost of doing the trial. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.

  11. Sample size calculations for pilot randomized trials: a confidence interval approach.

    Science.gov (United States)

    Cocks, Kim; Torgerson, David J

    2013-02-01

    To describe a method using confidence intervals (CIs) to estimate the sample size for a pilot randomized trial. Using one-sided CIs and the estimated effect size that would be sought in a large trial, we calculated the sample size needed for pilot trials. Using an 80% one-sided CI, we estimated that a pilot trial should have at least 9% of the sample size of the main planned trial. Using the estimated effect size difference for the main trial and using a one-sided CI, this allows us to calculate a sample size for a pilot trial, which will make its results more useful than at present.
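
    As a quick numerical illustration of the rule of thumb (the 9% figure is from the paper; the normal-approximation sizing of the main trial is a standard textbook formula and the chosen effect size is arbitrary):

        import math
        from statistics import NormalDist

        def main_trial_total_n(effect_size, alpha=0.05, power=0.80):
            """Two-arm total sample size for a standardized effect (normal approximation)."""
            z_a = NormalDist().inv_cdf(1 - alpha / 2)
            z_b = NormalDist().inv_cdf(power)
            per_arm = 2 * (z_a + z_b) ** 2 / effect_size ** 2
            return 2 * math.ceil(per_arm)

        n_main = main_trial_total_n(effect_size=0.3)   # planned main trial
        n_pilot = math.ceil(0.09 * n_main)             # pilot >= 9% of main trial
        print(f"main trial total: {n_main}, suggested pilot minimum: {n_pilot}")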

  12. An efficient method of wavelength interval selection based on random frog for multivariate spectral calibration

    Science.gov (United States)

    Yun, Yong-Huan; Li, Hong-Dong; Wood, Leslie R. E.; Fan, Wei; Wang, Jia-Jun; Cao, Dong-Sheng; Xu, Qing-Song; Liang, Yi-Zeng

    2013-07-01

    Wavelength selection is a critical step for producing better prediction performance when applied to spectral data. Considering the fact that vibrational and rotational spectra have continuous spectral bands, we propose a novel method of wavelength interval selection based on random frog, called interval random frog (iRF). To obtain all the possible continuous intervals, spectra are first divided into intervals by a moving window of fixed width over the whole spectrum. These overlapping intervals are ranked by applying random frog coupled with PLS, and the optimal ones are chosen. This method has been applied to two near-infrared spectral datasets, displaying higher efficiency in wavelength interval selection than other methods. The source code of iRF can be freely downloaded for academic research at the website: http://code.google.com/p/multivariate-calibration/downloads/list.
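
    A rough sketch of the windowing step on synthetic data is shown below. It enumerates all overlapping fixed-width intervals and ranks them by cross-validated PLS error; the plain CV score is only a stand-in for the random frog ranking, which is more involved, and all dimensions here are invented.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 200))            # 60 spectra, 200 wavelengths
        y = X[:, 90:110].sum(axis=1) + 0.1 * rng.standard_normal(60)  # informative band

        width, step = 20, 5
        scores = []
        for start in range(0, X.shape[1] - width + 1, step):
            window = slice(start, start + width)
            mse = -cross_val_score(PLSRegression(n_components=3), X[:, window], y,
                                   cv=5, scoring="neg_mean_squared_error").mean()
            scores.append((mse, start))

        # The best-ranked intervals should bracket the informative 90-110 band.
        for mse, start in sorted(scores)[:5]:
            print(f"interval [{start}, {start + width}): CV MSE = {mse:.3f}")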

  13. Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae

    Science.gov (United States)

    Huillet, Thierry E.

    2017-07-01

    We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.

  14. Estimating the Size of a Large Network and its Communities from a Random Sample.

    Science.gov (United States)

    Chen, Lin; Karbasi, Amin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that accurately estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhaustive set of experiments to study the effects of sample size, K, and SBM model parameters on the accuracy of the estimates. The experimental results also demonstrate that PULSE significantly outperforms a widely-used method called the network scale-up estimator in a wide variety of scenarios.
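
    The observation model lends itself to a much simpler moment-based sketch than PULSE itself (which the following is not): for a uniform random sample W of size n, a sampled vertex expects a fraction (n-1)/(N-1) of its edges to fall inside W, so comparing observed total degrees with within-sample degrees yields an estimate of N. The graph below is a synthetic sparse random graph rather than an SBM, but the same identity applies.

        import itertools
        import numpy as np

        rng = np.random.default_rng(2)
        N, p = 2000, 0.004
        adj = {v: set() for v in range(N)}            # sparse Erdos-Renyi graph
        for u, v in itertools.combinations(range(N), 2):
            if rng.random() < p:
                adj[u].add(v)
                adj[v].add(u)

        n = 200
        W = set(rng.choice(N, size=n, replace=False).tolist())
        total_deg = sum(len(adj[v]) for v in W)       # observed total degrees
        within_deg = sum(len(adj[v] & W) for v in W)  # edge endpoints inside sample

        # E[within_deg] = total_deg * (n - 1) / (N - 1), solved for N:
        N_hat = 1 + (n - 1) * total_deg / within_deg
        print(f"true N = {N}, estimated N = {N_hat:.0f}")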

  15. On the selection of sampling points for myocardial T1 mapping.

    Science.gov (United States)

    Akçakaya, Mehmet; Weingärtner, Sebastian; Roujol, Sébastien; Nezafat, Reza

    2015-05-01

    To provide a method for the optimal selection of sampling points for myocardial T1 mapping, and to evaluate how this selection affects the precision. The Cramér-Rao lower bound on the variance of the unbiased estimator was derived for the sampling of the longitudinal magnetization curve, as a function of T1 , signal-to-noise ratio, and noise mean. The bound was then minimized numerically over a search space of possible sampling points to find the optimal selection of sampling points. Numerical simulations were carried out for a saturation recovery-based T1 mapping sequence, comparing the proposed point selection method to a uniform distribution of sampling points along the recovery curve for various T1 ranges of interest, as well as number of sampling points. Phantom imaging was performed to replicate the scenarios in numerical simulations. In vivo imaging for myocardial T1 mapping was also performed in healthy subjects. Numerical simulations show that the precision can be improved by 13-25% by selecting the sampling points according to the target T1 values of interest. Results of the phantom imaging were not significantly different from the theoretical predictions for different sampling strategies, signal-to-noise ratio and number of sampling points. In vivo imaging showed precision can be improved in myocardial T1 mapping using the proposed point selection method as predicted by theory. The framework presented can be used to select the sampling points to improve the precision without penalties on accuracy or scan time.
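
    In the simplest single-parameter case the idea reduces to minimizing a Cramér-Rao bound over candidate point sets. The Python sketch below (illustrative model, time grid, noise level, and T1 range; not the paper's full derivation, which also handles additional nuisance parameters) does this by brute force for a saturation-recovery curve S(t) = A(1 - exp(-t/T1)).

        import itertools
        import numpy as np

        def crlb_T1(times, T1, A=1.0, sigma=0.02):
            """CRLB on var(T1_hat) for Gaussian noise on the recovery curve."""
            dS_dT1 = A * np.exp(-times / T1) * times / T1**2   # sensitivity to T1
            fisher = np.sum(dS_dT1**2) / sigma**2
            return 1.0 / fisher

        candidates = np.linspace(0.1, 5.0, 25)     # candidate sampling times (s)
        targets = np.linspace(1.0, 2.0, 5)         # T1 range of interest (s)

        best = min(
            itertools.combinations(candidates, 4), # choose 4 sampling points
            key=lambda pts: np.mean([crlb_T1(np.array(pts), T1) for T1 in targets]),
        )
        print("optimal sampling times (s):", np.round(best, 2))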

  16. Selecting Sample Preparation Workflows for Mass Spectrometry-Based Proteomic and Phosphoproteomic Analysis of Patient Samples with Acute Myeloid Leukemia

    Directory of Open Access Journals (Sweden)

    Maria Hernandez-Valladares

    2016-08-01

    Global mass spectrometry (MS)-based proteomic and phosphoproteomic studies of acute myeloid leukemia (AML) biomarkers represent a powerful strategy to identify and confirm proteins and their phosphorylated modifications that could be applied in diagnosis and prognosis, as a support for individual treatment regimens and selection of patients for bone marrow transplant. MS-based studies require optimal and reproducible workflows that allow a satisfactory coverage of the proteome and its modifications. Preparation of samples for global MS analysis is a crucial step and it usually requires method testing, tuning and optimization. Different proteomic workflows that have been used to prepare AML patient samples for global MS analysis usually include a standard protein in-solution digestion procedure with a urea-based lysis buffer. The enrichment of phosphopeptides from AML patient samples has previously been carried out either with immobilized metal affinity chromatography (IMAC) or metal oxide affinity chromatography (MOAC). We have recently tested several methods of sample preparation for MS analysis of the AML proteome and phosphoproteome and introduced filter-aided sample preparation (FASP) as a superior methodology for the sensitive and reproducible generation of peptides from patient samples. FASP-prepared peptides can be further fractionated or IMAC-enriched for proteome or phosphoproteome analyses. Herein, we will review both in-solution and FASP-based sample preparation workflows and encourage the use of the latter for the highest protein and phosphorylation coverage and reproducibility.

  17. RESULTS OF THE SELECTION OF BREEDING SAMPLES OF CARROT BASED ON BIOCHEMICAL COMPOSITION

    OpenAIRE

    V. K. Cherkasova; O. N. Shabetya

    2014-01-01

    Twelve carrot samples were analyzed for biochemical components in their roots. Five genotypes with high content of vitamin C, β-carotene, and total sugar were selected as genetic sources of high biochemical components.

  18. Empirical aspects about Heckman Procedure Application: Is there sample selection bias in the Brazilian Industry

    OpenAIRE

    Flávio Kaue Fiuza-Moura; Katy Maia

    2015-01-01

    There are several labor market studies whose main goal is to analyze the probability of employment and the structure of wage determination and, for empirical purposes, most of these studies deploy the Heckman procedure for detecting and correcting sample selection bias. However, few Brazilian studies focus on this procedure's applicability, especially concerning specific industries. This paper aims to approach these issues by testing the existence of sample selection bias in Brazilia...

  19. Nicotine therapy sampling to induce quit attempts among smokers unmotivated to quit: a randomized clinical trial.

    Science.gov (United States)

    Carpenter, Matthew J; Hughes, John R; Gray, Kevin M; Wahlquist, Amy E; Saladin, Michael E; Alberg, Anthony J

    2011-11-28

    Rates of smoking cessation have not changed in a decade, accentuating the need for novel approaches to prompt quit attempts. Within a nationwide randomized clinical trial (N = 849) to induce further quit attempts and cessation, smokers currently unmotivated to quit were randomized to a practice quit attempt (PQA) alone or to nicotine replacement therapy (hereafter referred to as nicotine therapy) sampling within the context of a PQA. Following a 6-week intervention period, participants were followed up for 6 months to assess outcomes. The PQA intervention was designed to increase motivation, confidence, and coping skills. The combination of a PQA plus nicotine therapy sampling added samples of nicotine lozenges to enhance attitudes toward pharmacotherapy and to promote the use of additional cessation resources. Primary outcomes included the incidence of any ever occurring self-defined quit attempt and 24-hour quit attempt. Secondary measures included 7-day point prevalence abstinence at any time during the study (ie, floating abstinence) and at the final follow-up assessment. Compared with PQA intervention, nicotine therapy sampling was associated with a significantly higher incidence of any quit attempt (49% vs 40%; relative risk [RR], 1.2; 95% CI, 1.1-1.4) and any 24-hour quit attempt (43% vs 34%; 1.3; 1.1-1.5). Nicotine therapy sampling was marginally more likely to promote floating abstinence (19% vs 15%; RR, 1.3; 95% CI, 1.0-1.7); 6-month point prevalence abstinence rates were no different between groups (16% vs 14%; 1.2; 0.9-1.6). Nicotine therapy sampling during a PQA represents a novel strategy to motivate smokers to make a quit attempt. clinicaltrials.gov Identifier: NCT00706979.

  20. Location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling

    Directory of Open Access Journals (Sweden)

    Alireza Goli

    2015-09-01

    Distribution and optimum allocation of emergency resources are among the most important tasks to be accomplished during a crisis. When a natural disaster such as an earthquake or flood takes place, it is necessary to deliver rescue efforts as quickly as possible; it is therefore important to find the optimum location and distribution of emergency relief resources. When a natural disaster occurs, it may not be possible to reach some damaged areas. In this paper, location and multi-depot vehicle routing for emergency vehicles using tour coverage and random sampling is investigated. In this study, there is no need to visit all the places, and some demand points receive their needs from the nearest possible location. The proposed method is tested on randomly generated instances of different sizes. The preliminary results indicate that the proposed method was capable of reaching desirable solutions in a reasonable amount of time.

  1. ESTIMATION OF FINITE POPULATION MEAN USING RANDOM NON–RESPONSE IN SURVEY SAMPLING

    Directory of Open Access Journals (Sweden)

    Housila P. Singh

    2010-12-01

    This paper considers the problem of estimating the population mean under three different situations of random non-response envisaged by Singh et al. (2000). Some ratio and product type estimators have been proposed and their properties are studied under the assumption that the number of sampling units on which information cannot be obtained owing to random non-response follows some distribution. The suggested estimators are compared with the usual ratio and product estimators. An empirical study is carried out to show the performance of the suggested estimators over the usual unbiased, ratio and product estimators. A generalized version of the proposed ratio and product estimators is also given.
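
    For readers unfamiliar with the baseline estimators being extended, the sketch below simulates the classical ratio and product estimators under a crudely simulated random non-response mechanism (all distributions and the response rate are invented; the paper's estimators model the non-response distribution explicitly).

        import numpy as np

        rng = np.random.default_rng(3)
        N = 5000
        x = rng.gamma(5.0, 2.0, N)              # auxiliary variable, known mean
        y = 3.0 * x + rng.normal(0, 4.0, N)     # study variable, corr(x, y) > 0
        X_bar = x.mean()                        # known population mean of x

        idx = rng.choice(N, size=400, replace=False)    # simple random sample
        respond = rng.random(400) > 0.2                 # random non-response
        xs, ys = x[idx][respond], y[idx][respond]

        y_bar, x_bar = ys.mean(), xs.mean()
        ratio_est = y_bar * (X_bar / x_bar)     # efficient when corr(x, y) > 0
        product_est = y_bar * (x_bar / X_bar)   # efficient when corr(x, y) < 0
        print(f"true mean {y.mean():.2f}  sample mean {y_bar:.2f}  "
              f"ratio {ratio_est:.2f}  product {product_est:.2f}")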

  2. Sensitivity and selectivity of cultivation methods to recovery a specific Salmonella serogroup from hatchery plenum samples

    Science.gov (United States)

    Studies have shown that Salmonella serotypes exhibit different growth characteristics in the same enrichment (selective or non-selective) and this can cause certain serotypes like S. Enteritidis to go undetected in a sample. The objectives of this study were to evaluate the serogroup diversity reco...

  3. Two-year Randomized Clinical Trial Of Self-etching Adhesives And Selective Enamel Etching

    OpenAIRE

    Pena, CE; Rodrigues, JA; Ely, C; Giannini, M; Reis, AF

    2016-01-01

    Objective: The aim of this randomized, controlled prospective clinical trial was to evaluate the clinical effectiveness of restoring noncarious cervical lesions with two self-etching adhesive systems applied with or without selective enamel etching. Methods: A one-step self-etching adhesive (Xeno V+) and a two-step self-etching system (Clearfil SE Bond) were used. The effectiveness of phosphoric acid selective etching of enamel margins was also evaluated. Fifty-six cavities were restored with...

  4. Chi-Squared Test of Fit and Sample Size-A Comparison between a Random Sample Approach and a Chi-Square Value Adjustment Method.

    Science.gov (United States)

    Bergh, Daniel

    2015-01-01

    Chi-square statistics are commonly used for tests of fit of measurement models. Chi-square is also sensitive to sample size, which is why several approaches to handle large samples in test of fit analysis have been developed. One strategy to handle the sample size problem may be to adjust the sample size in the analysis of fit. An alternative is to adopt a random sample approach. The purpose of this study was to analyze and to compare these two strategies using simulated data. Given an original sample size of 21,000, for reductions of sample sizes down to the order of 5,000 the adjusted sample size function works as well as the random sample approach. In contrast, when applying adjustments to sample sizes of lower order the adjustment function is less effective at approximating the chi-square value for an actual random sample of the relevant size. Hence, the fit is exaggerated and misfit underestimated using the adjusted sample size function. Although there are big differences in chi-square values between the two approaches at lower sample sizes, the inferences based on the p-values may be the same.

  5. Randomized controlled trial on timing and number of sampling for bile aspiration cytology.

    Science.gov (United States)

    Tsuchiya, Tomonori; Yokoyama, Yukihiro; Ebata, Tomoki; Igami, Tsuyoshi; Sugawara, Gen; Kato, Katsuyuki; Shimoyama, Yoshie; Nagino, Masato

    2014-06-01

    The issue of the timing and number of bile samplings for exfoliative bile cytology is still unsettled. A total of 100 patients with cholangiocarcinoma undergoing resection after external biliary drainage were randomized into two groups: a 2-day group, where bile was sampled five times per day for 2 days; and a 10-day group, where bile was sampled once per day for 10 days (registered University Hospital Medical Information Network/ID 000005983). The outcome of 87 patients who underwent laparotomy was analyzed, 44 in the 2-day group and 43 in the 10-day group. There were no significant differences in patient characteristics between the two groups. Positivity after one sampling session was significantly lower in the 2-day group than in the 10-day group (17.0 ± 3.7% vs. 20.7 ± 3.5%, P = 0.034). However, the cumulative positivity curves were similar and overlapped each other in both groups. The final cumulative positivity by the 10th sampling session was 52.3% in the 2-day group and 51.2% in the 10-day group. We observed a small increase in cumulative positivity after the 5th or 6th session in both groups. Bile cytology positivity is thus unlikely to be affected by the timing of sampling.

  6. Estimating the Size of a Large Network and its Communities from a Random Sample

    CERN Document Server

    Chen, Lin; Crawford, Forrest W

    2016-01-01

    Most real-world networks are too large to be measured or studied directly and there is substantial interest in estimating global network properties from smaller sub-samples. One of the most important global properties is the number of vertices/nodes in the network. Estimating the number of vertices in a large network is a major challenge in computer science, epidemiology, demography, and intelligence analysis. In this paper we consider a population random graph G = (V, E) from the stochastic block model (SBM) with K communities/blocks. A sample is obtained by randomly choosing a subset W ⊆ V and letting G(W) be the induced subgraph in G of the vertices in W. In addition to G(W), we observe the total degree of each sampled vertex and its block membership. Given this partial information, we propose an efficient PopULation Size Estimation algorithm, called PULSE, that correctly estimates the size of the whole population as well as the size of each community. To support our theoretical analysis, we perform an exhausti...

  7. Studies on spectral analysis of randomly sampled signals: Application to laser velocimetry data

    Science.gov (United States)

    Sree, David

    1992-01-01

    Spectral analysis is very useful in determining the frequency characteristics of many turbulent flows, for example, vortex flows, tail buffeting, and other pulsating flows. It is also used for obtaining turbulence spectra from which the time and length scales associated with the turbulence structure can be estimated. These estimates, in turn, can be helpful for validation of theoretical/numerical flow turbulence models. Laser velocimetry (LV) is being extensively used in the experimental investigation of different types of flows, because of its inherent advantages: nonintrusive probing, high frequency response, no calibration requirements, etc. Typically, the output of an individual realization laser velocimeter is a set of randomly sampled velocity data. Spectral analysis of such data requires special techniques to obtain reliable estimates of correlation and power spectral density functions that describe the flow characteristics. FORTRAN codes for obtaining the autocorrelation and power spectral density estimates using the correlation-based slotting technique were developed. Extensive studies have been conducted on simulated first-order spectrum and sine signals to improve the spectral estimates. A first-order spectrum was chosen because it represents the characteristics of a typical one-dimensional turbulence spectrum. Digital prefiltering techniques to improve the spectral estimates from randomly sampled data were applied. Studies show that the frequency range of the spectral estimates can be extended up to about five times the mean sampling rate.
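
    The slotting technique itself is compact enough to sketch. The following Python fragment (synthetic randomly sampled sine signal; slot width and record length are arbitrary) accumulates lagged products into discrete lag slots to estimate the autocorrelation from irregularly spaced samples.

        import numpy as np

        rng = np.random.default_rng(4)
        t = np.sort(rng.uniform(0.0, 50.0, 2000))     # random sampling times (s)
        u = np.sin(2 * np.pi * 0.5 * t) + 0.3 * rng.standard_normal(t.size)
        u -= u.mean()

        dtau, n_slots = 0.05, 100                     # slot width (s) and count
        sums, counts = np.zeros(n_slots), np.zeros(n_slots)
        for i in range(t.size):
            # Pair sample i with later samples inside the slotted lag range.
            j = np.searchsorted(t, t[i] + dtau * n_slots)
            lags = t[i + 1:j] - t[i]
            slots = np.minimum((lags / dtau).astype(int), n_slots - 1)
            np.add.at(sums, slots, u[i] * u[i + 1:j])
            np.add.at(counts, slots, 1)

        acf = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0) / u.var()
        # For a 0.5 Hz sine, the autocorrelation near lag 1 s should approach -1,
        # scaled down by the added noise.
        print("autocorrelation at ~1 s lag:", round(float(acf[int(1.0 / dtau)]), 2))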

  8. Prevalence and correlates of problematic smartphone use in a large random sample of Chinese undergraduates.

    Science.gov (United States)

    Long, Jiang; Liu, Tie-Qiao; Liao, Yan-Hui; Qi, Chang; He, Hao-Yu; Chen, Shu-Bao; Billieux, Joël

    2016-11-17

    Smartphones are becoming a daily necessity for most undergraduates in Mainland China. Because the present scenario of problematic smartphone use (PSU) is largely unexplored, in the current study we aimed to estimate the prevalence of PSU and to screen suitable predictors for PSU among Chinese undergraduates in the framework of the stress-coping theory. A sample of 1062 undergraduate smartphone users was recruited by means of the stratified cluster random sampling strategy between April and May 2015. The Problematic Cellular Phone Use Questionnaire was used to identify PSU. We evaluated five candidate risk factors for PSU by using logistic regression analysis while controlling for demographic characteristics and specific features of smartphone use. The prevalence of PSU among Chinese undergraduates was estimated to be 21.3%. The risk factors for PSU were majoring in the humanities, high monthly income from the family (≥1500 RMB), serious emotional symptoms, high perceived stress, and perfectionism-related factors (high doubts about actions, high parental expectations). PSU among undergraduates appears to be ubiquitous and thus constitutes a public health issue in Mainland China. Although further longitudinal studies are required to test whether PSU is a transient phenomenon or a chronic and progressive condition, our study successfully identified socio-demographic and psychological risk factors for PSU. These results, obtained from a random and thus representative sample of undergraduates, open up new avenues in terms of prevention and regulation policies.

  9. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    Science.gov (United States)

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed

  10. Training Program Efficacy in Developing Health Life Skills among Sample Selected from Kindergarten Children

    Science.gov (United States)

    Al Mohtadi, Reham Mohammad; Al Zboon, Habis Sa'ad

    2017-01-01

    This study aimed to identify the efficacy of a training program in developing health life skills among a sample selected from kindergarten children. The study sample consisted of 60 children of both genders, aged 5-6 years. We applied the pre and post dimensions of a health life skills scale consisting of 28…

  11. Optimal Selection of the Sampling Interval for Estimation of Modal Parameters by an ARMA- Model

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning

    1993-01-01

    Optimal selection of the sampling interval for estimation of the modal parameters by an ARMA-model for a white noise loaded structure modelled as a single-degree-of-freedom linear mechanical system is considered. An analytical solution for an optimal uniform sampling interval, which is optimal...

  12. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Fitzek, Frank

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...

  13. Protein/creatinine ratio on random urine samples for prediction of proteinuria in preeclampsia.

    Science.gov (United States)

    Roudsari, F Vahid; Ayati, S; Ayatollahi, H; Shakeri, M T

    2012-01-01

    To evaluate the protein/creatinine ratio on random urine samples for prediction of proteinuria in preeclampsia. This study was performed on 150 pregnant women hospitalized with preeclampsia in Ghaem Hospital during 2006. First, a random urine sample was collected from each patient to determine the protein/creatinine ratio. Then, a 24-hour urine collection was analyzed for the evaluation of proteinuria. Statistical analysis was performed with SPSS software. A total of 150 patients entered the study. There was a significant relation between the 24-hour urine protein and the protein/creatinine ratio (r = 0.659, P < 0.001). Since measurement of the protein/creatinine ratio is accurate, reliable, and cost-effective, it can replace the measurement of 24-hour urine protein.

  14. [Acupuncture and moxibustion for peripheral facial palsy at different stages: multi-central large-sample randomized controlled trial].

    Science.gov (United States)

    Li, Ying; Li, Yan; Liu, Li-an; Zhao, Ling; Hu, Ka-ming; Wu, Xi; Chen, Xiao-qin; Li, Gui-ping; Mang, Ling-ling; Qi, Qi-hua

    2011-04-01

    To explore the best intervention time of acupuncture and moxibustion for peripheral facial palsy (Bell's palsy) and the clinically advantageous program of selective treatment with acupuncture and moxibustion. A multicenter, large-sample randomized controlled trial was carried out. Nine hundred cases of Bell's palsy were randomized into 5 treatment groups: a selective filiform needle group (group A), a selective acupuncture + moxibustion group (group B), a selective acupuncture + electroacupuncture group (group C), a selective acupuncture + line-up needling on muscle region of meridian group (group D) and a non-selective filiform needle group (group E). Four sessions of treatment were required in each group. At enrollment, after 4 sessions of treatment, and at 1 month and 3 months of follow-up after treatment, the House-Brackmann Scale, the Facial Disability Index Scale and the Degree of Facial Nerve Paralysis (NFNP) were adopted for efficacy assessment, and systematic efficacy analyses were performed with respect to intervention time and nerve localization of the disease. The curative rates of intervention in the acute stage and the resting stage were 50.1% (223/445) and 52.1% (162/311), respectively, both superior to intervention in the recovery stage (25.9%, 35/135). There were no statistically significant differences in efficacy among the 5 treatment programs at the same stage (all P > 0.05). The efficacy of intervention of group A and group E in the acute stage was superior to that in the recovery stage (both P < 0.01). In group D, the difference between the efficacy on localizations above the chorda tympani nerve and on those below it was statistically significant (P < 0.01), with the efficacy on localizations below the chorda tympani nerve being superior. The best intervention time for the treatment of Bell's palsy is the acute and resting stages, meaning 1 to 3 weeks after onset. All of the 5 treatment programs are advantageous

  15. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution as one of the most serious forms of environmental pollution poses a huge threat to human life. Air pollution leads to environmental instability, and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model to estimate carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
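
    A compact sklearn rendition of this kind of pipeline on synthetic data (the Tehran measurements are not reproduced here, and all hyperparameters are arbitrary) looks as follows: rank predictors with a random forest, keep the top-ranked ones, then fit a multilayer perceptron on the reduced feature set.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(5)
        X = rng.standard_normal((500, 20))      # 20 candidate predictors
        y = 2 * X[:, 0] - 3 * X[:, 3] + X[:, 7] + 0.5 * rng.standard_normal(500)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        top = np.argsort(rf.feature_importances_)[::-1][:5]   # keep 5 best features

        mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
        mlp.fit(X_tr[:, top], y_tr)
        print("selected features:", sorted(top.tolist()))
        print("test R^2:", round(mlp.score(X_te[:, top], y_te), 3))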

  16. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

    Air pollution as one of the most serious forms of environmental pollution poses a huge threat to human life. Air pollution leads to environmental instability, and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations are able to improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model to estimate carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  17. LOD score exclusion analyses for candidate QTLs using random population samples.

    Science.gov (United States)

    Deng, Hong-Wen

    2003-11-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach for exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analyzed and if an LOD score is ≤ -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of Vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.

  18. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Due to the complexity of the system and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and has good compatibility; it avoids the difficulty of effectively fusing high-conflict group decision-making information and the large information loss after fusion. Original expert judgments are retained rather objectively throughout the processing procedure. Constructing the cumulative probability function and the random sampling process require no human intervention or judgment, and the method can be implemented easily by computer programs, giving it a clear advantage in the evaluation of fairly large index systems.

  19. Uncertainty Of Stream Nutrient Transport Estimates Using Random Sampling Of Storm Events From High Resolution Water Quality And Discharge Data

    Science.gov (United States)

    Scholefield, P. A.; Arnscheidt, J.; Jordan, P.; Beven, K.; Heathwaite, L.

    2007-12-01

    The uncertainties associated with stream nutrient transport estimates are frequently overlooked and the sampling strategy is rarely if ever investigated. Indeed, the impact of sampling strategy and estimation method on the bias and precision of stream phosphorus (P) transport calculations is little understood despite the use of such values in the calibration and testing of models of phosphorus transport. The objectives of this research were to investigate the variability and uncertainty in the estimates of total phosphorus transfers at an intensively monitored agricultural catchment. The Oona Water, which is located in the Irish border region, is part of a long term monitoring program focusing on water quality. The Oona Water is a rural river catchment with grassland agriculture and scattered dwelling houses and has been monitored for total phosphorus (TP) at 10 min resolution for several years (Jordan et al, 2007). Concurrent sensitive measurements of discharge are also collected. The water quality and discharge data were provided at 1 hour resolution (averaged) and this meant that a robust estimate of the annual flow weighted concentration could be obtained by simple interpolation between points. A two-strata approach (Kronvang and Bruhn, 1996) was used to estimate flow weighted concentrations using randomly sampled storm events from the 400 identified within the time series and also base flow concentrations. Using a random stratified sampling approach for the selection of events, series ranging from 10 through to the full 400 were used, each time generating a flow weighted mean using a load-discharge relationship identified through log-log regression and Monte Carlo simulation. These values were then compared to the observed total phosphorus concentration for the catchment. Analysis of these results shows the impact of sampling strategy, the inherent bias in any estimate of phosphorus concentrations and the uncertainty associated with such estimates. The
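
    The effect of sampling only a subset of storm events can be sketched with a toy version of the load-discharge regression (synthetic discharge and total phosphorus series; the two-strata details and baseflow handling of the actual study are omitted):

        import numpy as np

        rng = np.random.default_rng(9)
        n_events = 400
        q = rng.lognormal(0.0, 0.8, n_events)                  # event discharge
        c = 0.05 * q**0.4 * rng.lognormal(0.0, 0.3, n_events)  # TP concentration

        def flow_weighted_mean(idx):
            """Fit log(c) ~ log(q) on sampled events, predict over all events."""
            slope, intercept = np.polyfit(np.log(q[idx]), np.log(c[idx]), 1)
            c_hat = np.exp(intercept) * q**slope
            return np.sum(c_hat * q) / np.sum(q)

        true_fwm = np.sum(c * q) / np.sum(q)
        for k in (10, 50, 200, 400):
            est = [flow_weighted_mean(rng.choice(n_events, k, replace=False))
                   for _ in range(500)]
            print(f"k={k:3d}: mean={np.mean(est):.4f}  sd={np.std(est):.4f}  "
                  f"(true {true_fwm:.4f})")

    Shrinking the number of sampled events widens the spread of the estimates, which is exactly the sampling-strategy uncertainty the study quantifies.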

  20. Presence of psychoactive substances in oral fluid from randomly selected drivers in Denmark

    DEFF Research Database (Denmark)

    Simonsen, K. Wiese; Steentoft, A.; Hels, Tove

    2012-01-01

    This roadside study is the Danish part of the EU-project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season... of narcotic drugs. It can be concluded that driving under the influence of drugs is as serious a road safety problem as drunk driving.

  1. Presence of psychoactive substances in oral fluid from randomly selected drivers in Denmark

    DEFF Research Database (Denmark)

    Simonsen, Kirsten Wiese; Steentoft, Anni; Hels, Tove

    2012-01-01

    This roadside study is the Danish part of the EU-project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season... It can be concluded that driving under the influence of drugs is as serious a road safety problem as drunk driving.

  2. Not too big, not too small: a goldilocks approach to sample size selection.

    Science.gov (United States)

    Broglio, Kristine R; Connor, Jason T; Berry, Scott M

    2014-01-01

    We present a Bayesian adaptive design for a confirmatory trial to select a trial's sample size based on accumulating data. During accrual, frequent sample size selection analyses are made and predictive probabilities are used to determine whether the current sample size is sufficient or whether continuing accrual would be futile. The algorithm explicitly accounts for complete follow-up of all patients before the primary analysis is conducted. We refer to this as a Goldilocks trial design, as it is constantly asking the question, "Is the sample size too big, too small, or just right?" We describe the adaptive sample size algorithm, describe how the design parameters should be chosen, and show examples for dichotomous and time-to-event endpoints.
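
    The engine of such a design is the predictive probability computed at each interim look. A minimal sketch for a dichotomous endpoint (illustrative Beta(1, 1) prior, control rate, and success threshold; not the authors' algorithm) is:

        import numpy as np
        from scipy.stats import beta

        rng = np.random.default_rng(6)

        def predictive_prob_success(successes, n_now, n_max, p_control=0.30,
                                    n_sims=10_000, threshold=0.975):
            """P(final analysis declares success) given the interim data."""
            remaining = n_max - n_now
            wins = 0
            for _ in range(n_sims):
                p = rng.beta(1 + successes, 1 + n_now - successes)  # posterior draw
                future = rng.binomial(remaining, p)                 # imputed accrual
                a = 1 + successes + future
                b = 1 + n_max - successes - future
                # Success: posterior P(rate > p_control) exceeds the threshold.
                wins += beta.sf(p_control, a, b) > threshold
            return wins / n_sims

        pp = predictive_prob_success(successes=28, n_now=60, n_max=120)
        print(f"predictive probability of success at n_max: {pp:.2f}")

    A high value suggests the current sample size is already sufficient, while a very low value flags futility, so accrual can stop early in either case.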

  3. Empirical aspects about Heckman Procedure Application: Is there sample selection bias in the Brazilian Industry

    Directory of Open Access Journals (Sweden)

    Flávio Kaue Fiuza-Moura

    2015-12-01

    There are several labor market studies whose main goal is to analyze the probability of employment and the structure of wage determination and, for empirical purposes, most of these studies deploy the Heckman procedure for detecting and correcting sample selection bias. However, few Brazilian studies focus on this procedure's applicability, especially concerning specific industries. This paper aims to approach these issues by testing the existence of sample selection bias in the Brazilian manufacturing industry and analyzing the impact of the bias correction procedure on the estimated coefficients of OLS Mincer equations. We found sample selection bias only in manufacturing segments whose average wages are lower than the market average and only in groups of workers whose average wage level is below the market average (women, especially black women). The analysis and comparison of Mincer equations with and without Heckman's sample selection bias correction showed that the estimated coefficients for the wage differential of male over female workers and of urban over non-urban workers tend to be overestimated when sample selection bias is not corrected.

  4. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  5. Bayesian dose selection design for a binary outcome using restricted response adaptive randomization.

    Science.gov (United States)

    Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I

    2017-09-08

    In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy

  6. Inflammatory Biomarkers and Risk of Schizophrenia: A 2-Sample Mendelian Randomization Study.

    Science.gov (United States)

    Hartwig, Fernando Pires; Borges, Maria Carolina; Horta, Bernardo Lessa; Bowden, Jack; Davey Smith, George

    2017-12-01

    Positive associations between inflammatory biomarkers and risk of psychiatric disorders, including schizophrenia, have been reported in observational studies. However, conventional observational studies are prone to bias, such as reverse causation and residual confounding, thus limiting our understanding of the effect (if any) of inflammatory biomarkers on schizophrenia risk. To evaluate whether inflammatory biomarkers have an effect on the risk of developing schizophrenia. Two-sample mendelian randomization study using genetic variants associated with inflammatory biomarkers as instrumental variables to improve inference. Summary association results from large consortia of candidate gene or genome-wide association studies, including several epidemiologic studies with different designs, were used. Gene-inflammatory biomarker associations were estimated in pooled samples ranging from 1645 to more than 80 000 individuals, while gene-schizophrenia associations were estimated in more than 30 000 cases and more than 45 000 ancestry-matched controls. In most studies included in the consortia, participants were of European ancestry, and the prevalence of men was approximately 50%. All studies were conducted in adults, with a wide age range (18 to 80 years). Genetically elevated circulating levels of C-reactive protein (CRP), interleukin-1 receptor antagonist (IL-1Ra), and soluble interleukin-6 receptor (sIL-6R). Risk of developing schizophrenia. Individuals with schizophrenia or schizoaffective disorders were included as cases. Given that many studies contributed to the analyses, different diagnostic procedures were used. The pooled odds ratio estimate using 18 CRP genetic instruments was 0.90 (random effects 95% CI, 0.84-0.97; P = .005) per 2-fold increment in CRP levels; consistent results were obtained using different mendelian randomization methods and a more conservative set of instruments. The odds ratio for sIL-6R was 1.06 (95% CI, 1.01-1.12; P = .02
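
    The workhorse estimator in such two-sample analyses is the inverse-variance-weighted (IVW) combination of per-variant ratio estimates. The sketch below uses made-up summary statistics (18 instruments, echoing the CRP analysis; none of the numbers are from the paper):

        import numpy as np

        rng = np.random.default_rng(7)
        k = 18                                  # number of genetic instruments
        beta_exp = rng.uniform(0.05, 0.3, k)    # SNP -> biomarker effects
        true_effect = -0.15                     # causal log-OR per unit biomarker
        se_out = rng.uniform(0.01, 0.05, k)     # SEs of SNP -> outcome effects
        beta_out = true_effect * beta_exp + rng.normal(0, se_out)

        w = beta_exp**2 / se_out**2             # IVW weights
        ivw = np.sum(beta_out * beta_exp / se_out**2) / np.sum(w)
        se_ivw = np.sqrt(1.0 / np.sum(w))
        print(f"IVW log-OR: {ivw:.3f} (SE {se_ivw:.3f}), OR: {np.exp(ivw):.3f}")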

  7. A Selective Dynamic Sampling Back-Propagation Approach for Handling the Two-Class Imbalance Problem

    Directory of Open Access Journals (Sweden)

    Roberto Alejo

    2016-07-01

    In this work, we developed a Selective Dynamic Sampling Approach (SDSA) to deal with the class imbalance problem. It is based on the idea of using only the most appropriate samples during the neural network training stage. The "average samples" are the best for training the neural network: they are neither hard nor easy to learn, and they can improve classifier performance. The experimental results show that the proposed method deals successfully with the two-class imbalance problem. It is very competitive with respect to well-known over-sampling and dynamic sampling approaches, often outperforming even the under-sampling and standard back-propagation methods. SDSA is a very simple and efficient method for automatically selecting the most appropriate (average) samples during back-propagation training. In the training stage, SDSA uses significantly fewer samples than the popular over-sampling approaches and even than standard back-propagation trained with the original dataset.

  8. Principal Stratification in sample selection problems with non normal error terms

    DEFF Research Database (Denmark)

    Rocci, Roberto; Mellace, Giovanni

    The aim of the paper is to relax distributional assumptions on the error terms, often imposed in parametric sample selection models to estimate causal effects, when plausible exclusion restrictions are not available. Within the principal stratification framework, we approximate the true distribution...

  9. Electromembrane extraction as a rapid and selective miniaturized sample preparation technique for biological fluids

    DEFF Research Database (Denmark)

    Gjelstad, Astrid; Pedersen-Bjergaard, Stig; Seip, Knut Fredrik

    2015-01-01

    This special report discusses the sample preparation method electromembrane extraction, which was introduced in 2006 as a rapid and selective miniaturized extraction method. The extraction principle is based on isolation of charged analytes extracted from an aqueous sample, across a thin film of organic solvent, and into an aqueous receiver solution. The extraction is promoted by application of an electrical field, causing electrokinetic migration of the charged analytes. The method has been shown to perform excellent clean-up and selectivity from complicated aqueous matrices like biological fluids...

  10. X-ray reflection and scatter measurements on selected optical samples

    Science.gov (United States)

    Fields, S. A.; Reynolds, J. M.; Holland, R. L.

    1975-01-01

    The results from an experimental program to determine the reflection efficiency and scatter parameters of selected optical samples are presented. The measurements were made using 8.34 Å X-rays at various angles of incidence. Selected samples were contaminated after being measured and then remeasured to determine the effects of contamination. The instrumentation involved in taking the data, including the X-ray reflectometer and data processing equipment, is discussed in detail. The condition of the optical surfaces, the total reflection measurements, the scatter measurements, and the analysis are discussed.

  11. Validation of the 2008 Landsat Burned Area Ecv Product for North America Using Stratified Random Sampling

    Science.gov (United States)

    Brunner, N. M.; Mladinich, C. S.; Caldwell, M. K.; Beal, Y. J. G.

    2014-12-01

    The U.S. Geological Survey is generating a suite of Essential Climate Variables (ECVs) products, as defined by the Global Climate Observing System, from the Landsat data archive. Validation protocols for these products are being established, incorporating the Committee on Earth Observing Satellites Land Product Validation Subgroup's best practice guidelines and validation hierarchy stages. The sampling design and accuracy measures follow the methodology developed by the European Space Agency's Climate Change Initiative Fire Disturbance (fire_cci) project (Padilla and others, 2014). A rigorous validation was performed on the 2008 Burned Area ECV (BAECV) prototype product, using a stratified random sample of 48 Thiessen scene areas overlaying Landsat path/rows distributed across several terrestrial biomes throughout North America. The validation reference data consisted of fourteen sample sites acquired from the fire_cci project, with the remaining new sample sites generated from a densification of the stratified sampling for North America. The reference burned area polygons were generated using the ABAMS (Automatic Burned Area Mapping) software (Bastarrika and others, 2011; Izagirre, 2014). Accuracy results will be presented indicating strengths and weaknesses of the BAECV algorithm. Bastarrika, A., Chuvieco, E., and Martín, M.P., 2011, Mapping burned areas from Landsat TM/ETM+ data with a two-phase algorithm: Balancing omission and commission errors: Remote Sensing of Environment, v. 115, no. 4, p. 1003-1012. Izagirre, A.B., 2014, Automatic Burned Area Mapping Software (ABAMS), Preliminary Documentation, Version 10 v4: Vitoria-Gasteiz, Spain, University of Basque Country, p. 27. Padilla, M., Chuvieco, E., Hantson, S., Theis, R., and Sandow, C., 2014, D2.1 - Product Validation Plan: UAH - University of Alcalá de Henares (Spain), 37 p.

  12. Conic sampling: an efficient method for solving linear and quadratic programming by randomly linking constraints within the interior.

    Science.gov (United States)

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics.

  13. Improvement of sampling strategies for randomly distributed hotspots in soil applying a computerized simulation considering the concept of uncertainty.

    Science.gov (United States)

    Hildebrandt, Thomas; Pick, Denis; Einax, Jürgen W

    2012-02-01

    The pollution of soil and the environment as a result of human activity is a major problem. Nowadays, the determination of local contaminations is of interest for environmental remediation. These hotspots can have various toxic effects on plants, animals, humans, and the whole ecological system, and economic and juridical consequences, e.g., high costs for remediation measures, are also possible. In this study, three sampling strategies (simple random sampling, stratified sampling, and systematic sampling) were applied to randomly distributed hotspot contaminations to test their efficiency in terms of finding hotspots. The results were used for the validation of a computerized simulation. This application can simulate the contamination of a field, the sampling pattern, and a virtual sampling. A constant hit rate showed that none of the sampling patterns reached better results than the others. Furthermore, the uncertainty associated with the results is described by confidence intervals. The uncertainty during sampling is enormous and decreases only slightly even when the number of samples is increased to an unreasonable amount. It is hardly possible to identify the exact number of randomly distributed hotspot contaminations by statistical sampling, but a range of possible results can be calculated. Depending on various parameters such as the shape and size of the area, the number of hotspots, and the sample quantity, optimal sampling strategies can be derived. Furthermore, an estimation of the bias arising from the sampling methodology is possible. The developed computerized simulation is an innovative tool for optimizing sampling strategies in terrestrial compartments for hotspot distributions.
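
    A stripped-down version of such a simulation is easy to write. The sketch below (arbitrary field size, hotspot radius, and sample counts) scatters hotspots on a unit field and compares how often random, stratified, and systematic designs hit at least one of them, mirroring the constant hit rate reported above.

        import numpy as np

        rng = np.random.default_rng(8)

        def hit(points, hotspots, radius=0.03):
            """True if any sampling point falls within `radius` of a hotspot."""
            d = np.linalg.norm(points[:, None, :] - hotspots[None, :, :], axis=2)
            return bool((d < radius).any())

        def simulate(n_reps=2000, n_samples=49, n_hot=5):
            g = int(np.sqrt(n_samples))                 # 7 x 7 layout
            cells = np.array([(i, j) for i in range(g) for j in range(g)])
            hits = {"random": 0, "stratified": 0, "systematic": 0}
            for _ in range(n_reps):
                hotspots = rng.uniform(0, 1, (n_hot, 2))
                # Simple random sampling over the whole field.
                hits["random"] += hit(rng.uniform(0, 1, (n_samples, 2)), hotspots)
                # Stratified: one random point per grid cell.
                pts = (cells + rng.uniform(0, 1, (g * g, 2))) / g
                hits["stratified"] += hit(pts, hotspots)
                # Systematic: aligned grid with a single random start.
                pts = cells / g + rng.uniform(0, 1 / g, 2)
                hits["systematic"] += hit(pts, hotspots)
            return {k: round(v / n_reps, 3) for k, v in hits.items()}

        print(simulate())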

  14. A new selective enrichment procedure for isolating Pasteurella multocida from avian and environmental samples

    Science.gov (United States)

    Moore, M.K.; Cicnjak-Chubbs, L.; Gates, R.J.

    1994-01-01

    A selective enrichment procedure, using two new selective media, was developed to isolate Pasteurella multocida from wild birds and environmental samples. These media were developed by testing 15 selective agents with six isolates of P. multocida of wild avian origin and seven other bacteria representing genera frequently found in environmental and avian samples. The resulting media—Pasteurella multocida selective enrichment broth and Pasteurella multocida selective agar—consisted of a blood agar medium at pH 10 containing gentamicin, potassium tellurite, and amphotericin B. The media were tested to determine: 1) selectivity when attempting isolation from pond water and avian carcasses, 2) sensitivity for detection of low numbers of P. multocida in pure and mixed cultures, 3) host range specificity of the media, and 4) performance compared with standard blood agar. With the new selective enrichment procedure, P. multocida was isolated from inoculated (60 organisms/ml) pond water 84% of the time, whereas when standard blood agar was used, the recovery rate was 0%.

  15. Active Learning Not Associated with Student Learning in a Random Sample of College Biology Courses

    Science.gov (United States)

    Andrews, T. M.; Leonard, M. J.; Colgrove, C. A.; Kalinowski, S. T.

    2011-01-01

    Previous research has suggested that adding active learning to traditional college science lectures substantially improves student learning. However, this research predominantly studied courses taught by science education researchers, who are likely to have exceptional teaching expertise. The present study investigated introductory biology courses randomly selected from a list of prominent colleges and universities to include instructors representing a broader population. We examined the relationship between active learning and student learning in the subject area of natural selection. We found no association between student learning gains and the use of active-learning instruction. Although active learning has the potential to substantially improve student learning, this research suggests that active learning, as used by typical college biology instructors, is not associated with greater learning gains. We contend that most instructors lack the rich and nuanced understanding of teaching and learning that science education researchers have developed. Therefore, active learning as designed and implemented by typical college biology instructors may superficially resemble active learning used by education researchers, but lacks the constructivist elements necessary for improving learning. PMID:22135373

  16. Notes on interval estimation of the generalized odds ratio under stratified random sampling.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2013-05-01

    It is not rare to encounter patient responses on an ordinal scale in a randomized clinical trial (RCT). Under the assumption that the generalized odds ratio (GOR) is homogeneous across strata, we consider four asymptotic interval estimators for the GOR under stratified random sampling. These include the interval estimator using the weighted-least-squares (WLS) approach with the logarithmic transformation (WLSL), the interval estimator using the Mantel-Haenszel (MH) type of estimator with the logarithmic transformation (MHL), the interval estimator using Fieller's theorem with the MH weights (FTMH) and the interval estimator using Fieller's theorem with the WLS weights (FTWLS). We employ Monte Carlo simulation to evaluate the performance of these interval estimators by calculating the coverage probability and the average length. To study the bias of these interval estimators, we also calculate and compare the noncoverage probabilities in the two tails of the resulting confidence intervals. We find that WLSL and MHL can generally perform well, while FTMH and FTWLS can lose either precision or accuracy. We further find that MHL is likely the least biased. Finally, we use data taken from a study of smoking status and breathing test results among workers in certain industrial plants in Houston, Texas, during 1974 to 1975 to illustrate the use of these interval estimators.
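
    For readers unfamiliar with the estimand, the sketch below computes a pooled generalized odds ratio (concordant over discordant pair counts) from per-stratum ordinal counts, with a log-scale Wald interval. The variance formula is a crude stand-in assumption for illustration, not the WLSL/MHL/Fieller estimators studied in the paper.

```python
import numpy as np

def pooled_gor(strata, z=1.96):
    """Pooled generalized odds ratio from per-stratum ordinal counts.
    strata: list of (x, y) pairs of count vectors over the same ordered
    categories, one vector per treatment arm."""
    C = D = 0.0
    for x, y in strata:
        for i in range(len(x)):
            for j in range(len(y)):
                if i > j:
                    C += x[i] * y[j]     # arm-1 response higher: concordant
                elif i < j:
                    D += x[i] * y[j]     # arm-1 response lower: discordant
    gor = C / D
    se = np.sqrt(1.0 / C + 1.0 / D)      # rough log-scale standard error
    return gor, (gor * np.exp(-z * se), gor * np.exp(z * se))
```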

  17. Control capacity and a random sampling method in exploring controllability of complex networks.

    Science.gov (United States)

    Jia, Tao; Barabási, Albert-László

    2013-01-01

    Controlling complex systems is a fundamental challenge of network science. Recent advances indicate that control over the system can be achieved through a minimum driver node set (MDS). The existence of multiple MDS's suggests that nodes do not participate in control equally, prompting us to quantify their participation. Here we introduce the control capacity, quantifying the likelihood that a node is a driver node. To efficiently measure this quantity, we develop a random sampling algorithm. This algorithm not only provides a statistical estimate of the control capacity, but also bridges the gap between multiple microscopic control configurations and macroscopic properties of the network under control. We demonstrate that the possibility of being a driver node decreases with a node's in-degree and is independent of its out-degree. Given the inherent multiplicity of MDS's, our findings offer tools to explore control in various complex systems.
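
    A rough way to reproduce the underlying computation is via maximum matchings: in the structural-controllability framework, the nodes left unmatched by a maximum matching of the bipartite out-copy/in-copy graph form a minimum driver node set. The sketch below estimates a node's control capacity as the fraction of randomly generated maximum matchings in which it is unmatched; randomizing the augmenting order is a heuristic assumption and does not guarantee the uniform sampling over configurations that the paper's algorithm provides.

```python
import random
from collections import defaultdict

def driver_nodes(nodes, edges, rng):
    """Unmatched in-copies of one maximum matching of the bipartite
    representation of a directed graph; these are the driver nodes."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
    match_in = {}                        # in-copy v -> out-copy u
    def augment(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_in or augment(match_in[v], seen):
                    match_in[v] = u
                    return True
        return False
    order = list(nodes)
    rng.shuffle(order)                   # randomize which matching is found
    for u in order:
        rng.shuffle(adj[u])
        augment(u, set())
    return {v for v in nodes if v not in match_in}

def control_capacity(nodes, edges, n_samples=200, seed=0):
    """Fraction of sampled maximum matchings in which each node is a driver."""
    rng = random.Random(seed)
    counts = {v: 0 for v in nodes}
    for _ in range(n_samples):
        for v in driver_nodes(nodes, edges, rng):
            counts[v] += 1
    return {v: c / n_samples for v, c in counts.items()}
```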

  18. Simulated Performance Evaluation of a Selective Tracker Through Random Scenario Generation

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    The paper presents a simulation study on the performance of a target tracker using a selective track-splitting filter algorithm through a random scenario implemented on a digital signal processor. In a typical track-splitting filter all the observations which fall inside a likelihood ellipse...... are used for the update; however, in our proposed selective track-splitting filter fewer observations are used for the track update. Much of the previous performance work [1] has been done on specific (deterministic) scenarios. One of the reasons for considering the specific scenarios, which were...

  19. Randomized Controlled Trial of Attention Bias Modification in a Racially Diverse, Socially Anxious, Alcohol Dependent Sample

    Science.gov (United States)

    Clerkin, Elise M.; Magee, Joshua C.; Wells, Tony T.; Beard, Courtney; Barnett, Nancy P.

    2016-01-01

    Objective Attention biases may be an important treatment target for both alcohol dependence and social anxiety. This is the first attention bias modification (ABM) trial to investigate two (vs. one) targets of attention bias within a sample with co-occurring symptoms of social anxiety and alcohol dependence. Additionally, we used trial-level bias scores (TL-BS) to capture the phenomenon of attention bias in a more ecologically valid, dynamic way compared to traditional attention bias scores. Method Adult participants (N=86; 41% Female; 52% African American; 40% White) with elevated social anxiety symptoms and alcohol dependence were randomly assigned to an 8-session training condition in this 2 (Social Anxiety ABM vs. Social Anxiety Control) by 2 (Alcohol ABM vs. Alcohol Control) design. Symptoms of social anxiety, alcohol dependence, and attention bias were assessed across time. Results Multilevel models estimated the trajectories for each measure within individuals and tested whether these trajectories differed according to the randomized training conditions. Across time, there were significant or trending decreases in all attention TL-BS parameters (but not traditional attention bias scores) and most symptom measures. However, there were no significant differences in the trajectories of change between any ABM and control conditions for any symptom measures. Conclusions These findings add to previous evidence questioning the robustness of ABM and point to the need to extend the effects of ABM to samples that are racially diverse and/or have co-occurring psychopathology. The results also illustrate the potential importance of calculating trial-level attention bias scores rather than only including traditional bias scores. PMID:27591918

  20. Random sampling or geostatistical modelling? Choosing between design-based and model-based sampling strategies for soil (with discussion)

    NARCIS (Netherlands)

    Brus, D.J.; Gruijter, de J.J.

    1997-01-01

    Classical sampling theory has been repeatedly identified with classical statistics which assumes that data are identically and independently distributed. This explains the switch of many soil scientists from design-based sampling strategies, based on classical sampling theory, to the model-based

  1. Investigating causal associations between use of nicotine, alcohol, caffeine, and cannabis: A two-sample bidirectional Mendelian randomization study.

    Science.gov (United States)

    Verweij, Karin J H; Treur, Jorien L; Vink, Jacqueline M

    2018-01-15

    Epidemiological studies consistently show co-occurrence of the use of different addictive substances. Whether these associations are causal or due to overlapping underlying influences remains an important question in addiction research. Methodological advances have made it possible to use published genetic associations to infer causal relationships between phenotypes. In this exploratory study, we used Mendelian randomization (MR) to examine the causality of well-established associations between nicotine, alcohol, caffeine, and cannabis use. Two-sample MR was employed to estimate bi-directional causal effects between four addictive substances: nicotine (smoking initiation and cigarettes smoked per day), caffeine (cups of coffee per day), alcohol (units per week), and cannabis (initiation). Based on existing genome-wide association results, we selected genetic variants associated with the exposure measure as an instrument to estimate causal effects. Where possible, we applied sensitivity analyses (MR-Egger and weighted median) that are more robust to horizontal pleiotropy. Most MR tests did not reveal causal associations. There was some weak evidence for a causal positive effect of genetically instrumented alcohol use on smoking initiation and of cigarettes per day on caffeine use, but these did not hold up in the sensitivity analyses. There was also some suggestive evidence for a positive effect of alcohol use on caffeine use (only with MR-Egger) and of smoking initiation on cannabis initiation (only with the weighted median). None of the suggestive causal associations survived correction for multiple testing. Overall, the two-sample Mendelian randomization analyses found little evidence for causal relationships between nicotine, alcohol, caffeine, and cannabis use.
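
    As background, the core two-sample MR computation is small. The sketch below implements the standard fixed-effect inverse-variance-weighted (IVW) estimator from per-variant summary statistics; it is generic background rather than the authors' pipeline, and the input arrays are assumed to be aligned per variant (same effect allele in both studies).

```python
import numpy as np

def ivw_estimate(beta_exposure, beta_outcome, se_outcome):
    """Fixed-effect IVW causal-effect estimate: weighted regression of
    outcome effects on exposure effects through the origin, weights
    1/se_outcome^2."""
    w = se_outcome ** -2
    denom = np.sum(w * beta_exposure ** 2)
    beta = np.sum(w * beta_exposure * beta_outcome) / denom
    se = np.sqrt(1.0 / denom)
    return beta, se, (beta - 1.96 * se, beta + 1.96 * se)
```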

  2. Methods and analysis of realizing randomized grouping.

    Science.gov (United States)

    Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi

    2011-07-01

    Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
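
    A minimal illustration of the grouping step, in Python rather than the SAS used by the article: completely randomized assignment (independent coin flips, so group sizes are random) versus a balanced permuted assignment. The function names and the two-arm setup are illustrative assumptions.

```python
import numpy as np

def complete_randomization(subject_ids, seed=11):
    """Completely randomized grouping: each subject assigned to arm A or B
    independently (group sizes are random)."""
    rng = np.random.default_rng(seed)
    return {s: ("A" if rng.random() < 0.5 else "B") for s in subject_ids}

def balanced_randomization(subject_ids, seed=11):
    """Balanced alternative: randomly permute a half-A/half-B label vector."""
    rng = np.random.default_rng(seed)
    n = len(subject_ids)
    labels = rng.permutation(["A"] * (n // 2) + ["B"] * (n - n // 2))
    return dict(zip(subject_ids, labels))
```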

  3. Random Photon Absorption Model Elucidates How Early Gain Control in Fly Photoreceptors Arises from Quantal Sampling

    Science.gov (United States)

    Song, Zhuoyi; Zhou, Yu; Juusola, Mikko

    2016-01-01

    Many diurnal photoreceptors encode vast real-world light changes effectively, but how this performance originates from photon sampling is unclear. A 4-module biophysically-realistic fly photoreceptor model, in which information capture is limited by the number of its sampling units (microvilli) and their photon-hit recovery time (refractoriness), can accurately simulate real recordings and their information content. However, sublinear summation in quantum bump production (quantum-gain-nonlinearity) may also cause adaptation by reducing the bump/photon gain when multiple photons hit the same microvillus simultaneously. Here, we use a Random Photon Absorption Model (RandPAM), which is the 1st module of the 4-module fly photoreceptor model, to quantify the contribution of quantum-gain-nonlinearity in light adaptation. We show how quantum-gain-nonlinearity already results from photon sampling alone. In the extreme case, when two or more simultaneous photon-hits reduce to a single sublinear value, quantum-gain-nonlinearity is preset before the phototransduction reactions adapt the quantum bump waveform. However, the contribution of quantum-gain-nonlinearity in light adaptation depends upon the likelihood of multi-photon-hits, which is strictly determined by the number of microvilli and light intensity. Specifically, its contribution to light-adaptation is marginal (≤ 1%) in fly photoreceptors with many thousands of microvilli, because the probability of simultaneous multi-photon-hits on any one microvillus is low even during daylight conditions. However, in cells with fewer sampling units, the impact of quantum-gain-nonlinearity increases with brightening light. PMID:27445779
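
    The likelihood of multi-photon hits can be illustrated with a toy version of the first module: distribute Poisson photon arrivals uniformly over the microvilli for one integration window and count the co-hits that collapse into a single bump. The parameter values and the single-window simplification are assumptions, not the RandPAM implementation.

```python
import numpy as np

def wasted_hit_fraction(n_microvilli, photon_rate, window=0.01, seed=0):
    """Toy single-window experiment: photons land uniformly on microvilli;
    hits beyond the first on a microvillus collapse into one bump."""
    rng = np.random.default_rng(seed)
    hits = rng.poisson(photon_rate * window / n_microvilli, n_microvilli)
    wasted = np.maximum(hits - 1, 0).sum()
    return wasted / max(hits.sum(), 1)

# wasted_hit_fraction shrinks as n_microvilli grows relative to the
# photons arriving per window, mirroring the trend reported above
```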

  4. The Planck-ATCA Co-eval Observations (PACO) project: the spectrally-selected sample

    OpenAIRE

    Bonaldi, A.; Bonavera, Laura; Massardi, Marcella; de Zotti, G.

    2012-01-01

    The Planck Australia Telescope Compact Array (Planck-ATCA) Co-eval Observations (PACO) have provided multi-frequency (5-40 GHz) flux density measurements of complete samples of Australia Telescope 20 GHz (AT20G) radio sources at frequencies below and overlapping with Planck frequency bands, almost simultaneously with Planck observations. In this work we analyse the data in total intensity for the spectrally selected PACO sample, a complete sample of 69 sources brighter than S20 GHz = 200 mJy sel...

  5. A Hierarchical Feature and Sample Selection Framework and Its Application for Alzheimer’s Disease Diagnosis

    Science.gov (United States)

    An, Le; Adeli, Ehsan; Liu, Mingxia; Zhang, Jun; Lee, Seong-Whan; Shen, Dinggang

    2017-03-01

    Classification is one of the most important tasks in machine learning. Due to feature redundancy or outliers in samples, using all available data to train a classifier may be suboptimal. For example, Alzheimer's disease (AD) is correlated with certain brain regions or single nucleotide polymorphisms (SNPs), and identification of relevant features is critical for computer-aided diagnosis. Many existing methods first select features from structural magnetic resonance imaging (MRI) or SNPs and then use those features to build the classifier. However, in the presence of many redundant features, the most discriminative features are difficult to identify in a single step. Thus, we formulate a hierarchical feature and sample selection framework to gradually select informative features and discard ambiguous samples in multiple steps for improved classifier learning. To positively guide the data manifold preservation process, we utilize both labeled and unlabeled data during training, making our method semi-supervised. For validation, we conduct experiments on AD diagnosis by selecting mutually informative features from both MRI and SNP data, and using the most discriminative samples for training. The superior classification results demonstrate the effectiveness of our approach compared with the rivals.

  6. Gender Wage Gap : A Semi-Parametric Approach With Sample Selection Correction

    NARCIS (Netherlands)

    Picchio, M.; Mussida, C.

    2010-01-01

    Sizeable gender differences in employment rates are observed in many countries. Sample selection into the workforce might therefore be a relevant issue when estimating gender wage gaps. This paper proposes a new semi-parametric estimator of densities in the presence of covariates which incorporates

  7. The Cosmic Lens All-Sky Survey parent population : I. Sample selection and number counts

    NARCIS (Netherlands)

    McKean, J. P.; Browne, I. W. A.; Jackson, N. J.; Fassnacht, C. D.; Helbig, P.

    We present the selection of the Jodrell Bank Flat-spectrum (JBF) radio source sample, which is designed to reduce the uncertainties in the Cosmic Lens All-Sky Survey (CLASS) gravitational lensing statistics arising from the lack of knowledge about the parent population luminosity function. From

  8. Statistical methods for genetic association studies with response-selective sampling designs

    NARCIS (Netherlands)

    Balliu, Brunilda

    2015-01-01

    This dissertation describes new statistical methods designed to improve the power of genetic association studies. Of particular interest are studies with a response-selective sampling design, i.e. case-control studies of unrelated individuals and case-control studies of family members. The

  9. RESULTS OF THE SELECTION OF BREEDING SAMPLES OF CARROT BASED ON BIOCHEMICAL COMPOSITION

    Directory of Open Access Journals (Sweden)

    V. K. Cherkasova

    2014-01-01

    Full Text Available Twelve carrot samples were analyzed for biochemical components in their roots. Five genotypes with high content of vitamin C, β-carotene, and total sugar were selected as genetic sources of these biochemical components.

  10. Phytochemical analysis and biological evaluation of selected African propolis samples from Cameroon and Congo

    NARCIS (Netherlands)

    Papachroni, D.; Graikou, K.; Kosalec, I.; Damianakos, H.; Ingram, V.J.; Chinou, I.

    2015-01-01

    The objective of this study was the chemical analysis of four selected samples of African propolis (Congo and Cameroon) and their biological evaluation. Twenty-one secondary metabolites belonging to four different chemical groups were isolated from the 70% ethanolic extracts of propolis and their

  11. Decomposing the Gender Wage Gap in the Netherlands with Sample Selection Adjustments

    NARCIS (Netherlands)

    Albrecht, James; Vuuren, van Aico; Vroman, Susan

    2004-01-01

    In this paper, we use quantile regression decomposition methods to analyze the gender gap between men and women who work full time in the Netherlands. Because the fraction of women working full time in the Netherlands is quite low, sample selection is a serious issue. In addition to shedding light

  12. Statistical inference of selection and divergence from a time-dependent Poisson random field model.

    Directory of Open Access Journals (Sweden)

    Amei Amei

    Full Text Available We apply a recently developed time-dependent Poisson random field model to aligned DNA sequences from two related biological species to estimate selection coefficients and divergence time. We use Markov chain Monte Carlo methods to estimate species divergence time and selection coefficients for each locus. The model assumes that the selective effects of non-synonymous mutations are normally distributed across genetic loci but constant within loci, and synonymous mutations are selectively neutral. In contrast with previous models, we do not assume that the individual species are at population equilibrium after divergence. Using a data set of 91 genes in two Drosophila species, D. melanogaster and D. simulans, we estimate the species divergence time t_div = 2.16 N_e (or 1.68 million years), assuming a haploid effective population size N_e = 6.45 x 10^5, and a mean selection coefficient per generation μ_γ = 1.98/N_e. Although the average selection coefficient is positive, the magnitude of the selection is quite small. Results from numerical simulations are also presented as an accuracy check for the time-dependent model.

  13. Notes on interval estimation of the gamma correlation under stratified random sampling.

    Science.gov (United States)

    Lui, Kung-Jong; Chang, Kuang-Chao

    2012-07-01

    We have developed four asymptotic interval estimators in closed forms for the gamma correlation under stratified random sampling, including the confidence interval based on the most commonly used weighted-least-squares (WLS) approach (CIWLS), the confidence interval calculated from the Mantel-Haenszel (MH) type estimator with the Fisher-type transformation (CIMHT), the confidence interval using the fundamental idea of Fieller's Theorem (CIFT) and the confidence interval derived from a monotonic function of the WLS estimator of Agresti's α with the logarithmic transformation (MWLSLR). To evaluate the finite-sample performance of these four interval estimators and note the possible loss of accuracy in application of both Wald's confidence interval and MWLSLR using pooled data without accounting for stratification, we employ Monte Carlo simulation. We use the data taken from a general social survey studying the association between the income level and job satisfaction with strata formed by genders in black Americans published elsewhere to illustrate the practical use of these interval estimators.

  14. Predictive value of testing random urine sample to detect microalbuminuria in diabetic subjects during outpatient visit.

    Science.gov (United States)

    Bouhanick, B; Berrut, G; Chameau, A M; Hallar, M; Bled, F; Chevet, B; Vergely, J; Rohmer, V; Fressinaud, P; Marre, M

    1992-01-01

    The predictive value of a random urine sample taken during an outpatient visit to predict persistent microalbuminuria was studied in 76 Type 1, insulin-dependent diabetic subjects, 61 Type 2, non-insulin-dependent diabetic subjects, and 72 Type 2, insulin-treated diabetic subjects. Seventy-six patients attended the outpatient clinic in the morning, and 133 in the afternoon. Microalbuminuria was suspected if urinary albumin excretion (UAE) exceeded 20 mg/l. All patients were hospitalized within 6 months following the outpatient visit, and persistent microalbuminuria was then assessed if UAE was between 30 and 300 mg/24 h on 2-3 occasions in 3 urine samples. Of these 209 subjects, 83 were also screened with Microbumintest (Ames-Bayer), a semi-quantitative method. Among the 209 subjects, 71 were positive both for microalbuminuria during the outpatient visit and for persistent microalbuminuria during hospitalization: sensitivity 91.0%, specificity 83.2%, concordance 86.1%, and positive predictive value 76.3% (chi-squared test: 191; p < 10^-4). Data did not differ for subjects examined in the morning or in the afternoon. Among the 83 subjects also screened with Microbumintest, 22 displayed both a positive reaction and persistent microalbuminuria: sensitivity 76%, specificity 81%, concordance 80%, and positive predictive value 69% (chi-squared test: 126; p < 10^-4). Both types of screening appeared equally effective during an outpatient visit. Hence, persistent microalbuminuria can be predicted during an outpatient visit in a diabetic clinic.

  15. Effectiveness of hand hygiene education among a random sample of women from the community.

    Science.gov (United States)

    Ubheeram, J; Biranjia-Hurdoyal, S D

    2017-03-01

    The effectiveness of hand hygiene education was investigated by studying hand hygiene awareness and bacterial hand contamination among a random sample of 170 women in the community. A questionnaire was used to assess the hand hygiene awareness score, followed by swabbing of the dominant hand. Bacterial identification was done by conventional biochemical tests. A better hand hygiene awareness score was significantly associated with age, scarce bacterial growth, and absence of potential pathogens (p < 0.05). Of the 170 hand samples, bacterial growth was noted in 155 (91.2%), which included 91 (53.5%) with heavy growth, 53 (31.2%) with moderate growth and 11 (6.47%) with scanty growth. The presence of enteric bacteria was associated with long nails (49.4% vs 29.2%; p = 0.007; OR = 2.3; 95% CI: 1.25-4.44), while finger rings were associated with a higher bacterial load (p = 0.003). Coliforms were significantly more frequent among women who had a lower hand hygiene awareness score, washed their hands at a lower frequency (59.0% vs 32.8%; p = 0.003; OR = 2.9; 95% CI: 1.41-6.13) and used common soap as compared with antiseptic soaps (69.7% vs 30.3%, p = 0.000; OR = 4.11; 95% CI: 1.67-10.12). The level of hand hygiene awareness among the participants was satisfactory, but the compliance with hand washing practice was not, especially among the elderly.

  16. Association between stalking victimisation and psychiatric morbidity in a random community sample.

    Science.gov (United States)

    Purcell, Rosemary; Pathé, Michele; Mullen, Paul E

    2005-11-01

    No studies have assessed psychopathology among victims of stalking who have not sought specialist help. To examine the associations between stalking victimisation and psychiatric morbidity in a representative community sample. A random community sample (n=1844) completed surveys examining the experience of harassment and current mental health. The 28-item General Health Questionnaire (GHQ-28) and the Impact of Event Scale were used to assess symptomatology in those reporting brief harassment (n=196) or protracted stalking (n=236) and a matched control group reporting no harassment (n=432). Rates of caseness on the GHQ-28 were higher among stalking victims (36.4%) than among controls (19.3%) and victims of brief harassment (21.9%). Psychiatric morbidity did not differ according to the recency of victimisation, with 34.1% of victims meeting caseness criteria 1 year after stalking had ended. In a significant minority of victims, stalking victimisation is associated with psychiatric morbidity that may persist long after it has ceased. Recognition of the immediate and long-term impacts of stalking is necessary to assist victims and help alleviate distress and long-term disability.

  17. Selection bias and subject refusal in a cluster-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Rochelle Yang

    2017-07-01

    Full Text Available Abstract Background Selection bias and non-participation bias are major methodological concerns which impact external validity. Cluster-randomized controlled trials are especially prone to selection bias as it is impractical to blind clusters to their allocation into intervention or control. This study assessed the impact of selection bias in a large cluster-randomized controlled trial. Methods The Improved Cardiovascular Risk Reduction to Enhance Rural Primary Care (ICARE) study examined the impact of a remote pharmacist-led intervention in twelve medical offices. To assess eligibility, a standardized form containing patient demographics and medical information was completed for each screened patient. Eligible patients were approached by the study coordinator for recruitment. Both the study coordinator and the patient were aware of the site's allocation prior to consent. Patients who consented or declined to participate were compared across control and intervention arms for differing characteristics. Statistical significance was determined using a two-tailed, equal variance t-test and a chi-square test with adjusted Bonferroni p-values. Results were adjusted for random cluster variation. Results There were 2749 completed screening forms returned to research staff with 461 subjects who had either consented or declined participation. Patients with poorly controlled diabetes were found to be significantly more likely to decline participation in intervention sites compared to those in control sites. A higher mean diastolic blood pressure was seen in patients with uncontrolled hypertension who declined in the control sites compared to those who declined in the intervention sites. However, these findings were no longer significant after adjustment for random variation among the sites. After this adjustment, females were now found to be significantly more likely to consent than males (odds ratio = 1.41; 95% confidence interval = 1.03, 1

  18. Transfer function design based on user selected samples for intuitive multivariate volume exploration

    KAUST Repository

    Zhou, Liang

    2013-02-01

    Multivariate volumetric datasets are important to both science and medicine. We propose a transfer function (TF) design approach based on user selected samples in the spatial domain to make multivariate volumetric data visualization more accessible for domain users. Specifically, the user starts the visualization by probing features of interest on slices and the data values are instantly queried by user selection. The queried sample values are then used to automatically and robustly generate high dimensional transfer functions (HDTFs) via kernel density estimation (KDE). Alternatively, 2D Gaussian TFs can be automatically generated in the dimensionality reduced space using these samples. With the extracted features rendered in the volume rendering view, the user can further refine these features using segmentation brushes. Interactivity is achieved in our system and different views are tightly linked. Use cases show that our system has been successfully applied for simulation and complicated seismic data sets. © 2013 IEEE.
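
    The KDE step can be sketched compactly: fit a kernel density estimate to the values of the user-picked samples and use the normalized density as an opacity transfer function. This one-dimensional sketch with hypothetical picked values is a simplification of the paper's high-dimensional transfer functions.

```python
import numpy as np
from scipy.stats import gaussian_kde

# hypothetical scalar values queried from the user's probed samples
picked = np.array([0.42, 0.45, 0.40, 0.47, 0.44])
kde = gaussian_kde(picked)
peak = kde(picked).max()          # normalize so the densest pick maps to 1

def opacity(values):
    """Opacity transfer function: KDE density of the picked samples,
    clipped to [0, 1]."""
    return np.clip(kde(np.atleast_1d(values)) / peak, 0.0, 1.0)

# example: opacity(np.linspace(0, 1, 256)) yields a 1-D TF lookup table
```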

  19. Automatic training sample selection for a multi-evidence based crop classification approach

    DEFF Research Database (Denmark)

    Chellasamy, Menaka; Ferre, Ty; Greve, Mogens Humlekrog

    An approach to use the available agricultural parcel information to automatically select training samples for crop classification is investigated. Previous research addressed the multi-evidence crop classification approach using an ensemble classifier. This first produced confidence measures using...... three Multi-Layer Perceptron (MLP) neural networks trained separately with spectral, texture and vegetation indices; classification labels were then assigned based on Endorsement Theory. The present study proposes an approach to feed this ensemble classifier with automatically selected training samples....... The available vector data representing crop boundaries with corresponding crop codes are used as a source for training samples. These vector data are created by farmers to support subsidy claims and are, therefore, prone to errors such as mislabeling of crop codes and boundary digitization errors. The proposed...

  20. Semiparametric efficient and robust estimation of an unknown symmetric population under arbitrary sample selection bias

    KAUST Repository

    Ma, Yanyuan

    2013-09-01

    We propose semiparametric methods to estimate the center and shape of a symmetric population when a representative sample of the population is unavailable due to selection bias. We allow an arbitrary sample selection mechanism determined by the data collection procedure, and we do not impose any parametric form on the population distribution. Under this general framework, we construct a family of consistent estimators of the center that is robust to population model misspecification, and we identify the efficient member that reaches the minimum possible estimation variance. The asymptotic properties and finite sample performance of the estimation and inference procedures are illustrated through theoretical analysis and simulations. A data example is also provided to illustrate the usefulness of the methods in practice. © 2013 American Statistical Association.

  1. Studies on natural recovery from alcohol dependence: sample selection bias by media solicitation?

    Science.gov (United States)

    Rumpf, H J; Bischof, G; Hapke, U; Meyer, C; John, U

    2000-05-01

    To assess the selection bias of recruiting participants in studies on natural recovery from alcohol dependence through media solicitation, two samples with different recruitment strategies are compared: one recruited through media solicitation and one drawn from the general population. Sample 1 consists of 176 alcohol-dependent individuals who remitted without formal help and were recruited through media solicitation; sample 2 consists of 32 natural remitters derived from a representative general population study with a sample size of 4075 respondents and a response rate of 70.2%. Several triggering mechanisms and maintenance factors of remission were assessed in a personal interview using standardized questionnaires. Results of logistic regression analyses show that media-solicited subjects were more often abstinent in the last 12 months, were more severely dependent, were less satisfied with eight life domains prior to remission, and showed higher scores on a coping behaviour measure. Besides these major differences from the multivariate analysis, media subjects revealed more health problems prior to remission, experienced more social pressure to change their drinking behaviour, and showed differences in reasons for not seeking help. Media solicitation leads to a sample selection bias in research on natural recovery from alcohol dependence. When measures to foster self-change are derived from such studies, findings from representative samples have to be considered.

  2. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g. AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), which is the normalizing constant in the denominator of Bayes theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since it is not particularly easy to estimate the evidence in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with
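
    A minimal sketch of the mixture-importance-sampling idea, under the assumption that posterior samples (e.g., from DREAM) are already available: fit a Gaussian mixture to the samples, then estimate the log-evidence by a stabilized importance-sampling average. This uses plain importance sampling rather than the bridge-sampling refinement, and the component count and interfaces are assumptions, not the authors' GMIS code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def log_evidence_gmis(post_samples, log_like, log_prior, n_draws=20000,
                      n_components=3, seed=0):
    """Importance-sampling estimate of the log marginal likelihood using a
    Gaussian mixture fitted to posterior samples as the proposal q."""
    gm = GaussianMixture(n_components=n_components, random_state=seed)
    gm.fit(post_samples)                     # post_samples: (n, dim)
    theta, _ = gm.sample(n_draws)
    log_q = gm.score_samples(theta)          # log q(theta)
    log_w = np.array([log_like(t) + log_prior(t) for t in theta]) - log_q
    m = log_w.max()                          # log-sum-exp stabilization
    return m + np.log(np.mean(np.exp(log_w - m)))
```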

  3. Effect of non-random mating on genomic and BLUP selection schemes

    Directory of Open Access Journals (Sweden)

    Nirea Kahsay G

    2012-04-01

    Full Text Available Abstract Background The risk of long-term unequal contribution of mating pairs to the gene pool is that deleterious recessive genes can be expressed. Such consequences could be alleviated by appropriately designing and optimizing breeding schemes, i.e. by improving selection and mating procedures. Methods We studied the effect of three mating designs (random, minimum coancestry, and minimum covariance of ancestral contributions) on the rate of inbreeding and genetic gain for schemes with different information sources, i.e. sib test or own performance records, different genetic evaluation methods, i.e. BLUP or genomic selection, and different family structures, i.e. factorial or pair-wise. Results The results showed that substantial differences in rates of inbreeding due to mating design were present under schemes with a pair-wise family structure, for which minimum coancestry turned out to be more effective at generating lower rates of inbreeding. Specifically, substantial reductions in rates of inbreeding were observed in schemes using sib test records and BLUP evaluation. However, with a factorial family structure, differences in rates of inbreeding due to mating designs were minor. Moreover, non-random mating had only a small effect in breeding schemes that used genomic evaluation, regardless of the information source. Conclusions It was concluded that minimum coancestry remains an efficient mating design when BLUP is used for genetic evaluation or when the size of the population is small, whereas the effect of non-random mating is smaller in schemes using genomic evaluation.

  4. Sampling and Selection Factors that Enhance the Diversity of Microbial Collections: Application to Biopesticide Development

    Directory of Open Access Journals (Sweden)

    Jun-Kyung Park

    2013-06-01

    Full Text Available Diverse bacteria are known to colonize plants. However, only a small fraction of that diversity has been evaluated for biopesticide potential. To date, the criteria for sampling and selection in such bioprospecting endeavors have not been systematically evaluated in terms of the relative amount of diversity they provide for analysis. The present study aimed to enhance the success of bioprospecting efforts by increasing diversity while removing the genotypic redundancy often present in large collections of bacteria. We developed a multivariate sampling and marker-based selection strategy that significantly increases the diversity of bacteria recovered from plants. In doing so, we quantified the effects of varying sampling intensity, media composition, incubation conditions, plant species, and soil source on the diversity of recovered isolates. Subsequent sequencing and high-throughput phenotypic analyses of a small fraction of the collected isolates revealed that this approach led to the recovery of over a dozen rare and, to date, poorly characterized genera of plant-associated bacteria with significant biopesticide activities. Overall, the sampling and selection approach described led to an approximately 5-fold improvement in efficiency and the recovery of several novel strains of bacteria with significant biopesticide potential.

  5. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
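
    For orientation, a bare-bones nested-sampling loop looks like the sketch below: the worst live point is repeatedly replaced by a prior draw with higher likelihood while evidence mass is accumulated over the shrinking prior volume. Rejection sampling from the unconstrained prior is used for simplicity (it becomes very slow as the constraint tightens), and the final live-point contribution is omitted; this is generic NS, not the paper's implementation, and the sample_prior interface is an assumption.

```python
import numpy as np

def nested_sampling(log_like, sample_prior, n_live=100, n_iter=500, seed=0):
    """Minimal NS sketch. sample_prior(rng, n) is assumed to return an
    (n, dim) array of prior draws; log_like maps one point to a float."""
    rng = np.random.default_rng(seed)
    live = sample_prior(rng, n_live)
    live_ll = np.array([log_like(p) for p in live])
    log_z = -np.inf
    for i in range(n_iter):
        worst = np.argmin(live_ll)
        log_x = -(i + 1) / n_live                       # log remaining volume
        log_w = log_x + np.log(np.expm1(1.0 / n_live))  # log shell width
        log_z = np.logaddexp(log_z, log_w + live_ll[worst])
        threshold = live_ll[worst]
        while True:                      # rejection-sample the constrained prior
            cand = sample_prior(rng, 1)[0]
            cll = log_like(cand)
            if cll > threshold:
                live[worst], live_ll[worst] = cand, cll
                break
    return log_z   # the residual live-point contribution is omitted here
```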

  6. Sample Selected Extreme Learning Machine Based Intrusion Detection in Fog Computing and MEC

    Directory of Open Access Journals (Sweden)

    Xingshuo An

    2018-01-01

    Full Text Available Fog computing, as a new paradigm, has many characteristics that are different from cloud computing. Because their resources are limited, fog nodes/MEC hosts are vulnerable to cyberattacks. A lightweight intrusion detection system (IDS) is a key technique to address this problem. Because the extreme learning machine (ELM) has the characteristics of fast training speed and good generalization ability, we present a new lightweight IDS called sample selected extreme learning machine (SS-ELM). The reason we propose "sample selected extreme learning machine" is that fog nodes/MEC hosts do not have the capacity to store extremely large training data sets. Accordingly, the training data are stored, computed, and sampled by the cloud servers, and the selected samples are then given to the fog nodes/MEC hosts for training. This design can bring down the training time and increase the detection accuracy. Experimental simulation verifies that SS-ELM performs well in intrusion detection in terms of accuracy, training time, and the receiver operating characteristic (ROC) value.
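
    The ELM core referenced here is a few lines of linear algebra: a random, untrained hidden layer followed by a least-squares (pseudoinverse) solve for the output weights. The sketch below shows that core only; the cloud-side sample selection that defines SS-ELM is not modeled, and y may be regression targets or one-hot class labels.

```python
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    """Basic extreme learning machine: random hidden weights are fixed,
    only the output weights are fit by least squares."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)               # random feature map
    beta = np.linalg.pinv(H) @ y         # Moore-Penrose least-squares solve
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```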

  7. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    Science.gov (United States)

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

    Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been successfully used in genome-wide association studies (GWAS) for identification of genetic variants that have relatively big effects in some common, complex diseases. Among them, the most successful one is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS when selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method in random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies p-value assessment to find a cut-off point that separates informative and irrelevant SNPs into two groups. The informative SNP group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only SNPs from these two sub-groups are taken into account. The feature subspaces always contain highly informative SNPs when used to split a node of a tree. This approach enables one to generate more accurate trees with a lower prediction error, while possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the diseases, and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model. Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed
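
    A toy version of the first stage might look like the following: per-SNP chi-square p-values split the SNPs into highly informative, weakly informative, and irrelevant groups, from which tree subspaces would then be drawn. The thresholds, the 3x2 genotype-by-status table (genotypes coded 0/1/2, binary labels), and the function name are illustrative assumptions, not the ts-RF implementation.

```python
import numpy as np
from scipy.stats import chi2_contingency

def split_snps_by_pvalue(genotypes, labels, weak_cut=0.05, strong_cut=1e-4):
    """Per-SNP chi-square p-values, then a two-level informativeness split."""
    pvals = np.ones(genotypes.shape[1])
    for j in range(genotypes.shape[1]):
        table = np.zeros((3, 2))             # genotype (0/1/2) x case/control
        for g, y in zip(genotypes[:, j], labels):
            table[int(g), int(y)] += 1
        table = table[table.sum(axis=1) > 0]  # drop empty genotype rows
        if table.shape[0] > 1 and (table.sum(axis=0) > 0).all():
            pvals[j] = chi2_contingency(table)[1]
    strong = np.where(pvals < strong_cut)[0]           # highly informative
    weak = np.where((pvals >= strong_cut) & (pvals < weak_cut))[0]
    return strong, weak
```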

  8. Sample-to-sample fluctuations of power spectrum of a random motion in a periodic Sinai model

    Science.gov (United States)

    Dean, David S.; Iorio, Antonio; Marinari, Enzo; Oshanin, Gleb

    2016-09-01

    The Sinai model of a tracer diffusing in a quenched Brownian potential is a much-studied problem exhibiting a logarithmically slow anomalous diffusion due to the growth of energy barriers with the system size. However, if the potential is random but periodic, the regime of anomalous diffusion crosses over to one of normal diffusion once a tracer has diffused over a few periods of the system. Here we consider a system in which the potential is given by a Brownian bridge on a finite interval (0, L) and then periodically repeated over the whole real line, and study the power spectrum S(f) of the diffusive process x(t) in such a potential. We show that for most realizations of x(t) in a given realization of the potential, the low-frequency behavior is S(f) ~ A/f^2, i.e., the same as for standard Brownian motion, and the amplitude A is a disorder-dependent random variable with a finite support. Focusing on the statistical properties of this random variable, we determine the moments of A of arbitrary, negative, or positive order k and demonstrate that they exhibit a multifractal dependence on k and a rather unusual dependence on the temperature and on the periodicity L, which are supported by atypical realizations of the periodic disorder. We finally show that the distribution of A has a log-normal left tail and exhibits an essential singularity close to the right edge of the support, which is related to the Lifshitz singularity. Our findings are based both on analytic results and on extensive numerical simulations of the process x(t).

  9. Emulsion PCR: a high efficient way of PCR amplification of random DNA libraries in aptamer selection.

    Directory of Open Access Journals (Sweden)

    Keke Shao

    Full Text Available Aptamers are short RNA or DNA oligonucleotides which can bind to different targets. Typically, they are selected from large random DNA sequence libraries. The main strategy to obtain aptamers is systematic evolution of ligands by exponential enrichment (SELEX). Low efficiency is one of the limitations of conventional PCR amplification of random DNA sequence libraries in aptamer selection, because of relatively low product formation and high by-product formation. Here, we developed emulsion PCR for aptamer selection. With this method, by-product formation decreased tremendously, to an undetectable level, while product formation increased significantly. Our results indicated that the by-products in conventional PCR amplification arise from primer-product and product-product hybridization. In emulsion PCR, product-product hybridization can be avoided completely, and most primer-product hybridization can be avoided if the conditions are optimized. In addition, the molecular ratio of template to compartments was crucial to by-product formation in emulsion PCR amplification. Furthermore, the concentration of the Taq DNA polymerase in the emulsion PCR mixture had a significant impact on product formation efficiency. Thus, the results of our study indicate that emulsion PCR can improve the efficiency of SELEX.

  10. A novel sample selection strategy by near-infrared spectroscopy-based high throughput tablet tester for content uniformity in early-phase pharmaceutical product development.

    Science.gov (United States)

    Shi, Zhenqi; Hermiller, James G; Gunter, Thomas Z; Zhang, Xiaoyu; Reed, Dave E

    2012-07-01

    This article proposes a new sample selection strategy to simplify the traditional content uniformity (CU) test in early research and development (R&D) with improved statistical confidence. The strategy proceeds from the prescreening of a large number of tablets by a near-infrared spectroscopy (NIRS)-based high-volume tablet tester to the selection of extreme tablets with the highest, medium, and lowest content of active pharmaceutical ingredient (API) for a subsequent high-performance liquid chromatography (HPLC) test. The NIRS-based high-volume tablet tester was equipped with an internally developed and integrated automated bagging and labeling system, allowing the traceability of every individual tablet by its measured physical and chemical signatures. A qualitative NIR model was used to translate spectral information into a concentration-related metric, that is, scores, which allowed the selection of those extreme tablets. This sample selection strategy of extreme tablets was shown to provide equivalent representation of CU in the process compared with the traditional CU test using a large number of random samples. Because it only requires reference tests on three extreme samples per stratified location, the time- and labor-saving nature of this strategy is advantageous for the CU test in early R&D. The extreme sampling approach is also shown to outperform random sampling with respect to statistical confidence for representing the process variation. In addition, a chemometric approach, which utilizes only pure-component raw materials to develop an NIRS model sensitive to API concentration, is discussed, with the advantage that it does not require tablets at multiple API levels. Prospective applications of this sample selection strategy are also addressed.

  11. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  12. Random Model Sampling: Making Craig Interpolation Work When It Should Not

    Directory of Open Access Journals (Sweden)

    Marat Akhin

    2014-01-01

    Full Text Available One of the most serious problems when doing program analyses is dealing with function calls. While function inlining is the traditional approach to this problem, it nonetheless suffers from the increase in analysis complexity due to the state space explosion. Craig interpolation has been successfully used in recent years in the context of bounded model checking to do function summarization, which allows one to replace the complete function body with its succinct summary and, therefore, reduce the complexity. Unfortunately this technique can be applied only to a pair of unsatisfiable formulae. In this work-in-progress paper we present an approach to function summarization based on Craig interpolation that overcomes its limitation by using random model sampling. It captures interesting input/output relations, strengthening satisfiable formulae into unsatisfiable ones and thus allowing the use of Craig interpolation. Preliminary experiments show the applicability of this approach; in our future work we plan to do a full evaluation on real-world examples.

  13. Discriminative motif discovery via simulated evolution and random under-sampling.

    Directory of Open Access Journals (Sweden)

    Tao Song

    Full Text Available Conserved motifs in biological sequences are closely related to their structure and functions. Recently, discriminative motif discovery methods have attracted more and more attention. However, little attention has been devoted to the data imbalance problem, which is one of the main reasons affecting the performance of discriminative models. In this article, a simulated evolution method is applied to solve the multi-class imbalance problem at the data-preprocessing stage, and at the Hidden Markov Model (HMM) training stage, a random under-sampling method is introduced to address the imbalance between the positive and negative datasets. We show that, in the task of discovering targeting motifs for nine subcellular compartments, the motifs found by our method are more conserved than those found by methods that do not consider the data imbalance problem, and that they recover most of the known targeting motifs from Minimotif Miner and InterPro. Meanwhile, we use the found motifs to predict protein subcellular localization and achieve higher prediction precision and recall for the minority classes.
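
    Random under-sampling itself is simple enough to state in code: the sketch below balances a dataset by down-sampling every class to the minority-class size. This is the generic technique named here, not the authors' HMM training pipeline.

```python
import numpy as np

def random_under_sample(X, y, seed=0):
    """Balance a labeled dataset by down-sampling each class to the size
    of the smallest class."""
    rng = np.random.default_rng(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.concatenate([
        rng.choice(np.where(y == c)[0], n_min, replace=False) for c in classes
    ])
    rng.shuffle(idx)                      # avoid class-ordered output
    return X[idx], y[idx]
```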

  14. Neurofeedback Against Binge Eating: A Randomized Controlled Trial in a Female Subclinical Threshold Sample.

    Science.gov (United States)

    Schmidt, Jennifer; Martin, Alexandra

    2016-09-01

    Brain-directed treatment techniques, such as neurofeedback, have recently been proposed as adjuncts in the treatment of eating disorders to improve therapeutic outcomes. In line with this recommendation, a cue exposure EEG-neurofeedback protocol was developed. The present study aimed at the evaluation of the specific efficacy of neurofeedback to reduce subjective binge eating in a female subthreshold sample. A total of 75 subjects were randomized to EEG-neurofeedback, mental imagery with a comparable treatment set-up or a waitlist group. At post-treatment, only EEG-neurofeedback led to a reduced frequency of binge eating (p = .015, g = 0.65). The effects remained stable to a 3-month follow-up. EEG-neurofeedback further showed particular beneficial effects on perceived stress and dietary self-efficacy. Differences in outcomes did not arise from divergent treatment expectations. Because EEG-neurofeedback showed a specific efficacy, it may be a promising brain-directed approach that should be tested as a treatment adjunct in clinical groups with binge eating.

  15. A coupled well-balanced and random sampling scheme for computing bubble oscillations

    Directory of Open Access Journals (Sweden)

    Jung Jonathan

    2012-04-01

    Full Text Available We propose a finite volume scheme to study the oscillations of a spherical bubble of gas in a liquid phase. Spherical symmetry implies a geometric source term in the Euler equations. Our scheme satisfies the well-balanced property. It is based on the VFRoe approach. In order to avoid spurious pressure oscillations, the well-balanced approach is coupled with an ALE (Arbitrary Lagrangian Eulerian) technique at the interface and a random sampling remap.

  16. Comparing cluster-level dynamic treatment regimens using sequential, multiple assignment, randomized trials: Regression estimation and sample size considerations.

    Science.gov (United States)

    NeCamp, Timothy; Kilbourne, Amy; Almirall, Daniel

    2017-08-01

    Cluster-level dynamic treatment regimens can be used to guide sequential treatment decision-making at the cluster level in order to improve outcomes at the individual or patient-level. In a cluster-level dynamic treatment regimen, the treatment is potentially adapted and re-adapted over time based on changes in the cluster that could be impacted by prior intervention, including aggregate measures of the individuals or patients that compose it. Cluster-randomized sequential multiple assignment randomized trials can be used to answer multiple open questions preventing scientists from developing high-quality cluster-level dynamic treatment regimens. In a cluster-randomized sequential multiple assignment randomized trial, sequential randomizations occur at the cluster level and outcomes are observed at the individual level. This manuscript makes two contributions to the design and analysis of cluster-randomized sequential multiple assignment randomized trials. First, a weighted least squares regression approach is proposed for comparing the mean of a patient-level outcome between the cluster-level dynamic treatment regimens embedded in a sequential multiple assignment randomized trial. The regression approach facilitates the use of baseline covariates which is often critical in the analysis of cluster-level trials. Second, sample size calculators are derived for two common cluster-randomized sequential multiple assignment randomized trial designs for use when the primary aim is a between-dynamic treatment regimen comparison of the mean of a continuous patient-level outcome. The methods are motivated by the Adaptive Implementation of Effective Programs Trial which is, to our knowledge, the first-ever cluster-randomized sequential multiple assignment randomized trial in psychiatry.

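The weighted regression approach described above can be sketched numerically. The following minimal example is illustrative rather than the authors' estimator: it assumes 1:1 randomization at both stages (so inverse-probability weights are 2 for responder clusters and 4 for re-randomized non-responders), simulated data, and cluster-robust standard errors via statsmodels.

```python
# Hedged sketch of inverse-probability-weighted least squares for a cluster-
# randomized SMART: clusters are randomized 1:1 at stage 1, non-responders are
# re-randomized 1:1 at stage 2, outcomes are patient-level. Illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_clusters, m = 40, 25                          # clusters, patients per cluster
a1 = rng.choice([0, 1], n_clusters)             # stage-1 cluster treatment
resp = rng.random(n_clusters) < 0.4             # cluster responder status
a2 = np.where(resp, -1, rng.choice([0, 1], n_clusters))  # -1 = not re-randomized

# weight = 1 / P(observed treatment sequence): 1/0.5 or 1/(0.5 * 0.5)
w_cluster = np.where(resp, 2.0, 4.0)

cluster_id = np.repeat(np.arange(n_clusters), m)
a1_rep, a2_rep = np.repeat(a1, m), np.repeat(a2, m)
x_base = rng.normal(size=n_clusters * m)        # patient-level baseline covariate
y = (0.3 * a1_rep + 0.15 * (a2_rep == 1) + 0.2 * x_base
     + np.repeat(rng.normal(0, 0.5, n_clusters), m)     # cluster random effect
     + rng.normal(size=n_clusters * m))

# weighted least squares on stage-1 treatment + baseline covariate,
# with standard errors clustered on the unit of randomization
X = sm.add_constant(np.column_stack([a1_rep, x_base]))
fit = sm.WLS(y, X, weights=np.repeat(w_cluster, m)).fit(
    cov_type="cluster", cov_kwds={"groups": cluster_id})
print(fit.params, fit.bse)
```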
  17. Image classification by multi-instance learning with base sample selection

    Science.gov (United States)

    Pan, Qiang; Zhang, Gang; Zhang, Xiao-Yan; Huang, Zhi-Ming; Xiong, Jie

    2012-01-01

    We propose a similarity-based learning algorithm for image classification that regards each image as a multi-instance (MI) sample. An image, represented by feature vectors of its interesting regions, is converted into an MI sample. A similarity-like matrix is then constructed using an MI kernel between the given images and a set of carefully selected base images, and serves as the new representation of the given images. Three selection strategies are proposed for building the base image set in order to find an optimal solution. A decision tree (Weka implementation) is used as the main learner. Experiments on the image repositories ALOI and Corel Image 2000 show the effectiveness of the proposed algorithm compared with several baseline methods.

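As a rough illustration of this similarity-based representation, the sketch below re-represents each bag by its MI-kernel similarity to a set of base bags and trains a decision tree on the resulting matrix. The mean pairwise RBF set kernel and the naive "first ten bags" base selection are assumptions for illustration, not the paper's kernel or its three selection strategies.

```python
# Sketch: bags of instance vectors -> similarity matrix to base bags -> tree.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_bag(label):                       # a bag of 5-15 region vectors in R^8
    k = rng.integers(5, 16)
    return rng.normal(0.8 * label, 1.0, (k, 8))

labels = rng.integers(0, 2, 80)
bags = [make_bag(l) for l in labels]

def mi_kernel(a, b):                       # mean pairwise RBF between two bags
    return rbf_kernel(a, b, gamma=0.1).mean()

base = bags[:10]                           # base images: naive "first 10" choice
S = np.array([[mi_kernel(b, c) for c in base] for b in bags])

Xtr, Xte, ytr, yte = train_test_split(S, labels, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(Xtr, ytr)
print("test accuracy:", tree.score(Xte, yte))
```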
  18. Martian oxidation processes and selection of ancient sedimentary samples for bio-organic analysis

    Science.gov (United States)

    Oro, J.

    1988-01-01

    The results obtained by the Viking Missions concerning organic and biological analysis are summarized, and it is indicated that these results do not preclude the existence of organic molecules or fossil life in buried or protected regions of the planet. The use of automated instruments is suggested for the analysis of samples obtained from certain regions of the planet, as a preliminary step before samples are selected, retrieved, and returned to Earth for more complete analysis.

  19. Passive sampling methods for contaminated sediments: Practical guidance for selection, calibration, and implementation

    OpenAIRE

    Ghosh, Upal; Driscoll, Susan Kane; Burgess, Robert M.; Jonker, Michiel To; Reible, Danny; Gobas, Frank; Choi, Yongju; Apitz, Sabine E; Maruya, Keith A; Gala, William R; Mortimer, Munro; Beegan, Chris

    2014-01-01

    This article provides practical guidance on the use of passive sampling methods (PSMs) that target the freely dissolved concentration (Cfree) for improved exposure assessment of hydrophobic organic chemicals in sediments. Primary considerations for selecting a PSM for a specific application include clear delineation of measurement goals for Cfree, whether laboratory-based “ex situ” and/or field-based “in situ” application is desired, and ultimately which PSM is best-suited to fulfill the me...

  20. Population genetics inference for longitudinally-sampled mutants under strong selection.

    Science.gov (United States)

    Lacerda, Miguel; Seoighe, Cathal

    2014-11-01

    Longitudinal allele frequency data are becoming increasingly prevalent. Such samples permit statistical inference of the population genetics parameters that influence the fate of mutant variants. To infer these parameters by maximum likelihood, the mutant frequency is often assumed to evolve according to the Wright-Fisher model. For computational reasons, this discrete model is commonly approximated by a diffusion process that requires the assumption that the forces of natural selection and mutation are weak. This assumption is not always appropriate. For example, mutations that impart drug resistance in pathogens may evolve under strong selective pressure. Here, we present an alternative approximation to the mutant-frequency distribution that does not make any assumptions about the magnitude of selection or mutation and is much more computationally efficient than the standard diffusion approximation. Simulation studies are used to compare the performance of our method to that of the Wright-Fisher and Gaussian diffusion approximations. For large populations, our method is found to provide a much better approximation to the mutant-frequency distribution when selection is strong, while all three methods perform comparably when selection is weak. Importantly, maximum-likelihood estimates of the selection coefficient are severely attenuated when selection is strong under the two diffusion models, but not when our method is used. This is further demonstrated with an application to mutant-frequency data from an experimental study of bacteriophage evolution. We therefore recommend our method for estimating the selection coefficient when the effective population size is too large to utilize the discrete Wright-Fisher model. Copyright © 2014 by the Genetics Society of America.

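For contrast with the diffusion approximations discussed above, the exact discrete Wright-Fisher model is straightforward to simulate forward in time. The sketch below uses illustrative parameter values and applies selection, symmetric mutation, and binomial drift each generation.

```python
# Minimal sketch: discrete Wright-Fisher forward simulation with selection,
# the model that the diffusion methods above approximate.
import numpy as np

def wright_fisher(N=1000, s=0.1, mu=1e-4, p0=0.01, generations=100,
                  reps=10000, rng=np.random.default_rng(1)):
    """Return mutant allele frequencies after `generations` generations."""
    p = np.full(reps, p0)
    for _ in range(generations):
        # selection: mutant fitness 1+s relative to wild type
        p_sel = p * (1 + s) / (1 + s * p)
        # symmetric mutation between the two alleles at rate mu
        p_mut = p_sel * (1 - mu) + (1 - p_sel) * mu
        # binomial resampling of 2N gene copies (genetic drift)
        p = rng.binomial(2 * N, p_mut) / (2 * N)
    return p

freqs = wright_fisher()
print(freqs.mean(), np.quantile(freqs, [0.05, 0.95]))
```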
  1. Magnetically separable polymer (Mag-MIP) for selective analysis of biotin in food samples.

    Science.gov (United States)

    Uzuriaga-Sánchez, Rosario Josefina; Khan, Sabir; Wong, Ademar; Picasso, Gino; Pividori, Maria Isabel; Sotomayor, Maria Del Pilar Taboada

    2016-01-01

    This work presents an efficient method for the preparation of magnetic nanoparticles modified with molecularly imprinted polymers (Mag-MIP) through a core-shell method for the determination of biotin in milk food samples. The functional monomer acrylic acid was selected from molecular modeling, EGDMA was used as cross-linking monomer and AIBN as radical initiator. The Mag-MIP and Mag-NIP were characterized by FTIR, magnetic hysteresis, XRD, SEM and N2-sorption measurements. The capacity of Mag-MIP for biotin adsorption, its kinetics and selectivity were studied in detail. The adsorption data were well described by the Freundlich isotherm model with an adsorption equilibrium constant (KF) of 1.46 mL g^(-1). The selectivity experiments revealed that the prepared Mag-MIP had higher selectivity toward biotin compared to other molecules with different chemical structure. The material was successfully applied for the determination of biotin in diverse milk samples using HPLC for quantification of the analyte, obtaining a mean recovery of 87.4%. Copyright © 2015 Elsevier Ltd. All rights reserved.

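The Freundlich fit reported above (KF = 1.46 mL g^(-1)) can be reproduced in form with a standard nonlinear least-squares fit; the sketch below uses invented data points, not the paper's measurements.

```python
# Sketch: fitting the Freundlich isotherm q = KF * C**(1/n) to adsorption data.
import numpy as np
from scipy.optimize import curve_fit

def freundlich(c, kf, n):
    return kf * c ** (1.0 / n)

c_eq = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium conc. (mg/L)
q_ads = np.array([0.9, 1.4, 2.1, 3.6, 5.2, 7.5])    # adsorbed amount (mg/g)

(kf, n), _ = curve_fit(freundlich, c_eq, q_ads, p0=(1.0, 1.0))
print(f"KF = {kf:.2f}, 1/n = {1 / n:.2f}")
```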
  2. Novel Zn2+-chelating peptides selected from a fimbria-displayed random peptide library

    DEFF Research Database (Denmark)

    Kjærgaard, Kristian; Schembri, Mark; Klemm, Per

    2001-01-01

    …FimH adhesin. FimH is a component of the fimbrial organelle that can accommodate and display a diverse range of peptide sequences on the E. coli cell surface. In this study we have constructed a random peptide library in FimH. The library, consisting of ~40 million individual clones, was screened for peptide sequences that conferred on recombinant cells the ability to bind Zn2+. By serial selection, sequences that exhibited various degrees of binding affinity and specificity toward Zn2+ were enriched. None of the isolated sequences showed similarity to known Zn2+-binding proteins, indicating…

  3. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed which iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in…

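A backward elimination loop of the kind compared in this study can be sketched as follows; note that, as the abstract itself warns, OOB accuracy computed inside the selection loop is upwardly biased. The data are synthetic stand-ins for the stream-condition predictors.

```python
# Sketch: backward elimination for a random forest guided by the internal
# out-of-bag (OOB) accuracy, dropping the least important variable each round.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=50, n_informative=8,
                           random_state=0)
features = list(range(X.shape[1]))
trace = []
while len(features) > 2:
    rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                random_state=0, n_jobs=-1)
    rf.fit(X[:, features], y)
    trace.append((len(features), rf.oob_score_))
    # drop the least important remaining variable
    features.pop(int(np.argmin(rf.feature_importances_)))

for n_feat, oob in trace:
    print(n_feat, round(oob, 3))
```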
  4. PReFerSim: fast simulation of demography and selection under the Poisson Random Field model.

    Science.gov (United States)

    Ortega-Del Vecchyo, Diego; Marsden, Clare D; Lohmueller, Kirk E

    2016-11-15

    The Poisson Random Field (PRF) model has become an important tool in population genetics to study weakly deleterious genetic variation under complicated demographic scenarios. Currently, there are no freely available software applications that allow simulation of genetic variation data under this model. Here we present PReFerSim, an ANSI C program that performs forward simulations under the PRF model. PReFerSim models changes in population size, arbitrary amounts of inbreeding, dominance and distributions of selective effects. Users can track summaries of genetic variation over time and output trajectories of selected alleles. PReFerSim is freely available at: https://github.com/LohmuellerLab/PReFerSim. Contact: klohmueller@ucla.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  5. Novel joint selection methods can reduce sample size for rheumatoid arthritis clinical trials with ultrasound endpoints.

    Science.gov (United States)

    Allen, John C; Thumboo, Julian; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Tan, York Kiat

    2017-10-03

    To determine whether novel methods of selecting joints through (i) ultrasonography (individualized-ultrasound [IUS] method), or (ii) ultrasonography and clinical examination (individualized-composite-ultrasound [ICUS] method) translate into smaller rheumatoid arthritis (RA) clinical trial sample sizes when compared to existing methods utilizing predetermined joint sites for ultrasonography. Cohen's effect size (ES) was estimated (ES^) and a 95% CI (ES^L, ES^U) calculated on the mean change in 3-month total inflammatory score for each method. Corresponding 95% CIs [nL(ES^U), nU(ES^L)] were obtained on a post hoc sample size reflecting the uncertainty in ES^. Sample size calculations were based on a one-sample t-test, as the number of patients needed to provide 80% power at α = 0.05 to reject a null hypothesis H0: ES = 0 versus the alternative hypotheses H1: ES = ES^, ES = ES^L and ES = ES^U. We aimed to provide point and interval estimates on projected sample sizes for future studies, reflecting the uncertainty in our study's ES^. Twenty-four treated RA patients were followed up for 3 months. Utilizing the 12-joint approach and existing methods, the post hoc sample size (95% CI) was 22 (10-245). Corresponding sample sizes using ICUS and IUS were 11 (7-40) and 11 (6-38), respectively. Utilizing a seven-joint approach, the corresponding sample sizes using ICUS and IUS methods were nine (6-24) and 11 (6-35), respectively. Our pilot study suggests that sample sizes for RA clinical trials with ultrasound endpoints may be reduced using the novel methods, providing justification for larger studies to confirm these observations. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.

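The sample size calculation described above (n for 80% power at α = 0.05 under a one-sample t-test, evaluated at the effect size estimate and at its CI bounds) can be sketched with statsmodels; the three effect size values below are placeholders, not the study's estimates.

```python
# Sketch: post hoc n and its interval from an effect size point estimate
# and the bounds of its 95% CI, via a one-sample t-test power calculation.
from statsmodels.stats.power import TTestPower

power = TTestPower()
for label, es in [("ES hat", 0.65), ("ES lower", 0.30), ("ES upper", 1.00)]:
    n = power.solve_power(effect_size=es, alpha=0.05, power=0.80,
                          alternative="two-sided")
    print(f"{label}: n = {n:.1f}")
```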
  6. A complete hard X-ray selected sample of local, luminous AGNs

    Science.gov (United States)

    Burtscher, Leonard; Davies, Ric; Lin, Ming-yi; Orban de Xivry, Gilles; Rosario, David

    2016-08-01

    Choosing a very well defined sample is essential for studying the AGN phenomenon. Only the most luminous AGNs can be expected to require a coherent feeding mechanism to sustain their activity and since host galaxy properties and AGN activity are essentially uncorrelated, nuclear scales must be resolved in order to shed light on the feeding mechanisms of AGNs. For these reasons we are compiling a sample of the most powerful, local AGNs. In this talk we present our on-going programme to observe a complete volume limited sample of nearby active galaxies selected by their 14-195 keV luminosity, and outline its rationale for studying the mechanisms regulating gas inflow and outflow.

  7. Impact of repeated measures and sample selection on genome-wide association studies of fasting glucose

    Science.gov (United States)

    Rasmussen-Torvik, Laura J.; Alonso, Alvaro; Li, Man; Kao, Wen; Köttgen, Anna; Yan, Yuer; Couper, David; Boerwinkle, Eric; Bielinski, Suzette J.; Pankow, James S.

    2010-01-01

    Although GWAS have been performed in longitudinal studies, most used only a single trait measure. GWAS of fasting glucose have generally included only normoglycemic individuals. We examined the impact of both repeated measures and sample selection on GWAS in ARIC, a study which obtained four longitudinal measures of fasting glucose and included both individuals with and without prevalent diabetes. The sample included Caucasians, and the Affymetrix 6.0 chip was used for genotyping. Sample sizes for GWAS analyses ranged from 8372 (first study visit) to 5782 (average fasting glucose). Candidate SNP analyses with SNPs identified through fasting glucose or diabetes GWAS were conducted in 9133 individuals, including 761 with prevalent diabetes. For a constant sample size, smaller p-values were obtained for the average measure of fasting glucose compared to values at any single visit, and two additional significant GWAS signals were detected. For four candidate SNPs (rs780094, rs10830963, rs7903146, and rs4607517), the strength of association between genotype and glucose differed significantly across measures (p-interaction …). For five fasting glucose candidate SNPs (rs780094, rs10830963, rs560887, rs4607517, rs13266634), the association with measured fasting glucose was more significant in the smaller sample without prevalent diabetes than in the larger combined sample of those with and without diabetes. This analysis demonstrates the potential utility of averaging trait values in GWAS studies and explores the advantage of using only individuals without prevalent diabetes in GWAS of fasting glucose. PMID:20839289

  8. THE zCOSMOS-SINFONI PROJECT. I. SAMPLE SELECTION AND NATURAL-SEEING OBSERVATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mancini, C.; Renzini, A. [INAF-OAPD, Osservatorio Astronomico di Padova, Vicolo Osservatorio 5, I-35122 Padova (Italy); Foerster Schreiber, N. M.; Hicks, E. K. S.; Genzel, R.; Tacconi, L.; Davies, R. [Max-Planck-Institut fuer Extraterrestrische Physik, Giessenbachstrasse, D-85748 Garching (Germany); Cresci, G. [Osservatorio Astrofisico di Arcetri (OAF), INAF-Firenze, Largo E. Fermi 5, I-50125 Firenze (Italy); Peng, Y.; Lilly, S.; Carollo, M.; Oesch, P. [Institute of Astronomy, Department of Physics, Eidgenossische Technische Hochschule, ETH Zurich CH-8093 (Switzerland); Vergani, D.; Pozzetti, L.; Zamorani, G. [INAF-Bologna, Via Ranzani, I-40127 Bologna (Italy); Daddi, E. [CEA-Saclay, DSM/DAPNIA/Service d' Astrophysique, F-91191 Gif-Sur Yvette Cedex (France); Maraston, C. [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, PO1 3HE Portsmouth (United Kingdom); McCracken, H. J. [IAP, 98bis bd Arago, F-75014 Paris (France); Bouche, N. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Shapiro, K. [Aerospace Research Laboratories, Northrop Grumman Aerospace Systems, Redondo Beach, CA 90278 (United States); and others

    2011-12-10

    The zCOSMOS-SINFONI project is aimed at studying the physical and kinematical properties of a sample of massive z ≈ 1.4-2.5 star-forming galaxies, through SINFONI near-infrared integral field spectroscopy (IFS), combined with the multiwavelength information from the zCOSMOS (COSMOS) survey. The project is based on one hour of natural-seeing observations per target, and adaptive optics (AO) follow-up for a major part of the sample, which includes 30 galaxies selected from the zCOSMOS/VIMOS spectroscopic survey. This first paper presents the sample selection, and the global physical characterization of the target galaxies from multicolor photometry, i.e., star formation rate (SFR), stellar mass, age, etc. The Hα integrated properties, such as flux, velocity dispersion, and size, are derived from the natural-seeing observations, while the follow-up AO observations will be presented in the next paper of this series. Our sample appears to be well representative of star-forming galaxies at z ≈ 2, covering a wide range in mass and SFR. The Hα integrated properties of the 25 Hα-detected galaxies are similar to those of other IFS samples at the same redshifts. Good agreement is found among the SFRs derived from Hα luminosity and other diagnostic methods, provided the extinction affecting the Hα luminosity is about twice that affecting the continuum. A preliminary kinematic analysis, based on the maximum observed velocity difference across the source and on the integrated velocity dispersion, indicates that the sample splits nearly 50-50 into rotation-dominated and velocity-dispersion-dominated galaxies, in good agreement with previous surveys.

  9. Association of macronutrient intake patterns with being overweight in a population-based random sample of men in France.

    Science.gov (United States)

    Ahluwalia, N; Ferrières, J; Dallongeville, J; Simon, C; Ducimetière, P; Amouyel, P; Arveiler, D; Ruidavets, J-B

    2009-04-01

    Diet is considered an important modifiable factor in overweight. The role of macronutrients in obesity has been examined in selected populations, but the results of these studies are mixed, depending on the potential confounders and adjustments for other macronutrients. For this reason, we examined the association between macronutrient intake patterns and being overweight in a population-based representative sample of middle-aged (55.1+/-6.1 years) men (n=966), using various adjustment modalities. The study subjects kept 3-day food-intake records, and the standard cardiovascular risk factors were assessed. Weight, height and waist circumference (WC) were also measured. Carbohydrate intake was negatively associated and fat intake was positively associated with body mass index (BMI) and WC in regression models adjusted for energy intake and other factors, including age, smoking and physical activity. However, with mutual adjustments for other energy-yielding nutrients, the negative association of carbohydrate intake with WC remained significant, whereas the associations between fat intake and measures of obesity did not. Adjusted odds ratios (95% confidence interval) comparing the highest and lowest quartiles of carbohydrate intake were 0.50 (0.25-0.97) for obesity (BMI>29.9) and 0.41 (0.23-0.73) for abdominal obesity (WC>101.9 cm). Consistent negative associations between carbohydrate intake and BMI and WC were seen in this random representative sample of the general male population. The associations between fat intake and these measures of being overweight were attenuated on adjusting for carbohydrate intake. Thus, the balance of carbohydrate-to-fat intake is an important element in obesity in a general male population, and should be highlighted in dietary guidelines.

  10. Selective oropharyngeal decontamination versus selective digestive decontamination in critically ill patients: a meta-analysis of randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhao D

    2015-07-01

    Di Zhao,1,* Jian Song,2,* Xuan Gao,3 Fei Gao,4 Yupeng Wu,2 Yingying Lu,5 Kai Hou1 1Department of Neurosurgery, The First Hospital of Hebei Medical University, 2Department of Neurosurgery, 3Department of Neurology, The Second Hospital of Hebei Medical University, 4Hebei Provincial Procurement Centers for Medical Drugs and Devices, 5Department of Neurosurgery, The Second Hospital of Hebei Medical University, Shijiazhuang, People’s Republic of China *These authors contributed equally to this work. Background: Selective digestive decontamination (SDD) and selective oropharyngeal decontamination (SOD) are associated with reduced mortality and infection rates among patients in intensive care units (ICUs); however, whether SOD has an effect superior to SDD remains uncertain. Hence, we conducted a meta-analysis of randomized controlled trials (RCTs) to compare SOD with SDD in terms of clinical outcomes and antimicrobial resistance rates in patients who were critically ill. Methods: RCTs published in PubMed, Embase, and Web of Science were systematically reviewed to compare the effects of SOD and SDD in patients who were critically ill. Outcomes included day-28 mortality, length of ICU stay, length of hospital stay, duration of mechanical ventilation, ICU-acquired bacteremia, and prevalence of antibiotic-resistant Gram-negative bacteria. Results were expressed as risk ratios (RRs) with 95% confidence intervals (CIs), and weighted mean differences (WMDs) with 95% CIs. Pooled estimates were performed using a fixed-effects model or random-effects model, depending on the heterogeneity among studies. Results: A total of four RCTs involving 23,822 patients met the inclusion criteria and were included in this meta-analysis. Among patients whose admitting specialty was surgery, cardiothoracic surgery (57.3%) and neurosurgery (29.7%) were the two main types of surgery being performed. Pooled results showed that SOD had similar effects as SDD in day-28 mortality (RR = 1

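A fixed-effect inverse-variance pooling of log risk ratios, the basic computation behind such a meta-analysis, looks as follows; the four RR/CI pairs are invented placeholders, not the trial results.

```python
# Sketch: fixed-effect inverse-variance pooling of log risk ratios, with the
# per-study standard error recovered from each 95% confidence interval.
import numpy as np

rr = np.array([0.95, 1.02, 0.99, 1.05])
lo = np.array([0.85, 0.90, 0.88, 0.92])
hi = np.array([1.06, 1.16, 1.11, 1.20])

log_rr = np.log(rr)
se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE from the CI width
w = 1 / se ** 2                                # inverse-variance weights
pooled = np.sum(w * log_rr) / np.sum(w)
pooled_se = np.sqrt(1 / np.sum(w))
print(f"pooled RR = {np.exp(pooled):.3f} "
      f"(95% CI {np.exp(pooled - 1.96 * pooled_se):.3f}-"
      f"{np.exp(pooled + 1.96 * pooled_se):.3f})")
```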
  11. Antibiotic content of selective culture media for isolation of Capnocytophaga species from oral polymicrobial samples.

    Science.gov (United States)

    Ehrmann, E; Jolivet-Gougeon, A; Bonnaure-Mallet, M; Fosse, T

    2013-10-01

    In the oral microbiome, because of the abundance of commensal competitive flora, selective media with antibiotics are necessary for the recovery of fastidious Capnocytophaga species. The performances of six culture media (blood agar, chocolate blood agar, VCAT medium, CAPE medium, bacitracin chocolate blood agar and VK medium) were compared with literature data concerning five other media (FAA, LB, TSBV, CapR and TBBP media). To understand variable growth on selective media, the MICs of each antimicrobial agent contained in these different media (colistin, kanamycin, trimethoprim, trimethoprim-sulfamethoxazole, vancomycin, aztreonam and bacitracin) were determined for all Capnocytophaga species. Overall, VCAT medium (Columbia, 10% cooked horse blood, polyvitaminic supplement, 3·75 mg l(-1) of colistin, 1·5 mg l(-1) of trimethoprim, 1 mg l(-1) of vancomycin and 0·5 mg l(-1) of amphotericin B, Oxoid, France) was the more efficient selective medium with regard to the detection of Capnocytophaga species from oral samples (P …). In pure culture, a simple blood agar allowed the growth of all Capnocytophaga species. Nonetheless, in oral samples, because of the abundance of commensal competitive flora, selective media with antibiotics are necessary for the recovery of Capnocytophaga species. The demonstrated superiority of VCAT medium makes its use essential for the optimal detection of this bacterial genus. This work showed that extreme caution should be exercised when reporting the isolation of Capnocytophaga species from oral polymicrobial samples, because the culture medium is a determining factor. © 2013 The Society for Applied Microbiology.

  12. Ethnopharmacological versus random plant selection methods for the evaluation of the antimycobacterial activity

    Directory of Open Access Journals (Sweden)

    Danilo R. Oliveira

    2011-05-01

    The municipality of Oriximiná, Brazil, has 33 quilombola communities in remote areas, endowed with wide experience in the use of medicinal plants. An ethnobotanical survey was carried out in five of these communities. A free-listing method directed at the survey of species locally indicated against tuberculosis and lung problems was also applied. Data were analyzed by quantitative techniques: saliency index and major use agreement. Thirty-four informants cited 254 ethnospecies. Among these, 43 were surveyed for possible antimycobacterial activity. Based on this information, ten species obtained from the ethnodirected approach (ETHNO) and eighteen species obtained from the random approach (RANDOM) were assayed against Mycobacterium tuberculosis by the microdilution method, using resazurin as an indicator of cell viability. The best results for antimycobacterial activity were obtained with plants selected by the ethnopharmacological approach (50% ETHNO vs. 16.7% RANDOM). These results are even more significant considering that therapeutic success in quilombola practice is complex: the use of some plants as fortifying agents, depuratives, vomitories, purgatives and bitter remedies, especially against infectious diseases, is of great importance to the communities in curing or recovering health as a whole.

  13. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa.

    Science.gov (United States)

    Kapwata, Thandi; Gebreslasie, Michael T

    2016-11-16

    Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the variable importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.

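A percent-increase-in-MSE-style ranking can be approximated with permutation importance; note that R's randomForest computes it on out-of-bag data, while this sketch permutes on the training set. The variables are synthetic stand-ins for the seven environmental predictors.

```python
# Sketch: permutation importance for a random forest regressor, as a proxy
# for the %IncMSE variable-importance ranking described above.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
names = ["temp", "lag_temp", "rain", "lag_rain", "humidity", "altitude", "ndvi"]
X = rng.normal(size=(300, 7))
# altitude and NDVI drive the simulated response most strongly
y = 2.0 * X[:, 5] + 0.8 * X[:, 6] + 0.5 * X[:, 0] + rng.normal(size=300)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"{names[i]:10s} {imp.importances_mean[i]:.3f}")
```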
  14. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa

    Directory of Open Access Journals (Sweden)

    Thandi Kapwata

    2016-11-01

    Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the variable importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.

  15. Selecting the appropriate pacing mode for patients with sick sinus syndrome: evidence from randomized clinical trials.

    Science.gov (United States)

    Albertsen, A E; Nielsen, J C

    2003-12-01

    Several observational studies have indicated that selection of pacing mode may be important for the clinical outcome in patients with symptomatic bradycardia, affecting the development of atrial fibrillation (AF), thromboembolism, congestive heart failure, mortality and quality of life. In this paper we present and discuss the most recent data from six randomized trials on mode selection in patients with sick sinus syndrome (SSS). In pacing mode selection, VVI(R) pacing is the least attractive solution, increasing the incidence of AF and, as compared with AAI(R) pacing, also the incidence of heart failure, thromboembolism and death. VVI(R) pacing should not be used as the primary pacing mode in patients with SSS who do not have chronic AF. AAIR pacing is superior to DDDR pacing, reducing AF and preserving left ventricular function. Single-site right ventricular pacing, whether in VVI(R) or DDD(R) mode, causes an abnormal ventricular activation and contraction (called ventricular desynchronization), which results in reduced left ventricular function. Despite the risk of AV block, we consider AAIR pacing to be the optimal pacing mode for isolated SSS today, and an algorithm to select patients for AAIR pacing is suggested. Trials of new pacemaker algorithms minimizing right ventricular pacing, as well as trials testing alternative pacing sites and multisite pacing to reduce ventricular desynchronization, can be expected within the next few years.

  16. The Planck-ATCA Co-eval Observations project: the spectrally selected sample

    Science.gov (United States)

    Bonaldi, Anna; Bonavera, Laura; Massardi, Marcella; De Zotti, Gianfranco

    2013-01-01

    The Planck Australia Telescope Compact Array (Planck-ATCA) Co-eval Observations (PACO) have provided multi-frequency (5-40 GHz) flux density measurements of complete samples of Australia Telescope 20 GHz (AT20G) radio sources at frequencies below and overlapping with Planck frequency bands, almost simultaneously with Planck observations. In this work we analyse the data in total intensity for the spectrally selected PACO sample, a complete sample of 69 sources brighter than S20 GHz = 200 mJy selected from the AT20G survey catalogue to be inverted or upturning between 5 and 20 GHz. We study the spectral behaviour and variability of the sample. We use the variability between the AT20G (2004-2007) and PACO (2009-2010) epochs to discriminate between candidate High-Frequency Peakers (HFPs) and candidate blazars. The HFPs picked up by our selection criteria have spectral peaks >10 GHz in the observer frame and turn out to be rare; the mean variability timescale of the sample is ⟨τ⟩ = 2.1 ± 0.5 yr (median: τmedian = 1.3 yr). The 5-20 GHz spectral indices show a systematic decrease from the AT20G to the PACO epoch. At higher frequencies spectral indices steepen: the median spectral index between 30 and 40 GHz is steeper than the median index between 5 and 20 GHz by δα = 0.6. Taking further into account the Wide-field Infrared Survey Explorer data, we find that the spectral energy distributions (SEDs), νS(ν), of most of our blazars peak at frequencies of the order of 10^5 GHz.

  17. Geography and genography: prediction of continental origin using randomly selected single nucleotide polymorphisms

    Directory of Open Access Journals (Sweden)

    Ramoni Marco F

    2007-03-01

    Background: Recent studies have shown that when individuals are grouped on the basis of genetic similarity, group membership corresponds closely to continental origin. There has been considerable debate about the implications of these findings in the context of larger debates about race and the extent of genetic variation between groups. Some have argued that clustering according to continental origin demonstrates the existence of significant genetic differences between groups and that these differences may have important implications for differences in health and disease. Others argue that clustering according to continental origin requires the use of large amounts of genetic data or specifically chosen markers and is indicative only of very subtle genetic differences that are unlikely to have biomedical significance. Results: We used small numbers of randomly selected single nucleotide polymorphisms (SNPs) from the International HapMap Project to train naïve Bayes classifiers for prediction of ancestral continent of origin. Predictive accuracy was tested on two independent data sets. Genetically similar groups should be difficult to distinguish, especially if only a small number of genetic markers are used. The genetic differences between continentally defined groups are sufficiently large that one can accurately predict ancestral continent of origin using only a minute, randomly selected fraction of the genetic variation present in the human genome. Genotype data from only 50 random SNPs was sufficient to predict ancestral continent of origin in our primary test data set with an average accuracy of 95%. Genetic variations informative about ancestry were common and widely distributed throughout the genome. Conclusion: Accurate characterization of ancestry is possible using small numbers of randomly selected SNPs. The results presented here show how investigators conducting genetic association studies can use small numbers of arbitrarily…

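The experiment described, predicting continental origin from a small random SNP panel with a naive Bayes classifier, can be sketched as follows; the genotypes are simulated with mildly shifted allele frequencies rather than drawn from HapMap.

```python
# Sketch: naive Bayes ancestry prediction from a random 50-SNP panel,
# with genotypes coded 0/1/2 (copies of one allele).
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_group, n_snps, n_used = 200, 5000, 50

# two populations with slightly shifted allele frequencies per SNP
f1 = rng.uniform(0.1, 0.9, n_snps)
f2 = np.clip(f1 + rng.normal(0, 0.1, n_snps), 0.05, 0.95)
X = np.vstack([rng.binomial(2, f1, (n_per_group, n_snps)),
               rng.binomial(2, f2, (n_per_group, n_snps))])
y = np.repeat([0, 1], n_per_group)

snps = rng.choice(n_snps, n_used, replace=False)   # random 50-SNP panel
Xtr, Xte, ytr, yte = train_test_split(X[:, snps], y, random_state=0)
clf = CategoricalNB(min_categories=3).fit(Xtr, ytr)
print("test accuracy:", clf.score(Xte, yte))
```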
  18. Joint random beam and spectrum selection for spectrum sharing systems with partial channel state information

    KAUST Repository

    Abdallah, Mohamed M.

    2013-11-01

    In this work, we develop a joint interference-aware random beam and spectrum selection scheme that provides enhanced performance for the secondary network under the condition that the interference observed at the primary receiver is below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a set of primary links, each composed of a single-antenna transmitter and a single-antenna receiver. The proposed scheme jointly selects a beam, among a set of power-optimized random beams, as well as the primary spectrum that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint. In particular, we consider the case where the interference level is described by a q-bit description of its magnitude, whereby we propose a technique to find the optimal quantizer thresholds in a mean square error (MSE) sense. © 2013 IEEE.

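The core selection rule, pick the random beam maximizing secondary SINR among those meeting the primary interference constraint, can be illustrated numerically. The Rayleigh channel model, sizes, and threshold below are assumptions for illustration, and the beams are simply unit-norm rather than power-optimized.

```python
# Sketch: interference-aware random beam selection for a spectrum sharing link.
import numpy as np

rng = np.random.default_rng(0)
n_tx, n_beams, noise, i_max = 4, 8, 1.0, 0.5

# Rayleigh channels: secondary tx -> secondary rx, secondary tx -> primary rx
h_s = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
h_p = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
g_ps = (rng.normal() + 1j * rng.normal()) / np.sqrt(2)  # primary tx -> secondary rx

beams = rng.normal(size=(n_beams, n_tx)) + 1j * rng.normal(size=(n_beams, n_tx))
beams /= np.linalg.norm(beams, axis=1, keepdims=True)   # unit-norm random beams

signal = np.abs(beams @ h_s) ** 2
interf = np.abs(beams @ h_p) ** 2                       # caused at the primary rx
sinr = signal / (noise + np.abs(g_ps) ** 2)             # same denominator per beam

ok = interf <= i_max                                    # primary constraint
best = np.flatnonzero(ok)[np.argmax(sinr[ok])] if ok.any() else None
print("selected beam index:", best)
```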
  19. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed

    2012-10-19

    Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined/acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select a beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.

  20. Feature selection for outcome prediction in oesophageal cancer using genetic algorithm and random forest classifier.

    Science.gov (United States)

    Paul, Desbordes; Su, Ruan; Romain, Modzelewski; Sébastien, Vauclin; Pierre, Vera; Isabelle, Gardin

    2017-09-01

    The outcome prediction of patients can greatly help to personalize cancer treatment. A large number of quantitative features (clinical exams, imaging, …) are potentially useful to assess the patient outcome. The challenge is to choose the most predictive subset of features. In this paper, we propose a new feature selection strategy called GARF (genetic algorithm based on random forest), applied to features extracted from positron emission tomography (PET) images and clinical data. The most relevant features, predictive of the therapeutic response or prognostic of patient survival 3 years after the end of treatment, were selected using GARF on a cohort of 65 patients with locally advanced oesophageal cancer eligible for chemo-radiation therapy. The most relevant predictive results were obtained with a subset of 9 features, leading to a random forest misclassification rate of 18±4% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.823±0.032. The most relevant prognostic results were obtained with 8 features, leading to an error rate of 20±7% and an AUC of 0.750±0.108. Both predictive and prognostic results showed better performances using GARF than using the 4 other studied methods. Copyright © 2016 Elsevier Ltd. All rights reserved.

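A GARF-style wrapper can be sketched with a tiny genetic algorithm whose fitness is the cross-validated accuracy of a random forest on the candidate feature subset; the GA operators and settings below are generic choices, not the authors' implementation.

```python
# Sketch: genetic-algorithm feature selection with a random forest fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=120, n_features=40, n_informative=6,
                           random_state=0)
n_pop, n_gen, p_mut = 20, 15, 0.05

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    rf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(rf, X[:, mask], y, cv=3).mean()

pop = rng.random((n_pop, X.shape[1])) < 0.2          # initial random subsets
for _ in range(n_gen):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-n_pop // 2:]]     # truncation selection
    cut = rng.integers(1, X.shape[1], n_pop // 2)
    kids = np.array([np.concatenate([parents[i % len(parents)][:c],
                                     parents[(i + 1) % len(parents)][c:]])
                     for i, c in enumerate(cut)])    # one-point crossover
    kids ^= rng.random(kids.shape) < p_mut           # bit-flip mutation
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```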
  1. Selection of Sampling Pumps Used for Groundwater Monitoring at the Hanford Site

    Energy Technology Data Exchange (ETDEWEB)

    Schalla, Ronald; Webber, William D.; Smith, Ronald M.

    2001-11-05

    The variable frequency drive centrifugal submersible pump, Redi-Flo2 made by Grundfos, was selected for universal application for Hanford Site groundwater monitoring. Specifications for the selected pump and five other pumps were evaluated against current and future Hanford groundwater monitoring performance requirements, and the Redi-Flo2 was selected as the most versatile and applicable for the range of monitoring conditions. The Redi-Flo2 pump distinguished itself from the other pumps considered because of its wide range in output flow rate and its comparatively moderate maintenance and low capital costs. The Redi-Flo2 pump is able to purge a well at a high flow rate and then supply water for sampling at a low flow rate. Groundwater sampling using a low-volume-purging technique (e.g., low flow, minimal purge, no purge, or micropurge) is planned in the future, eliminating the need for the pump to supply a high-output flow rate. Under those conditions, the Well Wizard bladder pump, manufactured by QED Environmental Systems, Inc., may be the preferred pump because of the lower capital cost.

  2. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

    In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA) that is based on the idea of model population analysis (MPA) is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of variable space in each step of optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied in most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.

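The weighted binary matrix sampling step can be sketched as follows: draw random binary inclusion vectors with per-variable weights, score the resulting sub-models, and update the weights from the better half, so the variable space shrinks across iterations. This is a simplified rendering of the idea, not the published VISSA code (which is MATLAB-based and typically uses PLS models for NIR calibration).

```python
# Sketch: weighted binary matrix sampling (WBMS) for variable selection.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n, p = 80, 30
X = rng.normal(size=(n, p))
y = X[:, :4] @ np.array([2.0, -1.5, 1.0, 0.5]) + rng.normal(size=n)

w = np.full(p, 0.5)                      # inclusion probability per variable
for step in range(10):
    M = rng.random((200, p)) < w         # weighted binary sampling matrix
    score = np.array([cross_val_score(LinearRegression(), X[:, m], y, cv=3).mean()
                      if m.any() else -np.inf for m in M])
    top = M[np.argsort(score)[-100:]]    # keep the better half of sub-models
    w = top.mean(axis=0)                 # new weights = selection frequencies
print("final inclusion probabilities:", np.round(w, 2))
```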
  3. Effects of soil water saturation on sampling equilibrium and kinetics of selected polycyclic aromatic hydrocarbons.

    Science.gov (United States)

    Kim, Pil-Gon; Roh, Ji-Yeon; Hong, Yongseok; Kwon, Jung-Hwan

    2017-10-01

    Passive sampling can be applied for measuring the freely dissolved concentration of hydrophobic organic chemicals (HOCs) in soil pore water. When using passive samplers under field conditions, however, there are factors that might affect passive sampling equilibrium and kinetics, such as soil water saturation. To determine the effects of soil water saturation on passive sampling, the equilibrium and kinetics of passive sampling were evaluated by observing changes in the distribution coefficient between sampler and soil (Ksampler/soil) and the uptake rate constant (ku) at various soil water saturations. Polydimethylsiloxane (PDMS) passive samplers were deployed into artificial soils spiked with seven selected polycyclic aromatic hydrocarbons (PAHs). In dry soil (0% water saturation), both Ksampler/soil and ku values were much lower than those in wet soils likely due to the contribution of adsorption of PAHs onto soil mineral surfaces and the conformational changes in soil organic matter. For high molecular weight PAHs (chrysene, benzo[a]pyrene, and dibenzo[a,h]anthracene), both Ksampler/soil and ku values increased with increasing soil water saturation, whereas they decreased with increasing soil water saturation for low molecular weight PAHs (phenanthrene, anthracene, fluoranthene, and pyrene). Changes in the sorption capacity of soil organic matter with soil water content would be the main cause of the changes in passive sampling equilibrium. Henry's law constant could explain the different behaviors in uptake kinetics of the selected PAHs. The results of this study would be helpful when passive samplers are deployed under various soil water saturations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Selection of Candidate Housekeeping Genes for Normalization in Human Postmortem Brain Samples

    Directory of Open Access Journals (Sweden)

    Aldo Pagano

    2011-08-01

    The most frequently used technique to study the expression profile of genes involved in common neurological disorders is quantitative real-time RT-PCR, which allows the indirect detection of very low amounts of selected mRNAs in tissue samples. Expression analysis by RT-qPCR requires an appropriate normalization to the expression level of genes characterized by a stable, constitutive transcription. However, the identification of a gene transcribed at a very stable level is difficult if not impossible, since significant fluctuations of the level of mRNA synthesis often accompany changes of cell behavior. The aim of this study is to identify the most stable genes in postmortem human brain samples of patients affected by Alzheimer’s disease (AD) suitable as reference genes. The experiments analyzed 12 commonly used reference genes in brain samples from eight individuals with AD and seven controls. After a careful analysis of the results calculated by the geNorm and NormFinder algorithms, we found that CYC1 and EIF4A2 are the best reference genes. We remark on the importance of the determination of the best reference genes for each sample to be analyzed and suggest a practical combination of reference genes to be used in the analysis of human postmortem samples.

  5. Detection of Salmonella spp. in veterinary samples by combining selective enrichment and real-time PCR.

    Science.gov (United States)

    Goodman, Laura B; McDonough, Patrick L; Anderson, Renee R; Franklin-Guild, Rebecca J; Ryan, James R; Perkins, Gillian A; Thachil, Anil J; Glaser, Amy L; Thompson, Belinda S

    2017-11-01

    Rapid screening for enteric bacterial pathogens in clinical environments is essential for biosecurity. Salmonella found in veterinary hospitals, particularly Salmonella enterica serovar Dublin, can pose unique challenges for culture and testing because of its poor growth. Multiple Salmonella serovars including Dublin are emerging threats to public health given increasing prevalence and antimicrobial resistance. We adapted an automated food testing method to veterinary samples and evaluated the performance of the method in a variety of matrices including environmental samples (n = 81), tissues (n = 52), feces (n = 148), and feed (n = 29). A commercial kit was chosen as the basis for this approach in view of extensive performance characterizations published by multiple independent organizations. A workflow was established for efficiently and accurately testing veterinary matrices and environmental samples by use of real-time PCR after selective enrichment in Rappaport-Vassiliadis soya (RVS) medium. Using this method, the detection limit for S. Dublin improved by 100-fold over subculture on selective agars (eosin-methylene blue, brilliant green, and xylose-lysine-deoxycholate). Overall, the procedure was effective in detecting Salmonella spp. and provided next-day results.

  6. Changes in Selected Biochemical Indices Resulting from Various Pre-sampling Handling Techniques in Broilers

    Directory of Open Access Journals (Sweden)

    Chloupek Petr

    2011-05-01

    Background: Since it is not yet clear whether it is possible to satisfactorily avoid sampling-induced stress interference in poultry, more studies on the pattern of physiological response and detailed quantification of stress connected with the first few minutes of capture and pre-sampling handling in poultry are required. This study focused on detection of changes in the corticosterone level and concentrations of other selected biochemical parameters in broilers handled in two different manners during blood sampling (involving catching, carrying, restraint, and blood collection itself) that lasted for various time periods within the interval 30-180 seconds. Methods: Stress effects of pre-sampling handling were studied in a group (n = 144) of unsexed ROSS 308 broiler chickens aged 42 d. Handling (catching, carrying, restraint, and blood sampling itself) was carried out in a gentle (caught, held and carried carefully in an upright position) or rough (caught by the leg, held and carried with lack of care in an inverted position) manner and lasted for 30 s, 60 s, 90 s, 120 s, 150 s, and 180 s. Plasma corticosterone, albumin, glucose, cholesterol, lactate, triglycerides and total protein were measured in order to assess the stress-induced changes to these biochemical indices following handling in the first few minutes of capture. Results: Pre-sampling handling in a rough manner resulted in considerably higher plasma concentrations of all biochemical indices monitored when compared with gentle handling. Concentrations of plasma corticosterone after 150 and 180 s of handling were considerably higher (P …). Conclusions: These results indicate that the pre-sampling procedure may be a considerably stressful procedure for broilers, particularly when carried out with lack of care and exceeding 120 seconds.

  7. Rapid determination of trace level copper in tea infusion samples by solid contact ion selective electrode

    Directory of Open Access Journals (Sweden)

    Aysenur Birinci

    2016-07-01

    A new solid-contact copper-selective electrode with a poly(vinyl chloride) (PVC) membrane consisting of o-xylylenebis(N,N-diisobutyldithiocarbamate) as ionophore has been prepared. The main novelties of the constructed ion-selective electrode concept are its enhanced robustness, low cost, and fast response due to the use of solid contacts. The electrode exhibits a rapid (<10 seconds) and near-Nernstian response to Cu2+ activity from 10−1 to 10−6 mol/L in the pH range of 4.0-6.0. No serious interference from common ions was found. The electrode is characterized by high potential stability, reproducibility, and full repeatability. It was used as an indicator electrode in the potentiometric titration of Cu(II) ions with EDTA and for the direct assay of tea infusion samples by means of the calibration graph technique. The results compared favorably with those obtained by atomic absorption spectroscopy (AAS).

  8. Experiments with central-limit properties of spatial samples from locally covariant random fields

    Science.gov (United States)

    Barringer, T.H.; Smith, T.E.

    1992-01-01

    When spatial samples are statistically dependent, the classical estimator of sample-mean standard deviation is well known to be inconsistent. For locally dependent samples, however, consistent estimators of sample-mean standard deviation can be constructed. The present paper investigates the sampling properties of one such estimator, designated as the tau estimator of sample-mean standard deviation. In particular, the asymptotic normality properties of standardized sample means based on tau estimators are studied in terms of computer experiments with simulated sample-mean distributions. The effects of both sample size and dependency levels among samples are examined for various values of tau (denoting the size of the spatial kernel for the estimator). The results suggest that even for small degrees of spatial dependency, the tau estimator exhibits significantly stronger normality properties than does the classical estimator of standardized sample means. © 1992.

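The tau estimator's exact form is not given in this record; the sketch below assumes a generic kernel construction that adds empirical covariance terms for all pairs of sites within distance tau, and compares it with the classical s/sqrt(n) on a locally correlated simulated field.

```python
# Heavily hedged sketch: a kernel ("tau") estimator of sample-mean standard
# deviation that also sums empirical covariances of pairs within distance tau.
# A generic construction assumed for illustration, not necessarily the paper's.
import numpy as np

def tau_se(values, coords, tau):
    d = values - values.mean()
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    cov_sum = np.sum(np.outer(d, d) * (dist <= tau))
    return np.sqrt(cov_sum) / len(values)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, (400, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

# locally covariant field: distance-weighted average of a shared signal + noise
base = rng.normal(size=400)
field = (np.exp(-dist) @ base) / np.exp(-dist).sum(1) + 0.5 * rng.normal(size=400)

classical = field.std(ddof=1) / np.sqrt(len(field))
print("classical SE:", classical, " tau SE:", tau_se(field, coords, 1.0))
```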
  9. Long-term variability of bronchial responsiveness to histamine in a random-population sample of adults

    NARCIS (Netherlands)

    RIJCKEN, B; SCHOUTEN, JP; WEISS, ST; ROSNER, B; DEVRIES, K; VANDERLENDE, R

    1993-01-01

    Long-term variability of bronchial responsiveness has been studied in a random population sample of adults. During a follow-up period of 18 yr, 2,216 subjects contributed 5,012 observations to the analyses. Each subject could have as many as seven observations. Bronchial responsiveness was assessed

  10. SELECTION OF MODELS FOR SEQUENTIAL SAMPLING OF THE TAN-MITE Dichopelmus notus KEIFER (ACARI, ERIOPHYIDAE IN MATE-TEA

    Directory of Open Access Journals (Sweden)

    João Vieira Neto

    2007-09-01

    This research established models for the construction of binomial sequential sampling plans for the tan-mite Dichopelmus notus Keifer (Acari, Eriophyidae) in mate-tea orchards. The study was carried out in a ten-year-old orchard located in Chapecó, Santa Catarina state, Brazil. In three areas of approximately 2,500 m2, 30 plants were selected randomly. Fortnightly, from January to December 2004, infestation by D. notus on 18 mature leaves of ten plants in each area was evaluated. The evaluations were executed directly in the orchard, using lenses (10x) with a 1 cm2 fixed field. The lines of the sequential plans were constructed using the methodology based on the confidence interval of Iwao (1975), considering the models of Normal Approximation with Continuity Correction, the Normal Approximation of Blyth (1986), the Approximation of Hall (1982) modified by Blyth (1986), the Normal Approximation of Molenaar (1973), the Normal Approximation of Pratt (1968), and the Leemis & Trivedi (1996) methodology. The models were evaluated by analyzing the amplitude of their confidence intervals. The results showed that the Normal Approximation with Continuity Correction model should preferably be used in the elaboration of binomial sequential sampling plans for the tan-mite in mate-tea orchards.

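Decision lines for a binomial sequential sampling plan under the normal approximation with continuity correction (one of the models compared above) can be sketched as follows; the action threshold p0 and alpha are illustrative values, not the study's parameters.

```python
# Sketch: sequential-sampling decision lines from a normal-approximation
# confidence interval for a binomial proportion, with continuity correction.
import numpy as np
from scipy.stats import norm

p0, alpha = 0.30, 0.05          # action-threshold infestation level
z = norm.ppf(1 - alpha / 2)

n = np.arange(5, 101)           # cumulative number of leaves inspected
half = z * np.sqrt(n * p0 * (1 - p0)) + 0.5   # CI half-width + continuity corr.
upper = n * p0 + half           # cumulative infested leaves above -> treat
lower = n * p0 - half           # below -> no treatment; between -> keep sampling

for k in (10, 25, 50, 100):
    i = k - 5
    print(f"n={k:3d}  lower={lower[i]:6.1f}  upper={upper[i]:6.1f}")
```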
  11. Albumin to creatinine ratio in a random urine sample: Correlation with severity of preeclampsia

    Directory of Open Access Journals (Sweden)

    Fady S. Moiety

    2014-06-01

    Conclusions: Random urine ACR may be a reliable method for prediction and assessment of severity of preeclampsia. Using the estimated cut-off may add to the predictive value of such a simple quick test.

  12. VizieR Online Data Catalog: PACO spectrally selected sample (Bonaldi+, 2013)

    Science.gov (United States)

    Bonaldi, A.; Bonavera, L.; Massardi, M.; de Zotti, G.

    2014-01-01

    The SS PACO sample comprises all 69 sources with S20GHz > 200 mJy and spectra classified in the AT20G catalogue (Massardi et al. 2011MNRAS.412..318M) as inverted or upturning in the frequency range 4.8-20 GHz, selected over the whole Southern sky. These sources were re-observed for the PACO project between 2009 September and 2010 February, with a scheduling process optimized to allow observations at all frequencies almost simultaneously (i.e., within 10 d) with the Planck satellite. (2 data files).

  13. Sample-based Attribute Selective AnDE for Large Data

    DEFF Research Database (Denmark)

    Chen, Shenglei; Martinez, Ana; Webb, Geoffrey

    2017-01-01

    More and more applications have come with large data sets in the past decade. However, existing algorithms cannot guarantee to scale well on large data. Averaged n-Dependence Estimators (AnDE) allows for flexible learning from out-of-core data, by varying the value of n (number of super parents). Hence AnDE is especially appropriate for large data learning. In this paper, we propose a sample-based attribute selection technique for AnDE. It needs one more pass through the training data, in which a multitude of approximate AnDE models are built and efficiently assessed by leave-one-out cross-validation…

  14. A genetic algorithm-based framework for wavelength selection on sample categorization.

    Science.gov (United States)

    Anzanello, Michel J; Yamashita, Gabrielli; Marcelo, Marcelo; Fogliatto, Flávio S; Ortiz, Rafael S; Mariotti, Kristiane; Ferrão, Marco F

    2017-08-01

    In forensic and pharmaceutical scenarios, the application of chemometrics and optimization techniques has unveiled common and peculiar features of seized medicine and drug samples, helping investigative forces to track illegal operations. This paper proposes a novel framework aimed at identifying relevant subsets of attenuated total reflectance Fourier transform infrared (ATR-FTIR) wavelengths for classifying samples into two classes, for example authentic or forged categories in the case of medicines, or salt or base form in cocaine analysis. In the first step of the framework, the ATR-FTIR spectra were partitioned into equidistant intervals and the k-nearest neighbour (KNN) classification technique was applied to each interval to assign samples to the proper classes. In the next step, the selected intervals were refined through a genetic algorithm (GA), which identifies a limited number of wavelengths from the previously selected intervals with the aim of maximizing classification accuracy. When applied to Cialis®, Viagra®, and cocaine ATR-FTIR datasets, the proposed method substantially decreased the number of wavelengths needed for categorization and increased the classification accuracy. From a practical perspective, the proposed method provides investigative forces with valuable information for monitoring the illegal production of drugs and medicines. In addition, focusing on a reduced subset of wavelengths allows the development of portable devices capable of testing the authenticity of samples during police checking events, avoiding the need for later laboratory analyses and reducing equipment expenses. Theoretically, the proposed GA-based approach yields more refined solutions than current methods relying on interval approaches, which tend to include irrelevant wavelengths in the retained intervals. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Sampling

    CERN Document Server

    Thompson, Steven K

    2012-01-01

    Praise for the Second Edition "This book has never had a competitor. It is the only book that takes a broad approach to sampling . . . any good personal statistics library should include a copy of this book." —Technometrics "Well-written . . . an excellent book on an important subject. Highly recommended." —Choice "An ideal reference for scientific researchers and other professionals who use sampling." —Zentralblatt Math Features new developments in the field combined with all aspects of obtaining, interpreting, and using sample data. Sampling provides an up-to-date treatment…

  16. Improved age determination of blood and teeth samples using a selected set of DNA methylation markers.

    Science.gov (United States)

    Bekaert, Bram; Kamalandua, Aubeline; Zapico, Sara C; Van de Voorde, Wim; Decorte, Ronny

    2015-01-01

    Age estimation from DNA methylation markers has seen an exponential growth of interest, not least from forensic scientists. The current published assays, however, can still be improved by lowering the number of markers in the assay and by providing more accurate models to predict chronological age. From the published literature we selected 4 age-associated genes (ASPA, PDE4C, ELOVL2, and EDARADD) and determined CpG methylation levels from 206 blood samples of both deceased and living individuals (age range: 0-91 years). These data were subsequently used to compare prediction accuracy with both linear and non-linear regression models. A quadratic regression model in which the methylation levels of ELOVL2 were squared showed the highest accuracy, with a Mean Absolute Deviation (MAD) between chronological age and predicted age of 3.75 years and an adjusted R² of 0.95. No difference in accuracy was observed for samples obtained either from living or deceased individuals or between the 2 genders. In addition, 29 teeth from different individuals (age range: 19-70 years) were analyzed using the same set of markers, resulting in a MAD of 4.86 years and an adjusted R² of 0.74. Cross-validation of the results obtained from blood samples demonstrated the robustness and reproducibility of the assay. In conclusion, the set of 4 CpG DNA methylation markers is capable of producing highly accurate age predictions for blood samples from deceased and living individuals.
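
    The model form itself is easy to reproduce: ordinary least squares on the four methylation fractions plus a squared ELOVL2 term. The sketch below runs on synthetic methylation data (coefficients and noise levels are invented), so the MAD it reports is illustrative only.

```python
# Sketch of the quadratic age model: age ~ ASPA + PDE4C + EDARADD + ELOVL2 + ELOVL2^2.
# Synthetic methylation values; all coefficients here are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n = 206
age = rng.uniform(0, 91, n)
# Fake CpG methylation fractions loosely correlated with age.
aspa = 0.6 - 0.002 * age + rng.normal(0, 0.02, n)
pde4c = 0.2 + 0.003 * age + rng.normal(0, 0.02, n)
edaradd = 0.7 - 0.003 * age + rng.normal(0, 0.02, n)
elovl2 = 0.1 + 0.008 * age - 0.00003 * age**2 + rng.normal(0, 0.02, n)

# Design matrix with an intercept and a squared ELOVL2 term, fit by least squares.
X = np.column_stack([np.ones(n), aspa, pde4c, edaradd, elovl2, elovl2**2])
beta, *_ = np.linalg.lstsq(X, age, rcond=None)
pred = X @ beta
mad = np.mean(np.abs(pred - age))
print(f"MAD between chronological and predicted age: {mad:.2f} years")
```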

  17. Cold Spray Deposition of Freestanding Inconel Samples and Comparative Analysis with Selective Laser Melting

    Science.gov (United States)

    Bagherifard, Sara; Roscioli, Gianluca; Zuccoli, Maria Vittoria; Hadi, Mehdi; D'Elia, Gaetano; Demir, Ali Gökhan; Previtali, Barbara; Kondás, Ján; Guagliano, Mario

    2017-10-01

    Cold spray offers the possibility of obtaining almost zero-porosity buildups with no theoretical limit to the thickness. Moreover, cold spray can eliminate particle melting, evaporation, crystallization, grain growth, unwanted oxidation, undesirable phases and thermally induced tensile residual stresses. Such characteristics can boost its potential to be used as an additive manufacturing technique. Indeed, deposition via cold spray is recently finding its path toward fabrication of freeform components since it can address the common challenges of powder-bed additive manufacturing techniques including major size constraints, deposition rate limitations and high process temperature. Herein, we prepared nickel-based superalloy Inconel 718 samples with cold spray technique and compared them with similar samples fabricated by selective laser melting method. The samples fabricated using both methods were characterized in terms of mechanical strength, microstructural and porosity characteristics, Vickers microhardness and residual stresses distribution. Different heat treatment cycles were applied to the cold-sprayed samples in order to enhance their mechanical characteristics. The obtained data confirm that cold spray technique can be used as a complementary additive manufacturing method for fabrication of high-quality freestanding components where higher deposition rate, larger final size and lower fabrication temperatures are desired.

  18. Evaluation of a gas chromatography method for azelaic acid determination in selected biological samples

    Science.gov (United States)

    Garelnabi, Mahdi; Litvinov, Dmitry; Parthasarathy, Sampath

    2010-01-01

    Background: Azelaic acid (AzA) is the best-known dicarboxylic acid to have pharmaceutical benefits and clinical applications, and also to be associated with the pathophysiology of some diseases. Materials and Methods: We extracted and methyl-esterified AzA and determined its concentration in human plasma obtained from healthy individuals, and also in mice fed an AzA-containing diet for three months. Results: AzA was detected by gas chromatography (GC) and confirmed by liquid chromatography-mass spectrometry (LC-MS) and gas chromatography-mass spectrometry (GC-MS). Our results have shown that AzA can be determined efficiently in selected biological samples by the GC method, with a 1 nM limit of detection (LoD); the limit of quantification (LoQ) was established at 50 nM. Analytical sensitivity, assayed in hexane, was 0.050 nM. The method demonstrated 8-10% CV batch repeatability across the sample types and 13-18.9% CV in the within-lab precision analysis. The method has shown that AzA can efficiently be recovered from various sample preparations, including liver tissue homogenate (95%) and human plasma (97%). Conclusions: Because of its simplicity and low limit of quantification, the present method provides a useful tool for determining AzA in various biological sample preparations. PMID:22558586

  19. Does the Use of a Decision Aid Improve Decision Making in Prosthetic Heart Valve Selection? A Multicenter Randomized Trial

    NARCIS (Netherlands)

    Korteland, Nelleke M.; Ahmed, Yunus; Koolbergen, David R.; Brouwer, Marjan; de Heer, Frederiek; Kluin, Jolanda; Bruggemans, Eline F.; Klautz, Robert J. M.; Stiggelbout, Anne M.; Bucx, Jeroen J. J.; Roos-Hesselink, Jolien W.; Polak, Peter; Markou, Thanasie; van den Broek, Inge; Ligthart, Rene; Bogers, Ad J. J. C.; Takkenberg, Johanna J. M.

    2017-01-01

    A Dutch online patient decision aid to support prosthetic heart valve selection was recently developed. A multicenter randomized controlled trial was conducted to assess whether use of the patient decision aid results in optimization of shared decision making in prosthetic heart valve selection. In…

  20. Beyond Random Walk and Metropolis-Hastings Samplers: Why You Should Not Backtrack for Unbiased Graph Sampling

    CERN Document Server

    Lee, Chul-Ho; Eun, Do Young

    2012-01-01

    Graph sampling via crawling has been actively considered as a generic and important tool for collecting uniform node samples so as to consistently estimate and uncover various characteristics of complex networks. The so-called simple random walk with re-weighting (SRW-rw) and Metropolis-Hastings (MH) algorithm have been popular in the literature for such unbiased graph sampling. However, an unavoidable downside of their core random walks -- slow diffusion over the space, can cause poor estimation accuracy. In this paper, we propose non-backtracking random walk with re-weighting (NBRW-rw) and MH algorithm with delayed acceptance (MHDA) which are theoretically guaranteed to achieve, at almost no additional cost, not only unbiased graph sampling but also higher efficiency (smaller asymptotic variance of the resulting unbiased estimators) than the SRW-rw and the MH algorithm, respectively. In particular, a remarkable feature of the MHDA is its applicability for any non-uniform node sampling like the MH algorithm,...
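
    A minimal sketch of the non-backtracking idea: the walker avoids its previous node unless the current node is a dead end, and node samples are re-weighted by inverse degree to undo the walk's bias toward high-degree nodes. This illustrates the principle only, not the paper's full NBRW-rw construction; the graph and the estimated quantity are arbitrary choices.

```python
# Sketch of a non-backtracking random walk with re-weighting (NBRW-rw style).
# Node samples are re-weighted by 1/degree so that averages are unbiased
# with respect to the uniform node distribution.
import random
import networkx as nx

def nbrw_sample(G, start, steps, rng=random.Random(2)):
    samples, prev, cur = [], None, start
    for _ in range(steps):
        nbrs = list(G.neighbors(cur))
        # Avoid backtracking unless the current node is a dead end.
        choices = [v for v in nbrs if v != prev] or nbrs
        prev, cur = cur, rng.choice(choices)
        samples.append(cur)
    return samples

G = nx.barabasi_albert_graph(1000, 3, seed=2)
walk = nbrw_sample(G, start=0, steps=20000)

# Estimate the mean degree: importance weights 1/deg correct for the walk's
# stationary bias toward high-degree nodes.
w = [1.0 / G.degree(v) for v in walk]
est = sum(wi * G.degree(v) for wi, v in zip(w, walk)) / sum(w)
print("estimated mean degree:", est,
      "true:", 2 * G.number_of_edges() / G.number_of_nodes())
```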

  1. Identifying the origin of groundwater samples in a multi-layer aquifer system with Random Forest classification

    Science.gov (United States)

    Baudron, Paul; Alonso-Sarría, Francisco; García-Aróstegui, José Luís; Cánovas-García, Fulgencio; Martínez-Vicente, David; Moreno-Brotóns, Jesús

    2013-08-01

    Accurate identification of the origin of groundwater samples is not always possible in complex multilayered aquifers. This poses a major difficulty for a reliable interpretation of geochemical results. The problem is especially severe when information on tubewell design is hard to obtain. This paper shows a supervised classification method based on the Random Forest (RF) machine learning technique to identify the layer from which groundwater samples were extracted. The classification rules were based on the major-ion composition of the samples. We applied this method to the Campo de Cartagena multi-layer aquifer system in southeastern Spain. A large amount of hydrogeochemical data was available, but only a limited fraction of the sampled tubewells included a reliable determination of the borehole design and, consequently, of the aquifer layer being exploited. An added difficulty was the very similar composition of water samples extracted from different aquifer layers. Moreover, not all groundwater samples included the same geochemical variables. Despite such a difficult background, the Random Forest classification reached accuracies over 90%. These results were much better than those of the Linear Discriminant Analysis (LDA) and Decision Trees (CART) supervised classification methods. From a total of 1549 samples, 805 proceeded from one unique identified aquifer, 409 proceeded from a possible blend of waters from several aquifers and 335 were of unknown origin. Only 468 of the 805 unique-aquifer samples included all the chemical variables needed to calibrate and validate the models. Finally, 107 of the groundwater samples of unknown origin could be classified; most of the samples that could not be classified did not feature a complete dataset. The uncertainty in the identification of training samples was taken into account to enhance the model.
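
    The supervised workflow maps directly onto a standard random forest pipeline, as in the sketch below. The major-ion values and the three layers are invented; the out-of-bag score stands in for internal validation, and feature importances indicate which ions drive the classification.

```python
# Sketch: Random Forest classification of groundwater samples by aquifer layer,
# using major-ion composition as features (synthetic values, illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
ions = ["Ca", "Mg", "Na", "K", "Cl", "SO4", "HCO3", "NO3"]
n = 468                                   # samples with complete chemistry
layer = rng.integers(0, 3, n)             # three hypothetical aquifer layers
X = rng.normal(size=(n, len(ions))) + layer[:, None] * 0.5   # weak separation

X_tr, X_te, y_tr, y_te = train_test_split(X, layer, test_size=0.3, random_state=3)
rf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=3)
rf.fit(X_tr, y_tr)
print("OOB accuracy:", round(rf.oob_score_, 3))
print("held-out accuracy:", round(rf.score(X_te, y_te), 3))
# Feature importances suggest which major ions drive the classification.
for ion, imp in sorted(zip(ions, rf.feature_importances_), key=lambda t: -t[1]):
    print(f"{ion}: {imp:.3f}")
```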

  2. Soil map disaggregation improved by soil-landscape relationships, area-proportional sampling and random forest implementation

    DEFF Research Database (Denmark)

    Møller, Anders Bjørn; Malone, Brendan P.; Odgers, Nathan

    …of European Communities (CEC, 1985), respectively, both using the FAO 1974 classification. Furthermore, the effects of implementing soil-landscape relationships, using area-proportional sampling instead of per-polygon sampling, and replacing the default C5.0 classification tree algorithm with a random forest algorithm were evaluated. The resulting maps were validated on 777 soil profiles situated in a grid covering Denmark. The experiments showed that the results obtained with Jacobsen's map were more accurate than those obtained with the CEC map, despite a nominally coarser scale of 1:2,000,000 vs. 1:1,000,000. This finding is probably related to the fact that Jacobsen's map was more detailed, with a larger number of polygons, soil map units and soil types, despite its coarser scale. The results showed that the implementation of soil-landscape relationships, area-proportional sampling and the random forest…

  3. Selective extraction of dimethoate from cucumber samples by use of molecularly imprinted microspheres

    Directory of Open Access Journals (Sweden)

    Jiao-Jiao Du

    2015-06-01

    Molecularly imprinted polymers for dimethoate recognition were synthesized by the precipitation polymerization technique using methyl methacrylate (MMA) as the functional monomer and ethylene glycol dimethacrylate (EGDMA) as the cross-linker. The morphology, adsorption and recognition properties were investigated by scanning electron microscopy (SEM), a static adsorption test, and a competitive adsorption test. To obtain the best selectivity and binding performance, the synthesis and adsorption conditions of the MIPs were optimized through single-factor experiments. Under the optimized conditions, the resultant polymers exhibited uniform size, satisfactory binding capacity and significant selectivity. Furthermore, the imprinted polymers were successfully applied as a specific solid-phase extractant, combined with high-performance liquid chromatography (HPLC), for determination of dimethoate residues in cucumber samples. The average recoveries of three spiked samples ranged from 78.5% to 87.9%, with relative standard deviations (RSDs) of less than 4.4%, and the limit of detection (LOD) obtained for dimethoate was as low as 2.3 μg/mL. Keywords: Molecularly imprinted polymer, Precipitation polymerization, Dimethoate, Cucumber, HPLC

  4. Active classifier selection for RGB-D object categorization using a Markov random field ensemble method

    Science.gov (United States)

    Durner, Maximilian; Márton, Zoltán.; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin

    2017-03-01

    In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.

  5. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also known as model transfer) in near infrared (NIR) spectroscopy. NIR data from a corn experiment for the analysis of protein content are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS is compared with KS-PDS-KPLS in terms of the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions are that SIMPLISMA-KPLS can be utilized as an alternative sample selection method for model transfer. Although its accuracy is similar to that of Kennard-Stone (KS), it differs from KS in that it employs concentration information in the selection program. This ensures that analyte information is involved in the analysis and that the spectra (X) of the selected samples are interrelated with the concentration (y). It can also be used for outlier elimination simultaneously, through validation of the calibration. The statistics on running time make clear that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
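
    For contrast, the Kennard-Stone baseline mentioned above is short to state: repeatedly pick the candidate farthest, in a max-min sense, from everything already selected. Note that KS works on the spectra X alone, which is exactly the limitation SIMPLISMA-KPLS addresses by bringing in the concentration y. Data shapes below are invented.

```python
# Sketch of Kennard-Stone sample selection, the baseline the paper compares
# against: greedily pick the sample farthest from those already selected.
import numpy as np

def kennard_stone(X, k):
    """Return indices of k samples chosen by the Kennard-Stone algorithm."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    selected = list(np.unravel_index(np.argmax(d), d.shape))    # two farthest points
    while len(selected) < k:
        remaining = [i for i in range(len(X)) if i not in selected]
        # For each candidate, distance to the nearest already-selected sample...
        nearest = d[np.ix_(remaining, selected)].min(axis=1)
        # ...and take the candidate that maximizes it (max-min criterion).
        selected.append(remaining[int(np.argmax(nearest))])
    return selected

rng = np.random.default_rng(4)
spectra = rng.normal(size=(80, 700))     # 80 corn spectra x 700 NIR wavelengths
calib = kennard_stone(spectra, 20)
print("calibration subset:", calib)
```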

  6. Implications of structural genomics target selection strategies: Pfam5000, whole genome, and random approaches

    Energy Technology Data Exchange (ETDEWEB)

    Chandonia, John-Marc; Brenner, Steven E.

    2004-07-14

    The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy which is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 of eukaryotes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage that would allow accurate modeling of these proteins would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as long as secondary consideration is given to large families within Pfam, coverage results vary only slightly. In contrast, focusing structural genomics on a single tractable genome would have only a limited impact on structural knowledge of other proteomes: a significant fraction (about 30-40 percent of the proteins, and 40-60 percent of the residues) of each proteome is classified in small…

  7. Clinical outcome of intracytoplasmic injection of spermatozoa morphologically selected under high magnification: a prospective randomized study.

    Science.gov (United States)

    Balaban, Basak; Yakin, Kayhan; Alatas, Cengiz; Oktem, Ozgur; Isiklar, Aycan; Urman, Bulent

    2011-05-01

    Recent evidence shows that the selection of spermatozoa based on the analysis of morphology under high magnification (×6000) may have a positive impact on embryo development in cases with severe male factor infertility and/or previous implantation failures. The objective of this prospective randomized study was to compare the clinical outcome of 87 intracytoplasmic morphologically selected sperm injection (IMSI) cycles with 81 conventional intracytoplasmic sperm injection (ICSI) cycles in an unselected infertile population. IMSI did not provide a significant improvement in the clinical outcome compared with ICSI although there were trends for higher implantation (28.9% versus 19.5%), clinical pregnancy (54.0% versus 44.4%) and live birth rates (43.7% versus 38.3%) in the IMSI group. However, severe male factor patients benefited from the IMSI procedure as shown by significantly higher implantation rates compared with their counterparts in the ICSI group (29.6% versus 15.2%, P=0.01). These results suggest that IMSI may improve IVF success rates in a selected group of patients with male factor infertility. New technological developments enable the real time examination of motile spermatozoa with an inverted light microscope equipped with high-power differential interference contrast optics, enhanced by digital imaging. High magnification (over ×6000) provides the identification of spermatozoa with a normal nucleus and nuclear content. Intracytoplasmic injection of spermatozoa selected according to fine nuclear morphology under high magnification may improve the clinical outcome in cases with severe male factor infertility. Copyright © 2010 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  8. Radiographic methods used before removal of mandibular third molars among randomly selected general dental clinics.

    Science.gov (United States)

    Matzen, Louise H; Petersen, Lars B; Wenzel, Ann

    2016-01-01

    To assess radiographic methods and diagnostically sufficient images used before removal of mandibular third molars among randomly selected general dental clinics. Furthermore, to assess factors predisposing for an additional radiographic examination. 2 observers visited 18 randomly selected clinics in Denmark and studied patient files, including radiographs of patients who had their mandibular third molar(s) removed. The radiographic unit and type of receptor were registered. A diagnostically sufficient image was defined as the whole tooth and mandibular canal were displayed in the radiograph (yes/no). Overprojection between the tooth and mandibular canal (yes/no) and patient-reported inferior alveolar nerve sensory disturbances (yes/no) were recorded. Regression analyses tested if overprojection between the third molar and the mandibular canal and an insufficient intraoral image predisposed for additional radiographic examination(s). 1500 mandibular third molars had been removed; 1090 had intraoral, 468 had panoramic and 67 had CBCT examination. 1000 teeth were removed after an intraoral examination alone, 433 after panoramic examination and 67 after CBCT examination. 90 teeth had an additional examination after intraoral. Overprojection between the tooth and mandibular canal was a significant factor (p < 0.001, odds ratio = 3.56) for an additional examination. 63.7% of the intraoral images were sufficient and 36.3% were insufficient, with no significant difference between images performed with phosphor plates and solid-state sensors (p = 0.6). An insufficient image predisposed for an additional examination (p = 0.008, odds ratio = 1.8) but was only performed in 11% of the cases. Most mandibular third molars were removed based on an intraoral examination although 36.3% were insufficient.

  9. Active Learning to Overcome Sample Selection Bias: Application to Photometric Variable Star Classification

    Science.gov (United States)

    Richards, Joseph W.; Starr, Dan L.; Brink, Henrik; Miller, Adam A.; Bloom, Joshua S.; Butler, Nathaniel R.; James, J. Berian; Long, James P.; Rice, John

    2012-01-01

    Despite the great promise of machine-learning algorithms to classify and predict astrophysical parameters for the vast numbers of astrophysical sources and transients observed in large-scale surveys, the peculiarities of the training data often manifest as strongly biased predictions on the data of interest. Typically, training sets are derived from historical surveys of brighter, more nearby objects than those from more extensive, deeper surveys (testing data). This sample selection bias can cause catastrophic errors in predictions on the testing data because (1) standard assumptions for machine-learned model selection procedures break down and (2) dense regions of testing space might be completely devoid of training data. We explore possible remedies to sample selection bias, including importance weighting, co-training, and active learning (AL). We argue that AL—where the data whose inclusion in the training set would most improve predictions on the testing set are queried for manual follow-up—is an effective approach and is appropriate for many astronomical applications. For a variable star classification problem on a well-studied set of stars from Hipparcos and Optical Gravitational Lensing Experiment, AL is the optimal method in terms of error rate on the testing data, beating the off-the-shelf classifier by 3.4% and the other proposed methods by at least 3.0%. To aid with manual labeling of variable stars, we developed a Web interface which allows for easy light curve visualization and querying of external databases. Finally, we apply AL to classify variable stars in the All Sky Automated Survey, finding dramatic improvement in our agreement with the ASAS Catalog of Variable Stars, from 65.5% to 79.5%, and a significant increase in the classifier's average confidence for the testing set, from 14.6% to 42.9%, after a few AL iterations.
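
    The querying loop itself is compact. The sketch below implements margin-based uncertainty sampling with a random forest on synthetic data; this is one common flavor of AL, not necessarily the exact acquisition rule used in the paper.

```python
# Sketch of pool-based active learning: at each iteration, query labels for
# the pool objects the classifier is least sure about, mimicking the paper's
# "query for manual follow-up" loop.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=2000, n_features=20, n_classes=3,
                           n_informative=8, random_state=5)
labeled = list(range(50))                       # small, biased training set
pool = list(range(50, 2000))                    # unlabeled survey objects

clf = RandomForestClassifier(n_estimators=200, random_state=5)
for it in range(10):
    clf.fit(X[labeled], y[labeled])
    proba = np.sort(clf.predict_proba(X[pool]), axis=1)
    margin = proba[:, -1] - proba[:, -2]        # small margin = high uncertainty
    # Query the 20 pool objects with the smallest margin.
    query = np.argsort(margin)[:20]
    for q in sorted(query, reverse=True):       # move them into the training set
        labeled.append(pool.pop(int(q)))
print("accuracy on remaining pool:", clf.score(X[pool], y[pool]))
```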

  10. The Swift Gamma-Ray Burst Host Galaxy Legacy Survey. I. Sample Selection and Redshift Distribution

    Science.gov (United States)

    Perley, D. A.; Krühler, T.; Schulze, S.; de Ugarte Postigo, A.; Hjorth, J.; Berger, E.; Cenko, S. B.; Chary, R.; Cucchiara, A.; Ellis, R.; Fong, W.; Fynbo, J. P. U.; Gorosabel, J.; Greiner, J.; Jakobsson, P.; Kim, S.; Laskar, T.; Levan, A. J.; Michałowski, M. J.; Milvang-Jensen, B.; Tanvir, N. R.; Thöne, C. C.; Wiersema, K.

    2016-01-01

    We introduce the Swift Gamma-Ray Burst Host Galaxy Legacy Survey (“SHOALS”), a multi-observatory high-redshift galaxy survey targeting the largest unbiased sample of long-duration gamma-ray burst (GRB) hosts yet assembled (119 in total). We describe the motivations of the survey and the development of our selection criteria, including an assessment of the impact of various observability metrics on the success rate of afterglow-based redshift measurement. We briefly outline our host galaxy observational program, consisting of deep Spitzer/IRAC imaging of every field supplemented by similarly deep, multicolor optical/near-IR photometry, plus spectroscopy of events without preexisting redshifts. Our optimized selection cuts combined with host galaxy follow-up have so far enabled redshift measurements for 110 targets (92%) and placed upper limits on all but one of the remainder. About 20% of GRBs in the sample are heavily dust obscured, and at most 2% originate from z > 5.5. Using this sample, we estimate the redshift-dependent GRB rate density, showing it to peak at z ∼ 2.5 and fall by at least an order of magnitude toward low (z = 0) redshift, while declining more gradually toward high (z ∼ 7) redshift. This behavior is consistent with a progenitor whose formation efficiency varies modestly over cosmic history. Our survey will permit the most detailed examination to date of the connection between the GRB host population and general star-forming galaxies, directly measure evolution in the host population over cosmic time and discern its causes, and provide new constraints on the fraction of cosmic star formation occurring in undetectable galaxies at all redshifts.

  11. The Swift Gamma-Ray Burst Host Galaxy Legacy Survey. I. Sample Selection and Redshift Distribution

    Science.gov (United States)

    Perley, D. A.; Kruhler, T.; Schulze, S.; Postigo, A. De Ugarte; Hjorth, J.; Berger, E.; Cenko, S. B.; Chary, R.; Cucchiara, A.; Ellis, R.; et al.

    2016-01-01

    We introduce the Swift Gamma-Ray Burst Host Galaxy Legacy Survey (SHOALS), a multi-observatory high redshift galaxy survey targeting the largest unbiased sample of long-duration gamma-ray burst (GRB) hosts yet assembled (119 in total). We describe the motivations of the survey and the development of our selection criteria, including an assessment of the impact of various observability metrics on the success rate of afterglow-based redshift measurement. We briefly outline our host galaxy observational program, consisting of deep Spitzer/IRAC imaging of every field supplemented by similarly deep, multicolor optical/near-IR photometry, plus spectroscopy of events without preexisting redshifts. Our optimized selection cuts combined with host galaxy follow-up have so far enabled redshift measurements for 110 targets (92%) and placed upper limits on all but one of the remainder. About 20% of GRBs in the sample are heavily dust obscured, and at most 2% originate from z > 5.5. Using this sample, we estimate the redshift-dependent GRB rate density, showing it to peak at z approx. 2.5 and fall by at least an order of magnitude toward low (z = 0) redshift, while declining more gradually toward high (z approx. 7) redshift. This behavior is consistent with a progenitor whose formation efficiency varies modestly over cosmic history. Our survey will permit the most detailed examination to date of the connection between the GRB host population and general star-forming galaxies, directly measure evolution in the host population over cosmic time and discern its causes, and provide new constraints on the fraction of cosmic star formation occurring in undetectable galaxies at all redshifts.

  12. Randomization tests

    CERN Document Server

    Edgington, Eugene

    2007-01-01

    Statistical Tests That Do Not Require Random Sampling; Randomization Tests; Numerical Examples; Randomization Tests and Nonrandom Samples; The Prevalence of Nonrandom Samples in Experiments; The Irrelevance of Random Samples for the Typical Experiment; Generalizing from Nonrandom Samples; Intelligibility; Respect for the Validity of Randomization Tests; Versatility; Practicality; Precursors of Randomization Tests; Other Applications of Permutation Tests; Questions and Exercises; Notes; References; Randomized Experiments; Unique Benefits of Experiments; Experimentation without Mani…
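
    The basic mechanics of a randomization test are worth seeing once: under the null hypothesis the group labels are exchangeable, so the observed statistic is compared against its distribution over random relabelings. A minimal two-sample sketch with made-up measurements:

```python
# Two-sample randomization test: compare the observed mean difference with
# its distribution over random relabelings of the pooled data.
import numpy as np

rng = np.random.default_rng(6)
treated = np.array([7.1, 8.4, 6.9, 9.0, 8.2])
control = np.array([6.0, 6.8, 7.2, 5.9, 6.5])
observed = treated.mean() - control.mean()

pooled = np.concatenate([treated, control])
count = 0
n_perm = 100_000
for _ in range(n_perm):
    perm = rng.permutation(pooled)
    diff = perm[:len(treated)].mean() - perm[len(treated):].mean()
    if abs(diff) >= abs(observed):
        count += 1
# The +1 terms make the p-value valid even for small permutation counts.
print(f"two-sided randomization p-value: {(count + 1) / (n_perm + 1):.4f}")
```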

  13. Finite-sample corrected generalized estimating equation of population average treatment effects in stepped wedge cluster randomized trials.

    Science.gov (United States)

    Scott, JoAnna M; deCamp, Allan; Juraska, Michal; Fay, Michael P; Gilbert, Peter B

    2017-04-01

    Stepped wedge designs are increasingly commonplace and advantageous for cluster randomized trials when it is both unethical to assign placebo, and it is logistically difficult to allocate an intervention simultaneously to many clusters. We study marginal mean models fit with generalized estimating equations for assessing treatment effectiveness in stepped wedge cluster randomized trials. This approach has advantages over the more commonly used mixed models that (1) the population-average parameters have an important interpretation for public health applications and (2) they avoid untestable assumptions on latent variable distributions and avoid parametric assumptions about error distributions, therefore, providing more robust evidence on treatment effects. However, cluster randomized trials typically have a small number of clusters, rendering the standard generalized estimating equation sandwich variance estimator biased and highly variable and hence yielding incorrect inferences. We study the usual asymptotic generalized estimating equation inferences (i.e., using sandwich variance estimators and asymptotic normality) and four small-sample corrections to generalized estimating equation for stepped wedge cluster randomized trials and for parallel cluster randomized trials as a comparison. We show by simulation that the small-sample corrections provide improvement, with one correction appearing to provide at least nominal coverage even with only 10 clusters per group. These results demonstrate the viability of the marginal mean approach for both stepped wedge and parallel cluster randomized trials. We also study the comparative performance of the corrected methods for stepped wedge and parallel designs, and describe how the methods can accommodate interval censoring of individual failure times and incorporate semiparametric efficient estimators.
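
    As a sketch of the approach, the snippet below fits a marginal model to a simulated stepped wedge layout with statsmodels' GEE and contrasts the usual robust sandwich standard error with a bias-reduced small-sample covariance. It assumes statsmodels accepts cov_type="bias_reduced" (a Mancl-DeRouen-type correction) and implements only one of the four corrections studied; the simulated data are invented.

```python
# Sketch: population-average (marginal) model for a stepped wedge cluster
# randomized trial, with a small-sample-corrected sandwich variance.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(11)
clusters, periods = 10, 5
rows = []
for c in range(clusters):
    crossover = 1 + c % (periods - 1)            # period when cluster starts treatment
    u = rng.normal(0, 0.3)                       # cluster-level random effect
    for t in range(periods):
        treat = int(t >= crossover)
        for _ in range(20):                      # 20 subjects per cluster-period
            y = 1.0 + 0.5 * treat + 0.1 * t + u + rng.normal()
            rows.append((c, t, treat, y))
df = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])

model = sm.GEE.from_formula("y ~ treat + C(period)", groups="cluster",
                            data=df, cov_struct=sm.cov_struct.Exchangeable())
for cov in ("robust", "bias_reduced"):           # usual vs. corrected sandwich
    res = model.fit(cov_type=cov)
    print(cov, "treatment SE:", round(res.bse["treat"], 3))
```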

  14. Control group selection in critical care randomized controlled trials evaluating interventional strategies: An ethical assessment.

    Science.gov (United States)

    Silverman, Henry J; Miller, Franklin G

    2004-03-01

    Ethical concerns have been raised about critical care randomized controlled trials in which the standard of care reflects a broad range of clinical practices. Commentators have argued that trials without an unrestricted control group, in which standard practices are implemented at the discretion of the attending physician, lack the ability to redefine the standard of care and might expose subjects to excessive harms due to an inability to stop early. To develop a framework for analyzing control group selection for critical care trials. Ethical analysis. A key ethical variable in trial design is the extent to which the control group adequately reflects standard care practices. Such a control group might incorporate either the "unrestricted" practices of physicians or a protocol that specifies and restricts the parameters of standard practices. Control group selection should be determined with respect to the following ethical objectives of trial design: 1) clinical value, 2) scientific validity, 3) efficiency and feasibility, and 4) protection of human subjects. Because these objectives may conflict, control group selection will involve trade-offs and compromises. Trials using a protocolized rather than an unrestricted standard care control group will likely have enhanced validity. However, if the protocolized control group lacks representativeness of standard care practices, then trials that use such groups will offer less clinical value and could provide less assurance of protecting subjects compared with trials that use unrestricted control groups. For trials evaluating contrasting strategies that do not adequately represent standard practices, use of a third group that is more representative of standard practices will enhance clinical value and increase the ability to stop early if needed to protect subjects. These advantages might come at the expense of efficiency and feasibility. Weighing and balancing the competing ethical objectives of trial design should be…

  15. Airway hyperresponsiveness to mannitol and methacholine and exhaled nitric oxide: a random-sample population study

    DEFF Research Database (Denmark)

    Sverrild, Asger; Porsbjerg, Celeste; Thomsen, Simon Francis

    2010-01-01

    Studies of selected patient groups have shown that airway hyperresponsiveness (AHR) to mannitol is more specific than methacholine for the diagnosis of asthma, as well as more closely associated with markers of airway inflammation in asthma....

  17. Securing image information using double random phase encoding and parallel compressive sensing with updated sampling processes

    Science.gov (United States)

    Hu, Guiqiang; Xiao, Di; Wang, Yong; Xiang, Tao; Zhou, Qing

    2017-11-01

    Recently, a new kind of image encryption approach using compressive sensing (CS) and double random phase encoding has received much attention due to advantages such as compressibility and robustness. However, this approach is found to be vulnerable to chosen plaintext attack (CPA) if the CS measurement matrix is re-used. Therefore, designing an efficient measurement matrix updating mechanism that ensures resistance to CPA is of practical significance. In this paper, we provide a novel solution to update the CS measurement matrix by altering the secret sparse basis with the help of counter mode operation. Particularly, the secret sparse basis is implemented by a reality-preserving fractional cosine transform matrix. Compared with a conventional CS-based cryptosystem that generates all the random entries of the measurement matrix from scratch, our scheme offers superior efficiency while guaranteeing resistance to CPA. Experimental and analysis results show that the proposed scheme has good security performance and is robust against noise and occlusion.
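
    One way to realize the matrix-updating idea is to derive each measurement matrix from a shared secret key plus a per-image counter, so no matrix is ever reused across plaintexts. The construction below (SHA-256 of key and counter seeding a Gaussian matrix) is an illustrative stand-in, not the paper's reality-preserving fractional cosine transform scheme.

```python
# Sketch of counter-based updating: a fresh measurement matrix per plaintext,
# derived from a secret key and an incrementing counter, closes the CPA
# weakness caused by matrix re-use. Key/counter -> seed is illustrative only.
import hashlib
import numpy as np

def measurement_matrix(key: bytes, counter: int, m: int, n: int) -> np.ndarray:
    digest = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
    seed = int.from_bytes(digest[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian sensing matrix

key = b"shared-secret-key"
x = np.random.default_rng(7).standard_normal(256)     # sparse-basis coefficients
for counter in (0, 1):
    Phi = measurement_matrix(key, counter, m=64, n=256)
    y = Phi @ x                                       # CS measurements to transmit
    print(f"counter={counter}, first measurement: {y[0]:+.3f}")
```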

  18. Selective determination of total vanadium in water samples by cloud point extraction of its ternary complex.

    Science.gov (United States)

    Filik, Hayati; Yanaz, Zeynep; Apak, Reşat

    2008-07-14

    A highly sensitive micelle-mediated extraction methodology for the preconcentration of trace levels of vanadium, as a prior step to its determination by flame atomic absorption spectrometry (FAAS), has been developed. Vanadium was complexed with 1-(2-pyridylazo)-2-naphthol (PAN) and hydrogen peroxide in acidic medium (0.2 mol L⁻¹ phosphoric acid) using Triton X-100 as surfactant and quantitatively extracted into a small volume of the surfactant-rich phase after centrifugation. The color reaction of vanadium ions with hydrogen peroxide and PAN in phosphoric acid medium is highly selective. The chemical variables affecting cloud point extraction (CPE) were evaluated and optimized. The RSD for 5 replicate determinations at the 20 μg L⁻¹ V level was 3.6%. The calibration graph using the preconcentration system for vanadium was linear, with a correlation coefficient of 0.99, from levels near the detection limits up to at least 0.6 μg L⁻¹. The method has good sensitivity and selectivity and was applied to the determination of trace amounts of vanadium in water samples with satisfactory results. The proposed method is a rare application of CPE-atomic spectrometry to vanadium assay, and is superior to most other similar methods because its useful pH range is in the moderately acidic range achieved with phosphoric acid. At this pH, many potential interferents are not chelated with PAN, and iron(III), the major interferent, is bound in a stable phosphate complex.

  19. Determination of Nd3+ Ions in Solution Samples by a Coated Wire Ion-Selective Sensor

    Directory of Open Access Journals (Sweden)

    Hassan Ali Zamani

    2012-01-01

    A new coated wire electrode (CWE) using 5-(methylsulfanyl)-3-phenyl-1H-1,2,4-triazole (MPT) as an ionophore has been developed as a neodymium ion-selective sensor. The sensor exhibits Nernstian response for Nd3+ ions in the concentration range of 1.0×10⁻⁶ to 1.0×10⁻² M, with a detection limit of 3.7×10⁻⁷ M. It displays a Nernstian slope of 20.2±0.2 mV/decade in the pH range of 2.7-8.1. The proposed sensor also exhibits a fast response time of ∼5 s. The sensor revealed high selectivity with respect to all common alkali, alkaline earth, transition and heavy metal ions, including members of the lanthanide family other than Nd3+. The electrode was used as an indicator electrode in the potentiometric titration of Nd(III) ions with EDTA. The electrode was also employed for the determination of Nd3+ ion concentrations in water solution samples.

  20. Alcohol and marijuana use in adolescents' daily lives: a random sample of experiences.

    Science.gov (United States)

    Larson, R; Csikszentmihalyi, M; Freeman, M

    1984-07-01

    High school students filled out reports on their experiences at random times during their daily lives, including 48 occasions when they were using alcohol or marijuana. Alcohol use was reported primarily in the context of Friday and Saturday night social gatherings and was associated with a happy and gregarious subjective state. Marijuana use was reported across a wider range of situations and was associated with an average state that differed much less from ordinary experience.

  1. Stemflow estimation in a redwood forest using model-based stratified random sampling

    Science.gov (United States)

    Jack Lewis

    2003-01-01

    Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...

  2. Random or systematic sampling to detect a localised microbial contamination within a batch of food

    NARCIS (Netherlands)

    Jongenburger, I.; Reij, M.W.; Boer, E.P.J.; Gorris, L.G.M.; Zwietering, M.H.

    2011-01-01

    Pathogenic microorganisms are known to be distributed heterogeneously in food products that are solid, semi-solid or powdered, such as peanut butter, cereals, or powdered milk. This complicates effective detection of the pathogens by sampling. Two-class sampling plans, which are deployed…
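
    The trade-off is easy to explore by simulation: place a contaminated stretch at a random position in a batch and count how often random versus evenly spaced sampling points hit it. All batch and cluster sizes below are invented, not taken from the paper.

```python
# Sketch: probability of detecting a localized contaminated zone in a batch,
# comparing simple random with evenly spaced (systematic) sample locations.
import numpy as np

rng = np.random.default_rng(8)
batch, cluster, n_samples, trials = 1000, 50, 20, 10_000   # illustrative units

hits_random = hits_systematic = 0
for _ in range(trials):
    start = rng.integers(0, batch - cluster)               # contaminated stretch
    rand_pts = rng.choice(batch, size=n_samples, replace=False)
    spacing = batch // n_samples
    syst_pts = rng.integers(0, spacing) + np.arange(n_samples) * spacing
    hits_random += any(start <= p < start + cluster for p in rand_pts)
    hits_systematic += any(start <= p < start + cluster for p in syst_pts)

print("detection probability, random:    ", hits_random / trials)
print("detection probability, systematic:", hits_systematic / trials)
```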

  3. Multistage point relascope and randomized branch sampling for downed coarse woody debris estimation

    Science.gov (United States)

    Jeffrey H. Gove; Mark J. Ducey; Harry T. Valentine

    2002-01-01

    New sampling methods have recently been introduced that allow estimation of downed coarse woody debris using an angle gauge, or relascope. The theory behind these methods is based on sampling straight pieces of downed coarse woody debris. When pieces deviate from this ideal situation, auxiliary methods must be employed. We describe a two-stage procedure where the…

  4. Sample selection, recruitment and participation rates in health examination surveys in Europe--experience from seven national surveys.

    Science.gov (United States)

    Mindell, Jennifer S; Giampaoli, Simona; Goesswald, Antje; Kamtsiuris, Panagiotis; Mann, Charlotte; Männistö, Satu; Morgan, Karen; Shelton, Nicola J; Verschuren, W M Monique; Tolonen, Hanna

    2015-10-05

    Health examination surveys (HESs), carried out in Europe since the 1950s, provide valuable information about the general population's health for health monitoring, policy making, and research. Survey participation rates, important for representativeness, have been falling. International comparisons are hampered by differing exclusion criteria and definitions for non-response. Information was collected about seven national HESs in Europe conducted in 2007-2012. These surveys can be classified into household and individual-based surveys, depending on the sampling frames used. Participation rates of randomly selected adult samples were calculated for four survey modules using standardised definitions and compared by sex, age-group, geographical areas within countries, and over time, where possible. All surveys covered residents, not just citizens; three countries excluded those in institutions. In two surveys, physical examinations and blood sample collection were conducted at the participants' home; the others occurred at examination clinics. Recruitment processes varied considerably between surveys. Monetary incentives were used in four surveys. Initial participation rates at ages 35-64 were 45% in the Netherlands (phase II), 54% in Germany (new and previous participants combined), 55% in Italy, and 65% in Finland. In Ireland, England and Scotland, household participation rates were 66%, 66% and 63% respectively. Participation rates were generally higher in women and increased with age. Almost all participants attending an examination centre agreed to all modules, but surveys conducted in the participants' home had falling responses to each stage. Participation rates in most primate cities were substantially lower than the national average. Age-standardized response rates to blood pressure measurement among those aged 35-64 in Finland, Germany and England fell by 0.7-1.5 percentage points p.a. between 1998-2002 and 2010-2012. Longer trends in some countries show a more…

  5. The adverse effect of selective cyclooxygenase-2 inhibitor on random skin flap survival in rats.

    Directory of Open Access Journals (Sweden)

    Haiyong Ren

    BACKGROUND: Cyclooxygenase-2 (COX-2) inhibitors provide desired analgesic effects after injury or surgery, but evidence suggests they also attenuate wound healing. This study investigates the effect of a COX-2 inhibitor on random skin flap survival. METHODS: The McFarlane flap model was established in 40 rats evaluated in two groups; one group was given parecoxib and the other the same volume of saline injection for 7 days. The necrotic area of the flap was measured, and specimens of the flap were stained with haematoxylin-eosin (HE) for histologic analysis. Immunohistochemical staining was performed to analyse the levels of VEGF and COX-2. RESULTS: 7 days after operation, the flap necrotic area ratio in the study group (66.65 ± 2.81%) was significantly larger than that of the control group (48.81 ± 2.33%) (P < 0.01). Histological analysis demonstrated angiogenesis, with mean vessel density per mm² lower in the study group (15.4 ± 4.4) than in the control group (27.2 ± 4.1) (P < 0.05). The expression of COX-2 and VEGF protein in intermediate area II was evaluated in the two groups by immunohistochemistry. The expression of COX-2 in the study group was 1022.45 ± 153.1, and in the control group 2638.05 ± 132.2 (P < 0.01). The expression of VEGF in the study and control groups was 2779.45 ± 472.0 vs. 4938.05 ± 123.6 (P < 0.01). In the COX-2 inhibitor group, the expression of COX-2 and VEGF protein was remarkably down-regulated as compared with the control group. CONCLUSION: A selective COX-2 inhibitor had an adverse effect on random skin flap survival. Suppression of neovascularization induced by a low level of VEGF is supposed to be the biological mechanism.

  6. Content analysis of a stratified random selection of JVME articles: 1974-2004.

    Science.gov (United States)

    Olson, Lynne E

    2011-01-01

    A content analysis was performed on a random sample (N = 168) of 25% of the articles published in the Journal of Veterinary Medical Education (JVME) per year from 1974 through 2004. Over time, there were increased numbers of authors per paper, more cross-institutional collaborations, greater prevalence of references or endnotes, and lengthier articles, which could indicate a trend toward publications describing more complex or complete work. The number of first authors that could be identified as female was greatest for the most recent time period studied (2000-2004). Two different categorization schemes were created to assess the content of the publications. The first categorization scheme identified the most frequently published topics as admissions, descriptions of courses, the effect of changing teaching methods, issues facing the profession, and examples of uses of technology. The second categorization scheme identified the subset of articles that described medical education research on the basis of the purpose of the research, which represented only 14% of the sample articles (24 of 168). Of that group, only three of 24, or 12%, represented studies based on a firm conceptual framework that could be confirmed or refuted by the study's results. The results indicate that JVME is meeting its broadly based mission and that publications in the veterinary medical education literature have features common to publications in medicine and medical education.

  7. Rapid selection of accessible and cleavable sites in RNA by Escherichia coli RNase P and random external guide sequences

    OpenAIRE

    Lundblad, Eirik W.; Xiao, Gaoping; Ko, Jae-hyeong; Altman, Sidney

    2008-01-01

    A method of inhibiting the expression of particular genes by using external guide sequences (EGSs) has been improved in its rapidity and specificity. Random EGSs that have 14-nt random sequences are used in the selection procedure for an EGS that attacks the mRNA for a gene in a particular location. A mixture of the random EGSs, the particular target RNA, and RNase P is used in the diagnostic procedure, which, after completion, is analyzed in a gel with suitable control lanes. Within a few ho...

  8. Random genetic drift, natural selection, and noise in human cranial evolution.

    Science.gov (United States)

    Roseman, Charles C

    2016-08-01

    This study assesses the extent to which relationships among groups complicate comparative studies of adaptation in recent human cranial variation and the extent to which departures from neutral additive models of evolution hinder the reconstruction of population relationships among groups using cranial morphology. Using a maximum likelihood evolutionary model fitting approach and a mixed population genomic and cranial data set, I evaluate the relative fits of several widely used models of human cranial evolution. Moreover, I compare the goodness of fit of models of cranial evolution constrained by genomic variation to test hypotheses about population specific departures from neutrality. Models from population genomics are much better fits to cranial variation than are traditional models from comparative human biology. There is not enough evolutionary information in the cranium to reconstruct much of recent human evolution but the influence of population history on cranial variation is strong enough to cause comparative studies of adaptation serious difficulties. Deviations from a model of random genetic drift along a tree-like population history show the importance of environmental effects, gene flow, and/or natural selection on human cranial variation. Moreover, there is a strong signal of the effect of natural selection or an environmental factor on a group of humans from Siberia. The evolution of the human cranium is complex and no one evolutionary process has prevailed at the expense of all others. A holistic unification of phenome, genome, and environmental context gives us a strong point of purchase on these problems, which is unavailable to any one traditional approach alone. Am J Phys Anthropol 160:582-592, 2016. © 2016 Wiley Periodicals, Inc.

  9. Combined-sewer overflow data and methods of sample collection for selected sites, Detroit, Michigan

    Science.gov (United States)

    Sweat, M.J.; Wolf, J.R.

    1997-01-01

    for inorganic pollutants, and between 14 and 22 times for organic pollutants, depending on the site. These samples represented between 8 and 17 storms during which one or more of the four selected CSOs discharged. The monitored pollutants included fecal coliform, fecal streptococci, and Escherichia coli; antimony, arsenic, beryllium, cadmium, hexavalent chromium, total chromium, cobalt, copper, iron, lead, manganese, mercury, nickel, silver, thallium and zinc; and polychlorinated biphenyl congeners, volatile organic compounds, and polynuclear aromatic hydrocarbons. Metal and non-metal inorganic pollutants were detected at all sites. Many organic pollutants were not detected at all.

  10. Noise-induced hearing loss in randomly selected New York dairy farmers.

    Science.gov (United States)

    May, J J; Marvel, M; Regan, M; Marvel, L H; Pratt, D S

    1990-01-01

    To understand better the effects of noise levels associated with dairy farming, we randomly selected 49 full-time dairy farmers from an established cohort. Medical and occupational histories were taken and standard audiometric testing was done. Forty-six males (94%) and three females (6%) with a mean age of 43.5 (±13) years and an average of 29.4 (±14) years in farming were tested. Pure Tone Average thresholds (PTA4) at 0.5, 1.0, 2.0, and 3.0 kHz plus High Frequency Average thresholds (HFA3) at 3.0, 4.0, and 6.0 kHz were calculated. Subjects with a loss of greater than or equal to 20 dB in either ear were considered abnormal. Eighteen subjects (37%) had abnormal PTA4s and 32 (65%) abnormal HFA3s. The left ear was more severely affected in both groups (p ≤ .05, t-test). Significant associations were found between hearing loss and years worked (odds ratio 4.1, r = .53) and age (odds ratio 4.1, r = .59). No association could be found between hearing loss and measles; mumps; previous ear infections; or use of power tools, guns, motorcycles, snowmobiles, or stereo headphones. Our data suggest that among farmers substantial hearing loss occurs, especially in the high-frequency ranges. Presbycusis is an important confounding variable.
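
    The two summary measures are simple averages over fixed frequency sets, as in the sketch below; the thresholds are made up.

```python
# Computing the pure-tone (PTA4) and high-frequency (HFA3) averages used in
# the study from per-frequency audiometric thresholds (dB HL, invented).
thresholds = {0.5: 10, 1.0: 15, 2.0: 20, 3.0: 30, 4.0: 45, 6.0: 50}  # kHz -> dB

pta4 = sum(thresholds[f] for f in (0.5, 1.0, 2.0, 3.0)) / 4
hfa3 = sum(thresholds[f] for f in (3.0, 4.0, 6.0)) / 3
print(f"PTA4 = {pta4:.1f} dB, HFA3 = {hfa3:.1f} dB")
print("abnormal:", pta4 >= 20 or hfa3 >= 20)   # study's >= 20 dB criterion
```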

  11. Modeling Slotted Aloha as a Stochastic Game with Random Discrete Power Selection Algorithms

    Directory of Open Access Journals (Sweden)

    Rachid El-Azouzi

    2009-01-01

    We consider the uplink case of a cellular system where bufferless mobiles transmit over a common channel to a base station, using the slotted aloha medium access protocol. We study the performance of this system under several power differentiation schemes. Specifically, we consider a random set of selectable transmission powers and further study the impact of priorities given either to newly arrived packets or to backlogged ones. Later, we address a general capture model where a mobile transmits a packet successfully if its instantaneous SINR (signal-to-interference-plus-noise ratio) is larger than some fixed threshold. Under this capture model, we analyze both the cooperative team problem, in which a common goal is jointly optimized, and the noncooperative game problem, where mobiles each seek to optimize their own objectives. Furthermore, we derive the throughput and the expected delay and use them as the objectives to optimize, and we provide a stability analysis as an alternative study. Exhaustive performance evaluations were carried out; we show that schemes with power differentiation significantly improve individual as well as global performance, and in some cases can eliminate the bi-stable nature of slotted aloha.
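
    The capture model is straightforward to simulate. The sketch below estimates slotted-aloha throughput when each active mobile draws its power uniformly from a discrete set and a packet is received only if its SINR clears the threshold; all parameter values are illustrative.

```python
# Sketch: slotted-aloha throughput with random discrete power selection and
# an SINR capture threshold (all parameter values are invented).
import numpy as np

rng = np.random.default_rng(9)
n_mobiles, slots, p_tx = 20, 100_000, 0.1
powers = np.array([1.0, 2.0, 4.0])        # selectable transmit power levels
noise, threshold = 0.1, 2.0               # noise power and SINR capture threshold

success = 0
for _ in range(slots):
    active = rng.random(n_mobiles) < p_tx                  # who transmits this slot
    if not active.any():
        continue
    tx_powers = rng.choice(powers, size=active.sum())      # random power selection
    total = tx_powers.sum()
    # A packet is captured if its SINR exceeds the threshold; for a
    # threshold above 1, at most one packet per slot can be captured.
    sinr = tx_powers / (total - tx_powers + noise)
    success += int((sinr > threshold).any())

print("throughput (successful packets per slot):", success / slots)
```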

  12. Random sampling technique for ultra-fast computations of molecular opacities for exoplanet atmospheres

    Science.gov (United States)

    Min, M.

    2017-10-01

    Context. Opacities of molecules in exoplanet atmospheres rely on increasingly detailed line lists for these molecules. The line lists available today contain, for many species, up to several billion lines. Computation of the spectral line profile created by pressure and temperature broadening, the Voigt profile, for all of these lines is becoming a computational challenge. Aims: We aim to create a method to compute the Voigt profile in a way that automatically focusses the computation time on the strongest lines, while still maintaining the continuum contribution of the large number of weaker lines. Methods: Here, we outline a statistical line sampling technique that samples the Voigt profile quickly and with high accuracy. The number of samples is adjusted to the strength of the line and the local spectral line density. This automatically provides high-accuracy line shapes for strong lines or lines that are spectrally isolated. The line sampling technique automatically preserves the integrated line opacity for all lines, thereby also providing the continuum opacity created by the large number of weak lines at very low computational cost. Results: The line sampling technique is tested for accuracy when computing line spectra and correlated-k tables. Extremely fast computations (∼3.5 × 10⁵ lines per second per core on a standard current-day desktop computer) with high accuracy (≤1% almost everywhere) are obtained. A detailed recipe on how to perform the computations is given.
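
    The strength-proportional sampling idea can be sketched compactly: give each line a sample count proportional to its strength (minimum one), let each sample deposit an equal share of the line's integrated opacity, and draw the spectral offset as a Gaussian plus a Lorentzian deviate, whose sum is Voigt-distributed. The grid, widths and budget below are invented.

```python
# Sketch of statistical line sampling: sample counts scale with line strength,
# each sample carries an equal share of the line's integrated opacity, and a
# Voigt-distributed offset is the sum of a normal and a Cauchy deviate.
import numpy as np

rng = np.random.default_rng(10)
grid = np.linspace(0, 100, 2001)                 # wavenumber grid (illustrative)
kappa = np.zeros(grid.size - 1)                  # opacity accumulated per bin

n_lines = 10_000
centers = rng.uniform(0, 100, n_lines)
strengths = 10 ** rng.uniform(-6, 0, n_lines)    # strengths spanning 6 decades
sigma, gamma = 0.05, 0.02                        # Doppler and pressure widths

budget = 200_000                                 # total samples available
n_samples = np.maximum(1, (budget * strengths / strengths.sum()).astype(int))

for c, s, n in zip(centers, strengths, n_samples):
    x = c + rng.normal(0, sigma, n) + gamma * rng.standard_cauchy(n)
    hist, _ = np.histogram(x, bins=grid)
    kappa += hist * (s / n)                      # each sample deposits s/n opacity

# Samples falling off the grid (Cauchy tails) account for any small deficit.
print("total opacity kept:", kappa.sum(), "vs line-strength sum:", strengths.sum())
```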

  13. Hybrid random walk-linear discriminant analysis method for unwrapping quantitative phase microscopy images of biological samples

    Science.gov (United States)

    Kim, Diane N. H.; Teitell, Michael A.; Reed, Jason; Zangle, Thomas A.

    2015-11-01

    Standard algorithms for phase unwrapping often fail for interferometric quantitative phase imaging (QPI) of biological samples due to the variable morphology of these samples and the requirement to image at low light intensities to avoid phototoxicity. We describe a new algorithm combining random walk-based image segmentation with linear discriminant analysis (LDA)-based feature detection, using assumptions about the morphology of biological samples to account for phase ambiguities when standard methods have failed. We present three versions of our method: first, a method for LDA image segmentation based on a manually compiled training dataset; second, a method using a random walker (RW) algorithm informed by the assumed properties of a biological phase image; and third, an algorithm which combines LDA-based edge detection with an efficient RW algorithm. We show that the combination of LDA plus the RW algorithm gives the best overall performance with little speed penalty compared to LDA alone, and that this algorithm can be further optimized using a genetic algorithm to yield superior performance for phase unwrapping of QPI data from biological samples.

  14. Changes in prevalence of, and risk factors for, lameness in random samples of English sheep flocks: 2004-2013.

    Science.gov (United States)

    Winter, Joanne R; Kaler, Jasmeet; Ferguson, Eamonn; KilBride, Amy L; Green, Laura E

    2015-11-01

    The aims of this study were to update the prevalence of lameness in sheep in England and identify novel risk factors. A total of 1260 sheep farmers responded to a postal survey. The survey captured detailed information on the period prevalence of lameness from May 2012-April 2013 and the prevalence and farmer naming of lesions attributable to interdigital dermatitis (ID), severe footrot (SFR), contagious ovine digital dermatitis (CODD) and shelly hoof (SH), management and treatment of lameness, and farm and flock details. The global mean period prevalence of lameness fell between 2004 and 2013 from 10.6% to 4.9%, and the geometric mean period prevalence of lameness fell from 5.4% (95% CI: 4.7%-6.0%) to 3.5% (95% CI: 3.3%-3.7%). In 2013, more farmers were using vaccination and antibiotic treatment for ID and SFR, and fewer farmers were using foot trimming as a routine or therapeutic treatment, than in 2004. Two over-dispersed Poisson regression models were developed with the period prevalence of lameness as the outcome; one investigated associations with farmer estimates of the prevalence of the four foot lesions, and one investigated associations with management practices to control and treat lameness and footrot. Prevalences of ID >10%, SFR >2.5% and CODD >2.5% were associated with a higher prevalence of lameness compared with those lesions being absent; the prevalence of SH, however, was not associated with a change in risk of lameness. A key novel management risk factor associated with a higher prevalence of lameness was the rate of feet bleeding/100 ewes trimmed/year. In addition, vaccination of ewes once per year and selecting breeding replacements from never-lame ewes were associated with a decreased risk of lameness. Other factors associated with a lower risk of lameness for the first time in a random sample of farmers and a full risk model were: recognising lameness in sheep at locomotion score 1 compared with higher scores, treatment of the first lame sheep in a group compared
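
    An over-dispersed (quasi-)Poisson regression of the kind used here can be fitted with standard GLM software by estimating the dispersion from the Pearson chi-square. A minimal sketch with statsmodels, using simulated flock-level data in place of the survey; the covariates and effect sizes are invented:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical flock-level data: lame sheep per flock, with flock size as
# exposure and two illustrative covariates standing in for the survey's
# management variables.
n = 200
flock_size = rng.integers(50, 500, n)
feet_bleeding_rate = rng.exponential(1.0, n)   # feet bleeding / 100 ewes trimmed
vaccinates = rng.integers(0, 2, n)
lin = -3.5 + 0.25 * feet_bleeding_rate - 0.4 * vaccinates
lame = rng.poisson(flock_size * np.exp(lin))

X = sm.add_constant(np.column_stack([feet_bleeding_rate, vaccinates]))
# Poisson GLM with the dispersion (scale) estimated from the Pearson
# chi-square, i.e. a quasi-Poisson / over-dispersed Poisson model.
model = sm.GLM(lame, X, family=sm.families.Poisson(),
               exposure=flock_size).fit(scale='X2')
print(model.summary())
```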

  15. Models of self-peptide sampling by developing T cells identify candidate mechanisms of thymic selection.

    Directory of Open Access Journals (Sweden)

    Iren Bains

    Full Text Available Conventional and regulatory T cells develop in the thymus where they are exposed to samples of self-peptide MHC (pMHC) ligands. This probabilistic process selects for cells within a range of responsiveness that allows the detection of foreign antigen without excessive responses to self. Regulatory T cells are thought to lie at the higher end of the spectrum of acceptable self-reactivity and play a crucial role in the control of autoimmunity and tolerance to innocuous antigens. While many studies have elucidated key elements influencing lineage commitment, we still lack a full understanding of how thymocytes integrate signals obtained by sampling self-peptides to make fate decisions. To address this problem, we apply stochastic models of signal integration by T cells to data from a study quantifying the development of the two lineages using controllable levels of agonist peptide in the thymus. We find two models are able to explain the observations; one in which T cells continually re-assess fate decisions on the basis of multiple summed proximal signals from TCR-pMHC interactions; and another in which TCR sensitivity is modulated over time, such that contact with the same pMHC ligand may lead to divergent outcomes at different stages of development. Neither model requires that Tconv and Treg are differentially susceptible to deletion or that the two lineages need qualitatively different signals for development, as have been proposed. We find additional support for the variable-sensitivity model, which is able to explain apparently paradoxical observations regarding the effect of partial and strong agonists on Tconv and Treg development.

  16. THE RHETORICAL USE OF RANDOM SAMPLING: CRAFTING AND COMMUNICATING THE PUBLIC IMAGE OF POLLS AS A SCIENCE (1935-1948).

    Science.gov (United States)

    Lusinchi, Dominic

    2017-03-01

    The scientific pollsters (Archibald Crossley, George H. Gallup, and Elmo Roper) emerged onto the American news media scene in 1935. Much of what they did in the following years (1935-1948) was to promote both the political and scientific legitimacy of their enterprise. They sought to be recognized as the sole legitimate producers of public opinion. In this essay I examine the mostly overlooked rhetorical work deployed by the pollsters to publicize the scientific credentials of their polling activities, and the central role the concept of sampling has played in that pursuit. First, they distanced themselves from the failed straw poll by claiming that their sampling methodology based on quotas was informed by science. Second, although in practice they did not use random sampling, they relied on it rhetorically to derive the symbolic benefits of being associated with the "laws of probability." © 2017 Wiley Periodicals, Inc.

  17. Multivariate Multi-Objective Allocation in Stratified Random Sampling: A Game Theoretic Approach.

    Science.gov (United States)

    Muhammad, Yousaf Shad; Hussain, Ijaz; Shoukry, Alaa Mohamd

    2016-01-01

    We consider the problem of multivariate multi-objective allocation where no, or only limited, information about the stratum variances is available. Results show that a game theoretic approach (based on weighted goal programming) can be applied to sample size allocation problems. We use a simulation technique to determine the payoff matrix and solve a minimax game.
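
    Once a payoff matrix has been obtained by simulation, the minimax game itself reduces to a small linear program. A self-contained sketch; the 3 x 3 payoff values below are invented for illustration:

```python
import numpy as np
from scipy.optimize import linprog

def solve_minimax(payoff):
    """Solve a zero-sum matrix game for the row player's optimal mixed
    strategy via LP: maximize v subject to payoff.T @ x >= v,
    sum(x) = 1, x >= 0."""
    m, n = payoff.shape
    # Decision vector: m strategy weights followed by the game value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                                      # maximize v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])    # v - payoff.T @ x <= 0
    b_ub = np.zeros(n)
    A_eq = np.append(np.ones(m), 0.0).reshape(1, -1)  # weights sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds)
    return res.x[:m], res.x[-1]

# Hypothetical payoff matrix, e.g. simulated precision of three candidate
# allocations against three "states of nature" for the stratum variances.
payoff = np.array([[3.0, 1.0, 2.0],
                   [1.0, 3.0, 1.0],
                   [2.0, 2.0, 2.5]])
strategy, value = solve_minimax(payoff)
print("optimal mixed allocation:", strategy, "game value:", value)
```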

  18. The effect of dead time on randomly sampled power spectral estimates

    DEFF Research Database (Denmark)

    Buchhave, Preben; Velte, Clara Marika; George, William K.

    2014-01-01

    We consider both the effect on the measured spectrum of a finite sampling time, i.e., a finite time during which the signal is acquired, and a finite dead time, that is, a time in which the signal processor is busy evaluating a data point and therefore unable to measure a subsequent data point arriving within the dead time delay.

  19. Phase microscopy of technical and biological samples through random phase modulation with a diffuser

    DEFF Research Database (Denmark)

    Almoro, Percival; Pedrini, Giancarlo; Gundu, Phanindra Narayan

    2010-01-01

    A technique for phase microscopy using a phase diffuser and a reconstruction algorithm is proposed. A magnified specimen wavefront is projected on the diffuser plane that modulates the wavefront into a speckle field. The speckle patterns at axially displaced planes are sampled and used in an iterative...

  20. The prevalence of symptoms associated with pulmonary tuberculosis in randomly selected children from a high burden community

    OpenAIRE

    Marais, B.; Obihara, C; Gie, R.; Schaaf, H; Hesseling, A.; Lombard, C.; Enarson, D; Bateman, E; Beyers, N

    2005-01-01

    Background: Diagnosis of childhood tuberculosis is problematic and symptom-based diagnostic approaches are often promoted in high burden settings. This study aimed (i) to document the prevalence of symptoms associated with tuberculosis among randomly selected children living in a high burden community, and (ii) to compare the prevalence of these symptoms in children without tuberculosis to that in children with newly diagnosed tuberculosis.

  1. Men who have sex with men in Great Britain: comparison of a self‐selected internet sample with a national probability sample

    Science.gov (United States)

    Evans, Alison Ruth; Wiggins, Richard D; Mercer, Catherine H; Bolding, Graham J; Elford, Jonathan

    2007-01-01

    Objectives To compare the characteristics of a self‐selected, convenience sample of men who have sex with men (MSM) recruited through the internet with MSM drawn from a national probability survey in Great Britain. Methods The internet sample (n = 2065) was recruited through two popular websites for homosexual men in Great Britain in May and June 2003. This sample was compared with MSM (n = 117) from the National Survey of Sexual Attitudes and Lifestyles (Natsal), a probability sample survey of adults resident in Great Britain conducted between May 1999 and February 2001. Results No significant differences were observed between the samples on a range of sociodemographic and behavioural variables (p>0.05). However, men from the internet sample were younger (p<0.001) and more likely to be students (p = 0.001), but less likely to live in London (p = 0.001) or report good health (p = 0.014). Although both samples were equally likely to report testing for HIV, men from the internet sample were more likely to report a sexually transmitted infection in the past year (16.9% v 4.8%, adjusted odds ratio 4.14, 95% CI 1.76 to 9.74; p = 0.001), anal intercourse (76.9% v 63.3%; p = 0.001) and unprotected anal intercourse in the past 3 months (45% v 36.6%; p = 0.064). Conclusions The internet provides a means of recruiting a self‐selected, convenience sample of MSM whose social and demographic characteristics are broadly similar to those of MSM drawn from a national probability survey. However, estimates of high‐risk sexual behaviour based on internet convenience samples are likely to overestimate levels of sexual risk behaviour in the wider MSM population. PMID:17135330

  2. Gamma radiation measurement in select sand samples from Camburi beach - Vitoria, Espirito Santo, Brazil: preliminary results

    Energy Technology Data Exchange (ETDEWEB)

    Barros, Livia F.; Pecequilo, Brigitte R.S.; Aquino, Reginaldo R., E-mail: lfbarros@ipen.b, E-mail: brigitte@ipen.b, E-mail: raquino@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The variation of natural radioactivity along the surface of the beach sands of Camburi, located in Vitoria, capital of Espirito Santo, southeastern Brazil, was determined from the contents of ²²⁶Ra, ²³²Th and ⁴⁰K. Eleven collecting points were selected along the 6 km extension of Camburi beach. Sand samples collected from all established points in January 2011 were dried, sealed in standard 100 mL polyethylene flasks and measured by high-resolution gamma spectrometry after a 4-week ingrowth period, in order to allow secular equilibrium in the ²³⁸U and ²³²Th series. The ²²⁶Ra concentration was determined from the weighted average concentrations of ²¹⁴Pb and ²¹⁴Bi. The ²³²Th concentration was determined from the weighted average concentrations of ²²⁸Ac, ²¹²Pb and ²¹²Bi, and the ⁴⁰K concentration from its single gamma transition. Preliminary results show activity concentrations varying from 5 Bq kg⁻¹ to 222 Bq kg⁻¹ for ²²⁶Ra and from 14 Bq kg⁻¹ to 1074 Bq kg⁻¹ for ²³²Th, both with the highest values for Camburi South and Central. For ⁴⁰K, the activity concentrations ranged from 14 Bq kg⁻¹ to 179 Bq kg⁻¹ and the highest values were obtained for Camburi South. (author)

  3. Crude protein, fibre and phytic acid in vitro digestibility of selected legume and buckwheat samples

    Directory of Open Access Journals (Sweden)

    Petra Vojtíšková

    2013-01-01

    Full Text Available The aim of this study was to determine the crude protein, fibre and phytic acid in vitro digestibility of selected legumes and buckwheat products. All analyses except the phytic acid content were performed in line with Commission Regulation (EC) No. 152/2009. A modified version of Holt's method was used for phytic acid (phytate) determination. None of the samples contained more than 11% moisture. Soybeans are rich in crude protein, containing nearly 40% of this compound. The crude protein content of the buckwheat flours was about 14%. The highest amount of phytate was found in common beans and soybeans, about 2 g/100 g of dry matter. On the other hand, the lowest phytate content was observed in buckwheat pasta; the phytate content of F. esculentum groats was 1.9 g per 100 g of dry matter. In vitro digestibility was determined using a Daisy incubator with pepsin alone and with the combination of pepsin and pancreatin. The highest coefficient of crude protein digestibility was found for peels and wholemeal flour. The greatest fibre digestibility coefficients were obtained for peels, which contain about 65% fibre in their dry matter. When pepsin alone was used, higher phytic acid digestibility coefficients were observed for G. max, Ph. vulgaris, peels, flour, groats and broken groats, while when the combination of pepsin and pancreatin was used, higher phytic acid digestibility coefficients were observed for peas, lentil and wholemeal flour.

  4. Passive sampling methods for contaminated sediments: Practical guidance for selection, calibration, and implementation

    Science.gov (United States)

    Ghosh, Upal; Driscoll, Susan Kane; Burgess, Robert M; Jonker, Michiel To; Reible, Danny; Gobas, Frank; Choi, Yongju; Apitz, Sabine E; Maruya, Keith A; Gala, William R; Mortimer, Munro; Beegan, Chris

    2014-01-01

    This article provides practical guidance on the use of passive sampling methods (PSMs) that target the freely dissolved concentration (Cfree) for improved exposure assessment of hydrophobic organic chemicals in sediments. Primary considerations for selecting a PSM for a specific application include clear delineation of measurement goals for Cfree, whether laboratory-based “ex situ” and/or field-based “in situ” application is desired, and ultimately which PSM is best-suited to fulfill the measurement objectives. Guidelines for proper calibration and validation of PSMs, including use of provisional values for polymer–water partition coefficients, determination of equilibrium status, and confirmation of nondepletive measurement conditions are defined. A hypothetical example is described to illustrate how the measurement of Cfree afforded by PSMs reduces uncertainty in assessing narcotic toxicity for sediments contaminated with polycyclic aromatic hydrocarbons. The article concludes with a discussion of future research that will improve the quality and robustness of Cfree measurements using PSMs, providing a sound scientific basis to support risk assessment and contaminated sediment management decisions. Integr Environ Assess Manag 2014;10:210–223. © 2014 The Authors. Integrated Environmental Assessment and Management published by Wiley Periodicals, Inc. on behalf of SETAC. PMID:24288273

  5. A comparison of selection at list time and time-stratified sampling for estimating suspended sediment loads

    Science.gov (United States)

    Robert B. Thomas; Jack Lewis

    1993-01-01

    Time-stratified sampling of sediment for estimating suspended load is introduced and compared to selection at list time (SALT) sampling. Both methods provide unbiased estimates of load and variance. The magnitude of the variance of the two methods is compared using five storm populations of suspended sediment flux derived from turbidity data. Under like conditions,...

  6. Why sample selection matters in exploratory factor analysis: implications for the 12-item World Health Organization Disability Assessment Schedule 2.0.

    Science.gov (United States)

    Gaskin, Cadeyrn J; Lambert, Sylvie D; Bowe, Steven J; Orellana, Liliana

    2017-03-11

    Sample selection can substantially affect the solutions generated using exploratory factor analysis. Validation studies of the 12-item World Health Organization (WHO) Disability Assessment Schedule 2.0 (WHODAS 2.0) have generally involved samples in which substantial proportions of people had no, or minimal, disability. With the WHODAS 2.0 oriented towards measuring disability across six life domains (cognition, mobility, self-care, getting along, life activities, and participation in society), performing factor analysis with samples of people with disability may be more appropriate. We determined the influence of the sampling strategy on (a) the number of factors extracted and (b) the factor structure of the WHODAS 2.0. Using data from adults aged 50+ from the six countries in Wave 1 of the WHO's longitudinal Study on global AGEing and adult health (SAGE), we repeatedly selected samples (n = 750) using two strategies: (1) simple random sampling that reproduced nationally representative distributions of WHODAS 2.0 summary scores for each country (i.e., positively skewed distributions with many zero scores indicating the absence of disability), and (2) stratified random sampling with weights designed to obtain approximately symmetric distributions of summary scores for each country (i.e., predominantly including people with varying degrees of disability). Samples with skewed distributions typically produced one-factor solutions, except for the two countries with the lowest percentages of zero scores, in which the majority of samples produced two factors. Samples with approximately symmetric distributions generally produced two- or three-factor solutions. In the two-factor solutions, the getting along domain items loaded on one factor (commonly with a cognition domain item), with the remaining items loading on a second factor. In the three-factor solutions, the getting along and self-care domain items loaded separately on two factors and three other domains

  7. Rapid selection of accessible and cleavable sites in RNA by Escherichia coli RNase P and random external guide sequences.

    Science.gov (United States)

    Lundblad, Eirik W; Xiao, Gaoping; Ko, Jae-Hyeong; Altman, Sidney

    2008-02-19

    A method of inhibiting the expression of particular genes by using external guide sequences (EGSs) has been improved in its rapidity and specificity. Random EGSs that have 14-nt random sequences are used in the selection procedure for an EGS that attacks the mRNA for a gene in a particular location. A mixture of the random EGSs, the particular target RNA, and RNase P is used in the diagnostic procedure, which, after completion, is analyzed in a gel with suitable control lanes. Within a few hours, the procedure is complete. The action of EGSs designed by an older method is compared with EGSs designed by the random EGS method on mRNAs from two bacterial pathogens.

  8. Dual to Ratio-Cum-Product Estimator in Simple and Stratified Random Sampling

    OpenAIRE

    Yunusa Olufadi

    2013-01-01

    New estimators for estimating the finite population mean using two auxiliary variables under simple and stratified sampling designs are proposed. Their properties (e.g., mean square error) are studied to the first order of approximation. Moreover, some existing estimators are shown to be particular members of this estimator. Furthermore, comparison of the proposed estimator with the usual unbiased estimator and other estimators considered in this paper reveals interesting results. These results are further...

  9. The psychometric properties of the AUDIT: a survey from a random sample of elderly Swedish adults.

    Science.gov (United States)

    Källmén, Håkan; Wennberg, Peter; Ramstedt, Mats; Hallgren, Mats

    2014-07-01

    Increasing alcohol consumption and related harms have been reported among the elderly population of Europe. Consequently, it is important to monitor patterns of alcohol use, and to use a valid and reliable tool when screening for risky consumption in this age group. The aim was to evaluate the internal consistency reliability and construct validity of the Alcohol Use Disorders Identification Test (AUDIT) in elderly Swedish adults, and to compare the results with the general Swedish population. Another aim was to calculate the level of alcohol consumption (AUDIT-C) to be used for comparison in future studies. The questionnaire was sent to 1459 Swedish adults aged 79-80 years, with a response rate of 73.3%. Internal consistency reliability was assessed using Cronbach's alpha, and confirmatory factor analysis assessed the construct validity of the AUDIT in the elderly population as compared to a Swedish general population sample. The results showed that the AUDIT was more reliable and valid among the Swedish general population sample than among the elderly, and that items 1 and 4 of the AUDIT were less reliable and valid among the elderly. While the AUDIT showed acceptable psychometric properties in the general population sample, its performance was poorer among the elderly respondents. Further psychometric assessments of the AUDIT in elderly populations are required before it is implemented more widely.
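
    Cronbach's alpha, used here to assess internal consistency, is computed directly from an item-score matrix: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with made-up response data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 10 respondents x 4 AUDIT-style items (0-4 scale),
# built so that items correlate through a shared "base" tendency.
rng = np.random.default_rng(2)
base = rng.integers(0, 5, size=(10, 1))
scores = np.clip(base + rng.integers(-1, 2, size=(10, 4)), 0, 4)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```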

  10. An econometric method for estimating population parameters from non-random samples: An application to clinical case finding.

    Science.gov (United States)

    Burger, Rulof P; McLaren, Zoë M

    2017-09-01

    The problem of sample selection complicates the process of drawing inference about populations. Selective sampling arises in many real world situations when agents such as doctors and customs officials search for targets with high values of a characteristic. We propose a new method for estimating population characteristics from these types of selected samples. We develop a model that captures key features of the agent's sampling decision. We use a generalized method of moments with instrumental variables and maximum likelihood to estimate the population prevalence of the characteristic of interest and the agents' accuracy in identifying targets. We apply this method to tuberculosis (TB), which is the leading infectious disease cause of death worldwide. We use a national database of TB test data from South Africa to examine testing for multidrug resistant TB (MDR-TB). Approximately one quarter of MDR-TB cases was undiagnosed between 2004 and 2010. The official estimate of 2.5% is therefore too low, and MDR-TB prevalence is as high as 3.5%. Signal-to-noise ratios are estimated to be between 0.5 and 1. Our approach is widely applicable because of the availability of routinely collected data and abundance of potential instruments. Using routinely collected data to monitor population prevalence can guide evidence-based policy making. Copyright © 2017 John Wiley & Sons, Ltd.

  11. The quality of the reported sample size calculations in randomized controlled trials indexed in PubMed.

    Science.gov (United States)

    Lee, Paul H; Tse, Andy C Y

    2017-05-01

    There are limited data on the quality of reporting of information essential for replication of a sample size calculation, as well as on the accuracy of the calculations themselves. We examined the quality of reporting of sample size calculations in randomized controlled trials (RCTs) published in PubMed, the variation in reporting across study design, study characteristics, and journal impact factor, and the targeted sample sizes reported in trial registries. We reviewed and analyzed all RCTs published in December 2014 in journals indexed in PubMed. The 2014 impact factors of the journals were used as proxies for their quality. Of the 451 analyzed papers, 58.1% reported an a priori sample size calculation. Nearly all papers provided the level of significance (97.7%) and desired power (96.6%), and most of the papers reported the minimum clinically important effect size (73.3%). The median percentage difference between the reported and recalculated sample sizes was 0.0% (interquartile range: -4.6% to 3.0%). The accuracy of the reported sample size was better for studies published in journals that endorsed the CONSORT statement and in journals with a higher impact factor. A total of 98 papers provided a targeted sample size in a trial registry, and about two-thirds of these papers (n = 62) reported a sample size calculation, but only 25 (40.3%) had no discrepancy with the number reported in the trial registry. The reporting of sample size calculations in RCTs published in PubMed-indexed journals and trial registries was poor. The CONSORT statement should be more widely endorsed. Copyright © 2016 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
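
    For a two-arm trial with a continuous outcome, the calculation these papers should report can be recomputed from the significance level, power, outcome SD and minimum clinically important difference. A hedged sketch; the example numbers are arbitrary:

```python
from math import ceil
from scipy.stats import norm

def two_sample_n(delta, sd, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-arm parallel RCT with a continuous
    outcome: n = 2 * sd^2 * (z_{1-alpha/2} + z_{power})^2 / delta^2,
    where delta is the minimum clinically important difference."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * (sd * (z_a + z_b) / delta) ** 2)

# Example: detect a 5-point difference, SD 12, alpha 0.05, power 80%.
print(two_sample_n(delta=5, sd=12))   # ~91 per arm
```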

  12. Differential privacy-based evaporative cooling feature selection and classification with relief-F and random forests.

    Science.gov (United States)

    Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A

    2017-09-15

    Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy-preserving classification and that also prevents overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of evaporative cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, the reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting, based on an independent validation set. In simulations without interactions, the reusable holdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder. Code
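
    The Relief family of scores used for feature selection weights each feature by how strongly it differs between an instance and its nearest neighbor of the opposite class (nearest miss) versus its nearest neighbor of the same class (nearest hit). Below is a minimal sketch of the basic binary-class Relief idea, not the full Relief-F variant or the authors' private algorithm:

```python
import numpy as np

def relief_scores(X, y, n_iter=200, rng=None):
    """Basic (binary-class) Relief: reward features that separate each
    sampled instance from its nearest miss more than from its nearest hit."""
    rng = rng or np.random.default_rng(0)
    X = np.asarray(X, dtype=float)
    span = X.max(axis=0) - X.min(axis=0)
    span[span == 0] = 1.0                      # avoid division by zero
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        i = rng.integers(len(X))
        d = np.abs(X - X[i]).sum(axis=1)       # Manhattan distances
        d[i] = np.inf                          # exclude the instance itself
        same, diff = y == y[i], y != y[i]
        hit = np.argmin(np.where(same, d, np.inf))
        miss = np.argmin(np.where(diff, d, np.inf))
        w += (np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])) / span
    return w / n_iter

# Toy data: feature 0 carries class signal, feature 1 is pure noise.
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 300)
X = np.column_stack([y + 0.3 * rng.standard_normal(300),
                     rng.standard_normal(300)])
print(relief_scores(X, y, rng=rng))   # feature 0 should score higher
```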

  13. Factors Influencing the Likelihood of Overeducation: A Bivariate Probit with Sample Selection Framework

    Science.gov (United States)

    Rubb, Stephen

    2014-01-01

    Contrary to expectations, the likelihood of overeducation is shown to be inversely related to unemployment rates when selectivity is not controlled for. Furthermore, incidence data show that overeducation is more common among men than women and among Whites than Blacks. At issue is selectivity: a person must be selected into employment for overeducation to occur.…

  14. Effect of the Mediterranean diet on heart failure biomarkers: a randomized sample from the PREDIMED trial.

    Science.gov (United States)

    Fitó, Montserrat; Estruch, Ramón; Salas-Salvadó, Jordi; Martínez-Gonzalez, Miguel Angel; Arós, Fernando; Vila, Joan; Corella, Dolores; Díaz, Oscar; Sáez, Guillermo; de la Torre, Rafael; Mitjavila, María-Teresa; Muñoz, Miguel Angel; Lamuela-Raventós, Rosa-María; Ruiz-Gutierrez, Valentina; Fiol, Miquel; Gómez-Gracia, Enrique; Lapetra, José; Ros, Emilio; Serra-Majem, Lluis; Covas, María-Isabel

    2014-05-01

    Scarce data are available on the effect of the traditional Mediterranean diet (TMD) on heart failure biomarkers. We assessed the effect of the TMD on biomarkers related to heart failure in a population at high cardiovascular disease risk. A total of 930 subjects at high cardiovascular risk (420 men and 510 women) were recruited in the framework of a multicentre, randomized, controlled, parallel-group clinical trial directed at testing the efficacy of the TMD on the primary prevention of cardiovascular disease (the PREDIMED Study). Participants were assigned to a low-fat diet (control, n = 310) or one of two TMDs [TMD + virgin olive oil (VOO) or TMD + nuts]. Depending on group assignment, participants received free provision of extra-virgin olive oil, mixed nuts, or small non-food gifts. After 1 year of intervention, both TMDs decreased plasma N-terminal pro-brain natriuretic peptide, with changes reaching significance vs. the control group (P < 0.05). Subjects at high risk of cardiovascular disease (CVD) who improved their diet toward a TMD pattern reduced their N-terminal pro-brain natriuretic peptide compared with those assigned to a low-fat diet. The same was found for in vivo oxidized low-density lipoprotein and lipoprotein(a) plasma concentrations after the TMD + VOO diet. From our results, the TMD could be a useful tool to mitigate risk factors for heart failure, modifying markers of heart failure towards a more protective mode. © 2014 The Authors. European Journal of Heart Failure © 2014 European Society of Cardiology.

  15. Sequential sampling model for multiattribute choice alternatives with random attention time and processing order.

    Science.gov (United States)

    Diederich, Adele; Oswald, Peter

    2014-01-01

    A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process, attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered (the attention time), influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time with different variances are investigated. Depending on the time and order schedule, the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability p₀ > 0 of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions also hold for the widely applied Wiener process.
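
    The accumulator described here can be illustrated by simulating an Ornstein-Uhlenbeck process to a decision bound under a finite horizon, which directly exhibits the predicted probability p₀ > 0 of not deciding in time. A minimal Euler-Maruyama sketch with arbitrary parameter values:

```python
import numpy as np

rng = np.random.default_rng(4)

def ou_first_passage(mu, theta, sigma, bound, dt=0.01, t_max=5.0):
    """Euler-Maruyama simulation of an Ornstein-Uhlenbeck accumulator
    dX = theta * (mu - X) dt + sigma dW, run until X crosses +bound or
    -bound, or the finite decision horizon t_max expires (no decision)."""
    x, t = 0.0, 0.0
    step_sd = sigma * np.sqrt(dt)
    while t < t_max:
        x += theta * (mu - x) * dt + step_sd * rng.standard_normal()
        t += dt
        if abs(x) >= bound:
            return (1 if x > 0 else -1), t
    return 0, t_max   # 0 encodes "no decision within the horizon"

# Estimate choice probabilities and the no-decision probability p0.
results = [ou_first_passage(mu=0.6, theta=1.0, sigma=1.0, bound=1.0)
           for _ in range(1000)]
choices = np.array([c for c, _ in results])
print("P(+):", np.mean(choices == 1),
      "P(-):", np.mean(choices == -1),
      "p0  :", np.mean(choices == 0))
```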

  16. Sequential sampling model for multiattribute choice alternatives with random attention time and processing order

    Directory of Open Access Journals (Sweden)

    Adele eDiederich

    2014-09-01

    Full Text Available A sequential sampling model for multiattribute binary choice options, called the multiattribute attention switching (MAAS) model, assumes a separate sampling process for each attribute. During the deliberation process, attention switches from one attribute consideration to the next. The order in which attributes are considered, as well as how long each attribute is considered (the attention time), influences the predicted choice probabilities and choice response times. Several probability distributions for the attention time, including deterministic, Poisson, binomial, geometric, and uniform with different variances, are investigated. Depending on the time and order schedule, the model predicts a rich choice probability/choice response time pattern, including preference reversals and fast errors. Furthermore, the difference between finite and infinite decision horizons for the attribute considered last is investigated. For the former case the model predicts a probability $p_0 > 0$ of not deciding within the available time. The underlying stochastic process for each attribute is an Ornstein-Uhlenbeck process approximated by a discrete birth-death process. All predictions also hold for the widely applied Wiener process.

  17. Incorporating covariance estimation uncertainty in spatial sampling design for prediction with trans-Gaussian random fields

    Directory of Open Access Journals (Sweden)

    Gunter Spöck

    2015-05-01

    Full Text Available Recently, Spöck and Pilz [38] demonstrated that the spatial sampling design problem for the Bayesian linear kriging predictor can be transformed to an equivalent experimental design problem for a linear regression model with stochastic regression coefficients and uncorrelated errors. The stochastic regression coefficients derive from the polar spectral approximation of the residual process. Thus, standard optimal convex experimental design theory can be used to calculate optimal spatial sampling designs. The design functionals considered in Spöck and Pilz [38] did not take into account the fact that kriging is actually a plug-in predictor which uses the estimated covariance function. The resulting optimal designs were close to space-filling configurations, because the design criterion did not consider the uncertainty of the covariance function. In this paper we also assume that the covariance function is estimated, e.g., by restricted maximum likelihood (REML). We then develop a design criterion that fully takes account of the covariance uncertainty. The resulting designs are less regular and space-filling compared to those ignoring covariance uncertainty. The new designs, however, also require some closely spaced samples in order to improve the estimate of the covariance function. We also relax the assumption of Gaussian observations and assume that the data are transformed to Gaussianity by means of the Box-Cox transformation. The resulting prediction method is known as trans-Gaussian kriging. We apply the Smith and Zhu [37] approach to this kriging method and show that the resulting optimal designs also depend on the available data. We illustrate our results with a data set of monthly rainfall measurements from Upper Austria.
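
    The Box-Cox step that underlies trans-Gaussian kriging, transforming skewed observations toward Gaussianity with a maximum-likelihood estimate of the transformation parameter, is available off the shelf in scipy. A small sketch; the rainfall-like data are simulated, not the Upper Austria set:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

# Hypothetical skewed observations (e.g. monthly rainfall amounts).
rng = np.random.default_rng(5)
rain = rng.gamma(shape=2.0, scale=30.0, size=500)

# Estimate the Box-Cox parameter lambda by maximum likelihood and
# transform the data toward Gaussianity.
z, lam = stats.boxcox(rain)
print(f"estimated lambda = {lam:.3f}")

# A prediction made on the transformed scale must be back-transformed;
# a naive plug-in back-transform of the mean is shown for illustration.
z_pred = z.mean()
print("naive back-transformed mean:", inv_boxcox(z_pred, lam))
```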

  18. Geochemistry of Selected Coal Samples from Sumatra, Kalimantan, Sulawesi, and Papua, Indonesia

    Science.gov (United States)

    Belkin, Harvey E.; Tewalt, Susan J.

    2007-01-01

    and ash (generally ... The World Coal Quality Inventory (WoCQI) initiative has collected and published extensive coal quality data from the world's largest coal producers and consumers. The important aspects of the WoCQI program are: (1) samples from active mines are collected, (2) the data have a high degree of internal consistency with a broad array of coal quality parameters, and (3) the data are linked to GIS and available through the World Wide Web. The coal quality parameters include proximate and ultimate analysis, sulfur forms, major-, minor-, and trace-element concentrations, and various technological tests. This report contains geochemical data for a selected group of Indonesian coal samples covering a range of coal types, localities, and ages, collected for the WoCQI program.

  19. Generalized SAMPLE SIZE Determination Formulas for Investigating Contextual Effects by a Three-Level Random Intercept Model.

    Science.gov (United States)

    Usami, Satoshi

    2017-03-01

    Behavioral and psychological researchers have shown strong interest in investigating contextual effects (i.e., the influences of combinations of individual- and group-level predictors on individual-level outcomes). The present research provides generalized formulas for determining the sample size needed to investigate contextual effects according to the desired level of statistical power as well as the width of the confidence interval. These formulas are derived within a three-level random intercept model that includes one predictor/contextual variable at each level, to simultaneously cover the various kinds of contextual effects in which researchers may be interested. The relative influences of the indices included in the formulas on the standard errors of contextual effect estimates are investigated, with the aim of further simplifying sample size determination procedures. In addition, simulation studies are performed to investigate the finite sample behavior of the calculated statistical power, showing that estimated sample sizes based on the derived formulas can be both positively and negatively biased, due to complex effects of unreliability of the contextual variables, multicollinearity, and violation of the assumption of known variances. Thus, it is advisable to compare estimated sample sizes under various specifications of the indices and to evaluate their potential bias, as illustrated in the example.

  20. Correcting Type Ia Supernova Distances for Selection Biases and Contamination in Photometrically Identified Samples

    Science.gov (United States)

    Kessler, R.; Scolnic, D.

    2017-02-01

    We present a new technique to create a bin-averaged Hubble diagram (HD) from photometrically identified SN Ia data. The resulting HD is corrected for selection biases and contamination from core-collapse (CC) SNe, and can be used to infer cosmological parameters. This method, called “BEAMS with Bias Corrections” (BBC), includes two fitting stages. The first BBC fitting stage uses a posterior distribution that includes multiple SN likelihoods, a Monte Carlo simulation to bias-correct the fitted SALT-II parameters, and CC probabilities determined from a machine-learning technique. The BBC fit determines (1) a bin-averaged HD (average distance versus redshift), and (2) the nuisance parameters α and β, which multiply the stretch and color (respectively) to standardize the SN brightness. In the second stage, the bin-averaged HD is fit to a cosmological model where priors can be imposed. We perform high-precision tests of the BBC method by simulating large (150,000-event) data samples corresponding to the Dark Energy Survey Supernova Program. Our tests include three models of intrinsic scatter, each with two different CC rates. In the BBC fit, the SALT-II nuisance parameters α and β are recovered to within 1% of their true values. In the cosmology fit, we determine the dark energy equation of state parameter w using a fixed value of Ω_M as a prior: averaging over all six tests based on 6 × 150,000 = 900,000 SNe, there is a small w-bias of 0.006 ± 0.002. Finally, the BBC fitting code is publicly available in the SNANA package.

  1. Comparison of Detrusor Muscle Sampling Rate in Monopolar and Bipolar Transurethral Resection of Bladder Tumor: A Randomized Trial.

    Science.gov (United States)

    Teoh, Jeremy Yuen-Chun; Chan, Eddie Shu-Yin; Yip, Siu-Ying; Tam, Ho-Man; Chiu, Peter Ka-Fung; Yee, Chi-Hang; Wong, Hon-Ming; Chan, Chi-Kwok; Hou, Simon See-Ming; Ng, Chi-Fai

    2017-05-01

    Our aim was to investigate the detrusor muscle sampling rate after monopolar versus bipolar transurethral resection of bladder tumor (TURBT). This was a single-center, prospective, randomized, phase III trial on monopolar versus bipolar TURBT. Baseline patient characteristics, disease characteristics and perioperative outcomes were compared, with the primary outcome being the detrusor muscle sampling rate in the TURBT specimen. Multivariate logistic regression analyses on detrusor muscle sampling were performed. From May 2012 to December 2015, a total of 160 patients with similar baseline characteristics were randomized to receive monopolar or bipolar TURBT. Fewer patients in the bipolar TURBT group required postoperative irrigation than patients in the monopolar TURBT group (18.7 vs. 43%; p = 0.001). In the whole cohort, no significant difference in the detrusor muscle sampling rates was observed between the bipolar and monopolar TURBT groups (77.3 vs. 63.3%; p = 0.057). In patients with urothelial carcinoma, bipolar TURBT achieved a higher detrusor muscle sampling rate than monopolar TURBT (84.6 vs. 67.7%; p = 0.025). On multivariate analyses, bipolar TURBT (odds ratio [OR] 2.23, 95% confidence interval [CI] 1.03-4.81; p = 0.042) and larger tumor size (OR 1.04, 95% CI 1.01-1.08; p = 0.022) were significantly associated with detrusor muscle sampling in the whole cohort. In addition, bipolar TURBT (OR 2.88, 95% CI 1.10-7.53; p = 0.031), larger tumor size (OR 1.05, 95% CI 1.01-1.10; p = 0.035), and female sex (OR 3.25, 95% CI 1.10-9.59; p = 0.033) were significantly associated with detrusor muscle sampling in patients with urothelial carcinoma. There was a trend towards a superior detrusor muscle sampling rate after bipolar TURBT. Further studies are needed to determine its implications on disease recurrence and progression.
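
    The multivariate analysis reported here is a standard logistic regression whose exponentiated coefficients are the odds ratios quoted in the abstract. A minimal sketch with simulated stand-in data; the variable names, sample size and effect sizes are invented, not the trial's data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)

# Hypothetical patient-level data mirroring the trial's analysis:
# outcome = detrusor muscle sampled (1/0); predictors = bipolar resection
# (1/0), tumor size (mm), female sex (1/0).
n = 160
bipolar = rng.integers(0, 2, n)
size_mm = rng.uniform(5, 50, n)
female = rng.integers(0, 2, n)
lin = -1.0 + np.log(2.2) * bipolar + 0.04 * size_mm + 0.3 * female
dm = rng.binomial(1, 1 / (1 + np.exp(-lin)))

X = sm.add_constant(np.column_stack([bipolar, size_mm, female]))
fit = sm.Logit(dm, X).fit(disp=False)
# Odds ratios with 95% confidence intervals (exponentiated coefficients).
ors = np.exp(np.column_stack([fit.params, fit.conf_int()]))
print(np.round(ors, 2))   # rows: intercept, bipolar, tumor size, female
```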

  2. Reduced plasma aldosterone concentrations in randomly selected patients with insulin-dependent diabetes mellitus.

    LENUS (Irish Health Repository)

    Cronin, C C

    2012-02-03

    Abnormalities of the renin-angiotensin system have been reported in patients with diabetes mellitus and with diabetic complications. In this study, plasma concentrations of prorenin, renin, and aldosterone were measured in a stratified random sample of 110 insulin-dependent (Type 1) diabetic patients attending our outpatient clinic. Fifty-four age- and sex-matched control subjects were also examined. Plasma prorenin concentration was higher in patients without complications than in control subjects when upright (geometric mean (95% confidence intervals (CI)): 75.9 (55.0-105.6) vs 45.1 (31.6-64.3) mU l⁻¹, p < 0.05). There was no difference in plasma prorenin concentration between patients without and with microalbuminuria, or between patients without and with background retinopathy. Plasma renin concentration, both when supine and upright, was similar in control subjects, in patients without complications, and in patients with varying degrees of diabetic microangiopathy. Plasma aldosterone was suppressed in patients without complications in comparison to control subjects (74 (58-95) vs 167 (140-199) ng l⁻¹, p < 0.001) and was also suppressed in patients with microvascular disease. Plasma potassium was significantly higher in patients than in control subjects (mean ± standard deviation: 4.10 ± 0.36 vs 3.89 ± 0.26 mmol l⁻¹; p < 0.001) and plasma sodium was significantly lower (138 ± 4 vs 140 ± 2 mmol l⁻¹; p < 0.001). We conclude that plasma prorenin is not a useful early marker for diabetic microvascular disease. Despite apparently normal plasma renin concentrations, plasma aldosterone is suppressed in insulin-dependent diabetic patients.

  3. Determining optimal sample sizes for multistage adaptive randomized clinical trials from an industry perspective using value of information methods.

    Science.gov (United States)

    Chen, Maggie H; Willan, Andrew R

    2013-02-01

    Most often, sample size determinations for randomized clinical trials are based on frequentist approaches that depend on somewhat arbitrarily chosen factors, such as type I and II error probabilities and the smallest clinically important difference. As an alternative, many authors have proposed decision-theoretic (full Bayesian) approaches, often referred to as value of information methods that attempt to determine the sample size that maximizes the difference between the trial's expected utility and its expected cost, referred to as the expected net gain. Taking an industry perspective, Willan proposes a solution in which the trial's utility is the increase in expected profit. Furthermore, Willan and Kowgier, taking a societal perspective, show that multistage designs can increase expected net gain. The purpose of this article is to determine the optimal sample size using value of information methods for industry-based, multistage adaptive randomized clinical trials, and to demonstrate the increase in expected net gain realized. At the end of each stage, the trial's sponsor must decide between three actions: continue to the next stage, stop the trial and seek regulatory approval, or stop the trial and abandon the drug. A model for expected total profit is proposed that includes consideration of per-patient profit, disease incidence, time horizon, trial duration, market share, and the relationship between trial results and probability of regulatory approval. The proposed method is extended to include multistage designs with a solution provided for a two-stage design. An example is given. Significant increases in the expected net gain are realized by using multistage designs. The complexity of the solutions increases with the number of stages, although far simpler near-optimal solutions exist. The method relies on the central limit theorem, assuming that the sample size is sufficiently large so that the relevant statistics are normally distributed. From a value of
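
    The essence of the approach, choosing the sample size that maximizes expected net gain (expected profit from approval minus trial costs), can be illustrated with a deliberately simplified one-stage version. Every input below (profit, costs, effect size, and the approval-equals-significance assumption) is an invented placeholder, not Willan's actual model:

```python
import numpy as np
from scipy.stats import norm

def expected_net_gain(n, delta=0.5, sd=2.0, alpha=0.025,
                      profit_if_approved=1e8, cost_fixed=1e6,
                      cost_per_patient=1e4):
    """Sketch of a value-of-information calculation: expected net gain of
    a two-arm trial with n patients per arm, where regulatory approval
    (and hence profit) is assumed to require a significant one-sided test."""
    se = sd * np.sqrt(2.0 / n)
    power = norm.sf(norm.isf(alpha) - delta / se)  # P(success | true delta)
    return power * profit_if_approved - cost_fixed - 2 * n * cost_per_patient

# Grid search for the sample size that maximizes expected net gain.
ns = np.arange(10, 2000, 10)
eng = np.array([expected_net_gain(n) for n in ns])
best = ns[eng.argmax()]
print(f"optimal n per arm ~ {best}, expected net gain ~ {eng.max():.3g}")
```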

  4. Global Stratigraphy of Venus: Analysis of a Random Sample of Thirty-Six Test Areas

    Science.gov (United States)

    Basilevsky, Alexander T.; Head, James W., III

    1995-01-01

    The age relations between 36 impact craters with dark paraboloids and other geologic units and structures at these localities have been studied through photogeologic analysis of Magellan SAR images of the surface of Venus. Geologic settings in all 36 sites, about 1000 x 1000 km each, could be characterized using only 10 different terrain units and six types of structures. These units and structures form a major stratigraphic and geologic sequence (from oldest to youngest): (1) tessera terrain; (2) densely fractured terrains associated with coronae and in the form of remnants among plains; (3) fractured and ridged plains and ridge belts; (4) plains with wrinkle ridges; (5) ridges associated with coronae annulae and ridges of arachnoid annulae which are contemporary with wrinkle ridges of the ridged plains; (6) smooth and lobate plains; (7) fractures of coronae annulae, and fractures not related to coronae annulae, which disrupt ridged and smooth plains; (8) rift-associated fractures; and (9) craters with associated dark paraboloids, which represent the youngest 10% of the Venus impact crater population (Campbell et al.), and are on top of all volcanic and tectonic units except the youngest episodes of rift-associated fracturing and volcanism; surficial streaks and patches are approximately contemporary with dark-paraboloid craters. Mapping of such units and structures in 36 randomly distributed large regions (each approximately 10⁶ sq km) shows evidence for a distinctive regional and global stratigraphic and geologic sequence. On the basis of this sequence we have developed a model that illustrates several major themes in the history of Venus. Most of the history of Venus (that of its first 80% or so) is not preserved in the surface geomorphological record. The major deformation associated with tessera formation in the period sometime between 0.5-1.0 b.y. ago (Ivanov and Basilevsky) is the earliest event detected. In the terminal stages of tessera formation

  5. Use of pornography in a random sample of Norwegian heterosexual couples.

    Science.gov (United States)

    Daneback, Kristian; Traeen, Bente; Månsson, Sven-Axel

    2009-10-01

    This study examined the use of pornography in couple relationships to enhance the sex life. The study contained a representative sample of 398 heterosexual couples aged 22-67 years. Data collection was carried out by self-administered postal questionnaires. The majority (77%) of the couples did not report any kind of pornography use to enhance the sex life. In 15% of the couples, both partners had used pornography; in 3% of the couples, only the female partner had used pornography; and, in 5% of the couples, only the male partner had used pornography for this purpose. Based on the results of a discriminant function analysis, it is suggested that couples in which one or both partners used pornography had a more permissive erotic climate compared to the couples who did not use pornography. In couples where only one partner used pornography, we found more problems related to arousal (male) and negative (female) self-perception. These findings could be of importance for clinicians who work with couples.

  6. Determination of selected metals in coal samples from Lafia-Obi and ...

    African Journals Online (AJOL)

    coal samples were determined using atomic absorption spectroscopy (AAS). All the samples have comparable chromium and copper contents, while the iron, aluminium, magnesium and potassium contents vary to some extent. Metal concentrations in both the Lafia-Obi and Chikila coal samples are within the limits allowed by the ...

  7. The team selection inventory: Empirical data from a New Zealand sample

    NARCIS (Netherlands)

    Burch, G.St.J.; Anderson, N.

    2008-01-01

    Within personnel selection there is an increasing emphasis on multi-level selection. However, while there are a range of psychometrically robust tools available for assessing person—job fit, substantially fewer are available for assessing person—team or person—organization fit. One of the few tools

  8. A nonparametric approach to the sample selection problem in survey data

    NARCIS (Netherlands)

    Vazquez-Alvarez, R.

    2001-01-01

    Responses to economic surveys are usually noisy. Item non-response, as a particular type of censored data, is a common problem for key economic variables such as income and earnings, consumption or accumulated assets. If such non-response is non-random, the consequence can be a bias in the results

  9. Estimating screening-mammography receiver operating characteristic (ROC) curves from stratified random samples of screening mammograms: a simulation study.

    Science.gov (United States)

    Zur, Richard M; Pesce, Lorenzo L; Jiang, Yulei

    2015-05-01

    To evaluate stratified random sampling (SRS) of screening mammograms by (1) Breast Imaging Reporting and Data System (BI-RADS) assessment categories, and (2) the presence of breast cancer in mammograms, for estimation of screening-mammography receiver operating characteristic (ROC) curves in retrospective observer studies. We compared observer study case sets constructed by (1) random sampling (RS); (2) SRS with proportional allocation (SRS-P), with BI-RADS 1 and 2 noncancer cases accounting for 90.6% of all noncancer cases; (3) SRS with disproportional allocation (SRS-D), with BI-RADS 1 and 2 noncancer cases accounting for 10%-80%; and (4) SRS-D with multiple imputation (SRS-D + MI), with missing BI-RADS 1 and 2 noncancer cases imputed to recover the 90.6% proportion. Monte Carlo simulated case sets were drawn from a large case population modeled after published Digital Mammography Imaging Screening Trial data. We compared the bias, root-mean-square error, and coverage of 95% confidence intervals of area under the ROC curve (AUC) estimates from the sampling methods (200-2000 cases, of which 25% were cancer cases) versus from the large case population. AUC estimates were unbiased from RS, SRS-P, and SRS-D + MI, but biased from SRS-D. AUC estimates from SRS-P and SRS-D + MI had 10% smaller root-mean-square error than RS. Both SRS-P and SRS-D + MI can be used to obtain unbiased and 10% more efficient estimates of screening-mammography ROC curves. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
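
    The SRS-P idea, fixing the cancer fraction and allocating noncancer strata in their population proportions before estimating the ROC curve, is straightforward to reproduce in simulation. A toy sketch; the score distributions and stratum sizes are invented, not the DMIST-modeled population:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)

# Hypothetical case population: reader scores for cancers and noncancers,
# with noncancers split into two strata (e.g. BI-RADS 1-2 vs. the rest).
cancer = rng.normal(1.5, 1.0, 25_000)
stratum_a = rng.normal(0.0, 1.0, 68_000)   # ~90% of noncancers
stratum_b = rng.normal(0.5, 1.0, 7_000)    # ~10% of noncancers

def srs_p_auc(n_cases=2000, frac_cancer=0.25):
    """AUC from one stratified sample with proportional allocation:
    cancers fixed at 25%, noncancer strata sampled in population ratio."""
    n_can = int(n_cases * frac_cancer)
    n_non = n_cases - n_can
    n_a = int(n_non * len(stratum_a) / (len(stratum_a) + len(stratum_b)))
    s_can = rng.choice(cancer, n_can, replace=False)
    s_non = np.concatenate([rng.choice(stratum_a, n_a, replace=False),
                            rng.choice(stratum_b, n_non - n_a, replace=False)])
    y = np.r_[np.ones(n_can), np.zeros(n_non)]
    return roc_auc_score(y, np.r_[s_can, s_non])

aucs = [srs_p_auc() for _ in range(200)]
pop_auc = roc_auc_score(
    np.r_[np.ones(len(cancer)), np.zeros(len(stratum_a) + len(stratum_b))],
    np.r_[cancer, stratum_a, stratum_b])
print(f"population AUC {pop_auc:.3f}, SRS-P mean {np.mean(aucs):.3f} "
      f"(SD {np.std(aucs):.3f})")
```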

  10. Surveillance for cancer recurrence in long-term young breast cancer survivors randomly selected from a statewide cancer registry.

    Science.gov (United States)

    Jones, Tarsha; Duquette, Debra; Underhill, Meghan; Ming, Chang; Mendelsohn-Victor, Kari E; Anderson, Beth; Milliron, Kara J; Copeland, Glenn; Janz, Nancy K; Northouse, Laurel L; Duffy, Sonia M; Merajver, Sofia D; Katapodi, Maria C

    2018-01-20

    This study examined clinical breast exam (CBE) and mammography surveillance in long-term young breast cancer survivors (YBCS) and identified barriers and facilitators to cancer surveillance practices. Data were collected with a self-administered survey from a statewide, randomly selected sample of YBCS diagnosed with invasive breast cancer or ductal carcinoma in situ before 45 years of age, stratified by race (Black vs. White/Other). Multivariate logistic regression models identified predictors of annual CBEs and mammograms. Among 859 YBCS (n = 340 Black; n = 519 White/Other; mean age = 51.0 ± 5.9; diagnosed 11.0 ± 4.0 years ago), the majority (>85%) reported an annual CBE and a mammogram. Black YBCS in the study were more likely to report lower rates of annual mammography and more barriers accessing care compared to White/Other YBCS. Having a routine source of care, confidence to use healthcare services, perceived expectations from family members and healthcare providers to engage in cancer surveillance, and motivation to comply with these expectations were significant predictors of having annual CBEs and annual mammograms. Cost-related lack of access to care was a significant barrier to annual mammograms. A routine source of post-treatment care facilitated breast cancer surveillance above national average rates. Persistent disparities in access to mammography surveillance were identified for Black YBCS, primarily due to lack of a routine source of care and high out-of-pocket costs. Public health action targeting cancer surveillance in YBCS should ensure a routine source of post-treatment care and address cost-related barriers. Clinical Trials Registration Number: NCT01612338.

  11. The basic science and mathematics of random mutation and natural selection.

    Science.gov (United States)

    Kleinman, Alan

    2014-12-20

    The mutation and natural selection phenomenon can and often does cause the failure of antimicrobial, herbicidal, pesticide and cancer treatments that act as selection pressures. This phenomenon operates in a mathematically predictable way, which, when understood, leads to approaches for reducing and preventing the failure of these selection pressures. The mathematical behavior of mutation and selection is derived using the principles given by probability theory. The derivation of the equations describing the mutation and selection phenomenon is carried out in the context of an empirical example. Copyright © 2014 John Wiley & Sons, Ltd.
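
    The elementary probability-theory step behind such derivations is the chance that a particular beneficial (e.g. resistance) mutation occurs at least once among N replications. A small illustration with invented numbers:

```python
# Probability that a specific resistance mutation appears at least once in
# a replicating population: with per-replication mutation rate mu at the
# site of interest and N replications, P = 1 - (1 - mu)**N.
mu = 1e-9          # illustrative per-site, per-replication mutation rate
for N in (1e6, 1e9, 1e10):
    p = 1 - (1 - mu) ** N
    print(f"N = {N:.0e}: P(at least one mutant) = {p:.4f}")
```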

  12. Field sampling and selecting on-site analytical methods for explosives in soil

    Energy Technology Data Exchange (ETDEWEB)

    Crockett, A.B.; Craig, H.D.; Jenkins, T.F.; Sisk, W.E.

    1996-12-01

    A large number of defense-related sites are contaminated with elevated levels of secondary explosives. Levels of contamination range from barely detectable to levels above 10% that need special handling because of the detonation potential. Characterization of explosives-contaminated sites is particularly difficult because of the very heterogeneous distribution of contamination in the environment and within samples. To improve site characterization, several options exist including collecting more samples, providing on-site analytical data to help direct the investigation, compositing samples, improving homogenization of the samples, and extracting larger samples. This publication is intended to provide guidance to Remedial Project Managers regarding field sampling and on-site analytical methods for detecting and quantifying secondary explosive compounds in soils, and is not intended to include discussions of the safety issues associated with sites contaminated with explosive residues.

  13. Impact of Selection Bias on Treatment Effect Size Estimates in Randomized Trials of Oral Health Interventions: A Meta-epidemiological Study.

    Science.gov (United States)

    Saltaji, H; Armijo-Olivo, S; Cummings, G G; Amin, M; da Costa, B R; Flores-Mir, C

    2018-01-01

    Emerging evidence suggests that design flaws of randomized controlled trials can result in over- or underestimation of the treatment effect size (ES). The objective of this study was to examine associations between treatment ES estimates and the adequacy of sequence generation, allocation concealment, and baseline comparability among a sample of oral health randomized controlled trials. For our analysis, we selected all meta-analyses that included a minimum of 5 oral health randomized controlled trials and used continuous outcomes. We extracted data, in duplicate, related to items of selection bias (sequence generation, allocation concealment, and baseline comparability) in the Cochrane Risk of Bias tool. Using a 2-level meta-meta-analytic approach with a random effects model to allow for intra- and inter-meta-analysis heterogeneity, we quantified the impact of selection bias on the magnitude of ES estimates. We identified 64 meta-analyses, including 540 randomized controlled trials analyzing 137,957 patients. Sequence generation was judged to be adequate (at low risk of bias) in 32% (n = 173) of trials, and baseline comparability was judged to be adequate in 77.8% of trials. Allocation concealment was unclear in the majority of trials (n = 458, 84.8%). We identified significantly larger treatment ES estimates in trials that had inadequate/unknown sequence generation (difference in ES = 0.13; 95% CI: 0.01 to 0.25) and inadequate/unknown allocation concealment (difference in ES = 0.15; 95% CI: 0.02 to 0.27). In contrast, baseline imbalance (difference in ES = 0.01; 95% CI: -0.09 to 0.12) was not associated with inflated or underestimated ES. In conclusion, treatment ES estimates were 0.13 and 0.15 larger in trials with inadequate/unknown sequence generation and inadequate/unknown allocation concealment, respectively. Therefore, authors of systematic reviews using oral health randomized controlled trials should perform sensitivity analyses based on the adequacy of

  14. Selective Extraction and Effective Separation of Galactosylsphingosine (Psychosine) and Glucosylsphingosine from Other Glycosphingolipids in Pathological Tissue Samples

    OpenAIRE

    Li, Yu-Teh; Li, Su-Chen; Buck, Wayne R.; Haskins, Mark E.; Wu, Sz-Wei; Khoo, Kay-Hooi; Sidransky, Ellen; Bunnell, Bruce A.

    2010-01-01

    To facilitate the study of the chemical pathology of galactosylsphingosine (psychosine, GalSph) in Krabbe disease and glucosylsphingosine (GlcSph) in Gaucher disease, we have devised a facile method for the effective separation of these two glycosylsphingosines from other glycosphingolipids (GSLs) in Krabbe brain and Gaucher spleen samples. The procedure involves the use of acetone to selectively extract GalSph and GlcSph, respectively, from Krabbe brain and Gaucher spleen samples. Since acet...

  15. Automated Gel Size Selection to Improve the Quality of Next-generation Sequencing Libraries Prepared from Environmental Water Samples.

    Science.gov (United States)

    Uyaguari-Diaz, Miguel I; Slobodan, Jared R; Nesbitt, Matthew J; Croxen, Matthew A; Isaac-Renton, Judith; Prystajecky, Natalie A; Tang, Patrick

    2015-04-17

    Next-generation sequencing of environmental samples can be challenging because of the variable DNA quantity and quality in these samples. High quality DNA libraries are needed for optimal results from next-generation sequencing. Environmental samples such as water may have low quality and quantities of DNA as well as contaminants that co-precipitate with DNA. The mechanical and enzymatic processes involved in extraction and library preparation may further damage the DNA. Gel size selection enables purification and recovery of DNA fragments of a defined size for sequencing applications. Nevertheless, this task is one of the most time-consuming steps in the DNA library preparation workflow. The protocol described here enables complete automation of agarose gel loading, electrophoretic analysis, and recovery of targeted DNA fragments. In this study, we describe a high-throughput approach to prepare high quality DNA libraries from freshwater samples that can also be applied to other environmental samples. We used an indirect approach to concentrate bacterial cells from environmental freshwater samples; DNA was extracted using a commercially available DNA extraction kit, and DNA libraries were prepared using a commercial transposon-based protocol. DNA fragments of 500 to 800 bp were gel size selected using Ranger Technology, an automated electrophoresis workstation. Sequencing of the size-selected DNA libraries demonstrated significant improvements to read length and quality of the sequencing reads.

  16. Randomized Comparison of Two Vaginal Self-Sampling Methods for Human Papillomavirus Detection: Dry Swab versus FTA Cartridge.

    Directory of Open Access Journals (Sweden)

    Rosa Catarino

    Full Text Available Human papillomavirus (HPV) self-sampling (self-HPV) is valuable in cervical cancer screening. HPV testing is usually performed on physician-collected cervical smears stored in liquid-based medium. Dry filters and swabs are an alternative. We evaluated the adequacy of self-HPV using two dry storage and transport devices, the FTA cartridge and swab. A total of 130 women performed two consecutive self-HPV samples. Randomization determined which of the two tests was performed first: self-HPV using dry swabs (s-DRY) or vaginal specimen collection using a cytobrush applied to an FTA cartridge (s-FTA). After self-HPV, a physician collected a cervical sample using liquid-based medium (Dr-WET). HPV types were identified by real-time PCR. Agreement between collection methods was measured using the kappa statistic. HPV prevalence for high-risk types was 62.3% (95%CI: 53.7-70.2) detected by s-DRY, 56.2% (95%CI: 47.6-64.4) by Dr-WET, and 54.6% (95%CI: 46.1-62.9) by s-FTA. There was overall agreement of 70.8% between s-FTA and s-DRY samples (kappa = 0.34), and of 82.3% between self-HPV and Dr-WET samples (kappa = 0.56). Detection sensitivities for low-grade squamous intraepithelial lesion or worse (LSIL+) were: 64.0% (95%CI: 44.5-79.8) for s-FTA, 84.6% (95%CI: 66.5-93.9) for s-DRY, and 76.9% (95%CI: 58.0-89.0) for Dr-WET. The preferred self-collection method among patients was s-DRY (40.8% vs. 15.4%). Regarding costs, the FTA card was five times more expensive than the swab (~5 US dollars (USD) per card vs. ~1 USD per swab). Self-HPV using dry swabs is sensitive for detecting LSIL+ and less expensive than s-FTA. International Standard Randomized Controlled Trial Number (ISRCTN): 43310942.
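
    The kappa statistic quoted above corrects raw percentage agreement for the agreement expected by chance. A minimal sketch for paired binary HPV results, using toy data rather than the trial's, is:

        def cohens_kappa(a, b):
            """Cohen's kappa for paired binary results from two collection methods."""
            n = len(a)
            p_obs = sum(x == y for x, y in zip(a, b)) / n      # raw agreement
            p_a1, p_b1 = sum(a) / n, sum(b) / n                # positivity rates
            p_exp = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)      # chance agreement
            return (p_obs - p_exp) / (1 - p_exp)

        # Toy paired results for 10 women (1 = HPV positive), two methods.
        s_fta = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
        s_dry = [1, 0, 0, 1, 1, 0, 1, 1, 1, 0]
        print(cohens_kappa(s_fta, s_dry))  # 0.4: moderate agreement despite 70% raw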

  17. Why choose Random Forest to predict rare species distribution with few samples in large undersampled areas? Three Asian crane species models provide supporting evidence

    Directory of Open Access Journals (Sweden)

    Chunrong Mi

    2017-01-01

    Full Text Available Species distribution models (SDMs) have become an essential tool in ecology, biogeography, evolution and, more recently, in conservation biology. How to generalize species distributions in large undersampled areas, especially with few samples, is a fundamental issue of SDMs. In order to explore this issue, we used the best available presence records for the Hooded Crane (Grus monacha, n = 33), White-naped Crane (Grus vipio, n = 40), and Black-necked Crane (Grus nigricollis, n = 75) in China as three case studies, employing four powerful and commonly used machine learning algorithms to map the breeding distributions of the three species: TreeNet (Stochastic Gradient Boosting, Boosted Regression Tree Model), Random Forest, CART (Classification and Regression Tree) and Maxent (Maximum Entropy Models). In addition, we developed an ensemble forecast by averaging the predicted probabilities of the four models. Commonly used model performance metrics (area under the ROC curve (AUC) and true skill statistic (TSS)) were employed to evaluate model accuracy. The latest satellite tracking data and compiled literature data were used as two independent testing datasets to confront model predictions. We found that Random Forest demonstrated the best performance for most assessment methods, provided a better model fit to the testing data, and achieved better species range maps for each crane species in undersampled areas. Random Forest has been generally available for more than 20 years and has been known to perform extremely well in ecological predictions. However, while increasingly on the rise, its potential is still widely underused in conservation, (spatial) ecological applications and for inference. Our results show that it informs ecological and biogeographical theories as well as being suitable for conservation applications, specifically when the study area is undersampled. This method helps to save model-selection time and effort, and allows robust and rapid
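
    A minimal sketch of the ensemble step described above: fit several models, average their predicted presence probabilities, and score everything against held-out data with AUC. The data, covariates and the choice of two scikit-learn models are stand-ins, not the paper's crane datasets or its four algorithms.

        import numpy as np
        from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 6))                  # environmental covariates
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # presence/absence labels

        models = [RandomForestClassifier(random_state=0),
                  GradientBoostingClassifier(random_state=0)]
        probs = []
        for m in models:
            m.fit(X[:150], y[:150])
            p = m.predict_proba(X[150:])[:, 1]
            probs.append(p)
            print(type(m).__name__, "AUC:", roc_auc_score(y[150:], p))

        ensemble = np.mean(probs, axis=0)              # forecast by averaging
        print("Ensemble AUC:", roc_auc_score(y[150:], ensemble))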

  18. Media Use and Source Trust among Muslims in Seven Countries: Results of a Large Random Sample Survey

    Directory of Open Access Journals (Sweden)

    Steven R. Corman

    2013-12-01

    Full Text Available Despite the perceived importance of media in the spread of and resistance against Islamist extremism, little is known about how Muslims use different kinds of media to get information about religious issues, and what sources they trust when doing so. This paper reports the results of a large, random sample survey among Muslims in seven countries in Southeast Asia, West Africa and Western Europe, which helps fill this gap. Results show a diverse set of profiles of media use and source trust that differ by country, with overall low trust in mediated sources of information. Based on these findings, we conclude that mass media are still the most common source of religious information for Muslims, but that trust in mediated information is low overall. This suggests that media are probably best used to persuade opinion leaders, who will then carry anti-extremist messages through more personal means.

  19. Enhancing positive parent-child interactions and family functioning in a poverty sample: a randomized control trial.

    Science.gov (United States)

    Negrão, Mariana; Pereira, Mariana; Soares, Isabel; Mesman, Judi

    2014-01-01

    This study tested the attachment-based intervention program Video-feedback Intervention to promote Positive Parenting and Sensitive Discipline (VIPP-SD) in a randomized controlled trial with poor families of toddlers screened for professionals' concerns about the child's caregiving environment. The VIPP-SD is an evidence-based intervention, but has not yet been tested in the context of poverty. The sample included 43 families with 1- to 4-year-old children: mean age at the pretest was 29 months and 51% were boys. At the pretest and posttest, mother-child interactions were observed at home, and mothers reported on family functioning. The VIPP-SD proved to be effective in enhancing positive parent-child interactions and positive family relations in a severely deprived context. Results are discussed in terms of implications for support services provided to such poor families in order to reduce intergenerational risk transmission.

  20. A Review of Selected Engineered Nanoparticles in the Atmosphere: Sources, Transformations, and Techniques for Sampling and Analysis

    Science.gov (United States)

    A state-of-the-science review was undertaken to identify and assess sampling and analysis methods to detect and quantify selected nanomaterials (NMs) in the ambient atmosphere. The review is restricted to five types of NMs of interest to the Office of Research and Development Nan...

  1. Novel random peptide libraries displayed on AAV serotype 9 for selection of endothelial cell-directed gene transfer vectors.

    Science.gov (United States)

    Varadi, K; Michelfelder, S; Korff, T; Hecker, M; Trepel, M; Katus, H A; Kleinschmidt, J A; Müller, O J

    2012-08-01

    We have demonstrated the potential of random peptide libraries displayed on adeno-associated virus (AAV)2 to select for AAV2 vectors with improved efficiency for cell type-directed gene transfer. AAV9, however, may have advantages over AAV2 because of a lower prevalence of neutralizing antibodies in humans and more efficient gene transfer in vivo. Here we provide evidence that random peptide libraries can be displayed on AAV9 and can be utilized to select for AAV9 capsids redirected to the cell type of interest. We generated an AAV9 peptide display library, which ensures that the displayed peptides correspond to the packaged genomes, and performed four consecutive selection rounds on human coronary artery endothelial cells in vitro. This screening yielded AAV9 library capsids with distinct peptide motifs enabling up to 40-fold improved transduction efficiencies compared with wild-type (wt) AAV9 vectors. Incorporating sequences selected from AAV9 libraries into AAV2 capsids could not increase transduction as efficiently as in the AAV9 context. To analyze this potential on endothelial cells in their intact natural vascular context, human umbilical veins were incubated with the selected AAV in situ and endothelial cells were isolated. Fluorescence-activated cell sorting analysis revealed a 200-fold improved transduction efficiency compared with wt AAV9 vectors. Furthermore, AAV9 vectors with targeting sequences selected from AAV9 libraries revealed an increased transduction efficiency in the presence of human intravenous immunoglobulins, suggesting a reduced immunogenicity. We conclude that our novel AAV9 peptide library is functional and can be used to select for vectors for future preclinical and clinical gene transfer applications.

  2. Automatic Samples Selection Using Histogram of Oriented Gradients (HOG) Feature Distance

    Directory of Open Access Journals (Sweden)

    Inzar Salfikar

    2018-01-01

    Full Text Available Finding victims at a disaster site is the primary goal of Search-and-Rescue (SAR) operations. Many technologies for searching for disaster victims through aerial imaging have emerged from research, but most of them have difficulty detecting victims at tsunami disaster sites, where victims and backgrounds look similar. This research collects post-tsunami aerial imaging data from the internet to build a dataset and model for detecting tsunami disaster victims. Datasets are built based on the distances between the features of the samples, computed using the Histogram-of-Oriented-Gradients (HOG) method. We use the longest distances to collect samples from each photo and generate victim and non-victim samples: the HOG feature distance is measured between all samples, the pairs with the longest distances are taken as candidates to build the dataset, and the candidates are then classified manually into victim (positive) and non-victim (negative) samples. The dataset of tsunami disaster victims was re-analyzed using Leave-One-Out (LOO) cross-validation with the Support-Vector-Machine (SVM) method. The experimental results on two test photos show 61.70% precision, 77.60% accuracy, 74.36% recall and an f-measure of 67.44% in distinguishing victim (positive) from non-victim (negative) samples.
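
    A sketch of the distance-based selection idea, assuming scikit-image for HOG features and SciPy for pairwise distances; the patch size, the greedy farthest-first rule and the random stand-in patches are illustrative choices, not the authors' exact procedure.

        import numpy as np
        from scipy.spatial.distance import pdist, squareform
        from skimage.feature import hog

        def select_diverse_samples(patches, k=10):
            """Greedily pick the k mutually most distant patches by HOG distance."""
            feats = np.array([hog(p, pixels_per_cell=(8, 8)) for p in patches])
            d = squareform(pdist(feats))                 # pairwise distances
            chosen = [int(np.argmax(d.sum(axis=1)))]     # start from the outlier
            while len(chosen) < k:
                rest = [i for i in range(len(patches)) if i not in chosen]
                # add the patch farthest from everything chosen so far
                chosen.append(max(rest, key=lambda i: d[i, chosen].min()))
            return chosen

        patches = [np.random.rand(64, 64) for _ in range(50)]  # stand-in aerial crops
        print(select_diverse_samples(patches, k=5))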

  3. Rationale, Design, Samples, and Baseline Sun Protection in a Randomized Trial on a Skin Cancer Prevention Intervention in Resort Environments

    Science.gov (United States)

    Buller, David B.; Andersen, Peter A.; Walkosz, Barbara J.; Scott, Michael D.; Beck, Larry; Cutter, Gary R.

    2016-01-01

    Introduction Exposure to solar ultraviolet radiation during recreation is a risk factor for skin cancer. A trial evaluated an intervention to promote advanced sun protection (sunscreen pre-application/reapplication; protective hats and clothing; use of shade) during vacations. Materials and Methods Adult visitors to hotels/resorts with outdoor recreation (i.e., vacationers) participated in a group-randomized pretest-posttest controlled quasi-experimental design in 2012–14. Hotels/resorts were pair-matched and randomly assigned to the intervention or untreated control group. Sun protection (e.g., clothing, hats, shade and sunscreen) was measured in cross-sectional samples by observation and a face-to-face intercept survey during two-day visits. Results Initially, 41 hotels/resorts (11%) participated but 4 dropped out before posttest. Hotels/resorts were diverse (employees = 30 to 900; latitude = 24.78° N to 50.52° N; elevation = 2 ft. to 9,726 ft. above sea level) and had a variety of outdoor venues (beaches/pools, court/lawn games, golf courses, common areas, and chairlifts). At pretest, 4,347 vacationers were observed and 3,531 surveyed. More females were surveyed (61%) than observed (50%). Vacationers were mostly 35–60 years old, highly educated (college education = 68%) and non-Hispanic white (93%), with high-risk skin types (22%). Vacationers reported covering 60% of their skin with clothing. Also, 40% of vacationers used shade; 60% applied sunscreen; and 42% had been sunburned. Conclusions The trial faced challenges recruiting resorts, but results show that the large, multi-state sample of vacationers was at high risk for solar UV exposure. PMID:26593781

  4. Woody species diversity in forest plantations in a mountainous region of Beijing, China: effects of sampling scale and species selection.

    Directory of Open Access Journals (Sweden)

    Yuxin Zhang

    Full Text Available The role of forest plantations in biodiversity conservation has gained more attention in recent years. However, most work on evaluating the diversity of forest plantations focuses on only one spatial scale; thus, we examined the effects of sampling scale on diversity in forest plantations. We designed a hierarchical sampling strategy to collect data on woody species diversity in planted pine (Pinus tabuliformis Carr.), planted larch (Larix principis-rupprechtii Mayr.), and natural secondary deciduous broadleaf forests in a mountainous region of Beijing, China. Additive diversity partition analysis showed that, compared to natural forests, the planted pine forests had a different woody species diversity partitioning pattern at multiple scales (except the Simpson diversity in the regeneration layer), while the larch plantations did not show multi-scale diversity partitioning patterns that were obviously different from those in the natural secondary broadleaf forest. Compared to the natural secondary broadleaf forests, the effects of planted pine forests on woody species diversity depend on the sampling scale and the layers selected for analysis. Diversity in the planted larch forest, however, was not significantly different from that in the natural forest for all diversity components at all sampling levels. Our work demonstrated that the species selected for afforestation and the sampling scales selected for data analysis alter conclusions about the levels of diversity supported by plantations. We suggest that a wide range of scales should be considered in the evaluation of the role of forest plantations in biodiversity conservation.
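
    Additive diversity partitioning expresses pooled (gamma) richness as mean within-plot (alpha) richness plus a between-plot (beta) component. A minimal sketch with an invented plot-by-species count matrix:

        import numpy as np

        comm = np.array([[3, 0, 1, 2],    # rows: plots, columns: species counts
                         [0, 2, 1, 0],
                         [1, 1, 0, 4]])

        alpha = (comm > 0).sum(axis=1).mean()      # mean within-plot richness
        gamma = int((comm.sum(axis=0) > 0).sum())  # pooled richness
        beta = gamma - alpha                       # additive beta = gamma - alpha
        print(f"alpha={alpha:.2f}, beta={beta:.2f}, gamma={gamma}")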

  5. Pilot studies for the North American Soil Geochemical Landscapes Project - Site selection, sampling protocols, analytical methods, and quality control protocols

    Science.gov (United States)

    Smith, D.B.; Woodruff, L.G.; O'Leary, R. M.; Cannon, W.F.; Garrett, R.G.; Kilburn, J.E.; Goldhaber, M.B.

    2009-01-01

    In 2004, the US Geological Survey (USGS) and the Geological Survey of Canada sampled and chemically analyzed soils along two transects across Canada and the USA in preparation for a planned soil geochemical survey of North America. This effort was a pilot study to test and refine sampling protocols, analytical methods, quality control protocols, and field logistics for the continental survey. A total of 220 sample sites were selected at approximately 40-km intervals along the two transects. The ideal sampling protocol at each site called for a sample from a depth of 0-5 cm and a composite of each of the O, A, and C horizons. Quality control (QC) samples were reviewed by the laboratory performing the analysis, the USGS QC officer, and the principal investigator for the study. This level of review resulted in an average of one QC sample for every 20 field samples, which proved to be minimally adequate for such a large-scale survey. Additional QC samples should be added to monitor within-batch quality to the extent that no more than 10 samples are analyzed between QC samples. Only Cr (77%), Y (82%), and Sb (80%) fell outside the acceptable limits of accuracy (recovery between 85 and 115%) because of likely residence in mineral phases resistant to the acid digestion. A separate sample of 0-5-cm material was collected at each site for determination of organic compounds. A subset of 73 of these samples was analyzed for a suite of 19 organochlorine pesticides by gas chromatography. Only three of these samples had detectable pesticide concentrations. A separate sample of A-horizon soil was collected for microbial characterization by phospholipid fatty acid analysis (PLFA), soil enzyme assays, and determination of selected human and agricultural pathogens. Collection, preservation and analysis of samples for both organic compounds and microbial characterization add a great degree of complication to the sampling and preservation protocols and a significant increase to the cost for a continental-scale survey. Both these issues must be

  6. The Jackprot Simulation Couples Mutation Rate with Natural Selection to Illustrate How Protein Evolution Is Not Random

    Science.gov (United States)

    Espinosa, Avelina; Bai, Chunyan Y.

    2016-01-01

    Protein evolution is not a random process. Views which attribute randomness to molecular change, deleterious nature to single-gene mutations, insufficient geological time, or population size for molecular improvements to occur, or invoke “design creationism” to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequence. We used slot-machine probabilities and ion channels to show biological directionality on molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue location must be in balance with the membrane's hydrophobic/philic nature; a selective “pore” for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel, from Streptomyces lividans, with an under-selection scenario, the “jackprot,” which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation http://faculty.rwu.edu/cbai/JackprotSimulation.htm, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the “jackprot,” or highest-fitness complete-peptide sequence, required cumulative smaller “wins” (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons (“jackdons” that led to “jackacids” that led to the “jackprot”). The “jackprot” is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide
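
    The slot-machine analogy can be reproduced in a few lines: selection that retains each correct draw needs orders of magnitude fewer trials than waiting for a whole sequence to appear by chance. The target sequence below is an arbitrary toy string, not the KcsA gene, and the counting is a simplification of the simulation's codon-by-codon "jackdon" wins.

        import random

        def draws_until_match(target, bases="ACGT", rng=random.Random(42)):
            """Draws needed when selection locks in each correct nucleotide."""
            draws = 0
            for base in target:              # selection acts position by position
                while rng.choice(bases) != base:
                    draws += 1
                draws += 1                   # count the winning draw too
            return draws

        target = "ATGGCTCTGCAC" * 10         # 120-nt toy sequence
        print("draws with cumulative selection:", draws_until_match(target))
        print("expected draws by pure chance:", 4 ** len(target))  # astronomically more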

  7. The Jackprot Simulation Couples Mutation Rate with Natural Selection to Illustrate How Protein Evolution Is Not Random.

    Science.gov (United States)

    Paz-Y-Miño C, Guillermo; Espinosa, Avelina; Bai, Chunyan Y

    2011-09-01

    Protein evolution is not a random process. Views which attribute randomness to molecular change, deleterious nature to single-gene mutations, insufficient geological time, or population size for molecular improvements to occur, or invoke "design creationism" to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequence. We used slot-machine probabilities and ion channels to show biological directionality on molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue location must be in balance with the membrane's hydrophobic/philic nature; a selective "pore" for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel, from Streptomyces lividans, with an under-selection scenario, the "jackprot," which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation http://faculty.rwu.edu/cbai/JackprotSimulation.htm, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the "jackprot," or highest-fitness complete-peptide sequence, required cumulative smaller "wins" (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons ("jackdons" that led to "jackacids" that led to the "jackprot"). The "jackprot" is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide "edition" and gene duplications to generate the 6

  8. Pseudo cluster randomization: a treatment allocation method to minimize contamination and selection bias.

    NARCIS (Netherlands)

    Borm, G.F.; Melis, R.J.F.; Teerenstra, S.; Peer, P.G.M.

    2005-01-01

    In some clinical trials, treatment allocation on a patient level is not feasible, and whole groups or clusters of patients are allocated to the same treatment. If, for example, a clinical trial is investigating the efficacy of various patient coaching methods and randomization is done on a patient

  9. Selectivity and limitations of carbon sorption tubes for capturing siloxanes in biogas during field sampling.

    Science.gov (United States)

    Tansel, Berrin; Surita, Sharon C

    2016-06-01

    Siloxane levels in biogas can jeopardize the warranties of the engines used at biogas-to-energy facilities. The chemical structure of siloxanes consists of silicon and oxygen atoms in alternating positions, with hydrocarbon groups attached to the silicon side chain. Siloxanes can be either in cyclic (D) or linear (L) configuration and are referred to by a letter corresponding to their structure followed by a number corresponding to the number of silicon atoms present. When siloxanes are burned, the hydrocarbon fraction is lost and silicon is converted to silicates. The purpose of this study was to evaluate the adequacy of activated carbon gas samplers for quantitative analysis of siloxanes in biogas samples. Biogas samples were collected from a landfill and an anaerobic digester using multiple carbon sorbent tubes assembled in series. One set of samples was collected for 30 min (sampling 6 L of gas), and the second set was collected for 60 min (sampling 12 L of gas). Carbon particles were thermally desorbed and analyzed by gas chromatography-mass spectrometry (GC/MS). The results showed that biogas sampling using a single tube would not adequately capture octamethyltrisiloxane (L3), hexamethylcyclotrisiloxane (D3), octamethylcyclotetrasiloxane (D4), decamethylcyclopentasiloxane (D5) and dodecamethylcyclohexasiloxane (D6). Even when 4 tubes were used in series, D5 was not captured effectively. The single sorbent tube sampling method was adequate only for capturing trimethylsilanol (TMS) and hexamethyldisiloxane (L2). The affinity of siloxanes for activated carbon decreased with increasing molecular weight. Using multiple carbon sorbent tubes in series can be an appropriate method for developing a standard procedure for determining siloxane levels for low molecular weight siloxanes (up to D3). Appropriate quality assurance and quality control procedures should be developed for adequately quantifying the levels of the higher molecular weight siloxanes in biogas with sorbent tubes

  10. Keck DEIMOS Spectroscopy of a GALEX UV-Selected Sample from the Medium Imaging Survey

    Science.gov (United States)

    Mallery, Ryan P.; Rich, R. Michael; Salim, Samir; Small, Todd; Charlot, Stephane; Seibert, Mark; Wyder, Ted; Barlow, Tom A.; Forster, Karl; Friedman, Peter G.; Martin, D. Christopher; Morrissey, Patrick; Neff, Susan G.; Schiminovich, David; Bianchi, Luciana; Donas, José; Heckman, Timothy M.; Lee, Young-Wook; Madore, Barry F.; Milliard, Bruno; Szalay, Alex S.; Welsh, Barry Y.; Yi, Sukyoung

    2007-12-01

    We report results from a pilot program to obtain spectroscopy for objects detected in the Galaxy Evolution Explorer (GALEX) Medium Imaging Survey (MIS). Our study examines the properties of galaxies detected by GALEX fainter than the Sloan Digital Sky Survey (SDSS) spectroscopic survey. This is the first study to extend the techniques of Salim and coworkers to estimate stellar masses, star formation rates (SFRs), and the b (star formation history) parameter for star-forming galaxies out to z~0.7. We obtain redshifts for 50 GALEX MIS sources reaching NUV=23.9 (AB mag) having counterparts in the SDSS Data Release 4 (DR4). Of our sample, 43 are star-forming galaxies with z<0.7, and 7 with z>1 are QSOs, 3 of which are not previously cataloged. We compare our sample to a much larger sample of ~50,000 matched GALEX/SDSS galaxies with SDSS spectroscopy; while our survey is shallow, the optical counterparts to our sources reach ~3 mag fainter in SDSS r than the SDSS spectroscopic sample. We use emission-line diagnostics for the galaxies to determine that the sample contains mostly star-forming galaxies. The galaxies in the sample populate the blue sequence in the NUV-r versus Mr color-magnitude diagram. The derived stellar masses of the galaxies range from 10^8 to 10^11 Msolar, and derived SFRs are between 10^-1 and 10^2 Msolar yr^-1. Our sample has SFRs, luminosities, and velocity dispersions that are similar to the samples of faint compact blue galaxies studied previously in the same redshift range by Koo and collaborators, Guzmán and collaborators, and Phillips and collaborators. However, our sample is ~2 mag fainter in surface brightness than the compact blue galaxies. We find that the star formation histories for a majority of the galaxies are consistent with a recent starburst within the last 100 Myr. Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of

  11. The impact of sample size and marker selection on the study of haplotype structures

    Directory of Open Access Journals (Sweden)

    Sun Xiao

    2004-03-01

    Full Text Available Abstract Several studies of haplotype structures in the human genome in various populations have found that the human chromosomes are structured such that each chromosome can be divided into many blocks, within which there is limited haplotype diversity. In addition, only a few genetic markers in a putative block are needed to capture most of the diversity within a block. There has been no systematic empirical study of the effects of sample size and marker set on the identified block structures and representative marker sets, however. The purpose of this study was to conduct a detailed empirical study to examine such impacts. Towards this goal, we have analysed three representative autosomal regions from a large genome-wide study of haplotypes with samples consisting of African-Americans and samples consisting of Japanese and Chinese individuals. For both populations, we have found that the sample size and marker set have significant impact on the number of blocks and the total number of representative markers identified. The marker set in particular has very strong impacts, and our results indicate that the marker density in the original datasets may not be adequate to allow a meaningful characterisation of haplotype structures. In general, we conclude that we need a relatively large sample size and a very dense marker panel in the study of haplotype structures in human populations.

  12. Fast Convolutional Neural Network Training Using Selective Data Sampling: Application to Hemorrhage Detection in Color Fundus Images.

    Science.gov (United States)

    van Grinsven, Mark J J P; van Ginneken, Bram; Hoyng, Carel B; Theelen, Thomas; Sanchez, Clara I

    2016-05-01

    Convolutional neural networks (CNNs) are deep learning network architectures that have pushed forward the state-of-the-art in a range of computer vision applications and are increasingly popular in medical image analysis. However, training of CNNs is time-consuming and challenging. In medical image analysis tasks, the majority of training examples are easy to classify and therefore contribute little to the CNN learning process. In this paper, we propose a method to improve and speed up the CNN training for medical image analysis tasks by dynamically selecting misclassified negative samples during training. Training samples are heuristically sampled based on classification by the current status of the CNN. Weights are assigned to the training samples, and informative samples are more likely to be included in the next CNN training iteration. We evaluated and compared our proposed method by training a CNN with (SeS) and without (NSeS) the selective sampling method. We focus on the detection of hemorrhages in color fundus images. Training time decreased from 170 epochs to 60 epochs, with increased performance (on par with two human experts), achieving areas under the receiver operating characteristic curve of 0.894 and 0.972 on two data sets. The SeS CNN statistically outperformed the NSeS CNN on an independent test set.
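
    A sketch of the selective-sampling step, with any probabilistic classifier standing in for the CNN: negatives the current model most confidently misclassifies as positive get the highest weight for the next training iteration. The function and parameter names are invented for illustration.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def selective_sample(model, X_neg, n_pick, rng=np.random.default_rng(0)):
            """Pick informative negatives: weight by the current P(positive)."""
            p_pos = model.predict_proba(X_neg)[:, 1]   # high = misclassified
            weights = p_pos / p_pos.sum()
            return rng.choice(len(X_neg), size=n_pick, replace=False, p=weights)

        # Toy usage with a stand-in classifier instead of a CNN.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10))
        y = (X[:, 0] > 1).astype(int)
        clf = LogisticRegression().fit(X, y)
        print(selective_sample(clf, X[y == 0], n_pick=32))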

  13. Development of a SPME-GC-ECD methodology for selected pesticides in must and wine samples.

    Science.gov (United States)

    Correia, M; Delerue-Matos, C; Alves, A

    2001-04-01

    A method for the determination of some pesticide residues in must and wine samples was developed using solid-phase microextraction (SPME) and gas chromatography-electron capture detection (GC/ECD). The procedure only needs dilution as sample pre-treatment and is therefore simple, fast and solvent-free. Eight fungicides (vinclozolin, procymidone, iprodione, penconazole, fenarimol, folpet, nuarimol and hexaconazole), one insecticide (chlorpyriphos) and two acaricides (bromopropylate and tetradifon) can be quantified. Good linearity was observed for all the compounds in the range 5-100 microg/L. The reproducibility of the measurements was found to be acceptable (with RSDs below 20%). Detection limits of 11 microg/L, on average, are sufficiently below the proposed maximum residue limits (MRLs) for these compounds in wine. The analytical method was applied to Portuguese must and wine samples from the Demarcated Region of Alentejo to check whether any residues could be detected.
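
    Figures of merit like these are typically read off a calibration line. The sketch below fits one and applies the common ICH-style limit-of-detection convention (3.3 times the residual standard deviation over the slope); the data points are invented and the LOD formula is an assumption, not necessarily the authors' calculation.

        import numpy as np

        conc = np.array([5, 10, 25, 50, 100])          # spiked levels, ug/L
        peak = np.array([0.9, 2.1, 5.2, 10.3, 20.6])   # detector response (a.u.)

        slope, intercept = np.polyfit(conc, peak, 1)
        resid = peak - (slope * conc + intercept)
        s_y = resid.std(ddof=2)                        # residual std deviation
        r = np.corrcoef(conc, peak)[0, 1]

        print(f"linearity r = {r:.4f}")
        print(f"LOD ~= {3.3 * s_y / slope:.1f} ug/L")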

  14. Electrochemical behavior and determination of major phenolic antioxidants in selected coffee samples.

    Science.gov (United States)

    Oliveira-Neto, Jerônimo Raimundo; Rezende, Stefani Garcia; de Fátima Reis, Carolina; Benjamin, Stephen Rathinaraj; Rocha, Matheus Lavorenti; de Souza Gil, Eric

    2016-01-01

    The redox behavior of commercial roasted coffee products was evaluated by electroanalysis; high performance liquid chromatography (HPLC) was used for determination of cinnamic acid markers, the total phenolic content (TPC) was obtained by the Folin-Ciocalteu assay, and the traditional DPPH (2,2-diphenyl-1-picrylhydrazyl) method was used for antioxidant power determination. In turn, an optimized electrochemical index was proposed to estimate the antioxidant power of the different samples, and a strong correlation was found between all methods. The voltammetric profile of all coffee samples was quite similar, presenting the same number of peaks at the same potential values. Minor differences in current levels were in agreement with the total phenolic and major marker contents, as well as with the antioxidant power. Therefore, the electrochemical methods proved to be practical, low cost and very useful for evaluating the antioxidant properties of coffee samples, which is a relevant quality parameter of this widely consumed beverage. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Analysis of selected phthalates in Canadian indoor dust collected using household vacuum and standardized sampling techniques.

    Science.gov (United States)

    Kubwabo, C; Rasmussen, P E; Fan, X; Kosarac, I; Wu, F; Zidek, A; Kuchta, S L

    2013-12-01

    Phthalates have been used extensively as plasticizers to improve the flexibility of polymers, and they also have found many industrial applications. They are ubiquitous in the environment and have been detected in a variety of environmental and biological matrices. The goal of this study was to develop a method for the determination of 17 phthalate esters in house dust. This method involved sonication extraction, sample cleanup using solid phase extraction, and isotope dilution GC/MS/MS analysis. Method detection limits (MDLs) and recoveries ranged from 0.04 to 2.93 μg/g and from 84 to 117%, respectively. The method was applied to the analysis of phthalates in 38 paired household vacuum samples (HD) and fresh dust (FD) samples. HD and FD samples compared well for the majority of phthalates detected in house dust. Data obtained from 126 household dust samples confirmed the historical widespread use of bis(2-ethylhexyl) phthalate (DEHP), with a concentration range of 36 μg/g to 3840 μg/g. Dibutyl phthalate (DBP), benzyl butyl phthalate (BzBP), diisononyl phthalate (DINP), and diisodecyl phthalate (DIDP) were also found in most samples at relatively high concentrations. Another important phthalate, diisobutyl phthalate (DIBP), was detected at a frequency of 98.4% with concentrations ranging from below its MDL of 0.51 μg/g to 69 μg/g. © 2013 Her Majesty the Queen in Right of Canada Indoor Air © 2013 John Wiley & Sons Ltd. Reproduced with the permission of the Minister of Health Canada.

  16. Multivariate genetic analysis of atopy phenotypes in a selected sample of twins

    DEFF Research Database (Denmark)

    Thomsen, S F; Ulrik, C S; Kyvik, K O

    2006-01-01

    , airway hyper-responsiveness (AHR), and positive skin prick test (posSPT) in a sample of adult twins. METHODS: Within a sampling frame of 21,162 twin subjects, 20-49 years of age, from the Danish Twin Registry, a total of 575 subjects (256 intact pairs and 63 single twins), who either themselves and...... traits were estimated and latent factor models of genetic and environmental effects were fitted to the observed data using maximum likelihood methods. RESULTS: The various phenotypic correlations between wheeze, rhinitis, AHR and posSPT were all significant and ranged between 0.50 and 0.86. Traits...

  17. Multivariate genetic analysis of atopy phenotypes in a selected sample of twins

    DEFF Research Database (Denmark)

    Thomsen, SF; Ulrik, Charlotte Suppli; Kyvik, KO

    2006-01-01

    , airway hyper-responsiveness (AHR), and positive skin prick test (posSPT) in a sample of adult twins. METHODS: Within a sampling frame of 21,162 twin subjects, 20-49 years of age, from the Danish Twin Registry, a total of 575 subjects (256 intact pairs and 63 single twins), who either themselves and....../or their co-twins reported a history of asthma at a nationwide questionnaire survey, were clinically examined. Symptoms of wheeze and rhinitis were obtained by interview; airway responsiveness and skin test reactivity were measured using standard techniques. Correlations in liability between the different...

  18. Indoor Air Quality in Selected Samples of Primary Schools in Kuala Terengganu, Malaysia

    OpenAIRE

    Marzuki Ismail

    2010-01-01

    Studies have found that indoor air quality affects humans, especially children and the elderly, more than ambient atmospheric air does. This study aims to investigate indoor air pollutant concentrations in selected vernacular schools with different surrounding human activities in Kuala Terengganu, the administrative and commercial center of Terengganu state. Failure to identify and establish indoor air pollution status can increase the chance of long-term and short-term health problems for...

  19. Top-down attention and selection history in psychopathy: Evidence from a community sample.

    Science.gov (United States)

    Hoppenbrouwers, Sylco S; Van der Stigchel, Stefan; Sergiou, Carmen S; Theeuwes, Jan

    2016-04-01

    Psychopathy is a severe personality disorder, the core of which pertains to callousness and an entitled and grandiose interpersonal style, often accompanied by impulsive and reckless endangerment of oneself and others. The response modulation theory of psychopathy states that psychopathic individuals have difficulty modulating top-down attention to incorporate bottom-up stimuli that may signal important information but are irrelevant to current goals. However, it remains unclear which particular aspects of attention are impaired in psychopathy. Here, we used 2 visual search tasks that selectively tap into bottom-up and top-down attention. In addition, we also looked at intertrial priming, which reflects a separate class of processes that influence attention (i.e., selection history). The research group consisted of 65 participants who were recruited from the community. Psychopathic traits were measured with the Psychopathic Personality Inventory (PPI; Uzieblo, Verschuere, & Crombez, 2007). We found that bottom-up attention was unrelated to psychopathic traits, whereas elevated psychopathic traits were related to deficits in the use of cues to facilitate top-down attention. Further, participants with elevated psychopathic traits were more strongly influenced by their previous response to the target. These results show that attentional deficits in psychopathy are largely confined to top-down attention and selection history. (c) 2016 APA, all rights reserved.

  20. Sample descriptions and summary logs of selected wells within the Hanford Reservation

    Energy Technology Data Exchange (ETDEWEB)

    Summers, W.K.; Hanson, R.T.

    1977-01-01

    This report presents the description and summary logs of cuttings samples from 114 wells and test holes drilled within the Hanford Reservation. Written descriptive matter is included as required, covering color, fossils, trace constituents, total drilled depth, and any other nonstandard features observed.

  1. Use of space-filling curves to select sample locations in natural resource monitoring studies

    Science.gov (United States)

    Andrew Lister; Charles T. Scott

    2009-01-01

    The establishment of several large area monitoring networks over the past few decades has led to increased research into ways to spatially balance sample locations across the landscape. Many of these methods are well documented and have been used in the past with great success. In this paper, we present a method using geographic information systems (GIS) and fractals...
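
    One common way to spatially balance a sample, sketched here on the assumption of a Hilbert curve (the paper's own GIS workflow may differ): map each candidate location to its distance along the curve and take a systematic sample of that ordering, so selected sites spread across the landscape.

        import random

        def hilbert_index(order, x, y):
            """Distance along a Hilbert curve covering a 2**order x 2**order grid."""
            d, s = 0, 2 ** (order - 1)
            while s > 0:
                rx = 1 if x & s else 0
                ry = 1 if y & s else 0
                d += s * s * ((3 * rx) ^ ry)
                if ry == 0:                       # rotate the quadrant
                    if rx == 1:
                        x, y = s - 1 - x, s - 1 - y
                    x, y = y, x
                s //= 2
            return d

        rng = random.Random(1)
        pts = [(rng.randrange(1024), rng.randrange(1024)) for _ in range(5000)]
        ordered = sorted(pts, key=lambda p: hilbert_index(10, *p))
        sample = ordered[::len(ordered) // 50][:50]   # systematic sample of 50 sites
        print(sample[:5])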

  2. ACUMEN DELIVERABLE 5.3: Selection of Samples, Part 1 & 2

    DEFF Research Database (Denmark)

    Wildgaard, Lorna Elizabeth; Larsen, Birger; Schneider, Jesper

    2013-01-01

    undertaken in WoS and Google Scholar using a set of indicators designed for assessment at the individual level. The sample of 64 indicators was previously identified in the review of 114 bibliometric indicators, as presented in Madrid in January 2013. The set of 64 indicators has been reduced to 40 using

  3. Multi-Factor Policy Evaluation and Selection in the One-Sample Situation

    NARCIS (Netherlands)

    C.M. Chen (Chien-Ming)

    2008-01-01

    Firms nowadays need to make decisions under fast information obsolescence. In this paper I deal with one class of decision problems in this situation, called “one-sample” problems: we have finite options and one sample of the multiple criteria with which we evaluate those options.

  4. Risk Factors for Drug Abuse among Nepalese Samples Selected from a Town of Eastern Nepal

    Science.gov (United States)

    Niraula, Surya Raj; Chhetry, Devendra Bahadur; Singh, Girish Kumar; Nagesh, S.; Shyangwa, Pramod Mohan

    2009-01-01

    The study focuses on the serious issue related to the adolescents' and adults' behavior and health. It aims to identify the risk factors for drug abuse from samples taken from a town of Eastern Nepal. This is a matched case-control study. The conditional logistic regression method was adopted for data analysis. The diagnosis cut off was determined…

  5. Sample descriptions and summary logs of selected wells within the Hanford Reservation

    Energy Technology Data Exchange (ETDEWEB)

    Summers, W.K.; Hanson, R.T.

    1977-01-01

    This report presents the description and summary logs of cuttings samples from 114 wells and test holes drilled within the Hanford Reservation. Written descriptive matter is included as required, including color, fossils, trace constituents, total drilled depth, and any other nonstandard features observed.

  6. Feature selection and classification of mechanical fault of an induction motor using random forest classifier

    Directory of Open Access Journals (Sweden)

    Raj Kumar Patel

    2016-09-01

    Full Text Available Fault detection and diagnosis is the most important technology in condition-based maintenance (CBM) systems for rotating machinery. This paper experimentally explores the development of a random forest (RF) classifier, a recently emerged machine learning technique, for multi-class mechanical fault diagnosis in the bearing of an induction motor. First, vibration signals are collected from the bearing using an accelerometer sensor. Parameters are extracted from the vibration signal in the form of statistical features and used as input features for the classification problem. These features are classified with RF classifiers for a four-class problem. The prime objective of this paper is to evaluate the effectiveness of the random forest classifier for bearing fault diagnosis. The obtained results are compared with an existing artificial intelligence technique, the neural network. The analysis of the results shows better performance and higher accuracy than the existing technique.
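
    A sketch of that pipeline: time-domain statistics from vibration windows feed a random forest. The synthetic signals and four hypothetical bearing conditions below only stand in for real accelerometer data.

        import numpy as np
        from scipy.stats import kurtosis, skew
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        def features(w):
            """Common statistical features of one vibration window."""
            return [w.mean(), w.std(), np.sqrt(np.mean(w ** 2)),   # mean, std, RMS
                    kurtosis(w), skew(w), w.max() - w.min()]       # shape, range

        rng = np.random.default_rng(0)
        X, y = [], []
        for label in range(4):                       # four bearing conditions
            for _ in range(60):
                sig = rng.normal(scale=1 + 0.3 * label, size=2048)  # fake vibration
                X.append(features(sig))
                y.append(label)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        print("CV accuracy:", cross_val_score(clf, np.array(X), y, cv=5).mean())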

  7. Synthesis and Evaluation of a Molecularly Imprinted Polymer for Selective Solid-Phase Extraction of Irinotecan from Human Serum Samples

    Directory of Open Access Journals (Sweden)

    Isabelle Lefebvre-Tournier

    2012-02-01

    Full Text Available A molecularly imprinted polymer (MIP) was synthesized by non-covalent imprinting polymerization using irinotecan as the template. Methacrylic acid and 4-vinylpyridine were selected as functional monomers. An optimized procedure coupled to LC-PDA analysis was developed for the selective solid-phase extraction of irinotecan from various organic media. A specific capacity of 0.65 µmol·g−1 was determined for the MIP. The high specificity of this MIP was demonstrated by studying the retention behaviour of two related compounds, camptothecin and SN-38. This support was applied to the extraction of irinotecan from human serum samples.

  8. The AlSi10Mg samples produced by selective laser melting: single track, densification, microstructure and mechanical behavior

    Science.gov (United States)

    Wei, Pei; Wei, Zhengying; Chen, Zhen; Du, Jun; He, Yuyang; Li, Junfeng; Zhou, Yatong

    2017-06-01

    The densification behavior and attendant microstructural characteristics of the selective laser melting (SLM)-processed AlSi10Mg alloy, as affected by the processing parameters, were systematically investigated. Samples consisting of a single track were produced by SLM to study the influence of laser power and scanning speed on the surface morphologies of scan tracks. Additionally, bulk samples were produced to investigate the influence of the laser power, scanning speed, and hatch spacing on the densification level and the resultant microstructure. The experimental results showed that the porosity level of the SLM-processed samples was significantly governed by the energy density of the laser beam and the hatch spacing. The tensile properties of SLM-processed samples, and the attendant fracture surfaces, can be improved by decreasing the level of porosity. The microstructure of SLM-processed samples consists of a supersaturated Al-rich cellular structure along with eutectic Al/Si situated at the cellular boundaries. The Si content in the cellular boundaries increases with increasing laser power and decreasing scanning speed. The hardness of SLM-processed samples was significantly improved by this fine microstructure compared with cast samples. Moreover, the hardness of SLM-processed samples at track overlaps was lower than the hardness observed at track cores.
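
    The record does not state how energy density was defined; a volumetric form commonly used in the SLM literature (an assumption here, not the authors' stated definition), with laser power P, scanning speed v, hatch spacing h and layer thickness t, is

        E_v = \frac{P}{v \, h \, t} \qquad \mathrm{[J\,mm^{-3}]}

    so that, for fixed power, raising the scanning speed or widening the hatch spacing lowers the energy delivered per unit volume, which is consistent with the porosity trends reported above.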

  9. High Field In Vivo 13C Magnetic Resonance Spectroscopy of Brain by Random Radiofrequency Heteronuclear Decoupling and Data Sampling

    Science.gov (United States)

    Li, Ningzhi; Li, Shizhe; Shen, Jun

    2017-06-01

    In vivo 13C magnetic resonance spectroscopy (MRS) is a unique and effective tool for studying dynamic human brain metabolism and the cycling of neurotransmitters. One of the major technical challenges for in vivo 13C-MRS is the high radiofrequency (RF) power necessary for heteronuclear decoupling. In the common practice of in vivo 13C-MRS, alkanyl carbons are detected in the spectral range of 10-65 ppm. The amplitude of decoupling pulses has to be significantly greater than the large one-bond 1H-13C scalar coupling (1JCH = 125-145 Hz). Two main proton decoupling methods have been developed: broadband stochastic decoupling and coherent composite or adiabatic pulse decoupling (e.g., WALTZ); the latter is widely used because of its efficiency and superb performance under inhomogeneous B1 fields. Because the RF power required for proton decoupling increases quadratically with field strength, in vivo 13C-MRS using coherent decoupling is often limited to low magnetic fields, in order to keep RF power deposition within the safety limits set by the US Food and Drug Administration (FDA). Alternatively, carboxylic/amide carbons are coupled to protons via weak long-range 1H-13C scalar couplings, which can be decoupled using low-power broadband stochastic decoupling. Recently, the carboxylic/amide 13C-MRS technique using low-power random RF heteronuclear decoupling was safely applied to human brain studies at 7T. Here, we review the two major decoupling methods and the carboxylic/amide 13C-MRS low-power decoupling strategy. Further decreases in RF power deposition by frequency-domain windowing and time-domain random under-sampling are also discussed. Low RF power decoupling opens the possibility of performing in vivo 13C experiments on the human brain at very high magnetic fields (such as 11.7T), where signal-to-noise ratio as well as spatial and temporal spectral resolution are more favorable than at lower fields.

  10. Rationale, design, methodology and sample characteristics for the Vietnam pre-conceptual micronutrient supplementation trial (PRECONCEPT): a randomized controlled study

    Directory of Open Access Journals (Sweden)

    Nguyen Phuong H

    2012-10-01

    Full Text Available Abstract Background Low birth weight and maternal anemia remain intractable problems in many developing countries. The adequacy of the current strategy of providing iron-folic acid (IFA) supplements only during pregnancy has been questioned given many women enter pregnancy with poor iron stores, the substantial micronutrient demand by maternal and fetal tissues, and programmatic issues related to timing and coverage of prenatal care. Weekly IFA supplementation for women of reproductive age (WRA) improves iron status and reduces the burden of anemia in the short term, but few studies have evaluated subsequent pregnancy and birth outcomes. The PRECONCEPT trial aims to determine whether pre-pregnancy weekly IFA or multiple micronutrient (MM) supplementation will improve birth outcomes and maternal and infant iron status compared to the current practice of prenatal IFA supplementation only. This paper provides an overview of study design, methodology and sample characteristics from baseline survey data and key lessons learned. Methods/design We have recruited 5011 WRA in a double-blind stratified randomized controlled trial in rural Vietnam and randomly assigned them to receive weekly supplements containing either: (1) 2800 μg folic acid, (2) 60 mg iron and 2800 μg folic acid, or (3) MM. Women who become pregnant receive daily IFA, and are being followed through pregnancy, delivery, and up to three months post-partum. Study outcomes include birth outcomes and maternal and infant iron status. Data are being collected on household characteristics, maternal diet and mental health, anthropometry, infant feeding practices, morbidity and compliance. Discussion The study is timely and responds to the WHO Global Expert Consultation which identified the need to evaluate the long term benefits of weekly IFA and MM supplementation in WRA. Findings will generate new information to help guide policy and programs designed to reduce the burden of anemia in women and

  11. The use of data from sampling for bacteriology for genetic selection against clinical mastitis.

    Science.gov (United States)

    Ouweltjes, W; Windig, J J; de Jong, G; Lam, T J G M; ten Napel, J; de Haas, Y

    2008-12-01

    One breeding objective of Dutch cattle breeders is to improve genetic resistance against clinical and subclinical mastitis. Because of a lack of direct mastitis information, udder health breeding values are based on indirect traits. Inclusion of direct information on clinical mastitis could improve the reliability of breeding values. The aim of this study was to investigate whether data from milk samples sent in for bacteriology are a potential source of information on the occurrence of mastitis that may be used in animal breeding, and if so, how these data can be used. Although there are 2 separate flows of milk samples for bacteriology in the Netherlands, it was not considered necessary to account for the origin of the samples. In both flows, the majority of the samples are visually normal and flow-specific traits are highly correlated. Therefore, information from these flows was combined for genetic analysis. Nearly two-thirds of the bacteriology data could be linked to milk recording and pedigree records. Relatively few farmers submitted samples for bacteriology between January 1, 2003, and March 31, 2006. Their herds had, on average, greater milk production and lower cell counts than herds for which no samples were taken. However, the range and variation within both groups of herds for these variables was similar and there was a large overlap in sires used within both groups. Whether or not samples were taken for bacteriology turned out to be a potentially useful indicator for clinical mastitis at the cow level, because this trait had a strong positive genetic correlation with clinical mastitis registered by farmers (0.84 or 0.89, depending on the data set) and similar heritability (2%) and genetic variation. Also, genetic correlations of bacteriology with SCC traits were similar to those for farmer-registered clinical mastitis. An important advantage of these bacteriology data is that they are already collected routinely and stored in a central database in the Netherlands; this is not

  12. Use of hyaluronan in the selection of sperm for intracytoplasmic sperm injection (ICSI): significant improvement in clinical outcomes--multicenter, double-blinded and randomized controlled trial.

    Science.gov (United States)

    Worrilow, K C; Eid, S; Woodhouse, D; Perloe, M; Smith, S; Witmyer, J; Ivani, K; Khoury, C; Ball, G D; Elliot, T; Lieberman, J

    2013-02-01

    and 247 participants were randomized to the NP group. Of the 318 patients stratified to the I-HB ≤ 65% arm, 164 participants were randomized to the control group and 154 participants were randomized to the HYAL group. HYAL patients with an F-HB score ≤ 65% demonstrated an IR of 37.4% compared with 30.7% for control [n = 63, 58, P > 0.05, (95% CI of the difference -7.7 to 21.3)]. In addition, the CPR associated with patients randomized to the HYAL group was 50.8% when compared with 37.9% for those randomized to the control group (n = 63, 58, P > 0.05). The 12.9% difference was associated with a risk ratio (RR) of 1.340 (RR 95% CI 0.89-2.0). HYAL patients with I-HB and F-HB scores ≤ 65% revealed a statistically significant reduction in their PLR (I-HB: 3.3 versus 15.1%, n = 73, 60, P = 0.021, RR of 0.22 (RR 95% CI 0.05-0.96) (F-HB: 0.0%, 18.5%, n = 27, 32, P = 0.016, RR not applicable due to 0.0% value) over control patients. The study was originally planned to have 200 participants per arm providing 86.1% power to detect an increase in CPR from 35 to 50% at α = 0.05 but was stopped early for financial reasons. As a pilot study had demonstrated that sperm preparation protocols may increase the HB score, the design of the current study incorporated a priori collection and analysis of the data by both the I-HB and the F-HB scores. Analysis by both the I-HB and F-HB score acknowledged the potential impact of sperm preparation protocols. Selection bias was controlled by randomization. Geographic and seasonal bias was controlled by recruiting from 10 geographically unique sites and by sampling over a 2-year period. The potential for population effect was controlled by adjusting for higher prevalence rates of >65% I-HB that naturally occur by adding the NP arm and to concurrently recruit >65% and ≤ 65% I-HB subjects. Monitoring and site audits occurred regularly to ensure standardization of data collection, adherence to the study protocol and subject recruitment

  13. Selective nerve root blocks vs. caudal epidural injection for single level prolapsed lumbar intervertebral disc - A prospective randomized study.

    Science.gov (United States)

    Singh, Sudhir; Kumar, Sanjiv; Chahal, Gaurav; Verma, Reetu

    2017-01-01

    Chronic lumbar radiculopathy has a lifetime prevalence of 5.3% in men and 3.7% in women. It usually resolves spontaneously, but up to 30% of cases have pronounced symptoms even after one year. A prospective randomized single-blind study was conducted to compare the efficacy of caudal epidural steroid injection and selective nerve root block in the management of pain and disability in cases of lumbar disc herniation. Eighty patients with confirmed single-level lumbar disc herniation were equally divided into two groups by a computer-generated random allocation method: (a) a caudal epidural group and (b) a selective nerve root block (SNRB) group. The caudal group received three injections of steroid mixed with local anesthetic, while the selective nerve root block group received a single injection of steroid mixed with local anesthetic agent. Patients were assessed for pain relief and reduction in disability. In the SNRB group, pain was reduced by more than 50% up to 6 months, while in the caudal group a more than 50% reduction of pain was maintained till 1 year. The reduction in ODI in the SNRB group was 52.8% at 3 months, 48.6% at 6 months, and 46.7% at 1 year, while in the caudal group the improvement was 59.6%, 64.6%, 65.1%, and 65.4% at the corresponding follow-up periods, respectively. Caudal epidural block is an easy and safe method with better pain relief and improvement in functional disability than selective nerve root block. Selective nerve root block injection is technically more demanding and has to be given by a skilled anesthetist.

  14. Assessment of selected contaminants in streambed- and suspended-sediment samples collected in Bexar County, Texas, 2007-09

    Science.gov (United States)

    Wilson, Jennifer T.

    2011-01-01

    Elevated concentrations of sediment-associated contaminants are typically associated with urban areas such as San Antonio, Texas, in Bexar County, the seventh most populous city in the United States. This report describes an assessment of selected sediment-associated contaminants in samples collected in Bexar County from sites on the following streams: Medio Creek, Medina River, Elm Creek, Martinez Creek, Chupaderas Creek, Leon Creek, Salado Creek, and San Antonio River. During 2007-09, the U.S. Geological Survey periodically collected surficial streambed-sediment samples during base flow and suspended-sediment (large-volume suspended-sediment) samples from selected streams during stormwater runoff. All sediment samples were analyzed for major and trace elements and for organic compounds including halogenated organic compounds and polycyclic aromatic hydrocarbons (PAHs). Selected contaminants in streambed and suspended sediments in watersheds of the eight major streams in Bexar County were assessed by using a variety of methods—observations of occurrence and distribution, comparison to sediment-quality guidelines and data from previous studies, statistical analyses, and source indicators. Trace element concentrations were low compared to the consensus-based sediment-quality guidelines threshold effect concentration (TEC) and probable effect concentration (PEC). Trace element concentrations were greater than the TEC in 28 percent of the samples and greater than the PEC in 1.5 percent of the samples. Chromium concentrations exceeded sediment-quality guidelines more frequently than concentrations of any other constituents analyzed in this study (greater than the TEC in 69 percent of samples and greater than the PEC in 8 percent of samples). Mean trace element concentrations generally are lower in Bexar County samples compared to concentrations in samples collected during previous studies in the Austin and Fort Worth, Texas, areas, but considering the relatively

  15. Multivariate genetic analysis of atopy phenotypes in a selected sample of twins

    DEFF Research Database (Denmark)

    Thomsen, SF; Ulrik, Charlotte Suppli; Kyvik, KO

    2006-01-01

    , airway hyper-responsiveness (AHR), and positive skin prick test (posSPT) in a sample of adult twins. METHODS: Within a sampling frame of 21,162 twin subjects, 20-49 years of age, from the Danish Twin Registry, a total of 575 subjects (256 intact pairs and 63 single twins), who either themselves and...... that showed highest genetic correlations were wheeze-rhinitis (rho(A)=0.95), wheeze-AHR (rho(A)=0.85) and rhinitis-posSPT (rho(A)=0.92), whereas lower genetic correlations were observed for rhinitis-AHR (rho(A)=0.43) and AHR-posSPT (rho(A)=0.59). Traits with a high degree of environmental sharing were...

  16. Collecting Genetic Samples in Population Wide (Panel) Surveys: Feasibility, Nonresponse and Selectivity

    OpenAIRE

    Matthias Schonlau; Martin Reuter; Juergen Schupp; Christian Montag; Bernd Weber; Thomas Dohmen; Nico A. Siegel; Uwe Sunde; Wagner, Gert G.; Armin Falk

    2010-01-01

    "Collecting biomarkers as part of general purpose surveys offers scientists - and social scientists in particular - the ability to study biosocial phenomena, e.g. the relation between genes and human behavior. The authors explore the feasibility of collecting buccal cells for genetic analyses with normal interviewers as part of a pretest for the German Socio-economic Panel Study (SOEP) using a probability sample. They introduce a new non-invasive technique for collecting cell material for gen...

  17. Suitability of selected free-gas and dissolved-gas sampling containers for carbon isotopic analysis.

    Science.gov (United States)

    Eby, P; Gibson, J J; Yi, Y

    2015-07-15

    Storage trials were conducted for 2 to 3 months using a hydrocarbon and carbon dioxide gas mixture with known carbon isotopic composition to simulate typical hold times for gas samples prior to isotopic analysis. A range of containers (both pierced and unpierced) was periodically sampled to test for δ(13)C isotopic fractionation. Seventeen containers were tested for free-gas storage (20°C, 1 atm pressure) and 7 containers were tested for dissolved-gas storage, the latter prepared by bubbling free gas through tap water until saturated (20°C, 1 atm) and then preserved to avoid biological activity by acidifying to pH 2 with phosphoric acid and stored in the dark at 5°C. Samples were extracted using valves or by piercing septa, and then introduced into an isotope ratio mass spectrometer for compound-specific δ(13)C measurements. For free gas, stainless steel canisters and crimp-top glass serum bottles with butyl septa were most effective at preventing isotopic fractionation (pierced and unpierced), whereas silicone and PTFE-butyl septa allowed significant isotopic fractionation. FlexFoil and Tedlar bags were found to be effective only for storage of up to 1 month. For dissolved gas, crimp-top glass serum bottles with butyl septa were again effective, whereas silicone and PTFE-butyl were not. FlexFoil bags were reliable for up to 2 months. Our results suggest a range of preferred containers as well as several that did not perform very well for isotopic analysis. Overall, the results help establish better QA/QC procedures to avoid isotopic fractionation when storing environmental gas samples. Recommended containers for air transportation include steel canisters and glass serum bottles with butyl septa (pierced and unpierced). Copyright © 2015 John Wiley & Sons, Ltd.

  18. New procedure of selected biogenic amines determination in wine samples by HPLC

    Energy Technology Data Exchange (ETDEWEB)

    Piasta, Anna M.; Jastrzębska, Aneta, E-mail: aj@chem.uni.torun.pl; Krzemiński, Marek P.; Muzioł, Tadeusz M.; Szłyk, Edward

    2014-06-27

    Highlights: • We proposed a new procedure for derivatization of biogenic amines. • The NMR and XRD analyses confirmed the purity and uniqueness of the derivatives. • Concentrations of biogenic amines in wine samples were analyzed by RP-HPLC. • Sample contamination and derivatization reaction interferences were minimized. - Abstract: A new procedure for determination of biogenic amines (BA): histamine, phenethylamine, tyramine and tryptamine, based on the derivatization reaction with 2-chloro-1,3-dinitro-5-(trifluoromethyl)-benzene (CNBF), is proposed. The amine derivatives with CNBF were isolated and characterized by X-ray crystallography and ¹H, ¹³C, ¹⁹F NMR spectroscopy in solution. The novelty of the procedure lies in the pure and well-characterized products of the amine derivatization reaction. The method was applied for the simultaneous analysis of the above-mentioned biogenic amines in wine samples by reversed-phase high-performance liquid chromatography. The procedure revealed correlation coefficients (R²) between 0.9997 and 0.9999, and linear ranges of 0.10-9.00 mg L⁻¹ (histamine), 0.10-9.36 mg L⁻¹ (tyramine), 0.09-8.64 mg L⁻¹ (tryptamine) and 0.10-8.64 mg L⁻¹ (phenethylamine), whereas accuracy was 97%-102% (recovery test). The detection limit of biogenic amines in wine samples was 0.02-0.03 mg L⁻¹, whereas the quantification limit ranged from 0.05 to 0.10 mg L⁻¹. The variation coefficients for the analyzed amines ranged between 0.49% and 3.92%. The obtained BA derivatives enhanced separation of the analytes on chromatograms owing to inhibition of the hydrolysis reaction and reduced by-product formation.

  19. Authorship and sampling practice in selected biomechanics and sports science journals.

    Science.gov (United States)

    Knudson, Duane V

    2011-06-01

    In some biomedical sciences, changes in patterns of collaboration and authorship have complicated the assignment of credit and responsibility for research. It is unclear if this problem of "promiscuous coauthorship" or "hyperauthorship" (defined as six or more authors) is also apparent in the applied research disciplines within sport and exercise science. This study documented the authorship and sampling patterns of original research reports in three applied biomechanics journals (Clinical Biomechanics, Journal of Applied Biomechanics, and Sports Biomechanics) and five similar subdisciplinary journals within sport and exercise science (International Journal of Sports Physiology and Performance, Journal of Sport Rehabilitation, Journal of Teaching Physical Education, Measurement in Physical Education and Exercise Sciences, and Motor Control). Original research reports from the 2009 volumes of these biomechanics and sport and exercise journals were reviewed. Single authorship of papers was rare (2.6%) in these journals, with the mean number of authors ranging from 2.7 to 4.5. Sample sizes and the ratio of sample to authors varied widely, and these variables tended not to be associated with number of authors. Original research reports published in these journals in 2009 tended to be published by small teams of collaborators, so currently there may be few problems with promiscuous coauthorship in these subdisciplines of sport and exercise science.

  20. Métodos de amostragem para estimação da cobertura vacinal Methods of sample selection for estimates of vaccination coverage

    Directory of Open Access Journals (Sweden)

    Eunice Pinho de Castro Silva

    1986-10-01

    Full Text Available Two sampling methods are briefly presented for selecting a sample of children of a given age range, living in a given geographical area of interest, in order to estimate vaccination coverage: the method of R.H. Henderson and T. Sundaresan, and the method of the Department of Epidemiology and Quantitative Methods in Health of the National School of Public Health. A third sampling method is then proposed. The first method (Henderson and Sundaresan) is used in the Expanded Programme on Immunization, where it is considered efficient, simple and inexpensive. The second and the new method, which modify the first and constitute alternatives to it, aim to reduce the mean squared error of the estimates, although they are less simple and more expensive. Since the estimator employed for estimating vaccination coverage presupposes a self-weighting sample, the main concern of the method proposed here was to provide a sampling scheme under which every child of the studied age group living in the area of interest has an equal probability of selection, regardless of any other condition. The geographical area of interest is divided into parts which are used as primary sampling units (PSUs). First method: 30 PSUs are selected with probability proportional to the population living in the PSU; a starting point ("household") is selected at random within each selected PSU; selection of 7 children within each of the PSUs begins with the starting household, and then continues to the next

  1. Comparing attitudes about legal sanctions and teratogenic effects for cocaine, alcohol, tobacco and caffeine: A randomized, independent samples design

    Directory of Open Access Journals (Sweden)

    Alanis Kelly L

    2006-02-01

    Full Text Available Abstract Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance having long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for licit substances, alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular crack baby stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test showed that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure

  2. Comparing attitudes about legal sanctions and teratogenic effects for cocaine, alcohol, tobacco and caffeine: A randomized, independent samples design

    Science.gov (United States)

    Ginsburg, Harvey J; Raffeld, Paul; Alanis, Kelly L; Boyce, Angela S

    2006-01-01

    Background Establishing more sensible measures to treat cocaine-addicted mothers and their children is essential for improving U.S. drug policy. Favorable post-natal environments have moderated potential deleterious prenatal effects. However, since cocaine is an illicit substance having long been demonized, we hypothesized that attitudes toward prenatal cocaine exposure would be more negative than for licit substances, alcohol, nicotine and caffeine. Further, media portrayals about long-term outcomes were hypothesized to influence viewers' attitudes, measured immediately post-viewing. Reducing popular crack baby stigmas could influence future policy decisions by legislators. In Study 1, 336 participants were randomly assigned to 1 of 4 conditions describing hypothetical legal sanction scenarios for pregnant women using cocaine, alcohol, nicotine or caffeine. Participants rated legal sanctions against pregnant women who used one of these substances and risk potential for developing children. In Study 2, 139 participants were randomly assigned to positive, neutral and negative media conditions. Immediately post-viewing, participants rated prenatal cocaine-exposed or non-exposed teens for their academic performance and risk for problems at age 18. Results Participants in Study 1 imposed significantly greater legal sanctions for cocaine, perceiving prenatal cocaine exposure as more harmful than alcohol, nicotine or caffeine. A one-way ANOVA for independent samples showed significant differences beyond the .0001 level. A post-hoc Scheffé test showed that cocaine was rated differently from the other substances. In Study 2, a one-way ANOVA for independent samples was performed on difference scores for the positive, neutral or negative media conditions about prenatal cocaine exposure. Participants in the neutral and negative media conditions estimated significantly lower grade point averages and more problems for the teen with prenatal cocaine exposure than for the non-exposed teen
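
    The analysis named in both versions of this record, a one-way ANOVA for independent samples on difference scores, is simple to set up. A minimal sketch with placeholder difference scores (invented here; not the study's data) follows.

    ```python
    import numpy as np
    from scipy.stats import f_oneway

    rng = np.random.default_rng(0)
    # Hypothetical difference scores for the three media conditions (Study 2).
    positive = rng.normal(0.0, 1.0, 45)
    neutral = rng.normal(-0.8, 1.0, 47)
    negative = rng.normal(-0.9, 1.0, 47)

    f_stat, p_value = f_oneway(positive, neutral, negative)  # one-way ANOVA
    print(f"F = {f_stat:.2f}, p = {p_value:.4g}")
    ```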

  3. Selective removal of dicamba from aqueous samples using molecularly imprinted polymer nanospheres

    Directory of Open Access Journals (Sweden)

    Tooraj Beyki

    2016-07-01

    Full Text Available For the first time, uniform molecularly imprinted polymer (MIP) nanoparticles were prepared using dicamba as a template. The MIP nanoparticles were successfully synthesized by precipitation polymerization using methacrylic acid (MAA) as functional monomer, trimethylolpropane trimethacrylate (TRIM) as cross-linker and acetonitrile as porogen. The produced polymers were characterized by differential scanning calorimetry (DSC) and their morphology was precisely examined by scanning electron microscopy (SEM). The MIP nanospheres were obtained with the average diameter of 234 nm. Batch-wise guest binding experiments were carried out to determine the removal efficiency of the produced MIP nanoparticles towards the template molecule in aqueous solutions. The MIP showed outstanding affinity toward dicamba in aqueous solution with maximum removal efficiency of 87.5% at 300 mg L⁻¹ of dicamba solution. The MIP exhibited higher adsorption efficiency compared with the corresponding non-imprinted polymer (NIP) as well as outstanding selectivity towards dicamba relative to the template analog in an aqueous solution. Moreover, effects of pH on removal efficiency and selectivity of MIP were evaluated in detail.

  4. Indoor Air Quality in Selected Samples of Primary Schools in Kuala Terengganu, Malaysia

    Directory of Open Access Journals (Sweden)

    Marzuki Ismail

    2010-01-01

    Full Text Available Studies have found that indoor air quality affects humans, especially children and the elderly, more than ambient atmospheric air does. This study aims to investigate indoor air pollutant concentrations in selected vernacular schools with different surrounding human activities in Kuala Terengganu, the administrative and commercial center of Terengganu state. Failure to identify and establish indoor air pollution status can increase the chance of long-term and short-term health problems for these young students and staff, reduce the productivity of teachers, and degrade the youngsters' learning environment and comfort. Indoor air quality (IAQ) parameters in three primary schools were measured during the monsoon season of November 2008 for the purposes of assessing ventilation rates, levels of particulate matter (PM10) and air quality differences between schools. In each classroom, carbon monoxide (CO), CO2, air velocity, relative humidity and temperature were measured during school hours, and a complete walkthrough survey was completed. Results show a statistically significant difference for the five IAQ parameters between the three schools at the 95.0% confidence level. We conclude our findings by confirming the important influence of surrounding human activities on indoor concentrations of pollutants in selected vernacular schools in Kuala Terengganu.

  5. A description of the demographic characteristics of the New Zealand non-commercial horse population with data collected using a generalised random-tessellation stratified sampling design.

    Science.gov (United States)

    Rosanowski, S M; Cogger, N; Rogers, C W; Benschop, J; Stevenson, M A

    2012-12-01

    We conducted a cross-sectional survey to determine the demographic characteristics of non-commercial horses in New Zealand. A sampling frame of properties with non-commercial horses was derived from the national farms database, AgriBase™. Horse properties were stratified by property size and a generalised random-tessellated stratified (GRTS) sampling strategy was used to select properties (n=2912) to take part in the survey. The GRTS sampling design allowed for the selection of properties that were spatially balanced relative to the distribution of horse properties throughout the country. The registered decision maker of the property, as identified in AgriBase™, was sent a questionnaire asking them to describe the demographic characteristics of horses on the property, including the number and reason for keeping horses, as well as information about other animals kept on the property and the proximity of boundary neighbours with horses. The response rate to the survey was 38% (1044/2912) and the response rate was not associated with property size or region. A total of 5322 horses were kept for recreation, competition, racing, breeding, stock work, or as pets. The reasons for keeping horses and the number and class of horses varied significantly between regions and by property size. Of the properties sampled, less than half kept horses that could have been registered with Equestrian Sports New Zealand or either of the racing codes. Of the respondents that reported knowing whether their neighbours had horses, 58.6% (455/776) of properties had at least one boundary neighbour that kept horses. The results of this study have important implications for New Zealand, which has an equine population that is naïve to many equine diseases considered endemic worldwide. The ability to identify, and apply accurate knowledge of the population at risk to infectious disease control strategies would lead to more effective strategies to control and prevent disease spread during an
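
    GRTS selection itself needs dedicated software, but the stratification it builds on is easy to illustrate. The sketch below uses plain stratified random sampling over a hypothetical frame (invented field names and stratum allocations); GRTS would additionally order units so the selected sample is spatially balanced.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    # Hypothetical sampling frame of horse properties, stratified by size class.
    frame = pd.DataFrame({
        "property_id": np.arange(20000),
        "size_class": rng.choice(["<1 ha", "1-10 ha", ">10 ha"], size=20000),
    })
    allocation = {"<1 ha": 1000, "1-10 ha": 1200, ">10 ha": 712}  # n = 2912 total

    # Draw a simple random sample of the allocated size within each stratum.
    parts = [frame[frame["size_class"] == s].sample(n=k, random_state=1)
             for s, k in allocation.items()]
    sample = pd.concat(parts)
    print(sample["size_class"].value_counts())
    ```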

  6. Selective trace enrichment of chlorotriazine pesticides from natural waters and sediment samples using terbuthylazine molecularly imprinted polymers

    Science.gov (United States)

    Ferrer, I.; Lanza, F.; Tolokan, A.; Horvath, V.; Sellergren, B.; Horvai, G.; Barcelo, D.

    2000-01-01

    Two molecularly imprinted polymers were synthesized using either dichloromethane or toluene as the porogen and terbuthylazine as the template and were used as solid-phase extraction cartridges for the enrichment of six chlorotriazines (deisopropylatrazine, deethylatrazine, simazine, atrazine, propazine, and terbuthylazine) in natural water and sediment samples. The extracted samples were analyzed by liquid chromatography/diode array detection (LC/DAD). Several washing solvents, as well as different volumes, were tested for their ability to remove the matrix components nonspecifically adsorbed on the sorbents. This cleanup step was shown to be of prime importance to the successful extraction of the pesticides from the aqueous samples. The optimal analytical conditions were obtained when the MIP imprinted using dichloromethane was the sorbent, 2 mL of dichloromethane was used in the washing step, and the preconcentrated analytes were eluted with 8 mL of methanol. The recoveries were higher than 80% for all the chlorotriazines except for propazine (53%) when 50- or 100-mL groundwater samples, spiked at 1 µg/L level, were analyzed. The limits of detection varied from 0.05 to 0.2 µg/L when preconcentrating a 100-mL groundwater sample. Natural sediment samples from the Ebre Delta area (Tarragona, Spain) containing atrazine and deethylatrazine were Soxhlet extracted and analyzed by the methodology developed in this work. No significant interferences from the sample matrix were noticed, thus indicating good selectivity of the MIP sorbents used.

  7. Development of a selective molecularly imprinted polymer-based solid-phase extraction for indomethacin from water samples.

    Science.gov (United States)

    Yang, Tao; Li, Ya-Hui; Wei, Shuang; Li, Yuan; Deng, Anping

    2008-08-01

    A selective molecularly imprinted solid-phase extraction (MISPE) for indomethacin (IDM) from water samples was developed. Using IDM as template molecule, acrylamide (AM) or methacrylic acid (MAA) as functional monomer, ethylene dimethacrylate (EDMA) as crosslinker, and bulk or suspension polymerization as the synthetic method, three molecularly imprinted polymers (MIPs) were synthesized and characterized with a rebinding experiment. It was found that the MIP of AM-EDMA produced by bulk polymerization showed the highest binding capacity for IDM, and so it was chosen for subsequent experiments, such as those testing the selectivity and recognition binding sites. Scatchard analysis revealed that at least two kinds of binding sites formed in the MIP, with the dissociation constants of 7.8 µmol L⁻¹ and 127.2 µmol L⁻¹, respectively. Besides IDM, three structurally related compounds (acemetacin, oxaprozin and ibuprofen) were employed for selectivity tests. It was observed that the MIP exhibited the highest selective rebinding to IDM. Accordingly, the MIP was used as a solid-phase extraction sorbent for the extraction and enrichment of IDM in water samples. The extraction conditions of the MISPE column for IDM were optimized to be: chloroform or water as loading solvent, chloroform with 20% acetonitrile as washing solution, and methanol as eluting solvent. Water samples with or without spiking were extracted by the MISPE column and analyzed by HPLC. No detectable IDM was observed in tap water and the content of IDM in a river water sample was found to be 1.8 ng mL⁻¹. The extraction efficiencies of the MISPE column for IDM in spiked tap and river water were acceptable (87.2% and 83.5%, respectively), demonstrating the feasibility of the prepared MIP for IDM extraction.

  8. Location and Age Database for Selected Foraminifer Samples Collected by Exxon Petroleum Geologists in California

    Science.gov (United States)

    Brabb, Earl E.; Parker, John M.

    2003-01-01

    Most of the geologic maps published for central California before 1960 were made without the benefit of age determinations from microfossils. The ages of Cretaceous and Tertiary rocks in the mostly poorly exposed and structurally complex sedimentary rocks represented in the Coast Ranges are critical in determining stratigraphic succession or lack of it, and in determining whether the juxtaposition of similar appearing but different age formations means a fault is present. Since the 1930’s, at least, oil company geologists have used microfossils to assist them in geologic mapping and in determining the environments of deposition of the sediment containing the microfossils. This information has been so confidential that some companies even coded the names of foraminifers to prevent disclosure. In the past 20 years, however, the attitude of petroleum companies about this information has changed, and many of the formerly confidential materials and reports are now available. We report here on 1,964 Exxon foraminifer samples mostly from surface localities in the San Francisco Bay region, and elsewhere in California. Most but not all the samples were plotted on U. S. Geological Survey (USGS) 7.5’ topographic maps or on obsolete USGS 15’ maps. The information from the slides can be used to update geologic maps prepared without the benefit of microfossil data, to analyze the depth and temperature of ocean water covering parts of California during the Mesozoic and Cenozoic Eras, and for solving nomenclature and other scientific problems. A similar report on more than 30,000 slides for surface samples collected by Chevron geologists has been released (Brabb and Parker, 2003), and another report provides information on slides for more than 2000 oil test wells in Northern California (Brabb, Powell, and Brocher, 2001).

  9. Selection and application of ssDNA aptamers to detect active TB from sputum samples.

    Directory of Open Access Journals (Sweden)

    Lia S Rotherham

    Full Text Available BACKGROUND: Despite the enormous global burden of tuberculosis (TB), conventional approaches to diagnosis continue to rely on tests that have major drawbacks. The improvement of TB diagnostics relies not only on good biomarkers, but also upon accurate detection methodologies. The 10-kDa culture filtrate protein (CFP-10) and the 6-kDa early secreted antigen target (ESAT-6) are potent T-cell antigens that are recognised by over 70% of TB patients. Aptamers, a novel sensitive and specific class of detection molecules, have hitherto not been raised against these relatively TB-specific antigens. METHODS: DNA aptamers that bind to the CFP-10.ESAT-6 heterodimer were isolated. To assess their affinity and specificity to the heterodimer, aptamers were screened using an enzyme-linked oligonucleotide assay (ELONA). One suitable aptamer was evaluated by ELONA using sputum samples obtained from 20 TB patients and 48 control patients (those with latent TB infection, symptomatic non-TB patients, and healthy laboratory volunteers). Culture positivity for Mycobacterium tuberculosis (Mtb) served as the reference standard. Accuracy and cut-points were evaluated using ROC curve analysis. RESULTS: Twenty-four out of the 66 aptamers that were isolated bound significantly (p < 0.05) to the CFP-10.ESAT-6 heterodimer and six were further evaluated. Their dissociation constant (KD) values were in the nanomolar range. One aptamer, designated CSIR 2.11, was evaluated using sputum samples. CSIR 2.11 had sensitivity and specificity of 100% and 68.75% using Youden's index and 35% and 95%, respectively, using a rule-in cut-point. CONCLUSION: This preliminary proof-of-concept study suggests that a diagnosis of active TB using anti-CFP-10.ESAT-6 aptamers applied to human sputum samples is feasible.

  10. Selection and Application of ssDNA Aptamers to Detect Active TB from Sputum Samples

    Science.gov (United States)

    Rotherham, Lia S.; Maserumule, Charlotte; Dheda, Keertan; Theron, Jacques; Khati, Makobetsa

    2012-01-01

    Background Despite the enormous global burden of tuberculosis (TB), conventional approaches to diagnosis continue to rely on tests that have major drawbacks. The improvement of TB diagnostics relies, not only on good biomarkers, but also upon accurate detection methodologies. The 10-kDa culture filtrate protein (CFP-10) and the 6-kDa early secreted antigen target (ESAT-6) are potent T-cell antigens that are recognised by over 70% of TB patients. Aptamers, a novel sensitive and specific class of detection molecules, have hitherto not been raised against these relatively TB-specific antigens. Methods DNA aptamers that bind to the CFP-10.ESAT-6 heterodimer were isolated. To assess their affinity and specificity to the heterodimer, aptamers were screened using an enzyme-linked oligonucleotide assay (ELONA). One suitable aptamer was evaluated by ELONA using sputum samples obtained from 20 TB patients and 48 control patients (those with latent TB infection, symptomatic non TB patients, and healthy laboratory volunteers). Culture positivity for Mycobacterium tuberculosis (Mtb) served as the reference standard. Accuracy and cut-points were evaluated using ROC curve analysis. Results Twenty-four out of the 66 aptamers that were isolated bound significantly (p < 0.05) to the CFP-10.ESAT-6 heterodimer and six were further evaluated. Their dissociation constant (KD) values were in the nanomolar range. One aptamer, designated CSIR 2.11, was evaluated using sputum samples. CSIR 2.11 had sensitivity and specificity of 100% and 68.75% using Youden’s index and 35% and 95%, respectively, using a rule-in cut-point. Conclusion This preliminary proof-of-concept study suggests that a diagnosis of active TB using anti-CFP-10.ESAT-6 aptamers applied to human sputum samples is feasible. PMID:23056492

  11. Selective extraction of proteins and other macromolecules from biological samples using molecular imprinted polymers.

    Science.gov (United States)

    Stevenson, Derek; El-Sharif, Hazim F; Reddy, Subrayal M

    2016-11-01

    The accurate determination of intact macromolecules in biological samples, such as blood, plasma, serum, urine, tissue and feces is a challenging problem. The increased interest in macromolecules both as candidate drugs and as biomarkers for diagnostic purposes means that new method development approaches are needed. This review charts developments in the use of molecularly imprinted polymers first for small-molecular-mass compounds then for proteins and other macromolecules. Examples of the development of molecularly imprinted polymers for macromolecules are highlighted. The two main application areas to date are sensors and separation science, particularly SPE. Examples include peptides and polypeptides, lysozyme, hemoglobin, ovalbumin, bovine serum albumin and viruses.

  12. Colorimetric biomimetic sensor systems based on molecularly imprinted polymer membranes for highly-selective detection of phenol in environmental samples

    Directory of Open Access Journals (Sweden)

    Sergeyeva T. A.

    2014-05-01

    Full Text Available Aim. Development of an easy-to-use colorimetric sensor system for fast and accurate detection of phenol in environmental samples. Methods. Technique of molecular imprinting; method of in situ polymerization of molecularly imprinted polymer membranes. Results. The proposed sensor is based on free-standing molecularly imprinted polymer (MIP) membranes, synthesized by in situ polymerization and having in their structure artificial binding sites capable of selective phenol recognition. The quantitative detection of phenol, selectively adsorbed by the MIP membranes, is based on its reaction with 4-aminoantipyrine, which gives a pink-colored product. The intensity of staining of the MIP membrane is proportional to the phenol concentration in the analyzed sample. Phenol can be detected within the range 50 nM-10 mM with a limit of detection of 50 nM, which corresponds to the concentrations that have to be detected in natural and waste waters in accordance with environmental protection standards. Stability of the MIP-membrane-based sensors was assessed during 12 months of storage at room temperature. Conclusions. The sensor system provides highly selective and sensitive detection of phenol in both model and real (drinking, natural, and waste) water samples. As compared to traditional methods of phenol detection, the proposed system is characterized by simplicity of operation and can be used in non-laboratory conditions.

  13. Specific and selective probes for Staphylococcus aureus from phage-displayed random peptide libraries.

    Science.gov (United States)

    De Plano, Laura M; Carnazza, Santina; Messina, Grazia M L; Rizzo, Maria Giovanna; Marletta, Giovanni; Guglielmino, Salvatore P P

    2017-09-01

    Staphylococcus aureus is a major human pathogen causing health care-associated and community-associated infections. Early diagnosis is essential to prevent disease progression and to reduce complications that can be serious. In this study, we selected, from a 9-mer phage peptide library, a phage clone displaying a peptide capable of specific binding to the S. aureus cell surface, namely St.au9IVS5 (peptide sequence RVRSAPSSS). The ability of the isolated phage clone to interact specifically with S. aureus and the efficacy of its bacteria-binding properties were established by using an enzyme-linked immunosorbent assay (ELISA). We also demonstrated by Western blot analysis that the most reactive and selective phage peptide binds a 78 kDa protein on the bacterial cell surface. Furthermore, we observed selectivity of phage-bacteria binding, allowing identification of clinical isolates of S. aureus in comparison with a panel of other bacterial species. In order to explore the possibility of realizing a selective bacterial biosensor device based on immobilization of the affinity-selected phage, we studied physisorbed phage deposition onto a mica surface. Atomic Force Microscopy (AFM) was used to determine the organization of phage on the mica surface, and the binding performance of mica-physisorbed phage to the bacterial target was then evaluated over time by fluorescence microscopy. The system is able to bind specifically about 50% of S. aureus cells after 15 min and 90% after one hour. Owing to its specificity and rapidity, this biosensing strategy paves the way to the further development of new, cheap biosensors to be used in developing countries as lab-on-chip (LOC) devices to detect bacterial agents in clinical diagnostic applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. BEHAVIORAL RESPONSES TO TAXPAYER AUDITS: EVIDENCE FROM RANDOM TAXPAYER INQUIRIES

    National Research Council Canada - National Science Library

    Norman Gemmell; Marisa Ratto

    2012-01-01

    .... Comparing samples of randomly selected audited and non-audited UK taxpayers, the evidence confirms predictions that audited taxpayers found to be "compliant" reduce their subsequent compliance...

  15. Assessment of DDT levels in selected environmental media and biological samples from Mexico and Central America.

    Science.gov (United States)

    Pérez-Maldonado, Iván N; Trejo, Antonio; Ruepert, Clemens; Jovel, Reyna del Carmen; Méndez, Mónica Patricia; Ferrari, Mirtha; Saballos-Sobalvarro, Emilio; Alexander, Carlos; Yáñez-Estrada, Leticia; Lopez, Dania; Henao, Samuel; Pinto, Emilio R; Díaz-Barriga, Fernando

    2010-03-01

    Taking into account the environmental persistence and the toxicity of DDT, the Pan American Health Organization (PAHO) organized a surveillance program in Mesoamerica which included the detection of residual DDT in environmental (soil) and biological samples (fish tissue and children's blood). This program was carried out in communities from Mexico, Guatemala, El Salvador, Honduras, Nicaragua, Costa Rica and Panama. This paper presents the first report of that program. As expected, the results show that the levels for ΣDDT in soil (outdoor or indoor) and fish samples in the majority of the locations studied are below guidelines. However, in some locations, we found children with high concentrations of DDT as in Mexico (mean level 50.2 ng/mL). Furthermore, in some communities and for some matrices, the DDT/DDE quotient is higher than one and this may reflect a recent DDT exposure. Therefore, more efforts are needed to avoid exposure and to prevent the reintroduction of DDT into the region. In this regard it is important to know that under the surveillance of PAHO and with the support of UNEP, a regional program in Mesoamerica for the collection and disposal of DDT and other POPs stockpiles is in progress. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  16. EVALUATION OF THE CONTENT OF SELECTED HEAVY METALS IN SAMPLES OF POLISH HONEYS

    Directory of Open Access Journals (Sweden)

    Elżbieta Sitarz-Palczak

    2015-06-01

    Full Text Available This paper presents the results of the determination of the total content of Cu, Pb and Zn by atomic absorption spectrometry with atomization in an air-acetylene flame in samples of Polish honeys. The research material consisted of honeydew, monofloral and buckwheat honeys. For the mineralization of samples, the following solutions were applied: (1) HNO3(conc); (2) HNO3(conc) and H2O2(conc) in volume ratios of 4:1 and 3:1. On the basis of the results and recommended food standards, the percentage of the recommended dietary allowances (RDA) associated with the consumption of 100 g of product was estimated. To verify the results, the analytical method was validated by determining the following parameters: the limits of detection and quantification; linearity and measurement range; repeatability and accuracy of the results. The Pb contamination of the analyzed honeys is higher than the acceptable level for this element. Monofloral honeys were characterized by the highest contents of Cu and Zn.

  17. Selection of locations of knots for linear splines in random regression test-day models.

    Science.gov (United States)

    Jamrozik, J; Bohmanova, J; Schaeffer, L R

    2010-04-01

    Using spline functions (segmented polynomials) in regression models requires the knowledge of the location of the knots. Knots are the points at which independent linear segments are connected. Optimal positions of knots for linear splines of different orders were determined in this study for different scenarios, using existing estimates of covariance functions and an optimization algorithm. The traits considered were test-day milk, fat and protein yields, and somatic cell score (SCS) in the first three lactations of Canadian Holsteins. Two ranges of days in milk (from 5 to 305 and from 5 to 365) were taken into account. In addition, four different populations of Holstein cows, from Australia, Canada, Italy and New Zealand, were examined with respect to first lactation (305 days) milk only. The estimates of genetic and permanent environmental covariance functions were based on single- and multiple-trait test-day models, with Legendre polynomials of order 4 as random regressions. A differential evolution algorithm was applied to find the best location of knots for splines of orders 4 to 7 and the criterion for optimization was the goodness-of-fit of the spline covariance function. Results indicated that the optimal position of knots for linear splines differed between genetic and permanent environmental effects, as well as between traits and lactations. Different populations also exhibited different patterns of optimal knot locations. With linear splines, different positions of knots should therefore be used for different effects and traits in random regression test-day models when analysing milk production traits.
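
    The knot-placement search translates naturally into code. The following sketch is schematic: it uses synthetic lactation-like data and the residual sum of squares as the criterion (the paper optimized the fit to an estimated covariance function instead), but it pairs a linear spline (k = 1) with the same differential evolution search.

    ```python
    import numpy as np
    from scipy.interpolate import LSQUnivariateSpline
    from scipy.optimize import differential_evolution

    rng = np.random.default_rng(0)
    dim = np.linspace(5, 305, 100)                    # days in milk
    curve = 30 * np.exp(-0.002 * dim) * (1 - np.exp(-0.1 * dim))
    y = curve + rng.normal(0, 0.5, dim.size)          # synthetic test-day yields

    def sse(knots):
        t = np.sort(knots)                            # knots must be increasing
        if np.any(np.diff(t) < 5):                    # keep knots well separated
            return 1e12
        try:
            spl = LSQUnivariateSpline(dim, y, t, k=1) # linear spline fit
            return spl.get_residual()                 # residual sum of squares
        except ValueError:                            # invalid knot placement
            return 1e12

    n_knots = 3
    result = differential_evolution(sse, [(10, 300)] * n_knots, seed=1)
    print("optimal interior knots (DIM):", np.sort(result.x).round(1))
    ```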

  18. Genome wide signatures of positive selection: The comparison of independent samples and the identification of regions associated to traits

    Directory of Open Access Journals (Sweden)

    Thomas Merle B

    2009-04-01

    Full Text Available Abstract Background The goal of genome wide analyses of polymorphisms is to achieve a better understanding of the link between genotype and phenotype. Part of that goal is to understand the selective forces that have operated on a population. Results In this study we compared the signals of selection, identified through population divergence in the Bovine HapMap project, to those found in an independent sample of cattle from Australia. Evidence for population differentiation across the genome, as measured by FST, was highly correlated in the two data sets. Nevertheless, 40% of the variance in FST between the two studies was attributed to the differences in breed composition. Seventy-six percent of the variance in FST was attributed to differences in SNP composition and density when the same breeds were compared. The difference between FST of adjacent loci increased rapidly with the increase in distance between SNP, reaching an asymptote after 20 kb. Using 129 SNP that have highly divergent FST values in both data sets, we identified 12 regions that had additive effects on the traits residual feed intake, beef yield or intramuscular fatness measured in the Australian sample. Four of these regions had effects on more than one trait. One of these regions includes the R3HDM1 gene, which is under selection in European humans. Conclusion Firstly, many different populations will be necessary for a full description of selective signatures across the genome, not just a small set of highly divergent populations. Secondly, it is necessary to use the same SNP when comparing the signatures of selection from one study to another. Thirdly, useful signatures of selection can be obtained where many of the groups have only minor genetic differences and may not be clearly separated in a principal component analysis. Fourthly, combining analyses of genome wide selection signatures and genome wide associations to traits helps to define the trait under selection or
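
    For readers unfamiliar with FST scans, a small worked example may help. This sketch computes per-SNP FST between two populations with Hudson's estimator; the allele frequencies and sample sizes are made up and are not the Bovine HapMap or Australian data.

    ```python
    import numpy as np

    def hudson_fst(p1, p2, n1, n2):
        """Hudson's per-SNP FST from allele frequencies p and sample sizes n."""
        num = ((p1 - p2) ** 2
               - p1 * (1 - p1) / (n1 - 1)
               - p2 * (1 - p2) / (n2 - 1))
        den = p1 * (1 - p2) + p2 * (1 - p1)
        return num / den

    rng = np.random.default_rng(0)
    p_a = rng.uniform(0.05, 0.95, 1000)                  # hypothetical breed A
    p_b = np.clip(p_a + rng.normal(0, 0.15, 1000), 0.01, 0.99)  # breed B
    fst = hudson_fst(p_a, p_b, n1=50, n2=50)
    print(f"mean FST = {np.nanmean(fst):.3f}, "
          f"99th percentile = {np.nanpercentile(fst, 99):.3f}")
    ```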

  19. Assessing causality in associations between cannabis use and schizophrenia risk: a two-sample Mendelian randomization study.

    Science.gov (United States)

    Gage, S H; Jones, H J; Burgess, S; Bowden, J; Davey Smith, G; Zammit, S; Munafò, M R

    2017-04-01

    Observational associations between cannabis and schizophrenia are well documented, but ascertaining causation is more challenging. We used Mendelian randomization (MR), utilizing publicly available data, as a method for ascertaining causation from observational data. We performed bi-directional two-sample MR using summary-level genome-wide data from the International Cannabis Consortium (ICC) and the Psychiatric Genomics Consortium (PGC2). Single nucleotide polymorphisms (SNPs) associated with cannabis initiation and with schizophrenia were used as genetic instruments. There was weak evidence consistent with a causal effect of cannabis initiation on risk of schizophrenia [odds ratio (OR) 1.04 per doubling odds of cannabis initiation, 95% confidence interval (CI) 1.01-1.07, p = 0.019]. There was strong evidence consistent with a causal effect of schizophrenia risk on likelihood of cannabis initiation (OR 1.10 per doubling of the odds of schizophrenia, 95% CI 1.05-1.14, p = 2.64 × 10-5). Findings were as predicted for the negative control (height: OR 1.00, 95% CI 0.99-1.01, p = 0.90) but weaker than predicted for the positive control (years in education: OR 0.99, 95% CI 0.97-1.00, p = 0.066) analyses. Our results provide some evidence that cannabis initiation increases the risk of schizophrenia, although the size of the causal estimate is small. We find stronger evidence that schizophrenia risk predicts cannabis initiation, possibly as genetic instruments for schizophrenia are stronger than for cannabis initiation.
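
    The core of two-sample MR from summary statistics is the inverse-variance-weighted (IVW) combination of per-SNP Wald ratios. A minimal sketch with placeholder effect sizes (not ICC/PGC2 data) follows.

    ```python
    import numpy as np

    def ivw_estimate(beta_exp, beta_out, se_out):
        """Fixed-effect IVW causal estimate and SE from per-SNP summary stats."""
        ratio = beta_out / beta_exp            # per-SNP Wald ratios
        weight = beta_exp**2 / se_out**2       # inverse-variance weights
        estimate = np.sum(weight * ratio) / np.sum(weight)
        se = np.sqrt(1.0 / np.sum(weight))
        return estimate, se

    rng = np.random.default_rng(1)
    beta_exp = rng.normal(0.08, 0.02, 25)      # hypothetical instrument effects
    beta_out = 0.04 * beta_exp + rng.normal(0, 0.005, 25)
    se_out = np.full(25, 0.005)

    est, se = ivw_estimate(beta_exp, beta_out, se_out)
    print(f"IVW estimate = {est:.3f} (SE {se:.3f})")
    ```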

  20. Multiple-image authentication with a cascaded multilevel architecture based on amplitude field random sampling and phase information multiplexing.

    Science.gov (United States)

    Fan, Desheng; Meng, Xiangfeng; Wang, Yurong; Yang, Xiulun; Pan, Xuemei; Peng, Xiang; He, Wenqi; Dong, Guoyan; Chen, Hongyi

    2015-04-10

    A multiple-image authentication method with a cascaded multilevel architecture in the Fresnel domain is proposed, in which a synthetic encoded complex amplitude is first fabricated, and its real amplitude component is generated by iterative amplitude encoding, random sampling, and space multiplexing for the low-level certification images, while the phase component of the synthetic encoded complex amplitude is constructed by iterative phase information encoding and multiplexing for the high-level certification images. Then the synthetic encoded complex amplitude is iteratively encoded into two phase-type ciphertexts located in two different transform planes. During high-level authentication, when the two phase-type ciphertexts and the high-level decryption key are presented to the system and then the Fresnel transform is carried out, a meaningful image with good quality and a high correlation coefficient with the original certification image can be recovered in the output plane. Similar to the procedure of high-level authentication, in the case of low-level authentication with the aid of a low-level decryption key, no significant or meaningful information is retrieved, but it can result in a remarkable peak output in the nonlinear correlation coefficient of the output image and the corresponding original certification image. Therefore, the method realizes different levels of accessibility to the original certification image for different authority levels with the same cascaded multilevel architecture.
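
    The low-level authentication step relies on a correlation peak rather than a visually meaningful image. A rough numerical illustration follows; it assumes a kth-law nonlinearity, which is one common choice and not necessarily the transform used by the authors, and uses synthetic images.

    ```python
    import numpy as np

    def nonlinear_correlation(f, g, k=0.3):
        """kth-law nonlinear cross-correlation map of images f and g."""
        spectrum = np.fft.fft2(f) * np.conj(np.fft.fft2(g))
        nc = np.fft.ifft2(np.abs(spectrum) ** k
                          * np.exp(1j * np.angle(spectrum)))
        return np.abs(nc) ** 2

    rng = np.random.default_rng(0)
    cert = rng.random((64, 64))                    # stand-in certification image
    recovered = cert + 0.8 * rng.random((64, 64))  # noisy low-level output
    impostor = rng.random((64, 64))                # unrelated image

    for name, img in [("genuine", recovered), ("impostor", impostor)]:
        nc = nonlinear_correlation(img, cert)
        print(name, "peak-to-mean ratio:", round(nc.max() / nc.mean(), 1))
    ```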

  1. Mental health impact of the 2010 Haiti earthquake on the Miami Haitian population: A random-sample survey.

    Science.gov (United States)

    Messiah, Antoine; Acuna, Juan M; Castro, Grettel; de la Vega, Pura Rodríguez; Vaiva, Guillaume; Shultz, James; Neria, Yuval; De La Rosa, Mario

    2014-07-01

    This study examined the mental health consequences of the January 2010 Haiti earthquake on Haitians living in Miami-Dade County, Florida, 2-3 years following the event. A random-sample household survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants (N = 421) were assessed for their earthquake exposure and its impact on family, friends, and household finances; and for symptoms of posttraumatic stress disorder (PTSD), anxiety, and major depression; using standardized screening measures and thresholds. Exposure was considered as "direct" if the interviewee was in Haiti during the earthquake. Exposure was classified as "indirect" if the interviewee was not in Haiti during the earthquake but (1) family members or close friends were victims of the earthquake, and/or (2) family members were hosted in the respondent's household, and/or (3) assets or jobs were lost because of the earthquake. Interviewees who did not qualify for either direct or indirect exposure were designated as "lower" exposure. Eight percent of respondents qualified for direct exposure, and 63% qualified for indirect exposure. Among those with direct exposure, 19% exceeded threshold for PTSD, 36% for anxiety, and 45% for depression. Corresponding percentages were 9%, 22% and 24% for respondents with indirect exposure, and 6%, 14%, and 10% for those with lower exposure. A majority of Miami Haitians were directly or indirectly exposed to the earthquake. Mental health distress among them remains considerable two to three years post-earthquake.
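
    The three-level exposure rule described above can be written out explicitly. The function below is a plain transcription with hypothetical argument names, not the authors' survey code.

    ```python
    def classify_exposure(in_haiti, family_or_friends_victims,
                          hosted_family, lost_assets_or_job):
        """Return the earthquake-exposure category for one respondent."""
        if in_haiti:                               # in Haiti during the quake
            return "direct"
        if family_or_friends_victims or hosted_family or lost_assets_or_job:
            return "indirect"
        return "lower"

    print(classify_exposure(False, True, False, False))   # -> "indirect"
    ```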

  2. Mental Health Impact of Hosting Disaster Refugees: Analyses from a Random Sample Survey Among Haitians Living in Miami.

    Science.gov (United States)

    Messiah, Antoine; Lacoste, Jérôme; Gokalsing, Erick; Shultz, James M; Rodríguez de la Vega, Pura; Castro, Grettel; Acuna, Juan M

    2016-08-01

    Studies on the mental health of families hosting disaster refugees are lacking. This study compares participants in households that hosted 2010 Haitian earthquake disaster refugees with their nonhost counterparts. A random sample survey was conducted from October 2011 through December 2012 in Miami-Dade County, Florida. Haitian participants were assessed regarding their 2010 earthquake exposure and impact on family and friends and whether they hosted earthquake refugees. Using standardized scores and thresholds, they were evaluated for symptoms of three common mental disorders (CMDs): posttraumatic stress disorder, generalized anxiety disorder, and major depressive disorder (MDD). Participants who hosted refugees (n = 51) had significantly higher percentages of scores beyond thresholds for MDD than those who did not host refugees (n = 365) and for at least one CMD, after adjusting for participants' earthquake exposures and effects on family and friends. Hosting refugees from a natural disaster appears to elevate the risk for MDD and possibly other CMDs, independent of risks posed by exposure to the disaster itself. Families hosting refugees deserve special attention.

  3. Randomization modeling to ascertain clustering patterns of human papillomavirus types detected in cervicovaginal samples in the United States.

    Directory of Open Access Journals (Sweden)

    Troy David Querec

    Full Text Available Detection of multiple human papillomavirus (HPV) types in the genital tract is common. Associations among HPV types may impact HPV vaccination modeling and type replacement. The objectives were to determine the distribution of concurrent HPV type infections in cervicovaginal samples and examine type-specific associations. We analyzed HPV genotyping results from 32,245 cervicovaginal specimens collected from women aged 11 to 83 years in the United States from 2001 through 2011. Statistical power was enhanced by combining 6 separate studies. Expected concurrent infection frequencies from a series of permutation models, each with increasing fidelity to the real data, were compared with the observed data. Statistics were computed based on the distributional properties of the randomized data. Concurrent detection occurred more than expected with 0 or ≥3 HPV types and less than expected with 1 and 2 types. Some women bear a disproportionate burden of the HPV type prevalence. Type associations were observed that exceeded multiple-hypothesis-corrected significance. Multiple HPV types were detected more frequently than expected by chance, and associations among particular HPV types were detected. However, vaccine-targeted types were not specifically affected, supporting the expectation that current bivalent/quadrivalent HPV vaccination will not result in type replacement with other high-risk types.
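
    A toy version of the simplest permutation model makes the logic concrete: shuffle each HPV type's detections independently across specimens, preserving per-type prevalence while destroying between-type associations, and compare the observed counts of concurrent types with the permuted expectation. All data below are simulated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_women, n_types = 5000, 37
    # Simulate correlated detections via a per-woman "burden" factor.
    burden = rng.gamma(0.5, 1.0, n_women)[:, None]
    observed = rng.random((n_women, n_types)) < 0.02 * burden

    def counts_per_specimen(mat):
        """Frequency of specimens with 0, 1, ..., 5 concurrent types."""
        return np.bincount(mat.sum(axis=1), minlength=6)[:6]

    null = np.zeros(6)
    for _ in range(200):                       # 200 random permutations
        permuted = np.column_stack(
            [rng.permutation(observed[:, j]) for j in range(n_types)])
        null += counts_per_specimen(permuted)
    null /= 200

    print("types/specimen:", np.arange(6))
    print("observed:      ", counts_per_specimen(observed))
    print("expected(null):", null.round(1))
    ```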

  4. Moganite in selected Polish chert samples: the evidence from MIR, Raman and X-ray studies.

    Science.gov (United States)

    Sitarz, M; Wyszomirski, P; Handke, B; Jeleń, P

    2014-03-25

    The authors discuss the results of structural investigations (XRD, MIR, Raman) of Polish cherts from different geological formations. The X-ray diffraction analyses explicitly confirmed the presence of moganite, which was identified on the basis of satellite XRD peaks positioned close to the quartz reflections and of additional reflections with d(hkl) values of 4.456 and 3.101 Å, and established its amounts as varying between about 1 and above 17 wt%. Mid-infrared and Raman spectroscopy also proved the presence of moganite, indicated by the 695 and 560-555 cm⁻¹ bands, respectively. These analytical findings make it possible to identify moganite in samples containing various SiO2 polymorphs. Copyright © 2013 Elsevier B.V. All rights reserved.

  5. A robust, selective, and flexible RF front-end for wideband sampling receivers

    Directory of Open Access Journals (Sweden)

    Itamar Melamed

    2017-06-01

    Full Text Available In this paper, we describe the design and evaluation of a second-generation front-end unit for wideband sampling radio receivers. The unit contains a surface acoustic wave (SAW filter to protect the receiver from strong out-of-band signals, an RF limiter to protect both the filter and the receiver from physical damage due to strong signals, and a bias tee with a DC limiter to provide DC power to a masthead low-noise amplifier, if one is used. The unit allows receivers such as those of the universal software radio peripheral (USRP N-series type to be effectively used in RF environments with weak signals and strong in-band and out-of-band interferences.

  6. The AlSi10Mg samples produced by selective laser melting: single track, densification, microstructure and mechanical behavior

    Energy Technology Data Exchange (ETDEWEB)

    Wei, Pei; Wei, Zhengying, E-mail: zywei@mail.xjtu.edu.cn; Chen, Zhen; Du, Jun; He, Yuyang; Li, Junfeng; Zhou, Yatong

    2017-06-30

    Highlights: • The thermal behavior of the AlSi10Mg molten pool was analyzed. • An SLM-processed sample with a relatively low surface roughness was obtained. • Effects of parameters on the surface topography of the scan track were investigated. • Effects of parameters on the microstructure of parts were investigated. • Optimum processing parameters for AlSi10Mg SLM were obtained. - Abstract: The densification behavior and attendant microstructural characteristics of selective laser melting (SLM)-processed AlSi10Mg alloy, as affected by the processing parameters, were systematically investigated. Single-track samples were produced by SLM to study the influences of laser power and scanning speed on the surface morphologies of scan tracks. Additionally, bulk samples were produced to investigate the influence of the laser power, scanning speed, and hatch spacing on the densification level and the resultant microstructure. The experimental results showed that the level of porosity of the SLM-processed samples was significantly governed by the energy density of the laser beam and the hatch spacing. The tensile properties of SLM-processed samples and the attendant fracture surface can be enhanced by decreasing the level of porosity. The microstructure of SLM-processed samples consists of a supersaturated Al-rich cellular structure along with eutectic Al/Si situated at the cellular boundaries. The Si content in the cellular boundaries increases with increasing laser power and decreasing scanning speed. The hardness of SLM-processed samples was significantly improved by this fine microstructure compared with cast samples. Moreover, the hardness of SLM-processed samples at overlaps was lower than the hardness observed at track cores.

  7. A Randomized Controlled Trial of Cognitive Debiasing Improves Assessment and Treatment Selection for Pediatric Bipolar Disorder

    Science.gov (United States)

    Jenkins, Melissa M.; Youngstrom, Eric A.

    2015-01-01

    Objective This study examined the efficacy of a new cognitive debiasing intervention in reducing decision-making errors in the assessment of pediatric bipolar disorder (PBD). Method The study was a randomized controlled trial using case vignette methodology. Participants were 137 mental health professionals working in different regions of the US (M=8.6±7.5 years of experience). Participants were randomly assigned to a (1) brief overview of PBD (control condition), or (2) the same brief overview plus a cognitive debiasing intervention (treatment condition) that educated participants about common cognitive pitfalls (e.g., base-rate neglect; search satisficing) and taught corrective strategies (e.g., mnemonics, Bayesian tools). Both groups evaluated four identical case vignettes. Primary outcome measures were clinicians’ diagnoses and treatment decisions. The vignette characters’ race/ethnicity was experimentally manipulated. Results Participants in the treatment group showed better overall judgment accuracy, p < .001, and committed significantly fewer decision-making errors, p < .001. Inaccurate and somewhat accurate diagnostic decisions were significantly associated with different treatment and clinical recommendations, particularly in cases where participants missed comorbid conditions, failed to detect the possibility of hypomania or mania in depressed youths, and misdiagnosed classic manic symptoms. In contrast, effects of patient race were negligible. Conclusions The cognitive debiasing intervention outperformed the control condition. Examining specific heuristics in cases of PBD may identify especially problematic mismatches between typical habits of thought and characteristics of the disorder. The debiasing intervention was brief and delivered via the Web; it has the potential to generalize and extend to other diagnoses as well as to various practice and training settings. PMID:26727411

  8. Rock magnetic evidence of non-random raw material selection criteria in Cerro Toledo Obsidian Artifacts from Valles Caldera, New Mexico

    Science.gov (United States)

    Gregovich, A.; Feinberg, J. M.; Steffen, A.; Sternberg, R. S.

    2014-12-01

    Stone tools are one of the most enduring forms of ancient human behavior available to anthropologists. The geologic materials that comprise stone tools are a reflection of the rocks that were available locally or through trade, as are the intended use of the tools and the knapping technology needed to produce them. Investigation of the rock magnetic and geochemical characteristics of the artifacts and the geological source materials provides a baseline to explore these past behaviors. This study uses rock magnetic properties to explore the raw material selection criteria involved in the production of obsidian tools in the region around Valles Caldera in northern New Mexico. Obsidian is locally abundant and was traded by tribes across the central United States. Here we compare the rock magnetic properties of a sample of obsidian projectile points (N = 25) that have been geochemically sourced to the Cerro Toledo obsidian flow with geological samples collected from four sites within the same flow (N = 135). This collection of archaeological artifacts, albeit small, contains representatives of at least 8 different point styles that were used over 6000 years from the Archaic into the Late Prehistoric. Bulk rock hysteresis parameters (Mr, Ms, Bc, and Bcr) and low-field susceptibility (χ) measurements show that the projectile points generally contain a lower concentration of magnetic minerals than the geologic samples. For example, the artifacts' median Ms value is 2.9 × 10⁻³ Am²kg⁻¹, while that of the geological samples is 6.5 × 10⁻³ Am²kg⁻¹. The concentration of magnetic minerals in obsidian is a proxy for the concentration of microlites in general, and this relationship suggests that although obsidian was locally abundant, toolmakers employed non-random selection criteria resulting in generally lower concentrations of microlites in their obsidian tools.

  9. Methodology series module 5: Sampling strategies

    OpenAIRE

    Maninder Singh Setia

    2016-01-01

    Once the research question and the research design have been finalised, it is important to select the appropriate sample for the study. The method by which the researcher selects the sample is the "sampling method". There are essentially two types of sampling methods: 1) probability sampling, based on chance events (such as random numbers, flipping a coin, etc.); and 2) non-probability sampling, based on the researcher's choice of a population that is accessible and available. Some of the non-probabilit...
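
    As a minimal illustration of the distinction drawn above (probability sampling driven by chance events versus non-probability sampling driven by accessibility), the following Python sketch contrasts the two; the population list and sample size are hypothetical:

    ```python
    import random

    population = [f"participant_{i}" for i in range(1000)]  # hypothetical sampling frame

    # Probability sampling: selection is driven purely by chance (random numbers),
    # so every unit has a known, non-zero probability of inclusion.
    random.seed(42)  # fixed seed only to make the sketch reproducible
    srs = random.sample(population, k=50)  # simple random sample of n = 50

    # Non-probability (convenience) sampling: the researcher takes whichever units
    # are accessible and available, here simply the first 50 on the list.
    convenience = population[:50]

    print(srs[:3], convenience[:3])
    ```

    Only in the first case does every unit have a known inclusion probability, which is what allows design-based inference about the population.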

  10. Tobacco smoking surveillance: is quota sampling an efficient tool for monitoring national trends? A comparison with a random cross-sectional survey.

    Directory of Open Access Journals (Sweden)

    Romain Guignard

    OBJECTIVES: It is crucial for policy makers to monitor the evolution of tobacco smoking prevalence. In France, this monitoring is based on a series of cross-sectional general population surveys, the Health Barometers, conducted every five years and based on random samples. A methodological study was carried out to assess the reliability of a monitoring system for smoking prevalence based on regular quota sampling surveys. DESIGN / OUTCOME MEASURES: In 2010, current and daily tobacco smoking prevalences obtained in a quota survey of 8,018 people were compared with those of the 2010 Health Barometer carried out on 27,653 people. Prevalences were assessed separately according to the telephone equipment of the interviewee (landline phone owner vs. "mobile-only"), and logistic regressions were conducted in the pooled database to assess the impact of telephone equipment and of survey mode on the prevalences found. Finally, logistic regressions adjusted for sociodemographic characteristics were conducted in the random sample in order to determine the impact of the number of calls needed to interview "hard-to-reach" people on the prevalence found. RESULTS: Current and daily prevalences were higher in the random sample (respectively 33.9% and 27.5% in 15-75 year-olds) than in the quota sample (respectively 30.2% and 25.3%). In both surveys, current and daily prevalences were lower among landline phone owners (respectively 31.8% and 25.5% in the random sample, and 28.9% and 24.0% in the quota survey). The required number of calls was only slightly related to smoking status after adjustment for sociodemographic characteristics. CONCLUSION: Random sampling appears to be more effective than quota sampling, mainly by making it possible to interview hard-to-reach populations.

  11. Efficient and Selective Enrichment of Ultratrace Cytokinins in Plant Samples by Magnetic Perhydroxy-Cucurbit[8]uril Microspheres.

    Science.gov (United States)

    Zhang, Qianchun; Li, Gongke; Xiao, Xiaohua; Zhan, Song; Cao, Yujuan

    2016-04-05

    Cytokinins play a critical role in controlling plant growth and development, but they are difficult to determine in plant samples because of their extremely low concentrations (picomoles per gram). Efficient sample preparation, with selective enrichment and rapid separation, is therefore important for the accurate analysis of cytokinins. Herein, a supramolecular perhydroxy-cucurbit[8]uril (PCB[8]) was assembled onto Fe3O4 magnetic particles via chemical bonding, yielding magnetic perhydroxy-cucurbit[8]uril (MPC) materials. The MPC had good enrichment capability for cytokinins, with enrichment factors above 208. The interaction of MPC and cytokinins was investigated by adsorption tests and density functional theory (DFT) calculations; the results showed that the main driving forces were the host-guest interaction and hydrogen bonding between the perhydroxy-cucurbit[8]uril and the analytes. Combined with ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS), the MPC was used as a magnetic solid-phase extraction sorbent for the analysis of cytokinins in plant samples. A sensitive and selective UPLC-MS/MS method was developed, with low detection limits of 0.14-0.32 ng/L for cytokinin analysis. Five cytokinins, including zeatin riboside, meta-topolin, kinetin, kinetin riboside, and zip, were determined at 6.12-87.3 ng/kg in soybean sprout and Arabidopsis thaliana. The recoveries were in the range of 76.2-110%, with relative standard deviations (n = 5) of 2.3-9.7%. On the basis of these results, magnetic perhydroxy-cucurbit[8]uril materials with selective enrichment capability show good potential for the analysis of ultratrace targets in complicated sample matrixes.

  12. A clinical trial alert tool to recruit large patient samples and assess selection bias in general practice research

    Directory of Open Access Journals (Sweden)

    Scheidt-Nave Christa

    2011-02-01

    Abstract Background Many research projects in general practice face problems when recruiting patients, often resulting in low recruitment rates and an unknown selection bias, thus limiting their value for health services research. The objective of the study was to evaluate the recruitment performance of the practice staff in 25 participating general practices when using a clinical trial alert (CTA) tool. Methods The CTA tool was developed for an osteoporosis survey of patients at risk for osteoporosis and fractures. The tool used data from electronic patient records (EPRs) to automatically identify the population at risk (net sample), to apply eligibility criteria, to contact eligible patients, and to enrol and survey at least 200 patients per practice. The effects of the CTA intervention were evaluated on the basis of recruitment efficiency and selection bias. Results The CTA tool identified a net sample of 16,067 patients (range 162 to 1,316 per practice), of which the practice staff reviewed 5,161 (32%) cases for eligibility. They excluded 3,248 patients and contacted 1,913 patients. Of these, 1,526 patients (range 4 to 202 per practice) were successfully enrolled and surveyed. This made up 9% of the net sample and 80% of the patients contacted. Men and older patients were underrepresented in the study population. Conclusion Although the recruitment target was unreachable for most practices, the practice staff in the participating practices used the CTA tool successfully to identify, document and survey a large patient sample. The tool also helped the research team to precisely determine a slight selection bias.

  13. A Principal Component Analysis of Galaxy Properties from a Large, Gas-Selected Sample

    Directory of Open Access Journals (Sweden)

    Yu-Yen Chang

    2012-01-01

    concluded that this is in conflict with the CDM model. Considering the importance of the issue, we reinvestigate the problem using principal component analysis on a fivefold larger sample and additional near-infrared data. We use databases from the Arecibo Legacy Fast Arecibo L-band Feed Array Survey for the gas properties, the Sloan Digital Sky Survey for the optical properties, and the Two Micron All Sky Survey for the near-infrared properties. We confirm that the parameters are indeed correlated, where a single physical parameter can explain 83% of the variations. When color (g-i) is included, the first component still dominates but a second principal component develops. In addition, the near-infrared color (i-J) shows an obvious second principal component that might provide evidence of complex old star formation. Based on our data, we suggest that it is premature to pronounce the failure of the CDM model, and this motivates more theoretical work.

  14. Combinatorial effects of amoxicillin and metronidazole on selected periodontal bacteria and whole plaque samples.

    Science.gov (United States)

    Kulik Kunz, Eva M; Lenkeit, Krystyna; Waltimo, Tuomas; Weiger, Roland; Walter, Clemens

    2014-06-01

    The aim of the present study was to analyze in vitro the combinatorial effects of the antibiotic combination of amoxicillin plus metronidazole on subgingival bacterial isolates. Aggregatibacter (Actinobacillus) actinomycetemcomitans, Prevotella intermedia/nigrescens, Fusobacterium nucleatum and Eikenella corrodens from our strain collection and subgingival bacteria isolated from patients with periodontitis were tested for their susceptibility to amoxicillin and metronidazole using the Etest. The fractional inhibitory concentration index (FICI), which is commonly used to describe drug interactions, was calculated. Synergy, i.e. FICI values ≤ 0.5, between amoxicillin and metronidazole was shown for two A. actinomycetemcomitans (FICI: 0.3), two F. nucleatum (FICI: 0.3 and 0.5, respectively) and one E. corrodens (FICI: 0.4) isolates. Indifference, i.e. FIC indices of >0.5 but ≤4, occurred for other isolates and the 14 P. intermedia/nigrescens strains tested. Microorganisms resistant to either amoxicillin or metronidazole were detected in all samples by Etest. Combinatorial effects occur between amoxicillin and metronidazole on some strains of A. actinomycetemcomitans, F. nucleatum and E. corrodens. Synergy was shown for a few strains only. Copyright © 2014 Elsevier Ltd. All rights reserved.
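
    The FICI referred to above has a standard definition for a two-drug combination: the sum, over the two drugs, of each drug's MIC in combination divided by its MIC alone. A small Python sketch of that calculation, using the cut-offs quoted in the abstract and purely hypothetical MIC values, follows:

    ```python
    def fici(mic_a_combo, mic_a_alone, mic_b_combo, mic_b_alone):
        """Fractional inhibitory concentration index for a two-drug combination:
        FICI = MIC_A(combination)/MIC_A(alone) + MIC_B(combination)/MIC_B(alone)."""
        return mic_a_combo / mic_a_alone + mic_b_combo / mic_b_alone

    def interpret(value):
        # Cut-offs as used in the abstract: synergy at FICI <= 0.5,
        # indifference for > 0.5 but <= 4; values > 4 are read as antagonism.
        if value <= 0.5:
            return "synergy"
        return "indifference" if value <= 4 else "antagonism"

    # Hypothetical MICs (mg/L): amoxicillin 2.0 alone vs. 0.25 in combination;
    # metronidazole 4.0 alone vs. 0.5 in combination.
    value = fici(0.25, 2.0, 0.5, 4.0)
    print(f"FICI = {value:.2f} -> {interpret(value)}")  # FICI = 0.25 -> synergy
    ```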

  15. Systematic detection and association of Entamoeba species in stool samples from selected sites in India.

    Science.gov (United States)

    Nath, J; Banyal, N; Gautam, D S; Ghosh, S K; Singha, B; Paul, J

    2015-01-01

    This study developed a fast and high-throughput dot-blot technique to evaluate the presence of Entamoeba in stool samples (n = 643), followed by a PCR-based method to validate and differentiate the two species E. histolytica and E. dispar. The prevalence rate of the parasite was determined in a cross-sectional study carried out in the population of the Eastern and Northern parts of India. Of the various demographic features, prevalence was highest in the monsoon season (P = 0·017) and in the <15 years age group (P = 0·015). In HIV-positive individuals, the prevalence rate was significantly higher (P = 0·008) in patients with a CD4 cell count <200, as well as in patients without antiretroviral therapy (ART) (P = 0·011). Our analysis further confirmed that risk factors such as toilet facilities, living conditions, hygienic practices, drinking water source, occupation and level of education are important predictors, as they were found to contribute significantly to the prevalence of the parasite.

  16. Green Synthesis of Fluorescent Carbon Dots for Selective Detection of Tartrazine in Food Samples.

    Science.gov (United States)

    Xu, Hua; Yang, Xiupei; Li, Gu; Zhao, Chuan; Liao, Xiangjun

    2015-08-05

    A simple, economical, and green method for the preparation of water-soluble, highly fluorescent carbon quantum dots (C-dots) has been developed via a hydrothermal process using aloe as the carbon source. The synthesized C-dots were characterized by atomic force microscopy (AFM), transmission electron microscopy (TEM), fluorescence spectrophotometry, UV-vis absorption spectra, and Fourier transform infrared spectroscopy (FTIR). The results reveal that the as-prepared C-dots were spherical in shape with an average diameter of 5 nm and emit bright yellow photoluminescence (PL) with a quantum yield of approximately 10.37%. The surface of the C-dots was rich in hydroxyl groups, and the dots presented various merits including high fluorescence quantum yield, excellent photostability, low toxicity and satisfactory solubility. Additionally, we found that one of the most widely used synthetic food colorants, tartrazine, could cause strong fluorescence quenching of the C-dots through a static quenching process. The decrease of fluorescence intensity made it possible to determine tartrazine in the linear range extending from 0.25 to 32.50 μM. This method was further successfully applied to the determination of tartrazine in food samples collected from local markets, suggesting its great potential for routine food analysis. Results from our study may shed light on the production of fluorescent and biocompatible nanocarbons thanks to our simple and environmentally benign strategy for synthesizing C-dots in which aloe is used as the carbon source.
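
    Quenching of this kind is conventionally quantified with the Stern-Volmer relation, standard fluorescence analysis rather than anything stated in the abstract itself; its linearity in quencher concentration is what underlies a linear working range such as the 0.25-32.50 μM reported here:

    ```latex
    \frac{F_0}{F} = 1 + K_{SV}\,[Q]
    ```

    where F₀ and F are the fluorescence intensities without and with the quencher (tartrazine), [Q] is the quencher concentration, and K_SV is the Stern-Volmer constant.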

  17. Soft X-Ray Observations of a Complete Sample of X-Ray--selected BL Lacertae Objects

    Science.gov (United States)

    Perlman, Eric S.; Stocke, John T.; Wang, Q. Daniel; Morris, Simon L.

    1996-01-01

    We present the results of ROSAT PSPC observations of the X-ray-selected BL Lacertae objects (XBLs) in the complete Einstein Extended Medium Sensitivity Survey (EMSS) sample. None of the objects is resolved in its respective PSPC image, but all are easily detected. All BL Lac objects in this sample are well fitted by single power laws. Their X-ray spectra exhibit a variety of spectral slopes, with best-fit energy power-law spectral indices between α = 0.5-2.3. The PSPC spectra of this sample are slightly steeper than those typical of flat radio-spectrum quasars. Because almost all of the individual PSPC spectral indices are equal to or slightly steeper than the overall optical to X-ray spectral indices for these same objects, we infer that BL Lac soft X-ray continua are dominated by steep-spectrum synchrotron radiation from a broad X-ray jet, rather than flat-spectrum inverse Compton radiation linked to the narrower radio/millimeter jet. The softness of the X-ray spectra of these XBLs revives the possibility proposed by Guilbert, Fabian, & McCray (1983) that BL Lac objects are lineless because the circumnuclear gas cannot be heated sufficiently to permit two stable gas phases, the cooler of which would comprise the broad emission-line clouds. Because unified schemes predict that hard self-Compton radiation is beamed only into a small solid angle in BL Lac objects, the steep-spectrum synchrotron tail controls the temperature of the circumnuclear gas at r ≤ 10¹⁸ cm and prevents broad-line cloud formation. We use these new ROSAT data to recalculate the X-ray luminosity function and cosmological evolution of the complete EMSS sample by determining accurate K-corrections for the sample and estimating the effects of variability and the possibility of incompleteness in the sample. Our analysis confirms that XBLs are evolving "negatively," opposite in sense to quasars, with Ve/Va = 0.331±0.060. The statistically significant difference between the values for X

  18. Role of selective V2-receptor-antagonism in septic shock: a randomized, controlled, experimental study

    OpenAIRE

    Rehberg, Sebastian; Ertmer, Christian; Lange, Matthias; Morelli, Andrea; Whorton, Elbert; Strohhäcker, Anne-Katrin; Dünser, Martin Wolfgang; Lipke, Erik; Kampmeier, Tim G; Aken, Hugo; Traber, Daniel L; Westphal, Martin

    2010-01-01

    ABSTRACT: INTRODUCTION: V2-receptor (V2R) stimulation potentially aggravates sepsis-induced vasodilation, fluid accumulation and microvascular thrombosis. Therefore, the present study was performed to determine the effects of a first-line therapy with the selective V2R-antagonist (Propionyl1-D-Tyr(Et)2-Val4-Abu6-Arg8,9)-Vasopressin on cardiopulmonary hemodynamics and organ function vs. the mixed V1aR/V2R-agonist arginine vasopressin (AVP) or placebo in an established ovine model of septic s...

  19. Conflicts of Interest, Selective Inertia, and Research Malpractice in Randomized Clinical Trials: An Unholy Trinity.

    Science.gov (United States)

    Berger, Vance W

    2015-08-01

    Recently a great deal of attention has been paid to conflicts of interest in medical research, and the Institute of Medicine has called for more research into this important area. One research question that has not received sufficient attention concerns the mechanisms of action by which conflicts of interest can result in biased and/or flawed research. What discretion do conflicted researchers have to sway the results one way or the other? We address this issue from the perspective of selective inertia, or an unnatural selection of research methods based on which are most likely to establish the preferred conclusions, rather than on which are most valid. In many cases it is abundantly clear that a method that is not being used in practice is superior to the one that is being used in practice, at least from the perspective of validity, and that it is only inertia, as opposed to any serious suggestion that the incumbent method is superior (or even comparable), that keeps the inferior procedure in use, to the exclusion of the superior one. By focusing on these flawed research methods we can go beyond statements of potential harm from real conflicts of interest, and can more directly assess actual (not potential) harm.

  20. PeptideManager: A Peptide Selection Tool for Targeted Proteomic Studies Involving Mixed Samples from Different Species

    Directory of Open Access Journals (Sweden)

    Kevin eDemeure

    2014-09-01

    The search for clinically useful protein biomarkers using advanced mass spectrometry approaches represents a major focus in cancer research. However, the direct analysis of human samples may be challenging due to limited availability, the absence of appropriate control samples, or the large background variability observed in patient material. As an alternative approach, human tumors orthotopically implanted into a different species (xenografts) are clinically relevant models that have proven their utility in pre-clinical research. Patient-derived xenografts for glioblastoma have been extensively characterized in our laboratory and have been shown to retain the characteristics of the parental tumor at the phenotypic and genetic level. Such models were also found to adequately mimic the behavior and treatment response of human tumors. The reproducibility of such xenograft models, the possibility to identify their host background, and the ability to perform tumor-host interaction studies are major advantages over the direct analysis of human samples. At the proteome level, the analysis of xenograft samples is challenged by the presence of proteins from two different species which, depending on tumor size, type or location, often appear at variable ratios. Any proteomics approach aimed at quantifying proteins within such samples must consider the identification of species-specific peptides in order to avoid biases introduced by the host proteome. Here, we present an in-house methodology and tool developed to select peptides used as surrogates for protein candidates from a defined proteome (e.g., human) in a host proteome background (e.g., mouse, rat) suited for mass spectrometry analysis. The tools presented here are applicable to any species-specific proteome, provided a protein database is available. By linking the information from both proteomes, PeptideManager significantly facilitates and expedites the selection of peptides used as surrogates to analyze

  1. Participant-selected music and physical activity in older adults following cardiac rehabilitation: a randomized controlled trial.

    Science.gov (United States)

    Clark, Imogen N; Baker, Felicity A; Peiris, Casey L; Shoebridge, Georgie; Taylor, Nicholas F

    2017-03-01

    To evaluate effects of participant-selected music on older adults' achievement of activity levels recommended in the physical activity guidelines following cardiac rehabilitation. A parallel-group randomized controlled trial with measurements at Weeks 0, 6 and 26. A multisite outpatient rehabilitation programme of a publicly funded metropolitan health service. Adults aged 60 years and older who had completed a cardiac rehabilitation programme. Experimental participants selected music to support walking with guidance from a music therapist. Control participants received usual care only. The primary outcome was the proportion of participants achieving activity levels recommended in physical activity guidelines. Secondary outcomes compared amounts of physical activity, exercise capacity, cardiac risk factors, and exercise self-efficacy. A total of 56 participants, mean age 68.2 years (SD = 6.5), were randomized to the experimental (n = 28) and control (n = 28) groups. There were no differences between groups in proportions of participants achieving activity recommended in physical activity guidelines at Week 6 or 26. Secondary outcomes demonstrated between-group differences in male waist circumference at both measurements (Week 6 difference -2.0 cm, 95% CI -4.0 to 0; Week 26 difference -2.8 cm, 95% CI -5.4 to -0.1), and observed effect sizes favoured the experimental group for amounts of physical activity (d = 0.30), exercise capacity (d = 0.48), and blood pressure (d = -0.32). Participant-selected music did not increase the proportion of participants achieving recommended amounts of physical activity, but may have contributed to exercise-related benefits.

  2. CREDIT SCORING MODELING WITH STATE-DEPENDENT SAMPLE SELECTION: A COMPARISON STUDY WITH THE USUAL LOGISTIC MODELING

    Directory of Open Access Journals (Sweden)

    Paulo H. Ferreira

    2015-04-01

    Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients, and information-theoretical measures such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model, a logistic regression with state-dependent sample selection model, and a bounded logistic regression model via a large simulation study. As a case study, the methodology is also illustrated on a data set extracted from a Brazilian retail bank portfolio. Our simulation results so far reveal no statistically significant difference in predictive capacity among the naive logistic regression models, the logistic regression with state-dependent sample selection models, and the bounded logistic regression models. However, there is a difference between the distributions of the estimated default probabilities from these three statistical modeling techniques, with the naive logistic regression models and the bounded logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples, which are common in practice.
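
    The predictive-quality measures named above (sensitivity, specificity, accuracy) are easy to compute once a scorecard is fitted. The sketch below fits only the naive logistic regression on synthetic data; the state-dependent sample selection and bounded variants are not reproduced, and every number in it is hypothetical:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import confusion_matrix

    rng = np.random.default_rng(0)

    # Synthetic portfolio: two features and a default indicator (hypothetical data).
    X = rng.normal(size=(5000, 2))
    p = 1 / (1 + np.exp(-(0.8 * X[:, 0] - 1.2 * X[:, 1] - 1.5)))
    y = rng.binomial(1, p)

    # Naive logistic regression scorecard.
    model = LogisticRegression().fit(X, y)
    pred = model.predict(X)

    # Predictive-quality measures of the kind compared in the study.
    tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y)
    print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} accuracy={accuracy:.3f}")
    ```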

  3. Capturing the Flatness of a peer-to-peer lending network through random and selected perturbations

    Science.gov (United States)

    Karampourniotis, Panagiotis D.; Singh, Pramesh; Uparna, Jayaram; Horvat, Emoke-Agnes; Szymanski, Boleslaw K.; Korniss, Gyorgy; Bakdash, Jonathan Z.; Uzzi, Brian

    Null models are established tools that have been used in network analysis to uncover various structural patterns. They quantify the deviation of an observed network measure from that given by the null model. We construct a null model for weighted, directed networks to identify biased links (carrying significantly different weights than expected according to the null model) and thus quantify the flatness of the system. Using this model, we study the flatness of Kiva, a large international crowdfinancing network of borrowers and lenders, aggregated to the country level. The dataset spans the years 2006 to 2013. Our longitudinal analysis shows that the flatness of the system is decreasing over time, meaning the proportion of biased inter-country links is growing. We extend our analysis by testing the robustness of the network's flatness under perturbations of the links' weights or the nodes themselves. Examples of such perturbations are event shocks (e.g. erecting walls) or regulatory shocks (e.g. Brexit). We find that flatness is unaffected by random shocks, but changes after shocks that target links with a large weight or bias. The methods we use to capture the flatness are based on analytics, simulations, and numerical computations using Shannon's maximum entropy. Supported by ARL NS-CTA.

  4. Benefits of Selected Physical Exercise Programs in Detention: A Randomized Controlled Study

    Directory of Open Access Journals (Sweden)

    Claudia Battaglia

    2013-10-01

    The aim of the study was to determine which kind of physical activity could help inmate populations to improve their health status and fitness levels. A repeated-measures design was used to evaluate the effects of two different training protocols on subjects in a state of detention, tested pre- and post-experimental protocol. Seventy-five male subjects were enrolled in the study and randomly allocated to three groups: the cardiovascular plus resistance training protocol group (CRT) (n = 25; mean age 30.9 ± 8.9 years), the high-intensity strength training protocol group (HIST) (n = 25; mean age 33.9 ± 6.8 years), and a control group (C) (n = 25; mean age 32.9 ± 8.9 years) receiving no treatment. All subjects underwent a clinical assessment and fitness tests. MANOVA revealed significant multivariate effects of group (p < 0.01) and group-training interaction (p < 0.05). The CRT protocol proved the most effective at achieving the best outcomes in the fitness tests. Both the CRT and HIST protocols produced significant gains in the functional capacity (cardio-respiratory capacity) and a decrease in cardiovascular disease risk of incarcerated males. The significant gains obtained in functional capacity reflect the great potential of supervised exercise interventions for improving the health status of incarcerated people.

  5. Differentiating intraprofessional attitudes toward paradigms in health care delivery among chiropractic factions: results from a randomly sampled survey

    Science.gov (United States)

    2014-01-01

    Background As health care has increased in complexity and health care teams have been offered as a solution, so too is there an increased need for stronger interprofessional collaboration. However, the intraprofessional factions that exist within every profession challenge interprofessional communication through contrary paradigms. As a contender in the conservative spinal health care market, factions within chiropractic that result in unorthodox practice behaviours may compromise interprofessional relations and that profession’s progress toward institutionalization. The purpose of this investigation was to quantify the professional stratification among Canadian chiropractic practitioners and evaluate the practice perceptions of those factions. Methods A stratified random sample of 740 Canadian chiropractors was surveyed to determine faction membership and how professional stratification could be related to views that could be considered unorthodox to current evidence-based care and guidelines. Stratification in practice behaviours is a stated concern of mainstream medicine when considering interprofessional referrals. Results Of 740 deliverable questionnaires, 503 were returned for a response rate of 68%. Less than 20% of chiropractors (18.8%) were aligned with a predefined unorthodox perspective of the conditions they treat. Prediction models suggest that unorthodox perceptions of health practice related to treatment choices, x-ray use and vaccinations were strongly associated with unorthodox group membership (χ² = 13.4, p = 0.0002). Conclusion Chiropractors holding unorthodox views may be identified based on response to specific beliefs that appear to align with unorthodox health practices. Despite continued concerns by mainstream medicine, only a minority of the profession has retained a perspective in contrast to current scientific paradigms. Understanding the profession’s factions is important to the anticipation of care delivery when considering

  6. Reduction of DNA contamination in RNA samples for reverse transcription-polymerase chain reaction using selective precipitation by compaction agents.

    Science.gov (United States)

    Añez-Lingerfelt, Mariaclara; Fox, George E; Willson, Richard C

    2009-01-01

    An important problem in measurement of messenger RNA (mRNA) levels by reverse transcription-polymerase chain reaction (RT-PCR) is DNA contamination, which can produce artifactually increased mRNA concentration. Current methods to eliminate contaminating DNA can compromise the integrity of the RNA, are time-consuming, and/or are hazardous. We present a rapid, nuclease-free, and cost-effective method of eliminating contaminating DNA in RNA samples using selective precipitation by compaction agents. Compaction agents are cationic molecules that bind to double-stranded nucleic acids, driven by electrostatic interactions and steric complementarity. The effectiveness and DNA selectivity of six compaction agents were investigated: trivalent spermidine, Triquat A, and Triquat 7; tetravalent spermine and Quatro-quat; and hexavalent Quatro-diquat. Effectiveness was measured initially by supernatant UV absorbance after precipitation of salmon sperm DNA. Effectiveness and selectivity were then investigated using differences in RT-PCR Ct values with synthetic mixtures of human genomic DNA and total RNA and with total RNA isolated from cells. With 500 μM spermidine or Triquat A, the supernatant DNA could not be detected up to 40 cycles of PCR (Ct ≥ 12.6), whereas the Ct for the mRNA was increased by only five cycles. Therefore, spermidine and Triquat A each show strong DNA selectivity and could be used to eliminate contaminating DNA in measurements of mRNA.

  7. Reduction of DNA Contamination in RNA Samples for RT-PCR using Selective Precipitation by Compaction Agents

    Science.gov (United States)

    Añez-Lingerfelt, Mariaclara; Fox, George E.; Willson, Richard C.

    2017-01-01

    An important problem in measurement of mRNA levels by RT-PCR is DNA contamination, which can produce artifactually increased mRNA concentration. Current methods to eliminate contaminating DNA can compromise the integrity of the RNA, are time-consuming, or are hazardous. We present a rapid, nuclease-free, and cost-effective method of eliminating contaminating DNA in RNA samples using selective precipitation by compaction agents. Compaction agents are cationic molecules that bind to double-stranded nucleic acids, driven by electrostatic interactions and steric complementarity. The effectiveness and DNA-selectivity of six compaction agents were investigated: trivalent spermidine, Triquat A, and Triquat 7; tetravalent spermine and Quatro-quat; and hexavalent Quatro-diquat. Effectiveness was measured initially by supernatant UV absorbance after precipitation of salmon sperm DNA. Effectiveness and selectivity were then investigated using differences in RT-PCR Ct values with synthetic mixtures of human genomic DNA and total RNA, and with total RNA isolated from cells. With 500 μM of spermidine or Triquat A, the supernatant DNA could not be detected up to 40 cycles of PCR (Ct ≥ 12.6), while the Ct for the mRNA was increased by only 5 cycles. Therefore, spermidine and Triquat A each show strong DNA-selectivity and could be used to eliminate contaminating DNA in measurements of mRNA. PMID:18831957

  8. Adjustment of pH of enrichment media might improve selective isolation of MRSA from pig samples

    DEFF Research Database (Denmark)

    Cavaco, Lina; Agersø, Yvonne; Mordhorst, Hanne

    2011-01-01

    Methicillin-resistant Staphylococcus aureus (MRSA) have emerged in livestock in several countries worldwide in recent years. MRSA may colonise in low numbers, which makes both epidemiological studies and the implementation of control programmes difficult. Methods for selective isolation of MRSA from ... animal samples have been developed. However, obtaining sufficient sensitivity has been a challenge. Staphylococcus aureus is normally found on the skin, surviving and growing under extreme conditions: dry environment with high salt and low pH. In the selective isolation so far used high salt ... pig swabs. Initially a total of seven strains, including two MRSA, two enterococci, two CNS, one Aerococcus viridans and one Proteus spp. strain, were tested for growth in Mueller Hinton II broth with pH ranging from 4 to 5.5 and salt addition of 4% to 7%. In the next step, these strains were tested

  9. Coordination of Conditional Poisson Samples

    Directory of Open Access Journals (Sweden)

    Grafström Anton

    2015-12-01

    Sample coordination seeks to maximize or to minimize the overlap of two or more samples. The former is known as positive coordination, and the latter as negative coordination. Positive coordination is mainly used for estimation purposes and to reduce data collection costs. Negative coordination is mainly performed to diminish the response burden of the sampled units. Poisson sampling design with permanent random numbers provides an optimum coordination degree of two or more samples. The size of a Poisson sample is, however, random. Conditional Poisson (CP) sampling is a modification of classical Poisson sampling that produces a fixed-size πps sample. We introduce two methods to coordinate Conditional Poisson samples over time or simultaneously. The first method uses permanent random numbers and the list-sequential implementation of CP sampling. The second method uses a CP sample in the first selection and provides an approximate one in the second selection because the prescribed inclusion probabilities are not respected exactly. The methods are evaluated using the size of the expected sample overlap, and are compared with their competitors using Monte Carlo simulation. The new methods provide a good coordination degree of two samples, close to the performance of Poisson sampling with permanent random numbers.
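
    The permanent-random-number mechanism underlying both coordination methods is easy to sketch. The Python code below implements classical Poisson sampling with PRNs (the fixed-size conditional variant would add a list-sequential or rejection step on top), with hypothetical inclusion probabilities:

    ```python
    import random

    # Permanent random numbers: each unit i keeps the same u_i across occasions,
    # which is what produces the coordination between successive samples.
    N = 10
    random.seed(1)  # seeded only so the sketch is reproducible
    prn = {i: random.random() for i in range(N)}

    def poisson_sample(inclusion_probs):
        # Classical Poisson sampling: unit i is selected iff u_i < pi_i.
        # The realized sample size is random; conditional Poisson designs fix it.
        return [i for i, pi in inclusion_probs.items() if prn[i] < pi]

    # Two occasions with hypothetical inclusion probabilities; reusing the PRNs
    # maximizes the expected overlap (positive coordination).
    s1 = poisson_sample({i: 0.3 for i in range(N)})
    s2 = poisson_sample({i: 0.4 for i in range(N)})
    print(sorted(set(s1) & set(s2)))  # units selected on both occasions
    ```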

  10. Using dust, gas and stellar mass-selected samples to probe dust sources and sinks in low-metallicity galaxies

    Science.gov (United States)

    De Vis, P.; Gomez, H. L.; Schofield, S. P.; Maddox, S.; Dunne, L.; Baes, M.; Cigan, P.; Clark, C. J. R.; Gomez, E. L.; Lara-López, M.; Owers, M.

    2017-10-01

    We combine samples of nearby galaxies with Herschel photometry selected on their dust, metal, H I and stellar mass content, and compare these to chemical evolution models in order to discriminate between different dust sources. In a companion paper, we used an H i-selected sample of nearby galaxies to reveal a subsample of very gas-rich (gas fraction >80 per cent) sources with dust masses significantly below predictions from simple chemical evolution models, and well below Md/M* and Md/Mgas scaling relations seen in dust and stellar-selected samples of local galaxies. We use a chemical evolution model to explain these dust-poor, but gas-rich, sources as well as the observed star formation rates (SFRs) and dust-to-gas ratios. We find that (i) a delayed star formation history is required to model the observed SFRs; (ii) inflows and outflows are required to model the observed metallicities at low gas fractions; (iii) a reduced contribution of dust from supernovae (SNe) is needed to explain the dust-poor sources with high gas fractions. These dust-poor, low stellar mass galaxies require a typical core-collapse SN to produce 0.01-0.16 M⊙ of dust. To match the observed dust masses at lower gas fractions, significant grain growth is required to counteract the reduced contribution from dust in SNe and dust destruction from SN shocks. These findings are statistically robust, though due to intrinsic scatter it is not always possible to find one single model that successfully describes all the data. We also show that the dust-to-metal ratio decreases towards lower metallicity.

  11. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and on the feature selection result. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
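
    The pipeline described above (train an RF, rank features by permutation importance, run a backward search over the ranking, retrain on the winning subset) can be sketched with scikit-learn. This is a simplified reading of the method, not a reproduction: the data are synthetic stand-ins for the 243 load/time features, and the backward search is the plain rather than the improved variant:

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the historical-load feature set in the abstract.
    X, y = make_regression(n_samples=500, n_features=20, n_informative=8, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Step 1: train the original RF and score permutation importance on held-out data.
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    pi = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)

    # Step 2: plain backward search: drop features from least to most important
    # and keep the subset with the best test score.
    order = np.argsort(pi.importances_mean)  # least important first
    best_score, best_subset = -np.inf, list(range(X.shape[1]))
    kept = list(range(X.shape[1]))
    for f in order[:-1]:
        kept = [k for k in kept if k != f]
        score = (RandomForestRegressor(n_estimators=200, random_state=0)
                 .fit(X_tr[:, kept], y_tr).score(X_te[:, kept], y_te))
        if score > best_score:
            best_score, best_subset = score, kept.copy()

    # Step 3: the final model is retrained on the optimal subset.
    final_rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr[:, best_subset], y_tr)
    print(len(best_subset), round(best_score, 3))
    ```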

  12. Correlations of Job Burnout and Selected Features of Work Environment in a Sample of Midwives who Assist at Medical Abortions

    OpenAIRE

    Banasiewicz, Jolanta; Rozenek, Hanna; Wójtowicz, Stanisław; Pawłowski, Witold

    2017-01-01

    Banasiewicz Jolanta, Rozenek Hanna, Wójtowicz Stanisław, Pawłowski Witold. Correlations of Job Burnout and Selected Features of Work Environment in a Sample of Midwives who Assist at Medical Abortions. Journal of Education, Health and Sport. 2017;7(7):270-288. eISSN 2391-8306. DOI http://dx.doi.org/10.5281/zenodo.827402 http://ojs.ukw.edu.pl/index.php/johs/article/view/4614

  13. Effectiveness of a selective, personality-targeted prevention program for adolescent alcohol use and misuse: a cluster randomized controlled trial.

    Science.gov (United States)

    Conrod, Patricia J; O'Leary-Barrett, Maeve; Newton, Nicola; Topper, Lauren; Castellanos-Ryan, Natalie; Mackie, Clare; Girard, Alain

    2013-03-01

    Selective school-based alcohol prevention programs targeting youth with personality risk factors for addiction and mental health problems have been found to reduce substance use and misuse in those with elevated personality profiles. To report 24-month outcomes of the Teacher-Delivered Personality-Targeted Interventions for Substance Misuse Trial (Adventure trial) in which school staff were trained to provide interventions to students with 1 of 4 high-risk (HR) profiles: anxiety sensitivity, hopelessness, impulsivity, and sensation seeking and to examine the indirect herd effects of this program on the broader low-risk (LR) population of students who were not selected for intervention. Cluster randomized controlled trial. Secondary schools in London, United Kingdom. A total of 1210 HR and 1433 LR students in the ninth grade (mean [SD] age, 13.7 [0.33] years). Schools were randomized to provide brief personality-targeted interventions to HR youth or treatment as usual (statutory drug education in class). Participants were assessed for drinking, binge drinking, and problem drinking before randomization and at 6-monthly intervals for 2 years. Two-part latent growth models indicated long-term effects of the intervention on drinking rates (β = -0.320, SE = 0.145, P = .03) and binge drinking rates (β = -0.400, SE = 0.179, P = .03) and growth in binge drinking (β = -0.716, SE = 0.274, P = .009) and problem drinking (β = -0.452, SE = 0.193, P = .02) for HR youth. The HR youth were also found to benefit from the interventions during the 24-month follow-up on drinking quantity (β = -0.098, SE = 0.047, P = .04), growth in drinking quantity (β = -0.176, SE = 0.073, P = .02), and growth in binge drinking frequency (β = -0.183, SE = 0.092, P = .047). Some herd effects in LR youth were observed, specifically on drinking rates (β = -0.259, SE = 0.132, P = .049) and growth of binge drinking (β = -0.244, SE = 0.073, P = .001), during the 24-month follow-up. Findings further

  14. Selective extraction and effective separation of galactosylsphingosine (psychosine) and glucosylsphingosine from other glycosphingolipids in pathological tissue samples.

    Science.gov (United States)

    Li, Yu-Teh; Li, Su-Chen; Buck, Wayne R; Haskins, Mark E; Wu, Sz-Wei; Khoo, Kay-Hooi; Sidransky, Ellen; Bunnell, Bruce A

    2011-09-01

    To facilitate the study of the chemical pathology of galactosylsphingosine (psychosine, GalSph) in Krabbe disease and glucosylsphingosine (GlcSph) in Gaucher disease, we have devised a facile method for the effective separation of these two glycosylsphingosines from other glycosphingolipids (GSLs) in Krabbe brain and Gaucher spleen samples. The procedure involves the use of acetone to selectively extract GalSph and GlcSph, respectively, from Krabbe brain and Gaucher spleen samples. Since acetone does not extract other GSLs except modest amounts of galactosylceramide, sulfatide, and glucosylceramide, the positively charged GalSph or GlcSph in the acetone extract can be readily separated from other GSLs by batchwise cation-exchange chromatography using a Waters Accell Plus CM Cartridge. GalSph or GlcSph enriched by this simple procedure can be readily analyzed by thin-layer chromatography or high-performance liquid chromatography.

  15. Pesticides, selected elements, and other chemicals in infant and toddler total diet samples, October 1980-March 1982

    Energy Technology Data Exchange (ETDEWEB)

    Gartrell, M.J.; Craun, J.C.; Podrebarac, D.S.; Gunderson, E.L.

    The US Food and Drug Administration (FDA) conducts Total Diet Studies to determine the dietary intake of selected pesticides, industrial chemicals, and elements (including radionuclides). These studies involve the retail purchase and analysis of foods representative of the diets of infants, toddlers, and adults. The individual food items are separated into a number of food groups, each of which is analyzed as a composite. This report summarizes the results for infant and toddler Total Diet samples collected in 13 cities between October 1980 and March 1982. The average concentration, range of concentrations, and calculated average daily intake of each chemical found are presented by food group. The average daily intakes of the chemicals are similar to those found in the several preceding years and generally are within acceptable limits. The results for samples collected during the same period that represent the adult diet are reported separately.
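
    The calculated average daily intakes reported by food group follow from a simple weighting: the chemical's concentration in each food-group composite multiplied by the amount of that group consumed daily in the model diet, summed over groups. A Python sketch with entirely hypothetical numbers:

    ```python
    # Hypothetical per-food-group data: residue concentration (mg/kg) in the
    # composite and the amount of that group eaten per day (kg/day) in the
    # model toddler diet.
    food_groups = {
        "whole milk":   {"conc_mg_per_kg": 0.002, "kg_per_day": 0.60},
        "fruit juices": {"conc_mg_per_kg": 0.010, "kg_per_day": 0.25},
        "potatoes":     {"conc_mg_per_kg": 0.030, "kg_per_day": 0.10},
    }

    # Average daily intake = sum over food groups of concentration x consumption.
    daily_intake_mg = sum(g["conc_mg_per_kg"] * g["kg_per_day"] for g in food_groups.values())
    print(f"estimated intake: {daily_intake_mg:.4f} mg/day")
    ```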

  16. Comparative Study of element composition of some honey samples ...

    African Journals Online (AJOL)

    The study was carried out at the Federal College of Forestry, Ibadan, using seven honey samples randomly selected within the Ibadan metropolis, labeled as: Sample A (Forestry Honey), Sample B (Pure Honey), Sample C (Mr. Honey), Sample D (Taraba Honey), Sample E (Sokoto Honey), Sample F (Saki Honey), and ...

  17. The Sloan Lens ACS Survey. I. A Large Spectroscopically Selected Sample of Massive Early-Type Lens Galaxies

    Science.gov (United States)

    Bolton, Adam S.; Burles, Scott; Koopmans, Leon V. E.; Treu, Tommaso; Moustakas, Leonidas A.

    2006-01-01

    The Sloan Lens ACS (SLACS) Survey is an efficient Hubble Space Telescope (HST) Snapshot imaging survey for new galaxy-scale strong gravitational lenses. The targeted lens candidates are selected spectroscopically from the Sloan Digital Sky Survey (SDSS) database of galaxy spectra for having multiple nebular emission lines at a redshift significantly higher than that of the SDSS target galaxy. The SLACS survey is optimized to detect bright early-type lens galaxies with faint lensed sources in order to increase the sample of known gravitational lenses suitable for detailed lensing, photometric, and dynamical modeling. In this paper, the first in a series on the current results of our HST Cycle 13 imaging survey, we present a catalog of 19 newly discovered gravitational lenses, along with nine other observed candidate systems that are either possible lenses, nonlenses, or nondetections. The survey efficiency is thus ≥68%. We also present Gemini 8 m and Magellan 6.5 m integral-field spectroscopic data for nine of the SLACS targets, which further support the lensing interpretation. A new method for the effective subtraction of foreground galaxy images to reveal faint background features is presented. We show that the SLACS lens galaxies have colors and ellipticities typical of the spectroscopic parent sample from which they are drawn (SDSS luminous red galaxies and quiescent MAIN sample galaxies), but are somewhat brighter and more centrally concentrated. Several explanations for the latter bias are suggested. The SLACS survey provides the first statistically significant and homogeneously selected sample of bright early-type lens galaxies, furnishing a powerful probe of the structure of early-type galaxies within the half-light radius. The high confirmation rate of lenses in the SLACS survey suggests consideration of spectroscopic lens discovery as an explicit science goal of future spectroscopic galaxy surveys.

  18. Zeta Sperm Selection Improves Pregnancy Rate and Alters Sex Ratio in Male Factor Infertility Patients: A Double-Blind, Randomized Clinical Trial

    Directory of Open Access Journals (Sweden)

    Nasr Esfahani Mohammad Hossein

    2016-07-01

    Background Selection of sperm for intra-cytoplasmic sperm injection (ICSI) is usually considered the ultimate technique to alleviate male-factor infertility. In routine ICSI, selection is based on morphology and viability, which does not necessarily preclude the chance injection of DNA-damaged or apoptotic sperm into the oocyte. Sperm with a high negative surface electrical charge, named the "Zeta potential", are mature and more likely to have intact chromatin. In addition, X-bearing spermatozoa carry more negative charge. Therefore, we aimed to compare the clinical outcomes of the Zeta procedure with routine sperm selection in infertile men who were candidates for ICSI. Materials and Methods From a total of 203 ICSI cycles studied, 101 cycles were allocated to the density gradient centrifugation (DGC)/Zeta group and the remaining 102 were included in the DGC group in this prospective study. Clinical outcomes were compared between the two groups. The ratios of X- and Y-bearing sperm were assessed by fluorescence in situ hybridization (FISH) and quantitative polymerase chain reaction (qPCR) methods in 17 independent semen