WorldWideScience

Sample records for randomly selected based

  1. Differential privacy-based evaporative cooling feature selection and classification with relief-F and random forests.

    Science.gov (United States)

    Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A

    2017-09-15

Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forests for privacy-preserving classification while also preventing overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of evaporative cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, the reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder. Code
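
    Relief-F, the feature scorer named above, admits a compact illustration. The Python sketch below is ours, not the authors' code: it scores each feature by how much it separates a sample from its nearest miss versus its nearest hit, using a single neighbor and unnormalized L1 differences (the published algorithm averages over k neighbors per class).

```python
import numpy as np

def relief_f(X, y, n_iter=100, rng=None):
    """Simplified Relief-F-style feature weighting (binary classes, one
    nearest hit/miss; the published algorithm averages over k neighbors)."""
    rng = np.random.default_rng(rng)
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)      # L1 distance to all samples
        dist[i] = np.inf                          # exclude the sample itself
        same, diff = y == y[i], y != y[i]
        hit = np.where(same)[0][np.argmin(dist[same])]   # nearest same-class
        miss = np.where(diff)[0][np.argmin(dist[diff])]  # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# Toy usage: the informative feature 0 should receive the largest weight.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 5))
X[:, 0] += y                     # inject class signal into feature 0
print(np.argmax(relief_f(X, y, rng=1)))
```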

  2. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

When planning a randomized clinical trial, careful consideration must be given to how participants are selected for the various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has an equal likelihood of being assigned to the treatment versus referent group. However, by chance an unequal number of individuals may be assigned to each arm of the study, decreasing the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Even so, the allocation process may remain predictable, for example, when the investigator is not blinded and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
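
    The scheme the paper describes is straightforward to implement. A minimal sketch, assuming two arms and block sizes that are multiples of the arm count:

```python
import random

def blocked_randomization(n_participants, arms=("T", "C"), block_choices=(2, 4, 6)):
    """Assign participants in blocks whose size is drawn at random,
    so the allocation sequence stays balanced but unpredictable."""
    assignments = []
    while len(assignments) < n_participants:
        size = random.choice(block_choices)        # random block size
        block = list(arms) * (size // len(arms))   # balanced within block
        random.shuffle(block)                      # random order within block
        assignments.extend(block)
    return assignments[:n_participants]

print(blocked_randomization(11))
```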

  3. Local randomization in neighbor selection improves PRM roadmap quality

    KAUST Repository

    McMahon, Troy; Jacobs, Sam; Boyd, Bryan; Tapia, Lydia; Amato, Nancy M.

    2012-01-01

Probabilistic Roadmap Methods (PRMs) are one of the most used classes of motion planning methods. These sampling-based methods generate robot configurations (nodes) and then connect them to form a graph (roadmap) containing representative feasible pathways. A key step in PRM roadmap construction involves identifying a set of candidate neighbors for each node. Traditionally, these candidates are chosen to be the k-closest nodes based on a given distance metric. In this paper, we propose a new neighbor selection policy called LocalRand(k, K') that first computes the K' closest nodes to a specified node and then selects k of those nodes at random. Intuitively, LocalRand attempts to benefit from random sampling while maintaining the higher levels of local planner success inherent to selecting more local neighbors. We provide a methodology for selecting the parameters k and K'. We perform an experimental comparison which shows that for both rigid and articulated robots, LocalRand results in roadmaps that are better connected than the traditional k-closest policy or a purely random neighbor selection policy. The cost required to achieve these results is shown to be comparable to k-closest. © 2012 IEEE.
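
    A hedged sketch of the LocalRand(k, K') policy as described in the abstract, with Euclidean distance standing in for the PRM's distance metric (function and parameter names are ours):

```python
import random

import numpy as np

def local_rand_neighbors(nodes, q, k, K_prime, seed=None):
    """LocalRand(k, K'): find the K' closest nodes to q under the distance
    metric (Euclidean here), then pick k of them uniformly at random."""
    rng = random.Random(seed)
    d = np.linalg.norm(np.asarray(nodes) - np.asarray(q), axis=1)
    closest = np.argsort(d)[:K_prime]      # the K' nearest candidates
    return rng.sample(list(closest), k)    # random k of those neighbors

nodes = np.random.default_rng(0).random((100, 2))
# A real PRM would exclude the query node itself from its own candidates.
print(local_rand_neighbors(nodes, nodes[0], k=3, K_prime=10, seed=1))
```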

  4. Local randomization in neighbor selection improves PRM roadmap quality

    KAUST Repository

    McMahon, Troy

    2012-10-01

Probabilistic Roadmap Methods (PRMs) are one of the most used classes of motion planning methods. These sampling-based methods generate robot configurations (nodes) and then connect them to form a graph (roadmap) containing representative feasible pathways. A key step in PRM roadmap construction involves identifying a set of candidate neighbors for each node. Traditionally, these candidates are chosen to be the k-closest nodes based on a given distance metric. In this paper, we propose a new neighbor selection policy called LocalRand(k, K') that first computes the K' closest nodes to a specified node and then selects k of those nodes at random. Intuitively, LocalRand attempts to benefit from random sampling while maintaining the higher levels of local planner success inherent to selecting more local neighbors. We provide a methodology for selecting the parameters k and K'. We perform an experimental comparison which shows that for both rigid and articulated robots, LocalRand results in roadmaps that are better connected than the traditional k-closest policy or a purely random neighbor selection policy. The cost required to achieve these results is shown to be comparable to k-closest. © 2012 IEEE.

  5. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide, and PM2.5 content in the air. The results indicated that artificial neural networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
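
    The pipeline described above (random forest importances for feature selection, feeding a neural network regressor) can be sketched with scikit-learn. The synthetic data and the top-5 cutoff below are illustrative assumptions, not the study's settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the predictors and a pollutant concentration target.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = 2 * X[:, 0] + X[:, 3] - X[:, 7] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
keep = np.argsort(rf.feature_importances_)[-5:]    # top-5 features by RF

mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X_tr[:, keep], y_tr)
print("R^2 on held-out data:", mlp.score(X_te[:, keep], y_te))
```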

  6. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide, and PM2.5 content in the air. The results indicated that artificial neural networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  7. 47 CFR 1.1602 - Designation for random selection.

    Science.gov (United States)

    2010-10-01

Title 47 Telecommunication, § 1.1602: Designation for random selection (2010-10-01). FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures...

  8. 47 CFR 1.1603 - Conduct of random selection.

    Science.gov (United States)

    2010-10-01

Title 47 Telecommunication, § 1.1603: Conduct of random selection (2010-10-01). FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures. The...

  9. Minimization over randomly selected lines

    Directory of Open Access Journals (Sweden)

    Ismet Sahin

    2013-07-01

This paper presents a population-based evolutionary optimization method for minimizing a given cost function. The mutation operator of this method selects randomly oriented lines in the cost function domain, constructs quadratic functions interpolating the cost function at three different points over each line, and uses the extrema of the quadratics as mutated points. The crossover operator modifies each mutated point based on components of two points in the population, instead of one point as is usually done in other evolutionary algorithms. The stopping criterion of this method depends on the number of almost degenerate quadratics. We demonstrate that the proposed method with these mutation and crossover operations achieves faster and more robust convergence than the well-known Differential Evolution and Particle Swarm algorithms.
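
    The mutation operator described above is easy to reproduce in miniature: sample the cost along a random line at three points, fit a quadratic, and move to its extremum. A sketch under those assumptions (the step size and iteration count are ours):

```python
import numpy as np

def quadratic_line_mutation(f, x, rng, step=1.0):
    """Mutate x by minimizing a quadratic fit of f along a random line:
    sample f at three points on the line, interpolate, return the vertex."""
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)                 # random unit direction
    t = np.array([-step, 0.0, step])
    vals = [f(x + ti * d) for ti in t]
    a, b, _ = np.polyfit(t, vals, 2)       # fit f(x + t d) ~ a t^2 + b t + c
    if a <= 0:                             # no interior minimum: keep x
        return x
    return x + (-b / (2 * a)) * d          # vertex of the quadratic

rng = np.random.default_rng(0)
f = lambda v: ((v - 3.0) ** 2).sum()       # toy cost with minimum at (3, 3)
x = np.zeros(2)
for _ in range(50):
    x = quadratic_line_mutation(f, x, rng)
print(x)                                    # approaches [3, 3]
```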

  10. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the result of feature selection. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
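
    Permutation importance itself is simple to compute: shuffle one feature's column and measure how much the test error grows. A minimal sketch using mean squared error and a scikit-learn random forest (the paper's 243-feature setup is replaced by toy data):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

def permutation_importance(model, X, y, rng):
    """PI of a feature = increase in test error after shuffling that
    feature's column (breaking its relation to the target)."""
    base = mean_squared_error(y, model.predict(X))
    pi = np.empty(X.shape[1])
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])   # permute one feature
        pi[j] = mean_squared_error(y, model.predict(Xp)) - base
    return pi

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))
y = 3 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=400)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:300], y[:300])
# The two informative features (0 and 1) should rank first.
print(np.argsort(permutation_importance(rf, X[300:], y[300:], rng))[::-1][:2])
```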

  11. Discriminative Projection Selection Based Face Image Hashing

    Science.gov (United States)

    Karabat, Cagatay; Erdogan, Hakan

Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection-based methods.

  12. Materials selection for oxide-based resistive random access memories

    International Nuclear Information System (INIS)

    Guo, Yuzheng; Robertson, John

    2014-01-01

The energies of atomic processes in resistive random access memories (RRAMs) are calculated for four typical oxides, HfO₂, TiO₂, Ta₂O₅, and Al₂O₃, to define a materials selection process. O vacancies have the lowest defect formation energy in the O-poor limit and dominate the processes. A band diagram defines the operating Fermi energy and O chemical potential range. It is shown how the scavenger metal can be used to vary the O vacancy formation energy, via controlling the O chemical potential, and the mean Fermi energy. The high endurance of Ta₂O₅ RRAM is related to its more stable amorphous phase and the adaptive lattice rearrangements of its O vacancy.

  13. Materials selection for oxide-based resistive random access memories

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Yuzheng; Robertson, John [Engineering Department, Cambridge University, Cambridge CB2 1PZ (United Kingdom)

    2014-12-01

The energies of atomic processes in resistive random access memories (RRAMs) are calculated for four typical oxides, HfO₂, TiO₂, Ta₂O₅, and Al₂O₃, to define a materials selection process. O vacancies have the lowest defect formation energy in the O-poor limit and dominate the processes. A band diagram defines the operating Fermi energy and O chemical potential range. It is shown how the scavenger metal can be used to vary the O vacancy formation energy, via controlling the O chemical potential, and the mean Fermi energy. The high endurance of Ta₂O₅ RRAM is related to its more stable amorphous phase and the adaptive lattice rearrangements of its O vacancy.

  14. Selectivity and sparseness in randomly connected balanced networks.

    Directory of Open Access Journals (Sweden)

    Cengiz Pehlevan

Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the mechanism behind this and evaluate the effects of model parameters on population sparseness and stimulus selectivity. Network response to mixtures of stimuli is investigated. It is shown that a balanced state with unselective inhibition can be achieved with densely connected input to the inhibitory population. Balanced networks exhibit the "paradoxical" effect: an increase in excitatory drive to inhibition leads to a decreased inhibitory population firing rate. We compare and contrast the selectivity and sparseness generated by the balanced network with those of randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.

  15. Applications of random forest feature selection for fine-scale genetic population assignment.

    Science.gov (United States)

    Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G

    2018-02-01

Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with F_ST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than F_ST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using F_ST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
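
    A rough sketch of the panel-size search described above, with a random forest importance ranking and cross-validated self-assignment accuracy. Note that, unlike the study, this toy version ranks and evaluates on the same data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def min_panel_size(X, pops, sizes=(10, 25, 50, 100), target=0.90):
    """Rank SNPs by RF importance, then return the smallest panel whose
    cross-validated self-assignment accuracy reaches the target.
    (Ranking and evaluation share the data here; the study used proper
    training/holdout splits to avoid this leakage.)"""
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, pops)
    order = np.argsort(rf.feature_importances_)[::-1]   # best SNPs first
    acc = 0.0
    for s in sizes:
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        acc = cross_val_score(clf, X[:, order[:s]], pops, cv=5).mean()
        if acc >= target:
            return s, acc
    return None, acc

rng = np.random.default_rng(0)
pops = rng.integers(0, 3, 300)                      # three populations
X = rng.binomial(2, 0.5, (300, 200)).astype(float)  # genotypes 0/1/2
X[:, :20] += pops[:, None] * 0.8                    # 20 informative "SNPs"
print(min_panel_size(X, pops))
```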

  16. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    International Nuclear Information System (INIS)

    Yu, Zhiyong

    2013-01-01

This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  17. Continuous-Time Mean-Variance Portfolio Selection with Random Horizon

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Zhiyong, E-mail: yuzhiyong@sdu.edu.cn [Shandong University, School of Mathematics (China)

    2013-12-15

    This paper examines the continuous-time mean-variance optimal portfolio selection problem with random market parameters and random time horizon. Treating this problem as a linearly constrained stochastic linear-quadratic optimal control problem, I explicitly derive the efficient portfolios and efficient frontier in closed forms based on the solutions of two backward stochastic differential equations. Some related issues such as a minimum variance portfolio and a mutual fund theorem are also addressed. All the results are markedly different from those in the problem with deterministic exit time. A key part of my analysis involves proving the global solvability of a stochastic Riccati equation, which is interesting in its own right.

  18. Testing, Selection, and Implementation of Random Number Generators

    National Research Council Canada - National Science Library

    Collins, Joseph C

    2008-01-01

    An exhaustive evaluation of state-of-the-art random number generators with several well-known suites of tests provides the basis for selection of suitable random number generators for use in stochastic simulations...

  19. DNA-based random number generation in security circuitry.

    Science.gov (United States)

    Gearheart, Christy M; Arazi, Benjamin; Rouchka, Eric C

    2010-06-01

DNA-based circuit design is an area of research in which traditional silicon-based technologies are replaced by naturally occurring phenomena taken from biochemistry and molecular biology. This research focuses on further developing DNA-based methodologies to mimic digital data manipulation. While exhibiting fundamental principles, this work was done in conjunction with the vision that DNA-based circuitry, when the technology matures, will form the basis for a tamper-proof security module, revolutionizing the meaning and concept of tamper-proofing and possibly preventing it altogether based on accurate scientific observations. A paramount part of such a solution would be self-generation of random numbers. A novel prototype schema employs solid-phase synthesis of oligonucleotides for random construction of DNA sequences; temporary storage and retrieval are achieved through plasmid vectors. A discussion of how to evaluate sequence randomness is included, as well as how these techniques are applied to a simulation of the random number generation circuitry. Simulation results show generated sequences successfully pass three selected NIST random number generation tests specified for security applications.

  20. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    Science.gov (United States)

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been successfully used in genome-wide association studies (GWAS) for identification of genetic variants that have relatively big effects in some common, complex diseases. Among them, the most successful one is Random Forests (RF). Despite performing well in terms of prediction accuracy in some data sets of moderate size, RF still struggles in GWAS when selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method in random forests, named ts-RF, for SNP subspace selection for GWAS. The method first applies p-value assessment to find a cut-off point that separates informative and irrelevant SNPs into two groups. The informative SNPs group is further divided into two sub-groups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building trees for the forest, only those SNPs from the two sub-groups are taken into account. The feature subspaces always contain highly informative SNPs when used to split a node at a tree. This approach enables one to generate more accurate trees with a lower prediction error, meanwhile possibly avoiding overfitting. It allows one to detect interactions of multiple SNPs with the diseases, and to reduce the dimensionality and the amount of genome-wide association data needed for learning the RF model. Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed

  1. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

Background: Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have ‘hidden’ advantages if the ‘common good’ produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Results: Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of “selfish” alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists' advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions: The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models where the carrying capacity depends on the number of altruists.

  2. Performance Evaluation of User Selection Protocols in Random Networks with Energy Harvesting and Hardware Impairments

    Directory of Open Access Journals (Sweden)

    Tan Nhat Nguyen

    2016-01-01

In this paper, we evaluate the performance of various user selection protocols under the impact of hardware impairments. In the considered protocols, a Base Station (BS) selects one of the available Users (USs) to serve, while the remaining USs harvest energy from the Radio Frequency (RF) signal transmitted by the BS. We assume that all of the USs appear randomly around the BS. In the Random Selection Protocol (RAN), the BS randomly selects a US for data transmission. In the second proposed protocol, named the Minimum Distance Protocol (MIND), the US nearest to the BS is chosen. In the Optimal Selection Protocol (OPT), the US providing the highest channel gain between itself and the BS is served. For performance evaluation, we derive exact and asymptotic closed-form expressions for the average Outage Probability (OP) over Rayleigh fading channels. We also consider the average harvested energy per US. Finally, Monte Carlo simulations are performed to verify the theoretical results.
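
    The three selection rules are one line each. A sketch with the BS placed at the origin and toy real-valued channel gains (the paper's Rayleigh fading and energy-harvesting details are omitted):

```python
import numpy as np

def select_user(positions, gains, protocol, rng):
    """RAN, MIND, and OPT selection rules from the record above: random
    user, user nearest the BS (at the origin), or user with best gain."""
    if protocol == "RAN":
        return int(rng.integers(len(gains)))
    if protocol == "MIND":
        return int(np.argmin(np.linalg.norm(positions, axis=1)))
    if protocol == "OPT":
        return int(np.argmax(gains))
    raise ValueError(protocol)

rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(5, 2))       # users scattered around the BS
gains = np.abs(rng.normal(size=5)) ** 2     # toy channel gains
print([select_user(pos, gains, p, rng) for p in ("RAN", "MIND", "OPT")])
```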

  3. Application of random effects to the study of resource selection by animals.

    Science.gov (United States)

    Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L

    2006-07-01

    1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions

  4. [Intel random number generator-based true random number generator].

    Science.gov (United States)

    Huang, Feng; Shen, Hong

    2004-09-01

To establish a true random number generator on the basis of certain Intel chips. The random numbers were acquired by programming in Microsoft Visual C++ 6.0 via register reading from the random number generator (RNG) unit of an Intel 815 chipset-based computer with the Intel Security Driver (ISD). We tested the generator with 500 random numbers using the NIST FIPS 140-1 and χ² R-squared tests, and the results showed that the random numbers it generated satisfied the requirements of independence and uniform distribution. We also statistically compared the random numbers generated by the Intel RNG-based true random number generator with those from a random number table, using the same amount of 7500 random numbers in the same value domain, which showed that the SD, SE and CV of the Intel RNG-based random number generator were less than those of the random number table. A u-test of the two CVs revealed no significant difference between the two methods. The Intel RNG-based random number generator can produce high-quality random numbers with good independence and uniform distribution, and solves some of the problems with random number tables in the acquisition of random numbers.

  5. Comparison of confirmed inactive and randomly selected compounds as negative training examples in support vector machine-based virtual screening.

    Science.gov (United States)

    Heikamp, Kathrin; Bajorath, Jürgen

    2013-07-22

The choice of negative training data for machine learning is a little-explored issue in chemoinformatics. In this study, the influence of alternative sets of negative training data and different background databases on support vector machine (SVM) modeling and virtual screening has been investigated. Target-directed SVM models have been derived on the basis of differently composed training sets containing confirmed inactive molecules or randomly selected database compounds as negative training instances. These models were then applied to search background databases consisting of biological screening data or randomly assembled compounds for available hits. Negative training data were found to systematically influence compound recall in virtual screening. In addition, different background databases had a strong influence on the search results. Our findings also indicated that typical benchmark settings lead to an overestimation of SVM-based virtual screening performance compared to search conditions that are more relevant for practical applications.

  6. Field-based random sampling without a sampling frame: control selection for a case-control study in rural Africa.

    Science.gov (United States)

    Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E

    2001-01-01

    Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.

  7. Brain Tumor Segmentation Based on Random Forest

    Directory of Open Access Journals (Sweden)

    László Lefkovits

    2016-09-01

In this article we present a discriminative model for tumor detection from multimodal MR images. The main part of the model is built around the random forest (RF) classifier. We created an optimization algorithm able to select the important features, reducing the dimensionality of the data. This method is also used to find the training parameters used in the learning phase. The algorithm is based on random feature properties for evaluating the importance of each variable, the evolution of learning errors, and the proximities between instances. The detection performance obtained has been compared with that of the most recent systems, showing similar results.

  8. Fast selection of miRNA candidates based on large-scale pre-computed MFE sets of randomized sequences.

    Science.gov (United States)

    Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter

    2014-01-13

Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow genome-wide screening. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation, allowing on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the parameters of choice to be included in the selection of potential miRNA candidates for experimental verification.
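
    The screening criterion described above reduces to a z-score against a composition-specific normal distribution. A sketch with a hypothetical precomputed table keyed by GC fraction (the paper tabulates full nucleotide compositions):

```python
import numpy as np
from scipy.stats import norm

# Hypothetical precomputed table: mean/sd of MFE for random sequences,
# keyed by GC fraction. All values here are invented for illustration.
gc_grid   = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
mean_grid = np.array([-18.0, -22.0, -27.0, -33.0, -40.0])
sd_grid   = np.array([5.0, 5.5, 6.0, 6.5, 7.0])

def mfe_p_value(mfe, gc):
    """Interpolate the random-sequence MFE distribution for this
    composition, then return the left-tail (lower-MFE) probability."""
    mu = np.interp(gc, gc_grid, mean_grid)
    sd = np.interp(gc, gc_grid, sd_grid)
    return norm.cdf(mfe, loc=mu, scale=sd)

# A small p-value suggests miRNA-like fold stability for the composition.
print(mfe_p_value(-42.0, gc=0.55))
```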

  9. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

Modeling the probability of use of land units characterized by discrete and continuous measures, we present a Bayesian random-effects model to assess resource selection. This model provides simultaneous estimation of both individual- and population-level selection. Deviance information criterion (DIC), a Bayesian alternative to AIC that is sample-size specific, is used for model selection. Aerial radiolocation data from 76 adult female caribou (Rangifer tarandus) and calf pairs during 1 year on an Arctic coastal plain calving ground were used to illustrate models and assess population-level selection of landscape attributes, as well as individual heterogeneity of selection. Landscape attributes included elevation, NDVI (a measure of forage greenness), and land cover-type classification. Results from the first of a 2-stage model-selection procedure indicated that there is substantial heterogeneity among cow-calf pairs with respect to selection of the landscape attributes. In the second stage, selection of models with heterogeneity included indicated that at the population level, NDVI and land cover class were significant attributes for selection of different landscapes by pairs on the calving ground. Population-level selection coefficients indicate that the pairs generally select landscapes with higher levels of NDVI, but the relationship is quadratic. The highest rate of selection occurs at values of NDVI less than the maximum observed. Results for land cover-class selection coefficients indicate that wet sedge, moist sedge, herbaceous tussock tundra, and shrub tussock tundra are selected at approximately the same rate, while alpine and sparsely vegetated landscapes are selected at a lower rate. Furthermore, the variability in selection by individual caribou for moist sedge and sparsely vegetated landscapes is large relative to the variability in selection of other land cover types. The example analysis illustrates that, while sometimes computationally intense, a

  10. Organic Ferroelectric-Based 1T1T Random Access Memory Cell Employing a Common Dielectric Layer Overcoming the Half-Selection Problem.

    Science.gov (United States)

    Zhao, Qiang; Wang, Hanlin; Ni, Zhenjie; Liu, Jie; Zhen, Yonggang; Zhang, Xiaotao; Jiang, Lang; Li, Rongjin; Dong, Huanli; Hu, Wenping

    2017-09-01

Organic electronics based on poly(vinylidenefluoride/trifluoroethylene) (P(VDF-TrFE)) dielectric is facing great challenges in flexible circuits. As one indispensable part of integrated circuits, there is an urgent demand for low-cost and easy-fabrication nonvolatile memory devices. A breakthrough is made on a novel ferroelectric random access memory cell (1T1T FeRAM cell) consisting of one selection transistor and one ferroelectric memory transistor in order to overcome the half-selection problem. Unlike complicated manufacturing using multiple dielectrics, this system simplifies 1T1T FeRAM cell fabrication using one common dielectric. To achieve this goal, a strategy for semiconductor/insulator (S/I) interface modulation is put forward and applied to nonhysteretic selection transistors with high performances for driving or addressing purposes. As a result, a high hole mobility of 3.81 cm² V⁻¹ s⁻¹ (average) for 2,6-diphenylanthracene (DPA) and an electron mobility of 0.124 cm² V⁻¹ s⁻¹ (average) for N,N'-1H,1H-perfluorobutyl dicyanoperylenecarboxydiimide (PDI-FCN₂) are obtained in selection transistors. In this work, we demonstrate this technology's potential for organic ferroelectric-based pixelated memory module fabrication. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. High-Tg Polynorbornene-Based Block and Random Copolymers for Butanol Pervaporation Membranes

    Science.gov (United States)

    Register, Richard A.; Kim, Dong-Gyun; Takigawa, Tamami; Kashino, Tomomasa; Burtovyy, Oleksandr; Bell, Andrew

    Vinyl addition polymers of substituted norbornene (NB) monomers possess desirably high glass transition temperatures (Tg); however, until very recently, the lack of an applicable living polymerization chemistry has precluded the synthesis of such polymers with controlled architecture, or copolymers with controlled sequence distribution. We have recently synthesized block and random copolymers of NB monomers bearing hydroxyhexafluoroisopropyl and n-butyl substituents (HFANB and BuNB) via living vinyl addition polymerization with Pd-based catalysts. Both series of polymers were cast into the selective skin layers of thin film composite (TFC) membranes, and these organophilic membranes investigated for the isolation of n-butanol from dilute aqueous solution (model fermentation broth) via pervaporation. The block copolymers show well-defined microphase-separated morphologies, both in bulk and as the selective skin layers on TFC membranes, while the random copolymers are homogeneous. Both block and random vinyl addition copolymers are effective as n-butanol pervaporation membranes, with the block copolymers showing a better flux-selectivity balance. While polyHFANB has much higher permeability and n-butanol selectivity than polyBuNB, incorporating BuNB units into the polymer (in either a block or random sequence) limits the swelling of the polyHFANB and thereby improves the n-butanol pervaporation selectivity.

  12. Selection of examples in case-based computer-aided decision systems

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D

    2008-01-01

    Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system's diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system's performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2-4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system.
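
    Of the three techniques compared, random mutation hill climbing is the simplest to sketch: keep a binary inclusion mask over the case base, flip one random bit, and retain the flip only if a utility function improves. The budget, toy 1-NN utility, and names below are ours, not the paper's:

```python
import numpy as np

def rmhc_select(n_cases, utility, budget, iters=1000, seed=None):
    """Random mutation hill climbing over a binary inclusion mask:
    flip one random bit, keep the flip only if utility improves and
    the stored case base stays within the size budget."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_cases, bool)
    mask[rng.choice(n_cases, budget, replace=False)] = True
    best = utility(mask)
    for _ in range(iters):
        j = rng.integers(n_cases)
        mask[j] = ~mask[j]
        if mask.sum() <= budget and (u := utility(mask)) > best:
            best = u                        # keep the improving flip
        else:
            mask[j] = ~mask[j]              # revert the flip
    return mask, best

# Toy utility: 1-NN accuracy on a validation split using kept cases only.
rng = np.random.default_rng(0)
Xc, yc = rng.normal(size=(200, 2)), rng.integers(0, 2, 200)
Xc += yc[:, None]                           # make the classes separable-ish
Xv, yv = Xc[:50], yc[:50]

def utility(mask):
    if not mask.any():
        return -1.0
    d = np.linalg.norm(Xv[:, None] - Xc[mask][None], axis=2)
    return (yc[mask][d.argmin(axis=1)] == yv).mean()

print(rmhc_select(200, utility, budget=20, seed=1)[1])
```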

  13. Primitive polynomials selection method for pseudo-random number generator

    Science.gov (United States)

    Anikin, I. V.; Alnajjar, Kh

    2018-01-01

In this paper we suggest a method for selecting primitive polynomials of a special type. Such polynomials can be used efficiently as characteristic polynomials for linear feedback shift registers in pseudo-random number generators. The proposed method consists of two basic steps: finding minimum-cost irreducible polynomials of the desired degree and applying primitivity tests to obtain the primitive ones. Finally, two primitive polynomials found by the proposed method were used in a pseudo-random number generator based on fuzzy logic (FRNG), which had been suggested previously by the authors. The sequences generated by the new version of FRNG have low correlation magnitude and high linear complexity, consume less power, are more balanced, and have better statistical properties.
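
    A common primitivity test, consistent with the two-step procedure described above: a degree-n polynomial over GF(2) with nonzero constant term is primitive exactly when the multiplicative order of x in GF(2)[x]/(p(x)) is 2^n - 1. A compact bitmask sketch (not the authors' implementation):

```python
def order_of_x(poly, degree):
    """Multiplicative order of x in GF(2)[x]/(poly). `poly` is a bitmask
    (bit i = coefficient of x^i) with nonzero constant term; poly is
    primitive iff this order equals 2**degree - 1."""
    elem, order = 2, 1            # elem = x
    while elem != 1:
        elem <<= 1                # multiply by x
        if elem >> degree & 1:    # reduce modulo poly when degree overflows
            elem ^= poly
        order += 1
    return order

# x^4 + x + 1 (0b10011) is primitive: order(x) = 15 = 2^4 - 1.
print(order_of_x(0b10011, 4))     # -> 15
# x^4 + x^3 + x^2 + x + 1 (0b11111) is irreducible but NOT primitive.
print(order_of_x(0b11111, 4))     # -> 5
```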

  14. Prediction of plant promoters based on hexamers and random triplet pair analysis

    Directory of Open Access Journals (Sweden)

    Noman Nasimul

    2011-06-01

Background: With an increasing number of plant genome sequences, it has become important to develop a robust computational method for detecting plant promoters. Although a wide variety of programs are currently available, their prediction accuracy still requires further improvement. The limitations of these methods can be addressed by selecting appropriate features for distinguishing promoters and non-promoters. Methods: In this study, we proposed two feature selection approaches based on hexamer sequences: the Frequency Distribution Analyzed Feature Selection Algorithm (FDAFSA) and the Random Triplet Pair Feature Selecting Genetic Algorithm (RTPFSGA). In FDAFSA, adjacent triplet-pairs (hexamer sequences) were selected based on the difference in the frequency of hexamers between promoters and non-promoters. In RTPFSGA, random triplet-pairs (RTPs) were selected by exploiting a genetic algorithm that distinguishes frequencies of non-adjacent triplet pairs between promoters and non-promoters. Then, a support vector machine (SVM), a nonlinear machine-learning algorithm, was used to classify promoters and non-promoters by combining these two feature selection approaches. We refer to this novel algorithm as PromoBot. Results: Promoter sequences were collected from the PlantProm database. Non-promoter sequences were collected from plant mRNA, rRNA, and tRNA of PlantGDB and plant miRNA of miRBase. Then, in order to validate the proposed algorithm, we applied a 5-fold cross validation test. Training data sets were used to select features based on FDAFSA and RTPFSGA, and these features were used to train the SVM. We achieved 89% sensitivity and 86% specificity. Conclusions: We compared our PromoBot algorithm to five other algorithms and found that the sensitivity and specificity of PromoBot performed as well as (or even better than) the algorithms tested. These results show that the two proposed feature selection methods based on hexamer frequencies
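
    The FDAFSA idea (rank hexamers, i.e., adjacent triplet pairs, by their frequency difference between promoters and non-promoters) can be sketched directly; the function names and top-k cutoff are ours:

```python
from collections import Counter

def hexamer_freqs(seqs):
    """Frequency of each hexamer (adjacent triplet pair) across sequences."""
    counts, total = Counter(), 0
    for s in seqs:
        for i in range(len(s) - 5):
            counts[s[i:i + 6]] += 1
            total += 1
    return {h: n / total for h, n in counts.items()}

def select_hexamers(promoters, background, k=100):
    """FDAFSA-style sketch: rank hexamers by the absolute difference in
    frequency between promoter and non-promoter sets; keep the top k."""
    fp, fb = hexamer_freqs(promoters), hexamer_freqs(background)
    keys = set(fp) | set(fb)
    return sorted(keys, key=lambda h: abs(fp.get(h, 0) - fb.get(h, 0)),
                  reverse=True)[:k]

proms = ["TATAATGGCCTTAA", "GGTATAATCCGGTA"]
backs = ["GCGCGCGCATATAT", "CCGGCCGGAACTGA"]
print(select_hexamers(proms, backs, k=3))
```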

  15. 40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.

    Science.gov (United States)

    2010-07-01

Title 40 Protection of Environment, § 761.308: Sample selection by random number generation on any two-dimensional square grid (2010-07-01), under the procedures of § 761.79(b)(3). ... For each area created in accordance with paragraph (a) of this section, select two random numbers: one each for...
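
    The regulation's sampling rule amounts to drawing one random coordinate per axis. A trivial sketch of that idea (the grid dimensions are illustrative):

```python
import random

def select_grid_point(n_cols, n_rows, rng=random):
    """Pick one cell of a two-dimensional square grid by drawing two
    random numbers, one per axis, as the rule above describes."""
    return rng.randrange(n_cols), rng.randrange(n_rows)

random.seed(0)
print(select_grid_point(10, 10))   # (column, row) of the sampling point
```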

  16. Optimization of the Dutch Matrix Test by Random Selection of Sentences From a Preselected Subset

    Directory of Open Access Journals (Sweden)

    Rolph Houben

    2015-04-01

Matrix tests are available for speech recognition testing in many languages. For an accurate measurement, a steep psychometric function of the speech materials is required. For existing tests, it would be beneficial if it were possible to further optimize the available materials by increasing the function's steepness. The objective is to show whether the steepness of the psychometric function of an existing matrix test can be increased by selecting a homogeneous subset of recordings with the steepest sentence-based psychometric functions. We took data from a previous multicenter evaluation of the Dutch matrix test (45 normal-hearing listeners). Based on half of the data set, we first selected the sentences (140 out of 311) with a similar speech reception threshold and with the steepest psychometric functions (≥9.7%/dB). Subsequently, the steepness of the psychometric function for this selection was calculated from the remaining (unused) second half of the data set. The calculation showed that the slope increased from 10.2%/dB to 13.7%/dB. The resulting subset did not allow the construction of enough balanced test lists. Therefore, the measurement procedure was changed to randomly select the sentences during testing. Random selection may interfere with a representative occurrence of phonemes. However, in our material, the median phonemic occurrence remained close to that of the original test. This finding indicates that phonemic occurrence is not a critical factor. The work highlights the possibility that existing speech tests might be improved by selecting sentences with a steep psychometric function.
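
    The modified procedure (restrict to sentences with steep slopes, then draw test lists at random) is a two-liner. A sketch with invented slope data; 9.7%/dB is the cutoff reported above:

```python
import random

def draw_test_list(sentences, slopes, min_slope=9.7, n=20, rng=random):
    """Keep only sentences whose psychometric-function slope (%/dB)
    reaches the cutoff, then draw the test list at random from that
    homogeneous subset, as in the modified measurement procedure."""
    steep = [s for s, sl in zip(sentences, slopes) if sl >= min_slope]
    return rng.sample(steep, n)

sentences = [f"sentence-{i}" for i in range(311)]
slopes = [8 + (i % 50) * 0.2 for i in range(311)]   # fake slope values
random.seed(1)
print(draw_test_list(sentences, slopes, n=5))
```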

  17. Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.

    Science.gov (United States)

    Ma, Yunbei; Zhou, Xiao-Hua

    2017-02-01

    For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curve over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.

  18. Interference-aware random beam selection for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.

    2012-09-01

Spectrum sharing systems have been introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this paper, we develop interference-aware random beam selection schemes that provide enhanced throughput for the secondary link under the condition that the interference observed at the primary link is within a predetermined acceptable value. For a secondary transmitter equipped with multiple antennas, our schemes select a random beam, among a set of power-optimized orthogonal random beams, that maximizes the capacity of the secondary link while satisfying the interference constraint at the primary receiver for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the signal-to-noise and interference ratio (SINR) statistics as well as the capacity of the secondary link. Finally, we present numerical results that study the effect of system parameters including the number of beams and the maximum transmission power on the capacity of the secondary link attained using the proposed schemes. © 2012 IEEE.
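
    A Monte Carlo sketch of the selection rule described above: generate orthonormal random beams, discard those violating the interference constraint at the primary receiver, and keep the beam with the best secondary-link quality. Real-valued toy channels and a fixed power stand in for the paper's fading model:

```python
import numpy as np

def pick_beam(h_s, h_p, beams, power, i_max, noise=1.0):
    """Among power-optimized random orthogonal beams, discard any whose
    interference at the primary receiver exceeds i_max, then pick the
    one maximizing the secondary link's SNR."""
    best, best_snr = None, -np.inf
    for w in beams:
        if power * abs(h_p @ w) ** 2 > i_max:    # interference constraint
            continue
        snr = power * abs(h_s @ w) ** 2 / noise  # secondary link quality
        if snr > best_snr:
            best, best_snr = w, snr
    return best, best_snr

rng = np.random.default_rng(0)
nt = 4                                            # transmit antennas
Q, _ = np.linalg.qr(rng.normal(size=(nt, nt)))    # orthonormal beam set
h_s, h_p = rng.normal(size=nt), rng.normal(size=nt)  # toy channels
print(pick_beam(h_s, h_p, Q.T, power=1.0, i_max=0.5))  # (None, -inf) if none fit
```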

  19. Interference-aware random beam selection for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.; Sayed, Mostafa M.; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2012-01-01

In this paper, we develop interference-aware random beam selection schemes that provide enhanced throughput for the secondary link under the condition that the interference observed at the primary link is within a predetermined acceptable value. For a secondary

  20. The signature of positive selection at randomly chosen loci.

    OpenAIRE

    Przeworski, Molly

    2002-01-01

    In Drosophila and humans, there are accumulating examples of loci with a significant excess of high-frequency-derived alleles or high levels of linkage disequilibrium, relative to a neutral model of a random-mating population of constant size. These are features expected after a recent selective sweep. Their prevalence suggests that positive directional selection may be widespread in both species. However, as I show here, these features do not persist long after the sweep ends: The high-frequ...

  1. Simulated Performance Evaluation of a Selective Tracker Through Random Scenario Generation

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

The paper presents a simulation study on the performance of a target tracker using a selective track-splitting filter algorithm through a random scenario implemented on a digital signal processor. In a typical track-splitting filter, all observations which fall inside a likelihood ellipse are used for the update; in our proposed selective track-splitting filter, however, a smaller number of observations is used for the track update. Much of the previous performance work [1] has been done on specific (deterministic) scenarios. One of the reasons for considering the specific scenarios, which were … performance assessment; therefore, a random target motion scenario is adopted. Its implementation, in particular for testing the proposed selective track-splitting algorithm using Kalman filters, is investigated through a number of performance parameters which give the activity profile of the tracking scenario.

  2. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

… to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions, which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3-span bridge. By applying both methods … that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation …

  3. Random Decrement Based FRF Estimation

    DEFF Research Database (Denmark)

    Brincker, Rune; Asmussen, J. C.

    1997-01-01

… to speed and quality. The basis of the new method is the Fourier transformation of the Random Decrement functions, which can be used to estimate the frequency response functions. The investigations are based on load and response measurements of a laboratory model of a 3-span bridge. By applying both methods … that the Random Decrement technique is based on a simple controlled averaging of time segments of the load and response processes. Furthermore, the Random Decrement technique is expected to produce reliable results. The Random Decrement technique will reduce leakage, since the Fourier transformation …

  4. An explicit semantic relatedness measure based on random walk

    Directory of Open Access Journals (Sweden)

    HU Sihui

    2016-10-01

The semantic relatedness calculation of an open domain knowledge network is a significant issue. In this paper, a pheromone strategy is drawn from the thought of the ant colony algorithm and integrated into the random walk, which is taken as the basic framework for calculating the semantic relatedness degree. The pheromone distribution is taken as a criterion for determining the tightness of semantic relatedness. A method of calculating the semantic relatedness degree based on random walks is proposed, and the exploration process of calculating the semantic relatedness degree is presented explicitly. The method mainly contains the Path Select Model (PSM) and the Semantic Relatedness Computing Model (SRCM). PSM is used to simulate the path selection of ants and pheromone release. SRCM is used to calculate the semantic relatedness by utilizing the information returned by ants. The results indicate that the method can complete the semantic relatedness calculation in linear complexity and extends the feasible strategies for semantic relatedness calculation.
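
    A heavily simplified, hypothetical rendering of the ant-inspired idea described above, not the paper's PSM/SRCM: ants walk at random from one concept, and pheromone deposited by ants that reach the other concept serves as the relatedness score:

```python
import random
from collections import defaultdict

def pheromone_relatedness(graph, a, b, n_ants=300, max_steps=6, seed=0):
    """Ant-style random-walk sketch: ants walk randomly from concept a;
    each ant that reaches b deposits pheromone inversely proportional to
    its path length, and total pheromone is read off as relatedness."""
    rng = random.Random(seed)
    pheromone = defaultdict(float)
    for _ in range(n_ants):
        node, steps = a, 0
        while steps < max_steps:
            node = rng.choice(graph[node])   # uniform random next hop
            steps += 1
            if node == b:
                pheromone[(a, b)] += 1.0 / steps   # shorter paths count more
                break
    return pheromone[(a, b)]

g = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3, 5], 5: [4]}
# The closer pair (1, 4) should accumulate more pheromone than (1, 5).
print(pheromone_relatedness(g, 1, 4), pheromone_relatedness(g, 1, 5))
```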

  5. Selection bias and subject refusal in a cluster-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Rochelle Yang

    2017-07-01

Background: Selection bias and non-participation bias are major methodological concerns which impact external validity. Cluster-randomized controlled trials are especially prone to selection bias as it is impractical to blind clusters to their allocation into intervention or control. This study assessed the impact of selection bias in a large cluster-randomized controlled trial. Methods: The Improved Cardiovascular Risk Reduction to Enhance Rural Primary Care (ICARE) study examined the impact of a remote pharmacist-led intervention in twelve medical offices. To assess eligibility, a standardized form containing patient demographics and medical information was completed for each screened patient. Eligible patients were approached by the study coordinator for recruitment. Both the study coordinator and the patient were aware of the site's allocation prior to consent. Patients who consented or declined to participate were compared across control and intervention arms for differing characteristics. Statistical significance was determined using a two-tailed, equal-variance t-test and a chi-square test with adjusted Bonferroni p-values. Results were adjusted for random cluster variation. Results: There were 2749 completed screening forms returned to research staff, with 461 subjects who had either consented or declined participation. Patients with poorly controlled diabetes were found to be significantly more likely to decline participation in intervention sites compared to those in control sites. A higher mean diastolic blood pressure was seen in patients with uncontrolled hypertension who declined in the control sites compared to those who declined in the intervention sites. However, these findings were no longer significant after adjustment for random variation among the sites. After this adjustment, females were found to be significantly more likely to consent than males (odds ratio = 1.41; 95% confidence interval = 1.03, 1

  6. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology

    Science.gov (United States)

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C.; Hobbs, Brian P.; Berry, Donald A.; Pentz, Rebecca D.; Tam, Alda; Hong, Waun K.; Ellis, Lee M.; Abbruzzese, James; Overman, Michael J.

    2015-01-01

    Purpose The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. Methods We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. Results A total of 86 primary end points were reported in 74 randomized trials; nine trials had more than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Twenty-eight studies (37.8%) reported a total of 65 unplanned end points; 52 (80.0%) of which were not identified as unplanned. Thirty-one (41.9%) and 19 (25.7%) of 74 trials reported a total of 52 unplanned analyses involving primary end points and 33 unplanned analyses involving nonprimary end points, respectively. Studies reported positive unplanned end points and unplanned analyses more frequently than negative outcomes in abstracts (unplanned end points odds ratio, 6.8; P = .002; unplanned analyses odds ratio, 8.4; P = .007). Conclusion Despite public and reviewer access to protocols, selective outcome reporting persists and is a major concern in the reporting of randomized clinical trials. To foster credible evidence-based medicine, additional initiatives are needed to minimize selective reporting. PMID:26304898

  7. Automatic learning-based beam angle selection for thoracic IMRT

    International Nuclear Information System (INIS)

    Amit, Guy; Marshall, Andrea; Purdie, Thomas G.; Jaffray, David A.; Levinshtein, Alex; Hope, Andrew J.; Lindsay, Patricia; Pekar, Vladimir

    2015-01-01

    Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose–volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner’s clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume

  8. The effect of morphometric atlas selection on multi-atlas-based automatic brachial plexus segmentation

    International Nuclear Information System (INIS)

    Van de Velde, Joris; Wouters, Johan; Vercauteren, Tom; De Gersem, Werner; Achten, Eric; De Neve, Wilfried; Van Hoof, Tom

    2015-01-01

    The present study aimed to measure the effect of a morphometric atlas selection strategy on the accuracy of multi-atlas-based brachial plexus (BP) autosegmentation using the commercially available software package ADMIRE® and to determine the optimal number of selected atlases to use. Autosegmentation accuracy was measured by comparing all generated automatic BP segmentations with anatomically validated gold standard segmentations that were developed using cadavers. Twelve cadaver computed tomography (CT) atlases were included in the study. One atlas was selected as a patient in ADMIRE®, and multi-atlas-based BP autosegmentation was first performed with a group of morphometrically preselected atlases. In this group, the atlases were selected on the basis of similarity in the shoulder protraction position with the patient. The number of selected atlases used started at two and increased up to eight. Subsequently, a group of randomly chosen, non-selected atlases was taken. In this second group, every possible combination of 2 to 8 random atlases was used for multi-atlas-based BP autosegmentation. For both groups, the average Dice similarity coefficient (DSC), Jaccard index (JI) and Inclusion index (INI) were calculated, measuring the similarity of the generated automatic BP segmentations and the gold standard segmentation. Similarity indices of both groups were compared using an independent sample t-test, and the optimal number of selected atlases was investigated using an equivalence trial. For each number of atlases, average similarity indices of the morphometrically selected atlas group were significantly higher than those of the random group (p < 0.05). In this study, the highest similarity indices were achieved using multi-atlas autosegmentation with 6 selected atlases (average DSC = 0.598; average JI = 0.434; average INI = 0.733). Morphometric atlas selection on the basis of the protraction position of the patient significantly improves multi-atlas-based BP autosegmentation accuracy

  9. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti.

    Science.gov (United States)

    Wampler, Peter J; Rediske, Richard R; Molla, Azizur R

    2013-01-18

    A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households.

  10. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti

    Directory of Open Access Journals (Sweden)

    Wampler Peter J

    2013-01-01

    Full Text Available Abstract Background A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. Methods The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. Results A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. Conclusions The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households.
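
    The random-subset step of this workflow reduces to sampling homes without replacement from the digitized list. The study performed this step in Microsoft Excel; a sketch of the equivalent step in Python follows, with hypothetical file and column names.

    import csv
    import random

    # Homes digitized in Google Earth, exported as CSV with columns
    # id, latitude, longitude (file and column names are hypothetical).
    with open("mapped_homes.csv", newline="") as f:
        homes = list(csv.DictReader(f))

    random.seed(42)                            # reproducible draw
    survey_sites = random.sample(homes, k=96)  # e.g. 96 of the 537 mapped homes

    # Write a simple waypoint file for upload to a handheld GPS unit.
    with open("waypoints.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "lat", "lon"])
        for home in survey_sites:
            writer.writerow([home["id"], home["latitude"], home["longitude"]])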

  11. Automatic Recognition of Chinese Personal Name Using Conditional Random Fields and Knowledge Base

    Directory of Open Access Journals (Sweden)

    Chuan Gu

    2015-01-01

    Full Text Available According to the features of Chinese personal names, we present an approach for Chinese personal name recognition based on conditional random fields (CRF) and a knowledge base in this paper. The method builds multiple features of the CRF model by adopting the Chinese character as the processing unit, selects useful features based on a knowledge-base selection algorithm and an incremental feature template, and finally implements automatic recognition of Chinese personal names from Chinese documents. Experimental results on an open real corpus demonstrate the effectiveness of our method, which obtains high accuracy and recall rates of recognition.

  12. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed; Qaraqe, Khalid; Alouini, Mohamed-Slim

    2012-01-01

    Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined/acceptable value.

  13. Analysis and applications of a frequency selective surface via a random distribution method

    International Nuclear Information System (INIS)

    Xie Shao-Yi; Huang Jing-Jian; Yuan Nai-Chang; Liu Li-Guo

    2014-01-01

    A novel frequency selective surface (FSS) for reducing radar cross section (RCS) is proposed in this paper. This FSS is based on the random distribution method, so it can be called a random surface. In this paper, stacked patches serving as periodic elements are employed for RCS reduction. Previous work has demonstrated the efficiency of utilizing microstrip patches, especially for the reflectarray. First, the relevant theory of the method is described. Then a sample of a three-layer variable-sized stacked-patch random surface with a dimension of 260 mm×260 mm is simulated, fabricated, and measured in order to demonstrate the validity of the proposed design. For normal incidence, an 8-dB RCS reduction is achieved both in simulation and in measurement over 8 GHz–13 GHz. Oblique incidence at 30° is also investigated, for which a 7-dB RCS reduction is obtained over 8 GHz–14 GHz.

  14. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex

    Science.gov (United States)

    Lindsay, Grace W.

    2017-01-01

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli, and in particular to combinations of stimuli, ...

  15. DNABP: Identification of DNA-Binding Proteins Based on Feature Selection Using a Random Forest and Predicting Binding Residues.

    Science.gov (United States)

    Ma, Xin; Guo, Jing; Sun, Xiao

    2016-01-01

    DNA-binding proteins are fundamentally important in cellular processes. Several computational-based methods have been developed to improve the prediction of DNA-binding proteins in previous years. However, insufficient work has been done on the prediction of DNA-binding proteins from protein sequence information. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggested that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.

  16. From Protocols to Publications: A Study in Selective Reporting of Outcomes in Randomized Trials in Oncology.

    Science.gov (United States)

    Raghav, Kanwal Pratap Singh; Mahajan, Sminil; Yao, James C; Hobbs, Brian P; Berry, Donald A; Pentz, Rebecca D; Tam, Alda; Hong, Waun K; Ellis, Lee M; Abbruzzese, James; Overman, Michael J

    2015-11-01

    The decision by journals to append protocols to published reports of randomized trials was a landmark event in clinical trial reporting. However, limited information is available on how this initiative affected transparency and selective reporting of clinical trial data. We analyzed 74 oncology-based randomized trials published in Journal of Clinical Oncology, the New England Journal of Medicine, and The Lancet in 2012. To ascertain integrity of reporting, we compared published reports with their respective appended protocols with regard to primary end points, nonprimary end points, unplanned end points, and unplanned analyses. A total of 86 primary end points were reported in 74 randomized trials; nine trials had more than one primary end point. Nine trials (12.2%) had some discrepancy between their planned and published primary end points. A total of 579 nonprimary end points (median, seven per trial) were planned, of which 373 (64.4%; median, five per trial) were reported. A significant positive correlation was found between the number of planned and nonreported nonprimary end points (Spearman r = 0.66; P < .001). Despite public and reviewer access to protocols, selective outcome reporting persists and is a major concern in the reporting of randomized clinical trials. To foster credible evidence-based medicine, additional initiatives are needed to minimize selective reporting. © 2015 by American Society of Clinical Oncology.

  17. Using Random Forests to Select Optimal Input Variables for Short-Term Wind Speed Forecasting Models

    Directory of Open Access Journals (Sweden)

    Hui Wang

    2017-10-01

    Full Text Available Achieving relatively high-accuracy short-term wind speed forecasting estimates is a precondition for the construction and grid-connected operation of wind power forecasting systems for wind farms. Currently, most research is focused on the structure of forecasting models and does not consider the selection of input variables, which can have significant impacts on forecasting performance. This paper presents an input variable selection method for wind speed forecasting models. The candidate input variables for various leading periods are selected and random forests (RF is employed to evaluate the importance of all variables as features. The feature subset with the best evaluation performance is selected as the optimal feature set. Then, a kernel-based extreme learning machine is constructed to evaluate the performance of input variable selection based on RF. The results of the case study show that by removing the uncorrelated and redundant features, RF effectively extracts the most strongly correlated set of features from the candidate input variables. By finding the optimal feature combination to represent the original information, RF simplifies the structure of the wind speed forecasting model, shortens the training time required, and substantially improves the model’s accuracy and generalization ability, demonstrating that the input variables selected by RF are effective.
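
    A minimal sketch of the RF-based input ranking step on a synthetic wind speed series, assuming scikit-learn; the lag set, the cut at six features and the synthetic series are illustrative assumptions, and the kernel extreme learning machine evaluation stage is omitted.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    speed = rng.gamma(2.0, 3.0, size=2000)    # synthetic wind speed series

    # Candidate inputs: lags 1..24 of the series (the leading periods).
    lags = 24
    X = np.column_stack([speed[lags - k:-k] for k in range(1, lags + 1)])
    y = speed[lags:]

    # Rank candidate inputs by Random Forest feature importance.
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    ranked = np.argsort(rf.feature_importances_)[::-1]
    selected = [int(k) + 1 for k in ranked[:6]]       # keep the strongest lags
    print("selected lags:", selected)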

  18. Pseudo-random number generator based on asymptotic deterministic randomness

    Science.gov (United States)

    Wang, Kai; Pei, Wenjiang; Xia, Haishan; Cheung, Yiu-ming

    2008-06-01

    A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks.
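
    A toy illustration of the general chaos-based PRBG idea: iterate a piecewise linear (skew tent) map and threshold the state. The Letter's asymptotic-deterministic-randomness construction with its noninvertible transform is more involved; the map and parameters here are illustrative only.

    def chaos_prbg(n_bits, x0=0.37, p=0.49):
        # Iterate a skew tent map on [0, 1] and emit one bit per step by
        # thresholding the state at 0.5 (a common, simplified extraction rule).
        bits, x = [], x0
        for _ in range(n_bits):
            x = x / p if x < p else (1 - x) / (1 - p)   # skew tent map
            bits.append(1 if x >= 0.5 else 0)
        return bits

    print("".join(map(str, chaos_prbg(64))))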

  19. Pseudo-random number generator based on asymptotic deterministic randomness

    International Nuclear Information System (INIS)

    Wang Kai; Pei Wenjiang; Xia Haishan; Cheung Yiuming

    2008-01-01

    A novel approach to generate the pseudorandom-bit sequence from the asymptotic deterministic randomness system is proposed in this Letter. We study the characteristic of multi-value correspondence of the asymptotic deterministic randomness constructed by the piecewise linear map and the noninvertible nonlinearity transform, and then give the discretized systems in the finite digitized state space. The statistic characteristics of the asymptotic deterministic randomness are investigated numerically, such as stationary probability density function and random-like behavior. Furthermore, we analyze the dynamics of the symbolic sequence. Both theoretical and experimental results show that the symbolic sequence of the asymptotic deterministic randomness possesses very good cryptographic properties, which improve the security of chaos-based PRBGs and increase the resistance against entropy attacks and symbolic dynamics attacks

  20. The reliability of randomly selected final year pharmacy students in ...

    African Journals Online (AJOL)

    Employing ANOVA, factorial experimental analysis, and the theory of error, reliability studies were conducted on the assessment of the drug product chloroquine phosphate tablets. The G-Study employed equal numbers of the factors for uniform control, and involved three analysts (randomly selected final year Pharmacy ...

  1. Rule-based versus probabilistic selection for active surveillance using three definitions of insignificant prostate cancer

    NARCIS (Netherlands)

    L.D.F. Venderbos (Lionne); M.J. Roobol-Bouts (Monique); C.H. Bangma (Chris); R.C.N. van den Bergh (Roderick); L.P. Bokhorst (Leonard); D. Nieboer (Daan); Godtman, R; J. Hugosson (Jonas); van der Kwast, T; E.W. Steyerberg (Ewout)

    2016-01-01

    To study whether probabilistic selection by the use of a nomogram could improve patient selection for active surveillance (AS) compared to the various sets of rule-based AS inclusion criteria currently used. We studied Dutch and Swedish patients participating in the European Randomized

  2. Comparative analysis of instance selection algorithms for instance-based classifiers in the context of medical decision support

    International Nuclear Information System (INIS)

    Mazurowski, Maciej A; Tourassi, Georgia D; Malof, Jordan M

    2011-01-01

    When constructing a pattern classifier, it is important to make best use of the instances (a.k.a. cases, examples, patterns or prototypes) available for its development. In this paper we present an extensive comparative analysis of algorithms that, given a pool of previously acquired instances, attempt to select those that will be the most effective to construct an instance-based classifier in terms of classification performance, time efficiency and storage requirements. We evaluate seven previously proposed instance selection algorithms and compare their performance to simple random selection of instances. We perform the evaluation using a k-nearest neighbor classifier and three classification problems: one with simulated Gaussian data and two based on clinical databases for breast cancer detection and diagnosis, respectively. Finally, we evaluate the impact of the number of instances available for selection on the performance of the selection algorithms and conduct an initial analysis of the selected instances. The experiments show that for all investigated classification problems, it was possible to reduce the size of the original development dataset to less than 3% of its initial size while maintaining or improving the classification performance. Random mutation hill climbing emerges as the superior selection algorithm. Furthermore, we show that some previously proposed algorithms perform worse than random selection. Regarding the impact of the number of instances available for the classifier development on the performance of the selection algorithms, we confirm that the selection algorithms are generally more effective as the pool of available instances increases. In conclusion, instance selection is generally beneficial for instance-based classifiers as it can improve their performance, reduce their storage requirements and improve their response time. However, choosing the right selection algorithm is crucial.
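
    A compact sketch of random mutation hill climbing for instance selection with a 1-NN classifier, assuming scikit-learn and synthetic data; the acceptance rule and iteration budget are arbitrary choices, not those of the paper.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=600, n_features=10, random_state=0)
    X_pool, X_val, y_pool, y_val = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

    rng = np.random.default_rng(0)
    mask = rng.random(len(X_pool)) < 0.1      # start from a small random subset

    def score(m):
        # Validation accuracy of a 1-NN classifier built on the kept instances.
        if m.sum() < 2:
            return 0.0
        knn = KNeighborsClassifier(n_neighbors=1).fit(X_pool[m], y_pool[m])
        return knn.score(X_val, y_val)

    best = score(mask)
    for _ in range(2000):                     # random mutation hill climbing
        i = rng.integers(len(mask))
        mask[i] = ~mask[i]                    # flip one membership bit
        s = score(mask)
        if s >= best:
            best = s                          # keep neutral or improving flips
        else:
            mask[i] = ~mask[i]                # revert harmful flips

    print(f"kept {mask.sum()} of {len(mask)} instances, accuracy {best:.3f}")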

  3. Fault diagnosis in spur gears based on genetic algorithm and random forest

    Science.gov (United States)

    Cerrada, Mariela; Zurita, Grover; Cabrera, Diego; Sánchez, René-Vinicio; Artés, Mariano; Li, Chuan

    2016-03-01

    There are growing demands for condition-based monitoring of gearboxes, and therefore new methods to improve the reliability, effectiveness and accuracy of gear fault detection ought to be evaluated. Feature selection is still an important aspect of machine learning-based diagnosis in order to reach good performance in the diagnostic models. On the other hand, random forest classifiers are suitable models in industrial environments where large data samples are not usually available for training such diagnostic models. The main aim of this research is to build up a robust system for multi-class fault diagnosis in spur gears, by selecting the best set of condition parameters in the time, frequency and time-frequency domains, extracted from vibration signals. The diagnostic system is built using genetic algorithms and a classifier based on random forest, in a supervised environment. Genetic algorithms reduce the original set of condition parameters by around 66% of its initial size, while still achieving an acceptable classification precision of over 97%. The approach is tested on real vibration signals by considering several fault classes, one of them an incipient fault, under different running conditions of load and velocity.

  4. Fine mapping quantitative trait loci under selective phenotyping strategies based on linkage and linkage disequilibrium criteria

    DEFF Research Database (Denmark)

    Ansari-Mahyari, S; Berg, P; Lund, M S

    2009-01-01

    Linkage analysis-based sampling criteria (LAC) and linkage disequilibrium-based sampling criteria (LDC) for selecting individuals to phenotype are compared to random phenotyping in a quantitative trait loci (QTL) verification experiment using stochastic simulation. Several strategies based on LAC and LDC for selecting the most informative 30%, 40% or 50% of individuals for phenotyping to extract maximum power and precision in a QTL fine-mapping experiment were developed and assessed. Linkage analysis for the mapping was performed for individuals sampled on LAC within families, and combined linkage disequilibrium and linkage analysis was performed for individuals sampled across the whole population based on LDC. The results showed that selecting individuals with similar haplotypes to the paternal haplotypes (minimum recombination criterion) using LAC, compared to random phenotyping, gave at least the same power to detect a QTL but decreased the accuracy of the QTL position. However ...

  5. Random selection of items. Selection of n1 samples among N items composing a stratum

    International Nuclear Information System (INIS)

    Jaech, J.L.; Lemaire, R.J.

    1987-02-01

    STR-224 provides generalized procedures to determine required sample sizes, for instance in the course of a Physical Inventory Verification at Bulk Handling Facilities. The present report describes procedures to generate random numbers and select groups of items to be verified in a given stratum through each of the measurement methods involved in the verification.
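
    The selection step described here reduces to drawing n1 distinct random positions among the N items of the stratum; a minimal sketch follows, with items assumed to be numbered 1..N.

    import random

    def select_items(n_items, n_sample, seed=None):
        # Draw n_sample distinct item positions (1..n_items) from one stratum,
        # e.g. for physical inventory verification.
        rng = random.Random(seed)
        return sorted(rng.sample(range(1, n_items + 1), n_sample))

    # Example: verify 12 of the 200 items in a stratum.
    print(select_items(200, 12, seed=2024))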

  6. The mathematics of random mutation and natural selection for multiple simultaneous selection pressures and the evolution of antimicrobial drug resistance.

    Science.gov (United States)

    Kleinman, Alan

    2016-12-20

    The random mutation and natural selection phenomena act in a mathematically predictable manner, which when understood leads to approaches to reduce and prevent the failure of the use of these selection pressures when treating infections and cancers. The underlying principle to impair the random mutation and natural selection phenomenon is to use combination therapy, which forces the population to evolve to multiple selection pressures simultaneously, invoking the multiplication rule of probabilities. Recently, it has been seen that combination therapy for the treatment of malaria has failed to prevent the emergence of drug-resistant variants. Using this empirical example and the principles of probability theory, the derivation of the equations describing this treatment failure is carried out. These equations give guidance as to how to use combination therapy for the treatment of cancers and infectious diseases and prevent the emergence of drug resistance. Copyright © 2016 John Wiley & Sons, Ltd.
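
    A back-of-the-envelope illustration of the multiplication rule invoked above, using the Poisson approximation and hypothetical mutation probabilities; independence of the two resistance mutations is assumed.

    import math

    # Hypothetical per-replication resistance mutation probabilities and
    # population size; independence of the two mutations is assumed.
    mu_a, mu_b = 1e-9, 1e-9
    n = 1e12                                   # replications in the population

    p_mono = -math.expm1(-n * mu_a)            # 1 - exp(-n*mu): near-certain
    p_combo = -math.expm1(-n * mu_a * mu_b)    # both mutations in one replication
    print(f"monotherapy: {p_mono:.6f}, combination: {p_combo:.2e}")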

  7. A cluster-based randomized controlled trial promoting community participation in arsenic mitigation efforts in Bangladesh

    OpenAIRE

    George, Christine Marie; van Geen, Alexander; Slavkovich, Vesna; Singha, Ashit; Levy, Diane; Islam, Tariqul; Ahmed, Kazi Matin; Moon-Howard, Joyce; Tarozzi, Alessandro; Liu, Xinhua; Factor-Litvak, Pam; Graziano, Joseph

    2012-01-01

    Abstract Objective To reduce arsenic (As) exposure, we evaluated the effectiveness of training community members to perform water arsenic (WAs) testing and provide As education compared to sending representatives from outside communities to conduct these tasks. Methods We conducted a cluster-based randomized controlled trial of 20 villages in Singair, Bangladesh. Fifty eligible respondents were randomly selected in each village. In 10 villages, a community member provided As education and WAs...

  8. A comparison of random forest and its Gini importance with standard chemometric methods for the feature selection and classification of spectral data

    Directory of Open Access Journals (Sweden)

    Himmelreich Uwe

    2009-07-01

    Full Text Available Abstract Background Regularized regression methods such as principal component or partial least squares regression perform well in learning tasks on high dimensional spectral data, but cannot explicitly eliminate irrelevant features. The random forest classifier with its associated Gini feature importance, on the other hand, allows for an explicit feature elimination, but may not be optimally adapted to spectral data due to the topology of its constituent classification trees which are based on orthogonal splits in feature space. Results We propose to combine the best of both approaches, and evaluated the joint use of a feature selection based on a recursive feature elimination using the Gini importance of random forests together with regularized classification methods on spectral data sets from medical diagnostics, chemotaxonomy, biomedical analytics, food science, and synthetically modified spectral data. Here, a feature selection using the Gini feature importance with a regularized classification by discriminant partial least squares regression performed as well as or better than a filtering according to different univariate statistical tests, or using regression coefficients in a backward feature elimination. It outperformed the direct application of the random forest classifier, or the direct application of the regularized classifiers on the full set of features. Conclusion The Gini importance of the random forest provided superior means for measuring feature relevance on spectral data, but – on an optimal subset of features – the regularized classifiers might be preferable over the random forest classifier, in spite of their limitation to model linear dependencies only. A feature selection based on Gini importance, however, may precede a regularized linear classification to identify this optimal subset of features, and to earn a double benefit of both dimensionality reduction and the elimination of noise from the classification task.
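
    A sketch of the combined scheme on synthetic data, assuming scikit-learn; PLS regression with a 0.5 decision threshold stands in for the discriminant partial least squares, and the halving schedule for the elimination is an arbitrary choice.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for spectra: many correlated, mostly irrelevant features.
    X, y = make_classification(n_samples=200, n_features=300, n_informative=15,
                               random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    keep = np.arange(X.shape[1])
    while len(keep) > 20:         # recursive elimination by Gini importance
        rf = RandomForestClassifier(n_estimators=200, random_state=1)
        rf.fit(X_tr[:, keep], y_tr)
        order = np.argsort(rf.feature_importances_)
        keep = keep[order[len(order) // 2:]]      # drop least important half

    # Regularized linear classifier (PLS used as a discriminant) on survivors.
    pls = PLSRegression(n_components=3).fit(X_tr[:, keep], y_tr)
    y_hat = (pls.predict(X_te[:, keep]).ravel() > 0.5).astype(int)
    print("features kept:", len(keep), "accuracy:", (y_hat == y_te).mean())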

  9. Distribution of orientation selectivity in recurrent networks of spiking neurons with different random topologies.

    Science.gov (United States)

    Sadeh, Sadra; Rotter, Stefan

    2014-01-01

    Neurons in the primary visual cortex are more or less selective for the orientation of a light bar used for stimulation. A broad distribution of individual grades of orientation selectivity has in fact been reported in all species. A possible reason for emergence of broad distributions is the recurrent network within which the stimulus is being processed. Here we compute the distribution of orientation selectivity in randomly connected model networks that are equipped with different spatial patterns of connectivity. We show that, for a wide variety of connectivity patterns, a linear theory based on firing rates accurately approximates the outcome of direct numerical simulations of networks of spiking neurons. Distance dependent connectivity in networks with a more biologically realistic structure does not compromise our linear analysis, as long as the linearized dynamics, and hence the uniform asynchronous irregular activity state, remain stable. We conclude that linear mechanisms of stimulus processing are indeed responsible for the emergence of orientation selectivity and its distribution in recurrent networks with functionally heterogeneous synaptic connectivity.

  10. On theoretical models of gene expression evolution with random genetic drift and natural selection.

    Directory of Open Access Journals (Sweden)

    Osamu Ogasawara

    2009-11-01

    Full Text Available The relative contributions of natural selection and random genetic drift are a major source of debate in the study of gene expression evolution, which is hypothesized to serve as a bridge from molecular to phenotypic evolution. It has been suggested that the conflict between views is caused by the lack of a definite model of the neutral hypothesis, which can describe the long-run behavior of evolutionary change in mRNA abundance. Therefore, previous studies have used inadequate analogies with the neutral prediction of other phenomena, such as amino acid or nucleotide sequence evolution, as the null hypothesis of their statistical inference. In this study, we introduced two novel theoretical models, one based on neutral drift and the other assuming natural selection, by focusing on a common property of the distribution of mRNA abundance among a variety of eukaryotic cells, which reflects the result of long-term evolution. Our results demonstrated that (1) our models can reproduce two independently found phenomena simultaneously: the time development of gene expression divergence and Zipf's law of the transcriptome; (2) cytological constraints can be explicitly formulated to describe long-term evolution; (3) the model assuming that natural selection optimized relative mRNA abundance was more consistent with previously published observations than the model of optimized absolute mRNA abundances. The models introduced in this study give a formulation of evolutionary change in the mRNA abundance of each gene as a stochastic process, on the basis of previously published observations. This model provides a foundation for interpreting observed data in studies of gene expression evolution, including identifying an adequate time scale for discriminating the effect of natural selection from that of random genetic drift of selectively neutral variations.

  11. The signature of positive selection at randomly chosen loci.

    Science.gov (United States)

    Przeworski, Molly

    2002-03-01

    In Drosophila and humans, there are accumulating examples of loci with a significant excess of high-frequency-derived alleles or high levels of linkage disequilibrium, relative to a neutral model of a random-mating population of constant size. These are features expected after a recent selective sweep. Their prevalence suggests that positive directional selection may be widespread in both species. However, as I show here, these features do not persist long after the sweep ends: The high-frequency alleles drift to fixation and no longer contribute to polymorphism, while linkage disequilibrium is broken down by recombination. As a result, loci chosen without independent evidence of recent selection are not expected to exhibit either of these features, even if they have been affected by numerous sweeps in their genealogical history. How then can we explain the patterns in the data? One possibility is population structure, with unequal sampling from different subpopulations. Alternatively, positive selection may not operate as is commonly modeled. In particular, the rate of fixation of advantageous mutations may have increased in the recent past.

  12. Hybrid Feature Selection Approach Based on GRASP for Cancer Microarray Data

    Directory of Open Access Journals (Sweden)

    Arpita Nagpal

    2017-01-01

    Full Text Available Microarray data usually contain a large number of genes, but a small number of samples. Feature subset selection for microarray data aims at reducing the number of genes so that useful information can be extracted from the samples. Reducing the dimension of data sets further helps in improving the computational efficiency of the learning model. In this paper, we propose a modified algorithm based on tabu search as the local search procedure in a Greedy Randomized Adaptive Search Procedure (GRASP) for high dimensional microarray data sets. The proposed Tabu-based Greedy Randomized Adaptive Search Procedure algorithm is named TGRASP. In TGRASP, a new parameter named Tabu Tenure has been introduced and the existing parameters NumIter and size have been modified. We observed that different parameter settings affect the quality of the optimum. The second proposed algorithm, known as FFGRASP (Firefly Greedy Randomized Adaptive Search Procedure), uses a firefly optimization algorithm in the local search optimization phase of the greedy randomized adaptive search procedure (GRASP). The firefly algorithm is one of the powerful algorithms for optimization of multimodal applications. Experimental results show that the proposed TGRASP and FFGRASP algorithms are much better than the existing algorithm with respect to three performance parameters, viz. accuracy, run time, and the number of selected features. We have also compared both approaches with a unified metric (Extended Adjusted Ratio of Ratios), which shows that the TGRASP approach outperforms the existing approach for six out of nine cancer microarray datasets and FFGRASP performs better on seven out of nine datasets.
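
    A skeleton of the underlying GRASP loop (randomized greedy construction plus one-exchange local search); the tabu tenure of TGRASP and the firefly phase of FFGRASP are omitted, and the score function is supplied by the caller.

    import random

    def grasp_feature_select(score, n_features, k=10, alpha=0.3, n_iter=20, seed=0):
        # GRASP skeleton: greedy randomized construction from a restricted
        # candidate list (RCL), then one-exchange local search; `score` maps a
        # feature subset to a fitness value.
        rng = random.Random(seed)
        best, best_val = None, float("-inf")
        for _ in range(n_iter):
            subset = []
            while len(subset) < k:             # construction phase
                cands = [f for f in range(n_features) if f not in subset]
                cands.sort(key=lambda f: score(subset + [f]), reverse=True)
                rcl = cands[: max(1, int(alpha * len(cands)))]
                subset.append(rng.choice(rcl))
            improved = True
            while improved:                    # local search phase
                improved = False
                for out in list(subset):
                    for inn in set(range(n_features)) - set(subset):
                        trial = [f for f in subset if f != out] + [inn]
                        if score(trial) > score(subset):
                            subset, improved = trial, True
                            break
                    if improved:
                        break
            if score(subset) > best_val:
                best, best_val = list(subset), score(subset)
        return best, best_val

    # Toy objective: overlap with a hidden set of 10 informative genes.
    informative = set(range(10))
    subset, val = grasp_feature_select(lambda s: len(set(s) & informative),
                                       n_features=100, k=10, n_iter=5)
    print(sorted(subset), val)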

  13. Computational intelligence-based polymerase chain reaction primer selection based on a novel teaching-learning-based optimisation.

    Science.gov (United States)

    Cheng, Yu-Huei

    2014-12-01

    Specific primers play an important role in polymerase chain reaction (PCR) experiments, and therefore it is essential to find specific primers of outstanding quality. Unfortunately, many PCR constraints must be inspected simultaneously, which makes specific primer selection difficult and time-consuming. This paper introduces a novel computational intelligence-based method, Teaching-Learning-Based Optimisation, to select specific and feasible primers. Experiments were performed for specified PCR product lengths of 150-300 bp and 500-800 bp with three melting temperature formulae: Wallace's formula, Bolton and McCarthy's formula and SantaLucia's formula. The authors calculate the optimal frequency to estimate the quality of primer selection based on a total of 500 runs for 50 random nucleotide sequences of 'Homo species' retrieved from the National Center for Biotechnology Information. The method was then fairly compared with the genetic algorithm (GA) and memetic algorithm (MA) for primer selection in the literature. The results show that the method easily found suitable primers satisfying the specified primer constraints and performed better than the GA and the MA. Furthermore, the method was also compared with the common method Primer3 according to method type, primer presentation, parameter settings, speed and memory usage. In conclusion, it is an interesting primer selection method and a valuable tool for automatic high-throughput analysis. In the future, the usage of the primers in the wet lab needs to be validated carefully to increase the reliability of the method.

  14. Risk-based audit selection of dairy farms.

    Science.gov (United States)

    van Asseldonk, M A P M; Velthuis, A G J

    2014-02-01

    Dairy farms are audited in the Netherlands on numerous process standards. Each farm is audited once every 2 years. Increasing demands for cost-effectiveness in farm audits can be met by introducing risk-based principles. This implies targeting subpopulations with a higher risk of poor process standards. To select farms for an audit that present higher risks, a statistical analysis was conducted to test the relationship between the outcome of farm audits and bulk milk laboratory results before the audit. The analysis comprised 28,358 farm audits and all conducted laboratory tests of bulk milk samples 12 mo before the audit. The overall outcome of each farm audit was classified as approved or rejected. Laboratory results included somatic cell count (SCC), total bacterial count (TBC), antimicrobial drug residues (ADR), level of butyric acid spores (BAB), freezing point depression (FPD), level of free fatty acids (FFA), and cleanliness of the milk (CLN). The bulk milk laboratory results were significantly related to audit outcomes. Rejected audits are likely to occur on dairy farms with higher mean levels of SCC, TBC, ADR, and BAB. Moreover, in a multivariable model, maxima for TBC, SCC, and FPD as well as standard deviations for TBC and FPD are risk factors for negative audit outcomes. The efficiency curve of a risk-based selection approach, on the basis of the derived regression results, dominated the current random selection approach. To capture 25, 50, or 75% of the population with poor process standards (i.e., audit outcome of rejected), respectively, only 8, 20, or 47% of the population had to be sampled based on a risk-based selection approach. Milk quality information can thus be used to preselect high-risk farms to be audited more frequently. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating missing values followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to handle the missing values. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which was compared with the listed methods in terms of forecasting error. These experimental results indicate that the Random Forest forecasting model, when applied to variable selection with full variables, has better forecasting performance than the listed models. In addition, this experiment shows that the proposed variable selection can help the five forecasting methods used here improve their forecasting capability.
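
    A condensed sketch of the three-stage pipeline on synthetic data, assuming scikit-learn; the paper's five imputation methods and factor analysis are collapsed here into a single median imputer and an importance cut-off.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.impute import SimpleImputer

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "rain": rng.gamma(2, 5, n), "temp": rng.normal(22, 5, n),
        "inflow": rng.gamma(3, 4, n), "noise": rng.normal(0, 1, n),
    })
    df["level"] = 0.5 * df["rain"] + 0.8 * df["inflow"] + rng.normal(0, 2, n)
    df.loc[rng.random(n) < 0.05, "rain"] = np.nan    # inject missing values

    # Stage 1: imputation (one stand-in for the paper's five methods).
    X = SimpleImputer(strategy="median").fit_transform(df.drop(columns="level"))
    y = df["level"].to_numpy()

    # Stage 2: variable selection by importance threshold (factor analysis omitted).
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
    keep = rf.feature_importances_ > 0.05

    # Stage 3: final Random Forest on the selected variables only.
    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X[:, keep], y)
    print("kept:", list(df.drop(columns="level").columns[keep]))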

  16. Greater fruit selection following an appearance-based compared with a health-based health promotion poster

    Science.gov (United States)

    2016-01-01

    Abstract Background This study investigated the impact of an appearance-based compared with a traditional health-based public health message for healthy eating. Methods A total of 166 British university students (41 males; aged 20.6 ± 1.9 years) were randomized to view either an appearance-based (n = 82) or a health-based (n = 84) fruit promotion poster. Intentions to consume fruit and immediate fruit selection (laboratory observation) were assessed immediately after poster viewing, and subsequent self-report fruit consumption was assessed 3 days later. Results Intentions to consume fruit were not predicted by poster type (largest β = 0.03, P = 0.68) but were associated with fruit-based liking, past consumption, attitudes and social norms (smallest β = 0.16, P = 0.04). Immediate fruit selection was greater following the appearance-based compared with the health-based poster (β = −0.24). Subsequent fruit consumption was also greater following the appearance-based poster (β = −0.22, P = 0.03), but this effect became non-significant on consideration of participant characteristics (β = −0.15, P = 0.13), and was instead associated with fruit-based liking and past consumption (smallest β = 0.24, P = 0.03). Conclusions These findings demonstrate the clear value of an appearance-based compared with a health-based health promotion poster for increasing fruit selection. A distinction between outcome measures and the value of a behavioural measure is also demonstrated. PMID:28158693

  17. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed

    2012-10-19

    Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined/acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select a beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.
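
    A toy Monte-Carlo sketch of this selection rule under Rayleigh-fading channels; the antenna counts, noise power and interference threshold are hypothetical, and the paper's statistical analysis is not reproduced.

    import numpy as np

    rng = np.random.default_rng(0)
    n_tx, n_beams = 4, 8        # secondary transmit antennas, candidate beams
    noise, i_max = 1.0, 0.5     # noise power and primary interference threshold

    # Rayleigh channels to the secondary and primary receivers.
    h_s = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
    h_p = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)

    beams = rng.normal(size=(n_beams, n_tx)) + 1j * rng.normal(size=(n_beams, n_tx))
    beams /= np.linalg.norm(beams, axis=1, keepdims=True)   # unit-power beams

    signal = np.abs(beams @ h_s) ** 2
    interference = np.abs(beams @ h_p) ** 2
    feasible = interference <= i_max                        # primary constraint
    sinr = signal / noise

    if feasible.any():
        best = np.flatnonzero(feasible)[np.argmax(sinr[feasible])]
        print(f"beam {best}: SINR {sinr[best]:.2f}, "
              f"interference {interference[best]:.2f}")
    else:
        print("no beam satisfies the interference constraint; defer transmission")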

  18. Topology-selective jamming of fully-connected, code-division random-access networks

    Science.gov (United States)

    Polydoros, Andreas; Cheng, Unjeng

    1990-01-01

    The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.

  19. A Rewritable, Random-Access DNA-Based Storage System.

    Science.gov (United States)

    Yazdi, S M Hossein Tabatabaei; Yuan, Yongbo; Ma, Jian; Zhao, Huimin; Milenkovic, Olgica

    2015-09-18

    We describe the first DNA-based storage architecture that enables random access to data blocks and rewriting of information stored at arbitrary locations within the blocks. The newly developed architecture overcomes drawbacks of existing read-only methods that require decoding the whole file in order to read one data fragment. Our system is based on new constrained coding techniques and accompanying DNA editing methods that ensure data reliability, specificity and sensitivity of access, and at the same time provide exceptionally high data storage capacity. As a proof of concept, we encoded parts of the Wikipedia pages of six universities in the USA, and selected and edited parts of the text written in DNA corresponding to three of these schools. The results suggest that DNA is a versatile media suitable for both ultrahigh density archival and rewritable storage applications.

  20. Day-ahead load forecast using random forest and expert input selection

    International Nuclear Information System (INIS)

    Lahouar, A.; Ben Hadj Slama, J.

    2015-01-01

    Highlights: • A model based on random forests for short-term load forecast is proposed. • An expert feature selection is added to refine inputs. • Special attention is paid to customer behavior, load profiles and special holidays. • The model is flexible and able to handle complex load signals. • A technical comparison is performed to assess the forecast accuracy. - Abstract: The electrical load forecast is getting more and more important in recent years due to electricity market deregulation and the integration of renewable resources. To overcome the incoming challenges and ensure accurate power prediction for different time horizons, sophisticated intelligent methods are elaborated. Utilization of intelligent forecast algorithms is among the main characteristics of smart grids, and is an efficient tool to face uncertainty. Several crucial tasks of power operators, such as load dispatch, rely on the short-term forecast, thus it should be as accurate as possible. To this end, this paper proposes a short-term load predictor, able to forecast the next 24 h of load. Using random forest, characterized by immunity to parameter variations and internal cross-validation, the model is constructed following an online learning process. The inputs are refined by expert feature selection using a set of if–then rules, in order to include the user's own specifications about the country's weather or market, and to generalize the forecast ability. The proposed approach is tested on a real historical set from the Tunisian Power Company, and the simulation shows accurate and satisfactory results for one day in advance, with an average error rarely exceeding 2.3%. The model is validated for regular working days and weekends, and special attention is paid to moving holidays, which follow a non-Gregorian calendar
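
    A condensed sketch of the same recipe on synthetic data, assuming scikit-learn and pandas; the expert if-then rules are reduced to two lag features plus calendar flags, and the moving-holiday handling is omitted.

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    # Synthetic hourly load with daily and weekly shape (stand-in for history).
    idx = pd.date_range("2014-01-01", periods=24 * 400, freq="h")
    rng = np.random.default_rng(0)
    load = (100 + 20 * np.sin(2 * np.pi * idx.hour / 24)
            - 10 * (idx.dayofweek >= 5) + rng.normal(0, 3, len(idx)))
    df = pd.DataFrame({"load": load}, index=idx)

    # Expert if-then input selection: same hour yesterday and one week ago,
    # plus calendar flags (holiday rules would slot in here as extra features).
    df["lag_24h"] = df["load"].shift(24)
    df["lag_168h"] = df["load"].shift(168)
    df["hour"] = df.index.hour
    df["weekend"] = (df.index.dayofweek >= 5).astype(int)
    df = df.dropna()

    X, y = df[["lag_24h", "lag_168h", "hour", "weekend"]], df["load"]
    split = len(df) - 24                      # hold out the final day
    model = RandomForestRegressor(n_estimators=300, random_state=0)
    model.fit(X[:split], y[:split])
    pred = model.predict(X[split:])
    mape = np.mean(np.abs(pred - y[split:]) / y[split:]) * 100
    print(f"day-ahead MAPE: {mape:.2f}%")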

  1. Peculiarities of the statistics of spectrally selected fluorescence radiation in laser-pumped dye-doped random media

    Science.gov (United States)

    Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.

    2018-04-01

    We consider the practical realization of a new optical probe method for random media, defined as reference-free path-length interferometry with intensity-moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was applied as the doping fluorescent agent for ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The mean path length spectrum for the random medium was reconstructed.

  2. A Copula Based Approach for Design of Multivariate Random Forests for Drug Sensitivity Prediction.

    Science.gov (United States)

    Haider, Saad; Rahman, Raziur; Ghosh, Souparno; Pal, Ranadip

    2015-01-01

    Modeling sensitivity to drugs based on genetic characterizations is a significant challenge in the area of systems medicine. Ensemble based approaches such as Random Forests have been shown to perform well in both individual sensitivity prediction studies and team science based prediction challenges. However, Random Forests generate a deterministic predictive model for each drug based on the genetic characterization of the cell lines and ignores the relationship between different drug sensitivities during model generation. This application motivates the need for generation of multivariate ensemble learning techniques that can increase prediction accuracy and improve variable importance ranking by incorporating the relationships between different output responses. In this article, we propose a novel cost criterion that captures the dissimilarity in the output response structure between the training data and node samples as the difference in the two empirical copulas. We illustrate that copulas are suitable for capturing the multivariate structure of output responses independent of the marginal distributions and the copula based multivariate random forest framework can provide higher accuracy prediction and improved variable selection. The proposed framework has been validated on genomics of drug sensitivity for cancer and cancer cell line encyclopedia database.
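
    The proposed cost criterion compares the empirical copulas of the parent sample and a candidate node sample. A rough sketch of such a comparison follows; the grid evaluation and the absolute-difference distance are assumptions, not the authors' implementation.

    import numpy as np
    from scipy.stats import rankdata

    def empirical_copula_distance(Y_parent, Y_node, grid=10):
        # Sum of absolute differences between the two empirical copulas,
        # evaluated on a regular grid over the unit hypercube.
        def ecop(Y, u):
            U = np.column_stack([rankdata(col) / len(col) for col in Y.T])
            return np.mean(np.all(U <= u, axis=1))
        d = Y_parent.shape[1]
        us = np.linspace(0.1, 1.0, grid)
        pts = np.array(np.meshgrid(*([us] * d))).T.reshape(-1, d)
        return sum(abs(ecop(Y_parent, u) - ecop(Y_node, u)) for u in pts)

    rng = np.random.default_rng(0)
    Y = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=200)
    print(empirical_copula_distance(Y, Y[:50]))   # candidate node vs. parent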

  3. Maximum relevance, minimum redundancy band selection based on neighborhood rough set for hyperspectral data classification

    International Nuclear Information System (INIS)

    Liu, Yao; Chen, Yuehua; Tan, Kezhu; Xie, Hong; Wang, Liguo; Xie, Wu; Yan, Xiaozhen; Xu, Zhen

    2016-01-01

    Band selection is considered to be an important processing step in handling hyperspectral data. In this work, we selected informative bands according to the maximal relevance minimal redundancy (MRMR) criterion based on neighborhood mutual information. Two measures, MRMR difference and MRMR quotient, were defined, and a forward greedy search for band selection was constructed. The performance of the proposed algorithm, along with a comparison with other methods (a neighborhood dependency measure based algorithm, a genetic algorithm and an uninformative variable elimination algorithm), was studied using the classification accuracy of extreme learning machine (ELM) and random forests (RF) classifiers on soybean hyperspectral datasets. The results show that the proposed MRMR algorithm leads to promising improvement in band selection and classification accuracy.
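
    A sketch of the greedy MRMR-difference search on synthetic data, assuming scikit-learn; histogram mutual information stands in for the paper's neighborhood mutual information, and the band budget of 10 is arbitrary.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.feature_selection import mutual_info_classif
    from sklearn.metrics import mutual_info_score

    # Synthetic stand-in for a hyperspectral cube flattened to (pixels, bands).
    X, y = make_classification(n_samples=500, n_features=60, n_informative=8,
                               random_state=0)
    relevance = mutual_info_classif(X, y, random_state=0)   # I(band; class)

    def mi(a, b, bins=10):
        # Histogram-based mutual information between two continuous bands.
        return mutual_info_score(None, None,
                                 contingency=np.histogram2d(a, b, bins)[0])

    selected = [int(np.argmax(relevance))]
    while len(selected) < 10:           # greedy search, MRMR difference form
        best_f, best_score = None, -np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            redundancy = np.mean([mi(X[:, f], X[:, s]) for s in selected])
            if relevance[f] - redundancy > best_score:
                best_f, best_score = f, relevance[f] - redundancy
        selected.append(best_f)
    print("selected bands:", selected)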

  4. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that first estimates missing values and then applies variable selection to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated by date into an integrated research dataset. The proposed forecasting model has three main steps. First, this study applies five imputation methods to estimate missing values rather than deleting the affected records outright. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, which is compared against the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model combined with variable selection outperforms the models built on the full variable set. In addition, the experiments show that the proposed variable selection improves the forecasting capability of all five forecasting methods used here.
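
    The pipeline can be sketched as below, assuming a pandas DataFrame df of daily observations and an invented target column name; mean imputation and random forest importances stand in for the paper's five imputation methods and factor-analysis-based selection.

    ```python
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.impute import SimpleImputer

    def forecast_pipeline(df, target="water_level", n_keep=10):
        """Impute gaps, rank variables, keep the strongest ones, refit the forecaster."""
        X, y = df.drop(columns=[target]), df[target]
        X_imp = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(X), columns=X.columns)
        rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_imp, y)
        keep = X.columns[np.argsort(rf.feature_importances_)[::-1][:n_keep]]  # top variables
        model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_imp[keep], y)
        return model, list(keep)
    ```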

  5. Rich: Region-based Intelligent Cluster-Head Selection and Node Deployment Strategy in Concentric-based WSNs

    Directory of Open Access Journals (Sweden)

    FAN, C.-S.

    2013-11-01

    Full Text Available In a random deployment, sensor nodes are scattered randomly in the sensing field, so coverage cannot be guaranteed. In contrast, the coverage of a uniform deployment is in general larger than that of a random deployment. However, a uniform deployment strategy may cause an unbalanced traffic pattern in wireless sensor networks (WSNs), imposing a larger load on the cluster heads (CHs) around the sink. CHs close to the sink therefore use up their energy earlier than those farther away from it. To overcome this problem, we propose a novel node deployment strategy in the concentric model, namely the Region-based Intelligent Cluster-Head selection and node deployment strategy (called Rich). The coverage, energy consumption and data routing issues are well investigated and taken into consideration in the proposed Rich scheme. The simulation results show that the proposed Rich scheme alleviates the unbalanced traffic pattern significantly, prolongs network lifetime, and achieves a satisfactory coverage ratio.

  6. Randomized clinical trial of Appendicitis Inflammatory Response score-based management of patients with suspected appendicitis.

    Science.gov (United States)

    Andersson, M; Kolodziej, B; Andersson, R E

    2017-10-01

    The role of imaging in the diagnosis of appendicitis is controversial. This prospective interventional study and nested randomized trial analysed the impact of implementing a risk stratification algorithm based on the Appendicitis Inflammatory Response (AIR) score, and compared routine imaging with selective imaging after clinical reassessment. Patients presenting with suspicion of appendicitis between September 2009 and January 2012 were included from age 10 years at 21 emergency surgical centres and from age 5 years at three university paediatric centres. Registration of clinical characteristics, treatments and outcomes started during the baseline period. The AIR score-based algorithm was implemented during the intervention period. Intermediate-risk patients were randomized to routine imaging or selective imaging after clinical reassessment. The baseline period included 1152 patients, and the intervention period 2639, of whom 1068 intermediate-risk patients were randomized. In low-risk patients, use of the AIR score-based algorithm resulted in less imaging (19·2 versus 34·5 per cent) and fewer operations for appendicitis (6·8 versus 9·7 per cent; P = 0·034). Intermediate-risk patients randomized to the imaging and observation groups had the same proportion of negative appendicectomies (6·4 versus 6·7 per cent respectively; P = 0·884), number of admissions, number of perforations and length of hospital stay, but routine imaging was associated with an increased proportion of patients treated for appendicitis (53·4 versus 46·3 per cent; P = 0·020). AIR score-based risk classification can safely reduce the use of diagnostic imaging and hospital admissions in patients with suspicion of appendicitis. Registration number: NCT00971438 (http://www.clinicaltrials.gov). © 2017 BJS Society Ltd Published by John Wiley & Sons Ltd.

  7. Sequence based prediction of DNA-binding proteins based on hybrid feature selection using random forest and Gaussian naïve Bayes.

    Directory of Open Access Journals (Sweden)

    Wangchao Lou

    Full Text Available Developing an efficient method for identifying DNA-binding proteins is highly desirable, given their vital roles in gene regulation, since it would be invaluable for advancing our understanding of protein functions. In this study, we propose a new method for the prediction of DNA-binding proteins, which ranks features using random forest and then performs wrapper-based feature selection using a forward best-first search strategy. The features comprise information from the primary sequence, predicted secondary structure, predicted relative solvent accessibility, and position-specific scoring matrix. The proposed method, called DBPPred, uses Gaussian naïve Bayes as the underlying classifier, since it outperformed five other classifiers: decision tree, logistic regression, k-nearest neighbor, support vector machine with polynomial kernel, and support vector machine with radial basis function. As a result, the proposed DBPPred yields the highest average accuracy of 0.791 and average MCC of 0.583 according to five-fold cross validation with ten runs on the training benchmark dataset PDB594. Subsequent blind tests on the independent dataset PDB186, using the proposed model trained on the entire PDB594 dataset and five other existing methods (iDNA-Prot, DNA-Prot, DNAbinder, DNABIND and DBD-Threader), showed that DBPPred yielded the highest accuracy of 0.769, MCC of 0.538, and AUC of 0.790. Independent tests performed by DBPPred on a large non-DNA-binding protein dataset and two RNA-binding protein datasets also showed improved or comparable quality when compared with the relevant prediction methods. Moreover, we observed that the majority of the selected features differ significantly between the mean feature values of the DNA-binding and the non-DNA-binding proteins. All of the experimental results indicate that the proposed DBPPred is an effective approach for predicting DNA-binding proteins from sequence information.
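
    The two-stage selection can be approximated by the sketch below (a greedy forward pass over a random forest ranking rather than a full best-first search; the function name and defaults are invented), assuming a numeric feature matrix X and class labels y.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    def rank_then_wrap(X, y, max_features=50):
        """Rank features with a random forest, then greedily add each ranked feature
        and keep it only if Gaussian naive Bayes CV accuracy improves."""
        ranking = np.argsort(RandomForestClassifier(n_estimators=500, random_state=0)
                             .fit(X, y).feature_importances_)[::-1]
        selected, best_acc = [], 0.0
        for j in ranking[:max_features]:
            acc = cross_val_score(GaussianNB(), X[:, selected + [j]], y, cv=5).mean()
            if acc > best_acc:
                selected, best_acc = selected + [j], acc
        return selected, best_acc
    ```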

  8. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil

    2016-12-14

    This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. An exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first merely applies convex optimization to the asymptotic error measure; the second is a low-complexity greedy algorithm that attempts to find a sufficiently good solution to the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and confirm the efficiency of the proposed blind methods in approaching the performance of channel-aware algorithms.
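
    The greedy algorithm's flavour can be sketched as follows, assuming a known measurement matrix (the channel-aware case) and using the A-optimality-style proxy trace((A_S^T A_S)^{-1}) in place of the paper's exact and asymptotic error measures.

    ```python
    import numpy as np

    def greedy_measurement_selection(A, k, eps=1e-9):
        """Pick k rows of the n-by-m measurement matrix A that greedily minimize
        trace((A_S^T A_S + eps*I)^-1), a standard estimation-error proxy."""
        n, m = A.shape
        selected = []
        for _ in range(k):
            best, best_cost = None, np.inf
            for i in range(n):
                if i in selected:
                    continue
                S = A[selected + [i], :]
                cost = np.trace(np.linalg.inv(S.T @ S + eps * np.eye(m)))
                if cost < best_cost:
                    best, best_cost = i, cost
            selected.append(best)
        return selected

    rng = np.random.default_rng(1)
    print(greedy_measurement_selection(rng.standard_normal((40, 5)), k=8))
    ```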

  9. Strategyproof Peer Selection using Randomization, Partitioning, and Apportionment

    OpenAIRE

    Aziz, Haris; Lev, Omer; Mattei, Nicholas; Rosenschein, Jeffrey S.; Walsh, Toby

    2016-01-01

    Peer review, evaluation, and selection is a fundamental aspect of modern science. Funding bodies the world over employ experts to review and select the best proposals of those submitted for funding. The problem of peer selection, however, is much more general: a professional society may want to give a subset of its members awards based on the opinions of all members; an instructor for a MOOC or online course may want to crowdsource grading; or a marketing company may select ideas from group b...

  10. TU-AB-202-10: How Effective Are Current Atlas Selection Methods for Atlas-Based Auto-Contouring in Radiotherapy Planning?

    Energy Technology Data Exchange (ETDEWEB)

    Peressutti, D; Schipaanboord, B; Kadir, T; Gooding, M [Mirada Medical Limited, Science and Medical Technology, Oxford (United Kingdom); Soest, J van; Lustberg, T; Elmpt, W van; Dekker, A [Maastricht University Medical Centre, Department of Radiation Oncology MAASTRO - GROW School for Oncology Developmental Biology, Maastricht (Netherlands)

    2016-06-15

    Purpose: To investigate the effectiveness of atlas selection methods for improving atlas-based auto-contouring in radiotherapy planning. Methods: 275 clinically delineated H&N cases were employed as an atlas database from which atlases would be selected. A further 40 previously contoured cases were used as test patients against which atlas selection could be performed and evaluated. 26 variations of selection methods proposed in the literature and used in commercial systems were investigated. Atlas selection methods comprised either global or local image similarity measures, computed after rigid or deformable registration, combined with direct atlas search or with an intermediate template image. Workflow Box (Mirada Medical, Oxford, UK) was used for all auto-contouring. Results on brain, brainstem, parotids and spinal cord were compared to random selection, a fixed set of 10 “good” atlases, and optimal selection by an “oracle” with knowledge of the ground truth. The Dice score and the average ranking with respect to the “oracle” were employed to assess the performance of the top 10 atlases selected by each method. Results: The fixed set of “good” atlases outperformed all of the atlas-patient image similarity-based selection methods (mean Dice 0.715 c.f. 0.603 to 0.677). In general, methods based on exhaustive comparison of local similarity measures showed better average Dice scores (0.658 to 0.677) than the use of either a template image (0.655 to 0.672) or global similarity measures (0.603 to 0.666). The performance of image-based selection methods was found to be only slightly better than random selection (0.645). Dice scores given relate to the left parotid, but similar result patterns were observed for all organs. Conclusion: Intuitively, atlas selection based on the patient CT is expected to improve auto-contouring performance. However, it was found that published approaches performed only marginally better than random selection, whereas the use of a fixed set of “good” atlases performed better still.

  11. An optical authentication system based on imaging of excitation-selected lanthanide luminescence.

    Science.gov (United States)

    Carro-Temboury, Miguel R; Arppe, Riikka; Vosch, Tom; Sørensen, Thomas Just

    2018-01-01

    Secure data encryption relies heavily on one-way functions, and copy protection relies on features that are difficult to reproduce. We present an optical authentication system based on lanthanide luminescence from physical one-way functions or physical unclonable functions (PUFs). They cannot be reproduced and thus enable unbreakable encryption. Further, PUFs will prevent counterfeiting if tags with unique PUFs are grafted onto products. We have developed an authentication system that comprises a hardware reader, image analysis, and authentication software and physical keys that we demonstrate as an anticounterfeiting system. The physical keys are PUFs made from random patterns of taggants in polymer films on glass that can be imaged following selected excitation of particular lanthanide(III) ions doped into the individual taggants. This form of excitation-selected imaging ensures that by using at least two lanthanide(III) ion dopants, the random patterns cannot be copied, because the excitation selection will fail when using any other emitter. With the developed reader and software, the random patterns are read and digitized, which allows a digital pattern to be stored. This digital pattern or digital key can be used to authenticate the physical key in anticounterfeiting or to encrypt any message. The PUF key was produced with a staggering nominal encoding capacity of 7^3600. Although the encoding capacity of the realized authentication system reduces to 6 × 10^104, it is more than sufficient to completely preclude counterfeiting of products.

  12. Hyperspectral band selection based on consistency-measure of neighborhood rough set theory

    International Nuclear Information System (INIS)

    Liu, Yao; Xie, Hong; Wang, Liguo; Tan, Kezhu; Chen, Yuehua; Xu, Zhen

    2016-01-01

    Band selection is a well-known approach for reducing dimensionality in hyperspectral imaging. In this paper, a band selection method based on consistency-measure of neighborhood rough set theory (CMNRS) was proposed to select informative bands from hyperspectral images. A decision-making information system was established by the reflection spectrum of soybeans’ hyperspectral data between 400 nm and 1000 nm wavelengths. The neighborhood consistency-measure, which reflects not only the size of the decision positive region, but also the sample distribution in the boundary region, was used as the evaluation function of band significance. The optimal band subset was selected by a forward greedy search algorithm. A post-pruning strategy was employed to overcome the over-fitting problem and find the minimum subset. To assess the effectiveness of the proposed band selection technique, two classification models (extreme learning machine (ELM) and random forests (RF)) were built. The experimental results showed that the proposed algorithm can effectively select key bands and obtain satisfactory classification accuracy. (paper)

  13. Optimizing Event Selection with the Random Grid Search

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C. [Fermilab; Prosper, Harrison B. [Florida State U.; Sekmen, Sezen [Kyungpook Natl. U.; Stewart, Chip [Broad Inst., Cambridge

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
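
    In its simplest form, RGS takes the signal events themselves as candidate cut points and scans their signal and background efficiencies; the sketch below follows that idea on invented toy data with a simple figure of merit, and is not the enhanced algorithm described in the paper.

    ```python
    import numpy as np

    def random_grid_search(signal, background, n_points=1000, rng=None):
        """Each sampled signal event defines a candidate cut c; evaluate the cut x > c
        (applied to every variable) on signal and background."""
        rng = rng or np.random.default_rng(0)
        cuts = signal[rng.integers(0, len(signal), size=n_points)]
        best = None
        for c in cuts:
            eff_s = np.mean(np.all(signal > c, axis=1))      # signal efficiency
            eff_b = np.mean(np.all(background > c, axis=1))  # background efficiency
            fom = eff_s * (1.0 - eff_b)                      # toy figure of merit
            if best is None or fom > best[0]:
                best = (fom, c, eff_s, eff_b)
        return best

    rng = np.random.default_rng(2)
    sig = rng.normal(1.0, 1.0, size=(5000, 2))
    bkg = rng.normal(0.0, 1.0, size=(5000, 2))
    print(random_grid_search(sig, bkg))
    ```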

  14. Novel pseudo-random number generator based on quantum random walks

    Science.gov (United States)

    Yang, Yu-Guang; Zhao, Qian-Qian

    2016-02-01

    In this paper, we investigate the potential application of quantum computation for constructing pseudo-random number generators (PRNGs) and construct a novel PRNG based on quantum random walks (QRWs), a well-known quantum computation model. The PRNG merely relies on the equations used in the QRWs, so the generation algorithm is simple and the computation speed is fast. The proposed PRNG was subjected to statistical tests such as the NIST suite and passed them successfully. Compared with the representative PRNG based on quantum chaotic maps (QCM), the present QRWs-based PRNG has advantages such as better statistical complexity and recurrence. For example, the normalized Shannon entropy and the statistical complexity of the QRWs-based PRNG are 0.999699456771172 and 1.799961178212329e-04, respectively, for 8-bit words and a sequence of 16 Mbits. By contrast, the corresponding values of the QCM-based PRNG are 0.999448131481064 and 3.701210794388818e-04. Thus the statistical complexity and the normalized entropy of the QRWs-based PRNG are closer to 0 and 1, respectively, than those of the QCM-based PRNG as the number of words of the analyzed sequence increases. This provides a new clue for constructing PRNGs and also extends the applications of quantum computation.
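
    A toy sketch of the idea, assuming a one-dimensional Hadamard-coined walk and an invented bit-extraction rule (the paper's exact equations and extraction scheme are not reproduced here):

    ```python
    import numpy as np

    def qrw_probabilities(steps):
        """Position distribution of a discrete-time Hadamard-coined quantum walk on a line."""
        n = 2 * steps + 1
        amp = np.zeros((n, 2), dtype=complex)        # amplitudes per (position, coin state)
        amp[steps, 0] = 1.0                          # walker starts at the origin
        h = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        for _ in range(steps):
            amp = amp @ h.T                          # coin toss
            shifted = np.zeros_like(amp)
            shifted[1:, 0] = amp[:-1, 0]             # coin state 0 steps right
            shifted[:-1, 1] = amp[1:, 1]             # coin state 1 steps left
            amp = shifted
        return np.sum(np.abs(amp) ** 2, axis=1)

    def qrw_bits(steps=64, bits_per_value=8):
        """Turn fractional digits of the walk probabilities into bits (invented rule)."""
        p = qrw_probabilities(steps)
        vals = np.floor(p * 1e10).astype(np.int64) % (2 ** bits_per_value)
        return "".join(format(v, f"0{bits_per_value}b") for v in vals if v > 0)

    print(qrw_bits()[:64])
    ```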

  15. Non-random mating for selection with restricted rates of inbreeding and overlapping generations

    NARCIS (Netherlands)

    Sonesson, A.K.; Meuwissen, T.H.E.

    2002-01-01

    Minimum coancestry mating with a maximum of one offspring per mating pair (MC1) is compared with random mating schemes for populations with overlapping generations. Optimum contribution selection is used, whereby $\Delta F$ is restricted. For schemes with $\Delta F$ restricted to 0.25% per…

  16. Comparative Evaluations of Randomly Selected Four Point-of-Care Glucometer Devices in Addis Ababa, Ethiopia.

    Science.gov (United States)

    Wolde, Mistire; Tarekegn, Getahun; Kebede, Tedla

    2018-05-01

    Point-of-care glucometer (PoCG) devices play a significant role in self-monitoring of blood sugar levels, particularly in the follow-up of high blood sugar therapeutic response. The aim of this study was to evaluate blood glucose test results from four randomly selected glucometers against the standard wet chemistry (hexokinase) method in diabetic and control subjects in Addis Ababa, Ethiopia. A prospective cross-sectional study was conducted on 200 randomly selected study participants (100 participants with diabetes and 100 healthy controls). Four randomly selected PoCG devices (CareSens N, DIAVUE Prudential, On Call Extra, i-QARE DS-W) were evaluated against the hexokinase method and the ISO 15197:2003 and ISO 15197:2013 standards. The minimum and maximum blood sugar values were recorded by CareSens N (21 mg/dl) and the hexokinase method (498.8 mg/dl), respectively. The mean sugar values of all PoCG devices except On Call Extra showed significant differences compared with the reference hexokinase method. Meanwhile, all four PoCG devices had a strong positive correlation (>80%) with the reference method. On the other hand, none of the four PoCG devices fulfilled the minimum accuracy requirements set by the ISO 15197:2003 and ISO 15197:2013 standards. In addition, linear regression analysis revealed that all four selected PoCG devices overestimated the glucose concentrations. Overall, the selected four PoCG devices agreed poorly with the standard reference method. Therefore, before introducing PoCG devices to the market, there should be a standardized evaluation platform for validation. Further similar large-scale studies on other PoCG devices also need to be undertaken.
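
    For reference, agreement with the commonly cited ISO 15197:2013 limits can be checked as sketched below; the thresholds encoded here are an assumption based on the usual reading of the standard, and the example readings are invented.

    ```python
    import numpy as np

    def iso15197_2013_agreement(device, reference):
        """Fraction of readings within +/-15 mg/dl of the reference below 100 mg/dl,
        or within +/-15% at or above 100 mg/dl; the standard asks for >= 95%."""
        device = np.asarray(device, dtype=float)
        reference = np.asarray(reference, dtype=float)
        ok = np.where(reference < 100,
                      np.abs(device - reference) <= 15,
                      np.abs(device - reference) <= 0.15 * reference)
        return ok.mean()

    print(iso15197_2013_agreement([95, 180, 250], [100, 170, 240]))  # toy readings -> 1.0
    ```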

  17. Geography and genography: prediction of continental origin using randomly selected single nucleotide polymorphisms

    Directory of Open Access Journals (Sweden)

    Ramoni Marco F

    2007-03-01

    Full Text Available Abstract Background Recent studies have shown that when individuals are grouped on the basis of genetic similarity, group membership corresponds closely to continental origin. There has been considerable debate about the implications of these findings in the context of larger debates about race and the extent of genetic variation between groups. Some have argued that clustering according to continental origin demonstrates the existence of significant genetic differences between groups and that these differences may have important implications for differences in health and disease. Others argue that clustering according to continental origin requires the use of large amounts of genetic data or specifically chosen markers and is indicative only of very subtle genetic differences that are unlikely to have biomedical significance. Results We used small numbers of randomly selected single nucleotide polymorphisms (SNPs) from the International HapMap Project to train naïve Bayes classifiers for prediction of ancestral continent of origin. Predictive accuracy was tested on two independent data sets. Genetically similar groups should be difficult to distinguish, especially if only a small number of genetic markers are used. The genetic differences between continentally defined groups are sufficiently large that one can accurately predict ancestral continent of origin using only a minute, randomly selected fraction of the genetic variation present in the human genome. Genotype data from only 50 random SNPs was sufficient to predict ancestral continent of origin in our primary test data set with an average accuracy of 95%. Genetic variations informative about ancestry were common and widely distributed throughout the genome. Conclusion Accurate characterization of ancestry is possible using small numbers of randomly selected SNPs. The results presented here show how investigators conducting genetic association studies can use small numbers of arbitrarily chosen SNPs to characterize the ancestry of study participants.
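
    The experiment can be mimicked in a few lines: train a categorical naive Bayes classifier on 50 randomly chosen SNPs. The genotype matrix below is a synthetic stand-in for the HapMap data, with all frequencies and sample sizes invented.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import CategoricalNB

    rng = np.random.default_rng(3)
    n_per_group, n_snps, n_random = 200, 500, 50
    freqs = rng.uniform(0.1, 0.9, size=(3, n_snps))          # one allele-frequency row per continent
    X = np.vstack([rng.binomial(2, f, size=(n_per_group, n_snps)) for f in freqs])  # 0/1/2 genotypes
    y = np.repeat([0, 1, 2], n_per_group)

    snps = rng.choice(n_snps, size=n_random, replace=False)  # 50 randomly selected SNPs
    X_tr, X_te, y_tr, y_te = train_test_split(X[:, snps], y, random_state=0)
    clf = CategoricalNB(min_categories=3).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```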

  18. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected, and the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, various automatic selection methods for algorithms and/or hyper-parameter values have been proposed, but existing methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method and show that, compared to a state-of-the-art automatic selection method, it can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying the high-quality solutions required by many machine learning-based clinical data analysis tasks.

  19. Fault Diagnosis for Hydraulic Servo System Using Compressed Random Subspace Based ReliefF

    Directory of Open Access Journals (Sweden)

    Yu Ding

    2018-01-01

    Full Text Available Playing an important role in electromechanical systems, the hydraulic servo system is crucial to mechanical systems such as engineering machinery, metallurgical machinery, ships, and other equipment. Fault diagnosis based on monitoring and sensory signals plays an important role in avoiding catastrophic accidents and enormous economic losses. This study presents a fault diagnosis scheme for the hydraulic servo system using the compressed random subspace based ReliefF (CRSR) method. From the point of view of feature selection, the scheme utilizes the CRSR method to determine the most stable feature combination that simultaneously contains the most adequate information. Based on the feature selection structure of ReliefF, CRSR employs feature integration rules in the compressed domain and substitutes information entropy and fuzzy membership for the traditional distance measurement index. The proposed CRSR method enhances the robustness of the feature information against interference while selecting a feature combination with balanced information-expressing ability. To demonstrate the effectiveness of the proposed CRSR method, a hydraulic servo system joint simulation model was constructed in HyPneu and Simulink, and three fault modes were injected to generate the validation data.
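
    For background, the basic ReliefF weight update that CRSR builds on can be sketched as follows (a plain two-class, continuous-feature variant with invented defaults, not the compressed-domain CRSR algorithm itself):

    ```python
    import numpy as np

    def relieff(X, y, n_neighbors=10, n_samples=200, rng=None):
        """Reward features that differ across nearest misses and agree across nearest hits."""
        rng = rng or np.random.default_rng(0)
        X = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + 1e-12)   # scale features to [0, 1]
        w = np.zeros(X.shape[1])
        m = min(n_samples, len(X))
        for i in rng.choice(len(X), size=m, replace=False):
            d = np.abs(X - X[i]).sum(axis=1)                    # Manhattan distances
            same = (y == y[i])
            diff = ~same                                        # other-class instances
            same[i] = False                                     # exclude the sample itself
            hits = np.argsort(np.where(same, d, np.inf))[:n_neighbors]
            misses = np.argsort(np.where(diff, d, np.inf))[:n_neighbors]
            w += np.abs(X[misses] - X[i]).mean(axis=0) - np.abs(X[hits] - X[i]).mean(axis=0)
        return w / m

    rng = np.random.default_rng(0)
    X = rng.random((200, 5)); y = (X[:, 2] > 0.5).astype(int)
    print(relieff(X, y).round(3))                               # feature 2 should score highest
    ```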

  1. Fuzzy Random λ-Mean SAD Portfolio Selection Problem: An Ant Colony Optimization Approach

    Science.gov (United States)

    Thakur, Gour Sundar Mitra; Bhattacharyya, Rupak; Mitra, Swapan Kumar

    2010-10-01

    To reach an investment goal, one has to select a combination of securities among different portfolios containing large numbers of securities. The past records of each security alone do not guarantee its future return. As many uncertain factors directly or indirectly influence the stock market, and some newer stock markets do not have enough historical data, experts' expectations and experience must be combined with past records to generate an effective portfolio selection model. In this paper the return of a security is assumed to be a Fuzzy Random Variable Set (FRVS), where returns are sets of random numbers which are in turn fuzzy numbers. A new λ-Mean Semi Absolute Deviation (λ-MSAD) portfolio selection model is developed. The subjective opinions of the investors on the rate of return of each security are taken into consideration by introducing a pessimistic-optimistic parameter vector λ. The λ-MSAD model is preferred as it uses the absolute deviation of the rate of return of a portfolio instead of the variance as the measure of risk. As this model can be reduced to a Linear Programming Problem (LPP), it can be solved much faster than quadratic programming problems. Ant Colony Optimization (ACO) is used for solving the portfolio selection problem. ACO is a paradigm for designing meta-heuristic algorithms for combinatorial optimization problems. Data from the BSE is used for illustration.

  2. Pediatric selective mutism therapy: a randomized controlled trial.

    Science.gov (United States)

    Esposito, Maria; Gimigliano, Francesca; Barillari, Maria R; Precenzano, Francesco; Ruberto, Maria; Sepe, Joseph; Barillari, Umberto; Gimigliano, Raffaele; Militerni, Roberto; Messina, Giovanni; Carotenuto, Marco

    2017-10-01

    Selective mutism (SM) is a rare disease in children, coded by DSM-5 as an anxiety disorder. Despite the disabling nature of the disease, there is still no specific treatment. The aims of this study were to verify the efficacy of a six-month standard psychomotor treatment and the positive changes in lifestyle in a population of children affected by SM. Randomized controlled trial registered in the European Clinical Trials Registry (EudraCT 2015-001161-36), conducted at a university third-level centre (Child and Adolescent Neuropsychiatry Clinic). The study population comprised 67 children in group A (psychomotricity treatment) (35 M, mean age 7.84±1.15) and 71 children in group B (behavioral and educational counseling) (37 M, mean age 7.75±1.36). Psychomotor treatment was administered by trained child therapists in residential settings three times per week. Each child was treated for the whole period by the same therapist, and all the therapists shared the same protocol. The standard psychomotor session lasts 45 minutes. At T0 and after 6 months of treatment (T1), patients underwent a behavioral and SM severity assessment. To verify the effects of the psychomotor management, the Child Behavior Checklist questionnaire (CBCL) and the Selective Mutism Questionnaire (SMQ) were administered to the parents. After 6 months of psychomotor treatment, SM children showed a significant reduction in CBCL scores such as social relations, anxious/depressed, social problems and total problems, suggesting a positive effect of psychomotor treatment on selective mutism, even if further studies are needed. The present study identifies psychomotricity as a safe and effective therapy for pediatric selective mutism.

  3. Factors Associated with High Use of a Workplace Web-Based Stress Management Program in a Randomized Controlled Intervention Study

    Science.gov (United States)

    Hasson, H.; Brown, C.; Hasson, D.

    2010-01-01

    In web-based health promotion programs, large variations in participant engagement are common. The aim was to investigate determinants of high use of a worksite self-help web-based program for stress management. Two versions of the program were offered to randomly selected departments in IT and media companies. A static version of the program…

  4. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa

    Directory of Open Access Journals (Sweden)

    Thandi Kapwata

    2016-11-01

    Full Text Available Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the importance of potentially influential parameters through the percent increase of the mean squared error: as this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the variables selected by the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. Of the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capability in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts.
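
    The percent increase in MSE is the permutation importance reported by R's randomForest; a scikit-learn analogue can be sketched as below, with invented placeholder data standing in for the monthly environmental predictors.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(4)
    X = rng.standard_normal((300, 7))                 # 7 predictors; column 5 plays "altitude"
    y = 2.0 * X[:, 5] + 0.5 * X[:, 1] + rng.standard_normal(300)

    rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
    print("variables ranked by importance:", np.argsort(imp.importances_mean)[::-1])
    ```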

  5. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2015-01-01

    Full Text Available The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and a 0.737 Matthews correlation coefficient). The high prediction accuracy suggests that our method can be a useful approach for identifying RNA-binding proteins from sequence information.

  6. Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design

    Science.gov (United States)

    Wagler, Amy; Wagler, Ron

    2014-01-01

    Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…

  7. Emergence of multilevel selection in the prisoner's dilemma game on coevolving random networks

    International Nuclear Information System (INIS)

    Szolnoki, Attila; Perc, Matjaz

    2009-01-01

    We study the evolution of cooperation in the prisoner's dilemma game, whereby a coevolutionary rule is introduced that molds the random topology of the interaction network in two ways. First, existing links are deleted whenever a player adopts a new strategy or its degree exceeds a threshold value; second, new links are added randomly after a given number of game iterations. These coevolutionary processes correspond to the generic formation of new links and deletion of existing links that, especially in human societies, appear frequently as a consequence of ongoing socialization, change of lifestyle or death. Due to the counteraction of deletions and additions of links the initial heterogeneity of the interaction network is qualitatively preserved, and thus cannot be held responsible for the observed promotion of cooperation. Indeed, the coevolutionary rule evokes the spontaneous emergence of a powerful multilevel selection mechanism, which despite the sustained random topology of the evolving network, maintains cooperation across the whole span of defection temptation values.

  8. Improved methods in Agrobacterium-mediated transformation of almond using positive (mannose/pmi) or negative (kanamycin resistance) selection-based protocols.

    Science.gov (United States)

    Ramesh, Sunita A; Kaiser, Brent N; Franks, Tricia; Collins, Graham; Sedgley, Margaret

    2006-08-01

    A protocol for Agrobacterium-mediated transformation with either kanamycin or mannose selection was developed for leaf explants of the almond cultivar Prunus dulcis cv. Ne Plus Ultra. Regenerating shoots were selected on medium containing 15 μM kanamycin (negative selection), while in the positive selection strategy, shoots were selected on 2.5 g/l mannose supplemented with 15 g/l sucrose. Transformation efficiencies, based on PCR analysis of individual putative transformed shoots from independent lines relative to the initial numbers of leaf explants tested, were 5.6% for kanamycin/nptII and 6.8% for mannose/pmi selection, respectively. Southern blot analysis on six randomly chosen PCR-positive shoots confirmed the presence of the nptII transgene in each, and five randomly chosen lines identified by PCR to contain the pmi transgene showed positive hybridisation to a pmi DNA probe. The positive (mannose/pmi) and negative (kanamycin) selection protocols used in this study greatly improved transformation efficiency in almond, as confirmed by PCR and Southern blot. This study also demonstrates that the mannose/pmi selection protocol is appropriate for almond and can result in higher transformation efficiencies than kanamycin/nptII selection protocols.

  9. Experimental nonlocality-based randomness generation with nonprojective measurements

    Science.gov (United States)

    Gómez, S.; Mattar, A.; Gómez, E. S.; Cavalcanti, D.; Farías, O. Jiménez; Acín, A.; Lima, G.

    2018-04-01

    We report on an optical setup generating more than one bit of randomness from one entangled bit (i.e., a maximally entangled state of two qubits). The amount of randomness is certified through the observation of Bell nonlocal correlations. To attain this result we implemented a high-purity entanglement source and a nonprojective three-outcome measurement. Our implementation achieves a gain of 27% of randomness as compared with the standard methods using projective measurements. Additionally, we estimate the amount of randomness certified in a one-sided device-independent scenario, through the observation of Einstein-Podolsky-Rosen steering. Our results prove that nonprojective quantum measurements allow extending the limits for nonlocality-based certified randomness generation using current technology.

  10. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  11. SDE based regression for random PDEs

    KAUST Repository

    Bayer, Christian

    2016-01-01

    A simulation based method for the numerical solution of PDE with random coefficients is presented. By the Feynman-Kac formula, the solution can be represented as conditional expectation of a functional of a corresponding stochastic differential equation driven by independent noise. A time discretization of the SDE for a set of points in the domain and a subsequent Monte Carlo regression lead to an approximation of the global solution of the random PDE. We provide an initial error and complexity analysis of the proposed method along with numerical examples illustrating its behaviour.
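
    A minimal sketch of the idea for a constant-coefficient toy problem follows; for a PDE with random coefficients those coefficients would enter the drift and diffusion of the simulated SDE, and the terminal condition g used here is an invented illustrative choice.

    ```python
    import numpy as np

    # By Feynman-Kac, u(0, x) = E[g(X_T) | X_0 = x] solves the corresponding backward PDE;
    # here X is simply Brownian motion (dX = dW), i.e. the heat equation.
    rng = np.random.default_rng(5)
    T, n_steps, n_paths = 1.0, 50, 20000
    g = np.tanh

    x0 = rng.uniform(-2, 2, size=n_paths)              # starting points covering the domain
    x = x0.copy()
    dt = T / n_steps
    for _ in range(n_steps):                           # Euler-Maruyama time stepping
        x += np.sqrt(dt) * rng.standard_normal(n_paths)

    V = np.vander(x0, 6)                               # global polynomial basis in x0
    coef, *_ = np.linalg.lstsq(V, g(x), rcond=None)    # Monte Carlo regression
    u = lambda xq: np.vander(np.atleast_1d(xq), 6) @ coef
    print(u(0.0))                                      # approximate solution at x = 0
    ```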

  12. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest.

    Science.gov (United States)

    Ma, Suliang; Chen, Mingxuan; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-04-16

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, to address the incomplete description given by Shannon entropy, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods.

  13. The use of odd random phase electrochemical impedance spectroscopy to study lithium-based corrosion inhibition by active protective coatings

    NARCIS (Netherlands)

    Meeusen, M.; Visser, P.; Fernández Macía, L.; Hubin, A.; Terryn, H.A.; Mol, J.M.C.

    2018-01-01

    In this work, the study of the time-dependent behaviour of lithium carbonate based inhibitor technology for the active corrosion protection of aluminium alloy 2024-T3 is presented. Odd random phase electrochemical impedance spectroscopy (ORP-EIS) is selected as the electrochemical tool to study these time-dependent corrosion inhibition processes.

  14. A Random-Dot Kinematogram for Web-Based Vision Research

    Directory of Open Access Journals (Sweden)

    Sivananda Rajananda

    2018-01-01

    Full Text Available Web-based experiments using visual stimuli have become increasingly common in recent years, but many frequently-used stimuli in vision research have yet to be developed for online platforms. Here, we introduce the first open access random-dot kinematogram (RDK for use in web browsers. This fully customizable RDK offers options to implement several different types of noise (random position, random walk, random direction and parameters to control aperture shape, coherence level, the number of dots, and other features. We include links to commented JavaScript code for easy implementation in web-based experiments, as well as an example of how this stimulus can be integrated as a plugin with a JavaScript library for online studies (jsPsych.
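
    The frame-update logic of such a stimulus takes only a few lines; the sketch below implements the 'random direction' noise variant with invented defaults, as a conceptual Python analogue of the library's JavaScript code.

    ```python
    import numpy as np

    def rdk_frames(n_dots=100, n_frames=60, coherence=0.5, direction=0.0,
                   speed=2.0, size=200, rng=None):
        """Dot positions per frame: coherent dots drift along `direction` (radians),
        the remaining dots take a fresh random direction on every frame."""
        rng = rng or np.random.default_rng(0)
        pos = rng.uniform(0, size, size=(n_dots, 2))
        coherent = rng.random(n_dots) < coherence
        frames = []
        for _ in range(n_frames):
            theta = np.where(coherent, direction, rng.uniform(0, 2 * np.pi, n_dots))
            step = speed * np.column_stack([np.cos(theta), np.sin(theta)])
            pos = (pos + step) % size                  # wrap around the square aperture
            frames.append(pos.copy())
        return frames
    ```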

  15. Random drift versus selection in academic vocabulary: an evolutionary analysis of published keywords.

    Science.gov (United States)

    Bentley, R Alexander

    2008-08-27

    The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citations database. In four distinct case-studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than in the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena; wherever the popularity of multiple items through time has been recorded, as with web searches, or sales of popular music and books, for example.
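
    The random-copying null model can be simulated directly; the sketch below is a basic neutral copying model with innovation, all parameter values being invented.

    ```python
    import numpy as np

    def random_copying(n_agents=1000, n_steps=200, mu=0.01, rng=None):
        """Each step, every author copies the keyword of a random member of the previous
        generation, or coins a new keyword with probability mu (pure drift, no selection)."""
        rng = rng or np.random.default_rng(0)
        pop = np.zeros(n_agents, dtype=int)
        next_label, top_share = 1, []
        for _ in range(n_steps):
            pop = pop[rng.integers(0, n_agents, size=n_agents)]   # random copying
            for i in np.where(rng.random(n_agents) < mu)[0]:      # innovation
                pop[i] = next_label
                next_label += 1
            top_share.append(np.bincount(pop).max() / n_agents)   # share of the top keyword
        return top_share

    print(random_copying()[-5:])
    ```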

  16. Feature Selection based on Machine Learning in MRIs for Hippocampal Segmentation

    Science.gov (United States)

    Tangaro, Sabina; Amoroso, Nicola; Brescia, Massimo; Cavuoti, Stefano; Chincarini, Andrea; Errico, Rosangela; Paolo, Inglese; Longo, Giuseppe; Maglietta, Rosalia; Tateo, Andrea; Riccio, Giuseppe; Bellotti, Roberto

    2015-01-01

    Neurodegenerative diseases are frequently associated with structural changes in the brain. Magnetic resonance imaging (MRI) scans can show these variations and can therefore be used as a supportive feature for a number of neurodegenerative diseases. The hippocampus has been known to be a biomarker for Alzheimer disease and other neurological and psychiatric diseases. However, this requires accurate, robust, and reproducible delineation of hippocampal structures. Fully automatic methods usually take a voxel-based approach, in which a number of local features are calculated for each voxel. In this paper, we compared four different techniques for selecting from a set of 315 features extracted for each voxel: (i) a filter method based on the Kolmogorov-Smirnov test; (ii) and (iii) two wrapper methods, sequential forward selection and sequential backward elimination; and (iv) an embedded method based on the random forest classifier. The methods were trained on a set of 10 T1-weighted brain MRIs and tested on an independent set of 25 subjects. The resulting segmentations were compared with manual reference labelling. By using only 23 features per voxel (sequential backward elimination), we obtained performances comparable to the state of the art with respect to the standard tool FreeSurfer.

  17. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information, which usually increases the computational complexity of the estimation without significantly improving its performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The criterion determines the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement, and it is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.
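
    A covariance-driven selection step can be sketched as follows, assuming a linear-Gaussian (Kalman) update and picking the measurement that most reduces the trace of the state covariance; this trace criterion is a simplified stand-in for the article's convergence-based criterion.

    ```python
    import numpy as np

    def select_measurement(P, hs, rs):
        """Among candidate scalar measurements (rows hs, noise variances rs), return the
        index whose Kalman update yields the smallest trace of the updated covariance."""
        best, best_trace = None, np.trace(P)
        for idx, (h, r) in enumerate(zip(hs, rs)):
            h = np.asarray(h, dtype=float).reshape(1, -1)
            s = float(h @ P @ h.T) + r                 # innovation variance
            K = (P @ h.T) / s                          # Kalman gain
            P_new = (np.eye(len(P)) - K @ h) @ P
            if np.trace(P_new) < best_trace:
                best, best_trace = idx, np.trace(P_new)
        return best, best_trace

    P = np.diag([4.0, 1.0])                            # state 0 is the most uncertain
    print(select_measurement(P, hs=[[1.0, 0.0], [0.0, 1.0]], rs=[0.5, 0.5]))
    ```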

  18. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

    Full Text Available Abstract Background Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on the predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results Models built with true absences had the best predictive power, best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have comparatively low fit.
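
    Strategy (b) amounts to labelling presences 1 and a large random background sample 0 before fitting the logistic regression; a minimal sketch, assuming arrays of environmental predictor values at presence and background locations:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fit_with_random_pseudo_absences(presence_X, background_X, n_pseudo=10000, rng=None):
        """Fit a presence/pseudo-absence logistic regression with random background points."""
        rng = rng or np.random.default_rng(0)
        idx = rng.choice(len(background_X), size=min(n_pseudo, len(background_X)), replace=False)
        X = np.vstack([presence_X, background_X[idx]])
        y = np.concatenate([np.ones(len(presence_X)), np.zeros(len(idx))])
        return LogisticRegression(max_iter=1000).fit(X, y)
    ```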

  19. 40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.

    Science.gov (United States)

    2010-07-01

    § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...

  1. Joint random beam and spectrum selection for spectrum sharing systems with partial channel state information

    KAUST Repository

    Abdallah, Mohamed M.; Sayed, Mostafa M.; Alouini, Mohamed-Slim; Qaraqe, Khalid A.

    2013-11-01

    In this work, we develop joint interference-aware random beam and spectrum selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed at the primary receiver is below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver, sharing the same spectrum with a set of primary links each composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes jointly select a beam, among a set of power-optimized random beams, as well as the primary spectrum that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint. In particular, we consider the case where the interference level is described by a q-bit description of its magnitude, for which we propose a technique to find the optimal quantizer thresholds in a mean square error (MSE) sense. © 2013 IEEE.

  2. The RANDOM computer program: A linear congruential random number generator

    Science.gov (United States)

    Miles, R. F., Jr.

    1986-01-01

    The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
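
    For readers unfamiliar with the linear congruential form x_{n+1} = (a*x_n + c) mod m, a minimal Python sketch follows; the parameters shown are the classic "minimal standard" values, not necessarily those selected by the RANDOM program.

    ```python
    def lcg(seed, a=16807, c=0, m=2**31 - 1):
        """Minimal linear congruential generator yielding uniform deviates in (0, 1)."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m

    gen = lcg(seed=12345)
    print([round(next(gen), 6) for _ in range(3)])
    ```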

  3. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy, and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) rather than the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency.

  4. Random drift versus selection in academic vocabulary: an evolutionary analysis of published keywords.

    Directory of Open Access Journals (Sweden)

    R Alexander Bentley

    Full Text Available The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citations database. In four distinct case-studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than in the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena; wherever the popularity of multiple items through time has been recorded, as with web searches, or sales of popular music and books, for example.

  6. Randomized controlled trials of simulation-based interventions in Emergency Medicine: a methodological review.

    Science.gov (United States)

    Chauvin, Anthony; Truchot, Jennifer; Bafeta, Aida; Pateron, Dominique; Plaisance, Patrick; Yordanov, Youri

    2018-04-01

    The number of trials assessing Simulation-Based Medical Education (SBME) interventions has rapidly expanded. Many studies show that potential flaws in the design, conduct and reporting of randomized controlled trials (RCTs) can bias their results. We conducted a methodological review of RCTs assessing SBME interventions in Emergency Medicine (EM) and examined their methodological characteristics. We searched MEDLINE via PubMed for RCTs that assessed a simulation intervention in EM, published in 6 general and internal medicine journals and in the top 10 EM journals. The Cochrane Collaboration risk of bias tool was used to assess risk of bias, intervention reporting was evaluated with the "template for intervention description and replication" checklist, and methodological quality was evaluated with the Medical Education Research Study Quality Instrument (MERSQI). Report selection and data extraction were done by 2 independent researchers. From 1394 RCTs screened, 68 trials assessed an SBME intervention. Cardiopulmonary resuscitation (CPR) was the most frequent topic (81%). Random sequence generation and allocation concealment were performed correctly in 66% and 49% of trials, respectively. Blinding of participants and assessors was performed correctly in 19% and 68%. Risk of attrition bias was low in three-quarters of the studies (n = 51). Risk of selective reporting bias was unclear in nearly all studies. The mean MERSQI score was 13.4/18; 4% of the reports provided a description allowing replication of the intervention. Trials assessing simulation represent one quarter of RCTs in EM. Their quality remains unclear, and reproducing the interventions appears challenging due to reporting issues.

  7. Robust portfolio selection based on asymmetric measures of variability of stock returns

    Science.gov (United States)

    Chen, Wei; Tan, Shaohua

    2009-10-01

    This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.

  8. High Entropy Random Selection Protocols

    NARCIS (Netherlands)

    H. Buhrman (Harry); M. Christandl (Matthias); M. Koucky (Michal); Z. Lotker (Zvi); B. Patt-Shamir; M. Charikar; K. Jansen; O. Reingold; J. Rolim

    2007-01-01

    In this paper, we construct protocols for two parties that do not trust each other, to generate random variables with high Shannon entropy. We improve known bounds for the trade-off between the number of rounds, the length of communication and the entropy of the outcome.

  9. Integrated Behavior Therapy for Selective Mutism: a randomized controlled pilot study.

    Science.gov (United States)

    Bergman, R Lindsey; Gonzalez, Araceli; Piacentini, John; Keller, Melody L

    2013-10-01

    To evaluate the feasibility, acceptability, and preliminary efficacy of a novel behavioral intervention for reducing symptoms of selective mutism and increasing functional speech. A total of 21 children ages 4 to 8 with primary selective mutism were randomized to 24 weeks of Integrated Behavior Therapy for Selective Mutism (IBTSM) or a 12-week Waitlist control. Clinical outcomes were assessed using blind independent evaluators, parent- and teacher-report, and an objective behavioral measure. Treatment recipients completed a three-month follow-up to assess durability of treatment gains. Data indicated increased functional speaking behavior post-treatment as rated by parents and teachers, with a high rate of treatment responders as rated by blind independent evaluators (75%). Conversely, children in the Waitlist comparison group did not experience significant improvements in speaking behaviors. Children who received IBTSM also demonstrated significant improvements in number of words spoken at school compared to baseline; however, significant group differences did not emerge. Treatment recipients also experienced significant reductions in social anxiety per parent, but not teacher, report. Clinical gains were maintained over the 3-month follow-up. IBTSM appears to be a promising new intervention that is efficacious in increasing functional speaking behaviors, feasible, and acceptable to parents and teachers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  11. An improved label propagation algorithm based on node importance and random walk for community detection

    Science.gov (United States)

    Ma, Tianren; Xia, Zhengyou

    2017-05-01

    Currently, with the rapid development of information technology, electronic media for social communication are becoming more and more popular. Discovery of communities is a very effective way to understand the properties of complex networks. However, traditional community detection algorithms consider only the structural characteristics of a social organization, wasting much of the information carried by nodes and edges, and they do not consider each node on its merits. The label propagation algorithm (LPA) is a near-linear-time algorithm that aims to find communities in a network, and it attracts many scholars owing to its high efficiency. In recent years, many improved algorithms based on LPA have been put forward. In this paper, an improved LPA based on random walk and node importance (NILPA) is proposed. Firstly, a list of node importance is obtained through calculation, and the nodes in the network are sorted in descending order of importance. On the basis of random walk, a matrix is constructed to measure the similarity of nodes, which avoids the random choice in LPA. Secondly, a new metric IAS (importance and similarity) is calculated from node importance and the similarity matrix, which is used to avoid the random selection in the original LPA and to improve the algorithm's stability. Finally, tests on real-world and synthetic networks are given. The results show that this algorithm has better performance than existing methods in finding community structure.
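
    For reference, the baseline LPA that NILPA improves on can be sketched as follows; this is the plain algorithm with the random tie-breaking the paper replaces, and the networkx dependency and karate-club example are illustrative assumptions.

```python
# Baseline label propagation (LPA): each node repeatedly adopts the label
# most common among its neighbours; ties are broken at random, which is
# exactly the step NILPA replaces with importance/similarity scores.
import random
import networkx as nx

def label_propagation(G, max_iter=100, seed=0):
    rnd = random.Random(seed)
    labels = {v: v for v in G}        # every node starts as its own community
    nodes = list(G)
    for _ in range(max_iter):
        rnd.shuffle(nodes)
        changed = False
        for v in nodes:
            counts = {}
            for u in G[v]:
                counts[labels[u]] = counts.get(labels[u], 0) + 1
            if not counts:
                continue
            top = max(counts.values())
            pick = rnd.choice([l for l, c in counts.items() if c == top])
            if pick != labels[v]:
                labels[v], changed = pick, True
        if not changed:
            break
    return labels

G = nx.karate_club_graph()
print(len(set(label_propagation(G).values())), "communities found")
```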

  12. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    Science.gov (United States)

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial one. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
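
    One way the DE search could be set up is sketched below; the synthetic spectra, split-half scoring and mask encoding are assumptions for illustration and may differ from the paper's exact formulation.

```python
# Sketch: differential evolution (DE) selects spectral points by evolving a
# real-valued mask (thresholded at 0.5) that gates wavelengths into a
# least-squares calibration fitted on synthetic data.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_samples, n_wavelengths = 60, 40
X = rng.normal(size=(n_samples, n_wavelengths))
true_coef = np.zeros(n_wavelengths)
true_coef[[5, 12, 30]] = [1.0, -0.5, 0.8]      # informative wavelengths
y = X @ true_coef + 0.05 * rng.normal(size=n_samples)

def cv_error(mask):
    sel = mask > 0.5
    if not sel.any():
        return 1e6                              # penalize empty selections
    Xs = X[:, sel]
    half = n_samples // 2                       # simple split-half validation
    coef, *_ = np.linalg.lstsq(Xs[:half], y[:half], rcond=None)
    resid = y[half:] - Xs[half:] @ coef
    return float(np.mean(resid ** 2))

result = differential_evolution(cv_error, [(0, 1)] * n_wavelengths,
                                seed=1, maxiter=50)
print("selected wavelengths:", np.flatnonzero(result.x > 0.5))
```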

  13. Modified random hinge transport mechanics and multiple scattering step-size selection in EGS5

    International Nuclear Information System (INIS)

    Wilderman, S.J.; Bielajew, A.F.

    2005-01-01

    The new transport mechanics in EGS5 allows for significantly longer electron transport step sizes and hence shorter computation times than required for identical problems in EGS4. But as with all Monte Carlo electron transport algorithms, certain classes of problems exhibit step-size dependencies even when operating within recommended ranges, sometimes making selection of step-sizes a daunting task for novice users. Further contributing to this problem, because of the decoupling of multiple scattering and continuous energy loss in the dual random hinge transport mechanics of EGS5, there are two independent step sizes in EGS5, one for multiple scattering and one for continuous energy loss, each of which influences speed and accuracy in a different manner. Further, whereas EGS4 used a single value of fractional energy loss (ESTEPE) to determine step sizes at all energies, to increase performance by decreasing the amount of effort expended simulating lower energy particles, EGS5 permits the fractional energy loss values which are used to determine both the multiple scattering and continuous energy loss step sizes to vary with energy. This results in requiring the user to specify four fractional energy loss values when optimizing computations for speed. Thus, in order to simplify step-size selection and to mitigate step-size dependencies, a method has been devised to automatically optimize step-size selection based on a single material dependent input related to the size of problem tally region. In this paper we discuss the new transport mechanics in EGS5 and describe the automatic step-size optimization algorithm. (author)

  14. Optical image encryption based on interference under convergent random illumination

    International Nuclear Information System (INIS)

    Kumar, Pramod; Joseph, Joby; Singh, Kehar

    2010-01-01

    In an optical image encryption system based on the interference principle, two pure phase masks are designed analytically to hide an image. These two masks are illuminated with a plane wavefront to retrieve the original image in the form of an interference pattern at the decryption plane. Replacement of the plane wavefront with convergent random illumination in the proposed scheme leads to an improvement in the security of interference based encryption. The proposed encryption scheme retains the simplicity of an interference based method, as the two pure masks are generated with an analytical method without any iterative algorithm. In addition to the free-space propagation distance and the two pure phase masks, the convergence distance and the randomized lens phase function are two new encryption parameters to enhance the system security. The robustness of this scheme against occlusion of the random phase mask of the randomized lens phase function is investigated. The feasibility of the proposed scheme is demonstrated with numerical simulation results

  15. A generator for unique quantum random numbers based on vacuum states

    DEFF Research Database (Denmark)

    Gabriel, C.; Wittmann, C.; Sych, D.

    2010-01-01

    Random numbers are a valuable component in diverse applications that range from simulations(1) over gambling to cryptography(2,3). The quest for true randomness in these applications has engendered a large variety of different proposals for producing random numbers based on the foundational unpredictability of quantum mechanics(4-11). However, most approaches do not consider that a potential adversary could have knowledge about the generated numbers, so the numbers are not verifiably random and unique(12-15). Here we present a simple experimental setup based on homodyne measurements that uses the purity of a continuous-variable quantum vacuum state to generate unique random numbers. We use the intrinsic randomness in measuring the quadratures of a mode in the lowest energy vacuum state, which cannot be correlated to any other state. The simplicity of our source, combined with its verifiably...

  16. Preferential selection based on degree difference in the spatial prisoner's dilemma games

    Science.gov (United States)

    Huang, Changwei; Dai, Qionglin; Cheng, Hongyan; Li, Haihong

    2017-10-01

    Strategy evolution in spatial evolutionary games is generally implemented through imitation processes between individuals. In most previous studies, it is assumed that individuals pick one of their neighbors randomly to learn from. However, considering the heterogeneity of individuals' influence in real society, preferential selection is more realistic. Here, we introduce a preferential selection mechanism based on degree difference into spatial prisoner's dilemma games on Erdős-Rényi networks and Barabási-Albert scale-free networks and investigate the effects of the preferential selection on cooperation. The results show that, when individuals prefer to imitate neighbors whose degree difference with themselves is small, cooperation is hurt by the preferential selection. In contrast, when individuals prefer to learn from neighbors with a large degree difference, there exists an optimal preference strength resulting in the maximal cooperation level regardless of the network structure. In addition, we investigate the robustness of the results against variations of the noise, the average degree and the size of the network in the model, and find that the qualitative features of the results are unchanged.

  17. SELECTION OF PHYSALIS POPULATIONS FOR HYBRIDIZATIONS, BASED ON FRUIT TRAITS

    Directory of Open Access Journals (Sweden)

    NICOLE TREVISANI

    2016-01-01

    Full Text Available ABSTRACT The objective of this study was to characterize the genetic variability in physalis populations and select promising parents based on fruit traits. The experimental design consisted of randomized blocks with six populations. Five plants per treatment were sampled. The evaluated traits were fruit weight, capsule weight, 1000-seed weight and fruit diameter. The data were subjected to multivariate analysis of variance with error specification between and within (p < 0.05). Mahalanobis' distance was used as a measure of genetic dissimilarity. Significant differences between physalis populations were detected for the assessed traits. The ratio of the between- to the within-error indicated no need for sampling within the experimental unit. Dissimilarity was greatest between Lages and Vacaria. The most discriminating traits were capsule weight, fruit weight and fruit diameter. The multivariate contrasts indicated differences between the population from Vacaria and those from Caçador, Lages and Peru, which were selected for hybridizations.

  18. Selective decontamination in pediatric liver transplants. A randomized prospective study.

    Science.gov (United States)

    Smith, S D; Jackson, R J; Hannakan, C J; Wadowsky, R M; Tzakis, A G; Rowe, M I

    1993-06-01

    Although it has been suggested that selective decontamination of the digestive tract (SDD) decreases postoperative aerobic Gram-negative and fungal infections in orthotopic liver transplantation (OLT), no controlled trials exist in pediatric patients. This prospective, randomized controlled study of 36 pediatric OLT patients examines the effect of short-term SDD on postoperative infection and digestive tract flora. Patients were randomized into two groups. The control group received perioperative parenteral antibiotics only. The SDD group received in addition polymyxin E, tobramycin, and amphotericin B enterally and by oropharyngeal swab postoperatively until oral intake was tolerated (6 +/- 4 days). Indications for operation, preoperative status, age, and intensive care unit and hospital length of stay were no different in SDD (n = 18) and control (n = 18) groups. A total of 14 Gram-negative infections (intraabdominal abscess 7, septicemia 5, pneumonia 1, urinary tract 1) developed in the 36 patients studied. Mortality was not significantly different in the two groups. However, there were significantly fewer patients with Gram-negative infections in the SDD group: 3/18 patients (11%) vs. 11/18 patients (50%) in the control group, P < 0.001. There was also significant reduction in aerobic Gram-negative flora in the stool and pharynx in patients receiving SDD. Gram-positive and anaerobic organisms were unaffected. We conclude that short-term postoperative SDD significantly reduces Gram-negative infections in pediatric OLT patients.

  19. Expectation-based approach for one-dimensional randomly disordered phononic crystals

    International Nuclear Information System (INIS)

    Wu, Feng; Gao, Qiang; Xu, Xiaoming; Zhong, Wanxie

    2014-01-01

    An expectation-based approach to the statistical theorem is proposed for the one-dimensional randomly disordered phononic crystal. In the proposed approach, the expectations of the random eigenstates of randomly disordered phononic crystals are investigated. In terms of the expectations of the random eigenstates, the wave propagation and localization phenomena in the random phononic crystal can be understood from a statistical perspective. Using the proposed approach, it is proved that for a randomly disordered phononic crystal, the Bloch theorem holds in the sense of expectation. A one-dimensional randomly disordered binary phononic crystal consisting of two materials with random geometry size or random physical parameters is addressed using the proposed approach. From the results, it can be observed that with increasing disorder degree, the localization of the expectations of the eigenstates is strengthened. The effect of the random disorder on the eigenstates at higher frequencies is more significant than that at lower frequencies. Furthermore, after introducing random disorder into phononic crystals, some random divergent eigenstates change into localized eigenstates in the expectation sense.

  20. An MGF-based unified framework to determine the joint statistics of partial sums of ordered i.n.d. random variables

    KAUST Repository

    Nam, Sungsik

    2014-08-01

    The joint statistics of partial sums of ordered random variables (RVs) are often needed for the accurate performance characterization of a wide variety of wireless communication systems. A unified analytical framework to determine the joint statistics of partial sums of ordered independent and identically distributed (i.i.d.) random variables was recently presented. However, the identical distribution assumption may not be valid in several real-world applications. With this motivation in mind, we consider in this paper the more general case in which the random variables are independent but not necessarily identically distributed (i.n.d.). More specifically, we extend the previous analysis and introduce a new more general unified analytical framework to determine the joint statistics of partial sums of ordered i.n.d. RVs. Our mathematical formalism is illustrated with an application on the exact performance analysis of the capture probability of generalized selection combining (GSC)-based RAKE receivers operating over frequency-selective fading channels with a non-uniform power delay profile. © 1991-2012 IEEE.

  1. Analysis of swaps in Radix selection

    DEFF Research Database (Denmark)

    Elmasry, Amr Ahmed Abd Elmoneim; Mahmoud, Hosam

    2011-01-01

    Radix Sort is a sorting algorithm based on analyzing digital data. We study the number of swaps made by Radix Select (a one-sided version of Radix Sort) to find an element with a randomly selected rank. This kind of grand average provides a smoothing over all individual distributions for specific...
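
    A one-sided Radix Select of the kind analyzed can be sketched as follows (binary digits, most significant bit first), with a swap counter; the bit width and the swap accounting are illustrative choices, not the paper's exact model.

```python
# Binary Radix Select: recurse only into the side of the partition that
# contains the sought rank, counting element swaps along the way.
def radix_select(a, k, bits=32):
    """Return (k-th smallest of the non-negative ints in a, number of swaps)."""
    swaps, lo, hi, bit = 0, 0, len(a), bits - 1
    while bit >= 0 and hi - lo > 1:
        i, j = lo, hi - 1
        while i <= j:                     # two-pointer partition on current bit
            if (a[i] >> bit) & 1 == 0:
                i += 1
            else:
                a[i], a[j] = a[j], a[i]   # move a 1-bit element to the right
                swaps += 1
                j -= 1
        lo, hi = (lo, i) if k < i else (i, hi)
        bit -= 1
    return a[lo], swaps   # remaining candidates agree on every inspected bit

print(radix_select([13, 7, 0, 42, 7, 99, 5], k=3))
```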

  2. Effects of psychological therapies in randomized trials and practice-based studies.

    Science.gov (United States)

    Barkham, Michael; Stiles, William B; Connell, Janice; Twigg, Elspeth; Leach, Chris; Lucock, Mike; Mellor-Clark, John; Bower, Peter; King, Michael; Shapiro, David A; Hardy, Gillian E; Greenberg, Leslie; Angus, Lynne

    2008-11-01

    Randomized trials of the effects of psychological therapies seek internal validity via homogeneous samples and standardized treatment protocols. In contrast, practice-based studies aim for clinical realism and external validity via heterogeneous samples of clients treated under routine practice conditions. We compared indices of treatment effects in these two types of studies. Using published transformation formulas, the Beck Depression Inventory (BDI) scores from five randomized trials of depression (N = 477 clients) were transformed into Clinical Outcomes in Routine Evaluation-Outcome Measure (CORE-OM) scores and compared with CORE-OM data collected in four practice-based studies (N = 4,196 clients). Conversely, the practice-based studies' CORE-OM scores were transformed into BDI scores and compared with randomized trial data. Randomized trials showed a modest advantage over practice-based studies in amount of pre-post improvement. This difference was compressed or exaggerated depending on the direction of the transformation but averaged about 12%. There was a similarly sized advantage to randomized trials in rates of reliable and clinically significant improvement (RCSI). The largest difference was yielded by comparisons of effect sizes which suggested an advantage more than twice as large, reflecting narrower pre-treatment distributions in the randomized trials. Outcomes of completed treatments for depression in randomized trials appeared to be modestly greater than those in routine care settings. The size of the difference may be distorted depending on the method for calculating degree of change. Transforming BDI scores into CORE-OM scores and vice versa may be a preferable alternative to effect sizes for comparisons of studies using these measures.

  3. A novel EMD selecting thresholding method based on multiple iteration for denoising LIDAR signal

    Science.gov (United States)

    Li, Meng; Jiang, Li-hui; Xiong, Xing-long

    2015-06-01

    The empirical mode decomposition (EMD) approach is believed to be potentially useful for processing nonlinear and non-stationary LIDAR signals. To shed further light on its performance, we proposed an EMD selecting-thresholding method based on multiple iterations, which essentially acts as a development of EMD interval thresholding (EMD-IT) and randomly alters the samples of the noisy parts of all the corrupted intrinsic mode functions so that repeated iterations average over different noise realizations. Simulations on both synthetic signals and real-world LIDAR signals support this method.
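
    A simplified variant of this idea can be sketched with the PyEMD package (an assumed dependency, installed as EMD-signal); per-IMF hard thresholding stands in for interval thresholding, and the multiple-iteration sample alteration is omitted for brevity.

```python
# Simplified EMD denoising sketch: decompose, estimate each IMF's noise
# level robustly, hard-threshold low-amplitude samples, and reconstruct.
import numpy as np
from PyEMD import EMD

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) * np.exp(-2 * t)   # stand-in "LIDAR" return
noisy = clean + 0.3 * rng.normal(size=t.size)

imfs = EMD().emd(noisy)
denoised = np.zeros_like(noisy)
for k, imf in enumerate(imfs):
    sigma = np.median(np.abs(imf)) / 0.6745          # robust noise estimate
    thr = sigma * np.sqrt(2 * np.log(imf.size))      # universal threshold
    kept = np.where(np.abs(imf) > thr, imf, 0.0)     # hard thresholding
    denoised += imf if k == len(imfs) - 1 else kept  # keep low-frequency trend

print("MSE before:", round(float(np.mean((noisy - clean) ** 2)), 4),
      "after:", round(float(np.mean((denoised - clean) ** 2)), 4))
```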

  4. Quantum random number generator based on quantum nature of vacuum fluctuations

    Science.gov (United States)

    Ivanova, A. E.; Chivilikhin, S. A.; Gleim, A. V.

    2017-11-01

    A quantum random number generator (QRNG) allows obtaining true random bit sequences. In a QRNG based on the quantum nature of vacuum, an optical beam splitter with two inputs and two outputs is normally used. We compare the mathematical descriptions of a spatial beam splitter and a fiber Y-splitter in the quantum model for a QRNG based on homodyne detection. These descriptions are identical, which allows fiber Y-splitters to be used in practical QRNG schemes, simplifying the setup. We also derive relations between the input radiation and the resulting differential current in the homodyne detector. We experimentally demonstrate the possibility of true random bit generation using a QRNG based on homodyne detection with a Y-splitter.

  5. A Distributed Dynamic Super Peer Selection Method Based on Evolutionary Game for Heterogeneous P2P Streaming Systems

    Directory of Open Access Journals (Sweden)

    Jing Chen

    2013-01-01

    Full Text Available Due to high efficiency and good scalability, hierarchical hybrid P2P architecture has drawn more and more attention in P2P streaming research and application fields recently. The problem of super peer selection, which is the key problem in hybrid heterogeneous P2P architecture, is highly challenging because super peers must be selected from a huge and dynamically changing network. A distributed super peer selection (SPS) algorithm for hybrid heterogeneous P2P streaming systems based on evolutionary game theory is proposed in this paper. The super peer selection procedure is first modeled within an evolutionary game framework and its evolutionarily stable strategies (ESSs) are analyzed. A distributed Q-learning algorithm (ESS-SPS), following the mixed strategies obtained from this analysis, is then proposed so that peers converge to the ESSs based on their own payoff histories. Compared to the traditional random super peer selection scheme, experimental results show that the proposed ESS-SPS algorithm achieves better performance in terms of social welfare and average upload rate of super peers and keeps the upload capacity of the P2P streaming system increasing steadily as the number of peers increases.

  6. On the design of henon and logistic map-based random number generator

    Science.gov (United States)

    Magfirawaty; Suryadi, M. T.; Ramli, Kalamullah

    2017-10-01

    The key sequence is one of the main elements in a cryptosystem. The True Random Number Generator (TRNG) approach is one way to generate the key sequence. The randomness sources of TRNGs fall into three main groups: electrical noise based, jitter based and chaos based. The chaos-based approach utilizes a non-linear dynamic system (continuous time or discrete time) as an entropy source. In this study, a new design of TRNG based on a discrete-time chaotic system is proposed and simulated in LabVIEW. The principle of the design consists of combining 2D and 1D chaotic systems. A mathematical model is implemented for numerical simulations. We used a comparator process as the harvesting method to obtain the series of random bits. Without any post-processing, the proposed design generated random bit sequences with high entropy values that passed all NIST 800.22 statistical tests.
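
    The combination of a 2D and a 1D map with a comparator harvester might look like the following sketch; the Henon and logistic parameters and the XOR combination are classic illustrative choices, not necessarily those of the paper.

```python
# Combined 2D (Henon) + 1D (logistic) chaotic bit source with a comparator
# harvester: iterate both maps, compare each output against a fixed level,
# and XOR the two resulting bits.
def chaotic_bits(n, x=0.1, y=0.3, z=0.7, a=1.4, b=0.3, r=4.0):
    bits = []
    for _ in range(n):
        x, y = 1 - a * x * x + y, b * x   # Henon map step
        z = r * z * (1 - z)               # logistic map step
        hbit = 1 if x > 0.0 else 0        # comparator on Henon output
        lbit = 1 if z > 0.5 else 0        # comparator on logistic output
        bits.append(hbit ^ lbit)          # combine the two sources
    return bits

stream = chaotic_bits(10000)
print("fraction of ones:", sum(stream) / len(stream))
```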

  7. Randomized algorithms in automatic control and data mining

    CERN Document Server

    Granichin, Oleg; Toledano-Kitai, Dvora

    2015-01-01

    In the fields of data mining and control, the huge amount of unstructured data and the presence of uncertainty in system descriptions have always been critical issues. The book Randomized Algorithms in Automatic Control and Data Mining introduces readers to the fundamentals of randomized algorithm applications in data mining (especially clustering) and in automatic control synthesis. The methods proposed in this book reduce the computational complexity of classical algorithms and the conservativeness of standard robust control techniques. It is shown that when a problem requires a "brute force" selection among options, algorithms based on random selection of alternatives offer good results with a certain probability in restricted time and significantly reduce the volume of operations.

  8. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain-computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
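
    The described pipeline reads directly as code; the sketch below uses synthetic trials in place of the competition data, and the component counts and selection size are illustrative assumptions.

```python
# PCA feature extraction -> decision tree ranking -> SVM classification,
# mirroring the pipeline described in the record (synthetic EEG stand-in).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))            # 200 trials x 64 channels (synthetic)
y = rng.integers(0, 2, size=200)

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)
pca = PCA(n_components=20).fit(Xtr)
Ztr, Zte = pca.transform(Xtr), pca.transform(Xte)

tree = DecisionTreeClassifier(random_state=0).fit(Ztr, ytr)
keep = np.argsort(tree.feature_importances_)[::-1][:8]   # top-ranked features

svm = SVC(kernel="linear").fit(Ztr[:, keep], ytr)
print("test accuracy:", svm.score(Zte[:, keep], yte))
```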

  9. Emotional textile image classification based on cross-domain convolutional sparse autoencoders with feature selection

    Science.gov (United States)

    Li, Zuhe; Fan, Yangyu; Liu, Weihua; Yu, Zeqi; Wang, Fengqin

    2017-01-01

    We aim to apply sparse autoencoder-based unsupervised feature learning to emotional semantic analysis for textile images. To tackle the problem of limited training data, we present a cross-domain feature learning scheme for emotional textile image classification using convolutional autoencoders. We further propose a correlation-analysis-based feature selection method for the weights learned by sparse autoencoders to reduce the number of features extracted from large size images. First, we randomly collect image patches on an unlabeled image dataset in the source domain and learn local features with a sparse autoencoder. We then conduct feature selection according to the correlation between different weight vectors corresponding to the autoencoder's hidden units. We finally adopt a convolutional neural network including a pooling layer to obtain global feature activations of textile images in the target domain and send these global feature vectors into logistic regression models for emotional image classification. The cross-domain unsupervised feature learning method achieves 65% to 78% average accuracy in the cross-validation experiments corresponding to eight emotional categories and performs better than conventional methods. Feature selection can reduce the computational cost of global feature extraction by about 50% while improving classification performance.

  10. [Plaque segmentation of intracoronary optical coherence tomography images based on K-means and improved random walk algorithm].

    Science.gov (United States)

    Wang, Guanglei; Wang, Pengyu; Han, Yechen; Liu, Xiuling; Li, Yan; Lu, Qian

    2017-06-01

    In recent years, optical coherence tomography (OCT) has developed into a popular coronary imaging technology at home and abroad. The segmentation of plaque regions in coronary OCT images has great significance for vulnerable plaque recognition and research. In this paper, a new algorithm based on K-means clustering and an improved random walk is proposed, achieving semi-automated segmentation of calcified plaque, fibrotic plaque and lipid pool. The weight function of the random walk is improved: the distance between pixel edges in the image and the seed points is added to the definition of the weight function, which increases weak edge weights and prevents over-segmentation. Based on the above methods, OCT images of 9 coronary atherosclerotic patients were selected for plaque segmentation. Comparison of the results with manual segmentation by physicians showed that the method has good robustness and accuracy. It is hoped that this method can be helpful for the clinical diagnosis of coronary heart disease.

  11. Pseudo-random bit generator based on Chebyshev map

    Science.gov (United States)

    Stoyanov, B. P.

    2013-10-01

    In this paper, we study a pseudo-random bit generator based on two Chebyshev polynomial maps. The novel derivative algorithm shows perfect statistical properties established by a number of statistical tests.
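
    The Chebyshev polynomial map iterates x -> cos(k·arccos(x)); a two-map bit generator in that spirit can be sketched as below, with the degrees, seeds and sign-bit harvesting rule as illustrative assumptions rather than the paper's algorithm.

```python
# Two Chebyshev polynomial maps of different degree, combined by XORing
# the sign bits of their iterates.
import math

def chebyshev_bits(n, x=0.3, y=0.4, kx=4, ky=5):
    bits = []
    for _ in range(n):
        x = math.cos(kx * math.acos(x))   # degree-kx Chebyshev map step
        y = math.cos(ky * math.acos(y))   # degree-ky Chebyshev map step
        bits.append(int((x > 0) ^ (y > 0)))
    return bits

print(chebyshev_bits(16))
```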

  12. Brownian motion properties of optoelectronic random bit generators based on laser chaos.

    Science.gov (United States)

    Li, Pu; Yi, Xiaogang; Liu, Xianglian; Wang, Yuncai; Wang, Yongge

    2016-07-11

    The nondeterministic properties of an optoelectronic random bit generator (RBG) based on laser chaos are experimentally analyzed from the two aspects of the central limit theorem and the law of the iterated logarithm. The random bits are extracted from an optical-feedback chaotic laser diode using a multi-bit extraction technique in the electrical domain. Our experimental results demonstrate that the generated random bits have no statistical distance from Brownian motion, and they also pass the state-of-the-art industry-benchmark statistical test suite (NIST SP800-22). Together these results give mathematically provable evidence that the ultrafast random bit generator based on laser chaos can be used as a nondeterministic random bit source.

  13. Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

    Directory of Open Access Journals (Sweden)

    Jaesung Lee

    2016-11-01

    Full Text Available Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.

  14. Dynamic approach to space and habitat use based on biased random bridges.

    Directory of Open Access Journals (Sweden)

    Simon Benhamou

    Full Text Available BACKGROUND: Although habitat use reflects a dynamic process, most studies assess habitat use statically as if an animal's successively recorded locations reflected a point rather than a movement process. By relying on the activity time between successive locations instead of the local density of individual locations, movement-based methods can substantially improve the biological relevance of utilization distribution (UD) estimates (i.e., the relative frequencies with which an animal uses the various areas of its home range, HR). One such method rests on Brownian bridges (BBs). Its theoretical foundation (purely and constantly diffusive movements) is paradoxically inconsistent with both HR settlement and habitat selection. An alternative involves movement-based kernel density estimation (MKDE) through location interpolation, which may be applied to various movement behaviours but lacks a sound theoretical basis. METHODOLOGY/PRINCIPAL FINDINGS: I introduce the concept of a biased random (advective-diffusive) bridge (BRB) and show that the MKDE method is a practical means to estimate UDs based on simplified (isotropically diffusive) BRBs. The equation governing BRBs is constrained by the maximum delay between successive relocations warranting constant within-bridge advection (allowed to vary between bridges) but remains otherwise similar to the BB equation. Despite its theoretical inconsistencies, the BB method can therefore be applied to animals that regularly reorientate within their HRs and adapt their movements to the habitats crossed, provided that they were relocated with a high enough frequency. CONCLUSIONS/SIGNIFICANCE: Biased random walks can approximate various movement types at short times from a given relocation. Their simplified form constitutes an effective trade-off between too simple, unrealistic movement models, such as Brownian motion, and more sophisticated and realistic ones, such as biased correlated random walks (BCRWs), which are too

  15. Random amplified polymorphic DNA based genetic characterization ...

    African Journals Online (AJOL)

    Random amplified polymorphic DNA based genetic characterization of four important species of Bamboo, found in Raigad district, Maharashtra State, India. ... Bambusoideae are differentiated from other members of the family by the presence of petiolate blades with parallel venation and stamens are three, four, six or more, ...

  16. Pseudo-random bit generator based on lag time series

    Science.gov (United States)

    García-Martínez, M.; Campos-Cantón, E.

    2014-12-01

    In this paper, we present a pseudo-random bit generator (PRBG) based on two lag time series of the logistic map using positive and negative values of the bifurcation parameter. In order to hide the map used to build the pseudo-random series, we have used a delay in the generation of the time series. When these new series are mapped, xn against xn+1, they present a cloud of points unrelated to the logistic map. Finally, the pseudo-random sequences have been tested with the NIST suite, giving satisfactory results for use in stream ciphers.

  17. N-state random switching based on quantum tunnelling

    Science.gov (United States)

    Bernardo Gavito, Ramón; Jiménez Urbanos, Fernando; Roberts, Jonathan; Sexton, James; Astbury, Benjamin; Shokeir, Hamzah; McGrath, Thomas; Noori, Yasir J.; Woodhead, Christopher S.; Missous, Mohamed; Roedig, Utz; Young, Robert J.

    2017-08-01

    In this work, we show how the hysteretic behaviour of resonant tunnelling diodes (RTDs) can be exploited for new functionalities. In particular, the RTDs exhibit a stochastic 2-state switching mechanism that could be useful for random number generation and cryptographic applications. This behaviour can be scaled to N-bit switching by connecting various RTDs in series. The InGaAs/AlAs RTDs used in our experiments display very sharp negative differential resistance (NDR) peaks at room temperature which show hysteresis cycles that, rather than having a fixed switching threshold, show a probability distribution about a central value. We propose to use this intrinsic uncertainty emerging from the quantum nature of the RTDs as a source of randomness. We show that a combination of two RTDs in series results in devices with three-state outputs and discuss the possibility of scaling to N-state devices by subsequent series connections of RTDs, which we demonstrate for up to the 4-state case. We suggest that the intrinsic uncertainty in the conduction paths of resonant tunnelling diodes can behave as a source of randomness that can be integrated into current electronics to produce on-chip true random number generators. The N-shaped I-V characteristic of RTDs results in a two-level random voltage output when driven with current pulse trains. Electrical characterisation and randomness testing of the devices were conducted in order to determine the validity of the true randomness assumption. Based on the results obtained for the single RTD case, we suggest the possibility of using multi-well devices to generate N-state random switching devices for use in random number generation or multi-valued logic devices.

  18. Text Clustering Algorithm Based on Random Cluster Core

    Directory of Open Access Journals (Sweden)

    Huang Long-Jun

    2016-01-01

    Full Text Available Nowadays clustering has become a popular text mining algorithm, but huge data volumes put forward higher requirements for the accuracy and performance of text mining. In view of the performance bottleneck of traditional text clustering algorithms, this paper proposes a text clustering algorithm with random features. It is a clustering algorithm based on text density; at the same time, using neighboring heuristic rules, the concept of a random cluster core is introduced, which effectively reduces the complexity of the distance calculation.

  19. Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark

    OpenAIRE

    Henrik J. Kleven; Martin B. Knudsen; Claus T. Kreiner; Søren Pedersen; Emmanuel Saez

    2010-01-01

    This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were deliberately not audited. The following year, "threat-of-audit" letters were randomly assigned and sent to tax filers in both groups. Using comprehensive administrative tax data, we present four main...

  20. An empirical approach to selecting community-based alcohol interventions: combining research evidence, rural community views and professional opinion

    Directory of Open Access Journals (Sweden)

    Shakeshaft Anthony

    2012-01-01

    Full Text Available Abstract Background Given limited research evidence for community-based alcohol interventions, this study examines the intervention preferences of rural communities and alcohol professionals, and the factors that influence their choices. Method Community preferences were identified by a survey of randomly selected individuals across 20 regional Australian communities. The preferences of alcohol professionals were identified by a survey of randomly selected members of the Australasian Professional Society on Alcohol and Other Drugs. To identify preferred interventions and the extent of support for them, a budget allocation exercise was embedded in both surveys, asking respondents to allocate a given budget to different interventions. Tobit regression models were estimated to identify the characteristics that explain differences in intervention preferences. Results Community respondents selected school programs most often (88.0%) and allocated them the largest proportion of funds, followed by promotion of safer drinking (71.3%), community programs (61.4%) and police enforcement of alcohol laws (60.4%). Professionals selected GP training most often (61.0%) and allocated it the largest proportion of funds, followed by school programs (36.6%), community programs (33.8%) and promotion of safer drinking (31.7%). Community views were susceptible to response bias. There were no significant predictors of professionals' preferences. Conclusions In the absence of sufficient research evidence for effective community-based alcohol interventions, rural communities and professionals both strongly support school programs, promotion of safer drinking and community programs. Rural communities also supported police enforcement of alcohol laws and professionals supported GP training. The impact of a combination of these strategies needs to be rigorously evaluated.

  1. The effectiveness of educational interventions to enhance the adoption of fee-based arsenic testing in Bangladesh: a cluster randomized controlled trial.

    Science.gov (United States)

    George, Christine Marie; Inauen, Jennifer; Rahman, Sheikh Masudur; Zheng, Yan

    2013-07-01

    Arsenic (As) testing could help the 22 million people using drinking water sources that exceed the Bangladesh As standard to identify safe sources. A cluster randomized controlled trial was conducted to evaluate the effectiveness of household education and local media in increasing demand for fee-based As testing. Randomly selected households (N = 452) were assigned to one of three interventions implemented by community workers: 1) fee-based As testing with household education (HE); 2) fee-based As testing with household education and a local media campaign (HELM); and 3) fee-based As testing alone (Control). The fee for the As test was US$ 0.28, higher than the cost of the test (US$ 0.16). Of households with untested wells, 93% in both intervention groups HE and HELM purchased an As test, whereas only 53% in the control group did. In conclusion, fee-based As testing with household education is effective in increasing demand for As testing in rural Bangladesh.

  2. Randomized Controlled Trial of SuperBetter, a Smartphone-Based/Internet-Based Self-Help Tool to Reduce Depressive Symptoms.

    Science.gov (United States)

    Roepke, Ann Marie; Jaffee, Sara R; Riffle, Olivia M; McGonigal, Jane; Broome, Rose; Maxwell, Bez

    2015-06-01

    Technological advances have sparked the development of computer- and smartphone-based self-help programs for depressed people, but these programs' efficacy is uncertain. This randomized controlled trial evaluated an intervention called SuperBetter (SB), which is accessed via smartphone and/or the SB Web site. Online, we recruited 283 adult iPhone (Apple, Cupertino, CA) users with significant depression symptoms according to the Center for Epidemiological Studies Depression questionnaire (CES-D). They were randomly assigned to one of three conditions: (a) a version of SB using cognitive-behavioral therapy and positive psychotherapy strategies to target depression (CBT-PPT SB); (b) a general SB version focused on self-esteem and acceptance (General SB); or (c) a waiting list control group (WL). The two SB groups were instructed to use SB for 10 minutes daily for 1 month. All participants completed psychological distress and well-being measures online every 2 weeks through follow-up. An intent-to-treat analysis was conducted using hierarchical linear modeling. As hypothesized, SB participants achieved greater reductions in CES-D scores than WL participants by posttest (Cohen's d=0.67) and by follow-up (d=1.05). Contrary to prediction, CBT-PPT SB did not perform better than General SB; both versions of SB were more effective than the WL control. Differences between SB versions favored General SB but were not statistically significant. These large effect sizes should be interpreted cautiously in light of high attrition rates and the motivated, self-selected sample. Nonetheless, smartphone-based/Internet-based self-help may play an important role in treating depression.

  3. A school-based randomized controlled trial to improve physical activity among Iranian high school girls

    Directory of Open Access Journals (Sweden)

    Ghofranipour Fazloalha

    2008-04-01

    Full Text Available Abstract Background Physical activity (PA) rates decline precipitously during the high school years and are consistently lower among adolescent girls than adolescent boys. Due to cultural barriers, this problem might be exacerbated in female Iranian adolescents. However, little intervention research has been conducted to try to increase PA participation rates in this population. Because PA interventions in schools have the potential to reach many children and adolescents, this study reports on PA intervention research conducted in all-female Iranian high schools. Methods A randomized controlled trial was conducted to examine the effects of two six-month tailored interventions on potential determinants of PA and PA behavior. Students (N = 161) were randomly allocated to one of three conditions: an intervention based on Pender's Health Promotion model (HP), an intervention based on an integration of the health promotion model and selected constructs from the Transtheoretical model (THP), and a control group (CON). Measures were administered prior to the intervention, at post-intervention and at a six-month follow-up. Results Repeated measures ANOVAs showed a significant interaction between group and time for perceived benefits, self-efficacy, interpersonal norms, social support, behavioral processes, and PA behavior, indicating that both intervention groups significantly improved across the 24-week intervention, whereas the control group did not. Participants in the THP group showed greater use of counter-conditioning and stimulus control at post-intervention and at follow-up. While there were no significant differences in PA between the HP and CON groups at follow-up, a significant difference was still found between the THP and the CON group. Conclusion This study provides the first evidence of the effectiveness of a PA intervention based on Pender's HP model combined with selected aspects of the TTM on potential determinants to increase PA among

  4. [Research on K-means clustering segmentation method for MRI brain image based on selecting multi-peaks in gray histogram].

    Science.gov (United States)

    Chen, Zhaoxue; Yu, Haizhong; Chen, Hao

    2013-12-01

    To solve the problem that traditional K-means clustering selects its initial cluster centers randomly, we proposed a new K-means segmentation algorithm based on robustly selecting the 'peaks' standing for White Matter, Gray Matter and Cerebrospinal Fluid in the multi-peak gray histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means cluster centers and can segment the MRI brain image into the three tissue classes more effectively, accurately and reliably. Extensive experiments have shown that the proposed algorithm overcomes shortcomings of the traditional K-means clustering method such as low efficiency, poor accuracy and robustness, and long running time. The histogram-peak selection idea of the proposed segmentation method is broadly applicable.
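
    The peak-seeded initialization can be sketched as follows; the synthetic intensity mixture and the find_peaks settings are illustrative assumptions, not the paper's data or tuning.

```python
# Seed K-means with the three tallest gray-histogram peaks (standing in for
# WM, GM and CSF levels) instead of random initial centers.
import numpy as np
from scipy.signal import find_peaks
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Synthetic "brain" intensities: three Gaussian tissue classes.
img = np.concatenate([rng.normal(60, 8, 4000),
                      rng.normal(120, 8, 4000),
                      rng.normal(190, 8, 4000)]).clip(0, 255)

hist, edges = np.histogram(img, bins=256, range=(0, 255))
peaks, props = find_peaks(hist, distance=20, height=hist.max() * 0.1)
top3 = peaks[np.argsort(props["peak_heights"])[::-1][:3]]
centers = np.sort(edges[top3]).reshape(-1, 1)      # histogram-peak seeds

km = KMeans(n_clusters=3, init=centers, n_init=1).fit(img.reshape(-1, 1))
print("final centers:", np.sort(km.cluster_centers_.ravel()).round(1))
```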

  5. Content-based image retrieval: Color-selection exploited

    NARCIS (Netherlands)

    Broek, E.L. van den; Vuurpijl, L.G.; Kisters, P. M. F.; Schmid, J.C.M. von; Moens, M.F.; Busser, R. de; Hiemstra, D.; Kraaij, W.

    2002-01-01

    This research presents a new color selection interface that facilitates query-by-color in Content-Based Image Retrieval (CBIR). Existing CBIR color selection interfaces are judged as non-intuitive and difficult to use. Our interface copes with these problems of usability. It is based on 11

  7. Chaos-based Pseudo-random Number Generation

    KAUST Repository

    Barakat, Mohamed L.

    2014-04-10

    Various methods and systems related to chaos-based pseudo-random number generation are presented. In one example, among others, a system includes a pseudo-random number generator (PRNG) to generate a series of digital outputs and a nonlinear post processing circuit to perform an exclusive OR (XOR) operation on a first portion of a current digital output of the PRNG and a permutated version of a corresponding first portion of a previous post processed output to generate a corresponding first portion of a current post processed output. In another example, a method includes receiving at least a first portion of a current output from a PRNG and performing an XOR operation on the first portion of the current PRNG output with a permutated version of a corresponding first portion of a previous post processed output to generate a corresponding first portion of a current post processed output.
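
    The post-processing step described here can be sketched as follows; the byte width, the rotation used as the permutation, and the LCG standing in for the chaotic PRNG are all illustrative assumptions.

```python
# XOR the current generator output with a permuted (here: bit-rotated)
# version of the previous post-processed output.
def rotl8(v, r):
    return ((v << r) | (v >> (8 - r))) & 0xFF

def post_processed_stream(n, seed=1):
    state, prev, out = seed, 0, []
    for _ in range(n):
        state = (1664525 * state + 1013904223) % 2**32   # stand-in PRNG step
        raw = state >> 24                                # take the top byte
        cur = raw ^ rotl8(prev, 3)    # XOR with permuted previous output
        out.append(cur)
        prev = cur
    return out

print(post_processed_stream(8))
```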

  9. Effects of choice architecture and chef-enhanced meals on the selection and consumption of healthier school foods: a randomized clinical trial.

    Science.gov (United States)

    Cohen, Juliana F W; Richardson, Scott A; Cluggish, Sarah A; Parker, Ellen; Catalano, Paul J; Rimm, Eric B

    2015-05-01

    Little is known about the long-term effect of a chef-enhanced menu on healthier food selection and consumption in school lunchrooms. In addition, it remains unclear whether extended exposure to other strategies to promote healthier foods (eg, choice architecture) also improves food selection or consumption. To evaluate the short- and long-term effects of chef-enhanced meals and extended exposure to choice architecture on healthier school food selection and consumption, a school-based randomized clinical trial was conducted during the 2011-2012 school year among 14 elementary and middle schools in 2 urban, low-income school districts (intent-to-treat analysis). Included in the study were 2638 students in grades 3 through 8 attending participating schools (38.4% of eligible participants). Schools were first randomized to receive a professional chef to improve school meal palatability (chef schools) or to a delayed intervention (control group). To assess the effect of choice architecture (smart café), all schools were then randomized after 3 months to the smart café intervention or to the control group. School food selection was recorded, and consumption was measured using plate waste methods. After 3 months, vegetable selection increased in chef vs control schools (odds ratio [OR], 1.75; 95% CI, 1.36-2.24), but there was no effect on the selection of other components or on meal consumption. After long-term or extended exposure to the chef or smart café intervention, fruit selection increased in the chef (OR, 3.08; 95% CI, 2.23-4.25), smart café (OR, 1.45; 95% CI, 1.13-1.87), and chef plus smart café (OR, 3.10; 95% CI, 2.26-4.25) schools compared with the control schools, and consumption increased in the chef schools (0.17 cups/d; 95% CI, 0.03-0.30). Vegetable selection increased in the chef (OR, 2.54; 95% CI, 1.83-3.54), smart café (OR, 1.91; 95% CI, 1.46-2.50), and chef plus smart café schools (OR, 7.38; 95% CI, 5.26-10.35) compared with the control schools

  10. Impact of selective genotyping in the training population on accuracy and bias of genomic selection.

    Science.gov (United States)

    Zhao, Yusheng; Gowda, Manje; Longin, Friedrich H; Würschum, Tobias; Ranc, Nicolas; Reif, Jochen C

    2012-08-01

    Estimating marker effects based on routinely generated phenotypic data of breeding programs is a cost-effective strategy to implement genomic selection. Truncation selection in breeding populations, however, could have a strong impact on the accuracy of predicting genomic breeding values. The main objective of our study was to investigate the influence of phenotypic selection on the accuracy and bias of genomic selection. We used experimental data of 788 testcross progenies from an elite maize breeding program. The testcross progenies were evaluated in unreplicated field trials in ten environments and fingerprinted with 857 SNP markers. The random regression best linear unbiased prediction method was used in combination with fivefold cross-validation based on genotypic sampling. We observed a substantial loss in the accuracy of predicting genomic breeding values in unidirectionally selected populations. In contrast, estimating marker effects based on bidirectionally selected populations led to only a marginal decrease in the prediction accuracy of genomic breeding values. We concluded that bidirectional selection is a valuable approach to efficiently implement genomic selection in applied plant breeding programs.

  11. RandomSpot: A web-based tool for systematic random sampling of virtual slides.

    Science.gov (United States)

    Wright, Alexander I; Grabsch, Heike I; Treanor, Darren E

    2015-01-01

    This paper describes work presented at the Nordic Symposium on Digital Pathology 2014, Linköping, Sweden. Systematic random sampling (SRS) is a stereological tool, which provides a framework to quickly build an accurate estimation of the distribution of objects or classes within an image, whilst minimizing the number of observations required. RandomSpot is a web-based tool for SRS in stereology, which systematically places equidistant points within a given region of interest on a virtual slide. Each point can then be visually inspected by a pathologist in order to generate an unbiased sample of the distribution of classes within the tissue. Further measurements can then be derived from the distribution, such as the ratio of tumor to stroma. RandomSpot replicates the fundamental principle of traditional light microscope grid-shaped graticules, with the added benefits associated with virtual slides, such as facilitated collaboration and automated navigation between points. Once the sample points have been added to the region(s) of interest, users can download the annotations and view them locally using their virtual slide viewing software. Since its introduction, RandomSpot has been used extensively for international collaborative projects, clinical trials and independent research projects. So far, the system has been used to generate over 21,000 sample sets, and has been used to generate data for use in multiple publications, identifying significant new prognostic markers in colorectal, upper gastro-intestinal and breast cancer. Data generated using RandomSpot also has significant value for training image analysis algorithms using sample point coordinates and pathologist classifications.
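    The core of SRS as described here fits in a few lines: lay a regular grid of equidistant points over the region of interest, shifted by one uniform random offset so the sample remains unbiased. A minimal sketch (the rectangular ROI and function name are assumptions; RandomSpot itself operates on virtual-slide regions):

```python
# Hedged sketch of systematic random sampling: an equidistant grid of points
# with a single uniform random start offset, clipped to a rectangular ROI.
import random

def systematic_random_points(x0, y0, width, height, spacing):
    """Equidistant grid points with one random offset per sample set."""
    ox = random.uniform(0, spacing)   # one random offset keeps the sample unbiased
    oy = random.uniform(0, spacing)
    points = []
    y = y0 + oy
    while y < y0 + height:
        x = x0 + ox
        while x < x0 + width:
            points.append((x, y))
            x += spacing
        y += spacing
    return points

print(len(systematic_random_points(0, 0, 1000, 800, 100)))
```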

  12. Species-specific audio detection: a comparison of three template-based detection algorithms using random forests

    Directory of Open Access Journals (Sweden)

    Carlos J. Corrada Bravo

    2017-04-01

    Full Text Available We developed a web-based cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of template-based detection. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
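    A minimal sketch of the pipeline the abstract describes: slide a template across a spectrogram to obtain a similarity vector, summarize it with a few statistics, and classify with a Random Forest. The synthetic arrays, the normalized-correlation score, and the particular statistics are assumptions for illustration, not the ARBIMON implementation:

```python
# Hedged sketch of template-based detection feeding a Random Forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def similarity_vector(spectrogram, template):
    """Normalized cross-correlation of the template at each time increment."""
    h, w = template.shape
    t = (template - template.mean()) / (template.std() + 1e-9)
    scores = []
    for i in range(spectrogram.shape[1] - w + 1):
        patch = spectrogram[:h, i:i + w]
        p = (patch - patch.mean()) / (patch.std() + 1e-9)
        scores.append(float((t * p).mean()))
    return np.array(scores)

def features(sim):
    # Simple summary statistics of the similarity vector (illustrative choice).
    return [sim.max(), sim.mean(), sim.std(), np.percentile(sim, 75)]

rng = np.random.default_rng(0)
template = rng.random((32, 16))                     # stand-in species-call template
X = [features(similarity_vector(rng.random((64, 256)), template)) for _ in range(40)]
y = rng.integers(0, 2, 40)                          # presence/absence labels (synthetic)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
```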

  13. Ultrafast quantum random number generation based on quantum phase fluctuations.

    Science.gov (United States)

    Xu, Feihu; Qi, Bing; Ma, Xiongfeng; Xu, He; Zheng, Haoxuan; Lo, Hoi-Kwong

    2012-05-21

    A quantum random number generator (QRNG) can generate true randomness by exploiting the fundamental indeterminism of quantum mechanics. Most approaches to QRNG employ single-photon detection technologies and are limited in speed. Here, we experimentally demonstrate an ultrafast QRNG at a rate over 6 Gbits/s based on the quantum phase fluctuations of a laser operating near threshold. Moreover, we consider a potential adversary who has partial knowledge of the raw data and discuss how one can rigorously remove such partial knowledge with postprocessing. We quantify the quantum randomness through min-entropy by modeling our system and employ two randomness extractors, Trevisan's extractor and Toeplitz hashing, to distill the randomness, which is information-theoretically provable. The simplicity and high speed of our experimental setup show the feasibility of a robust, low-cost, high-speed QRNG.
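    Of the two extractors named, Toeplitz hashing is the simpler to sketch: the raw bit vector is multiplied by a random binary Toeplitz matrix over GF(2), with the output length set by the measured min-entropy. A toy sketch (the sizes and the seed source are assumptions; a real QRNG derives the seed once and sizes the output from its entropy model):

```python
# Hedged sketch of Toeplitz-hashing randomness extraction over GF(2).
import numpy as np

def toeplitz_extract(raw_bits, out_len, seed_bits):
    n = len(raw_bits)
    # A Toeplitz matrix is fixed by its first column and first row,
    # i.e. out_len + n - 1 seed bits in total.
    col = seed_bits[:out_len]
    row = seed_bits[out_len - 1:out_len - 1 + n]
    T = np.array([[col[i - j] if i >= j else row[j - i] for j in range(n)]
                  for i in range(out_len)], dtype=np.uint8)
    return (T @ raw_bits) % 2          # matrix-vector product over GF(2)

rng = np.random.default_rng(0)
raw = rng.integers(0, 2, 1024, dtype=np.uint8)             # stand-in raw QRNG bits
seed = rng.integers(0, 2, 256 + 1024 - 1, dtype=np.uint8)  # stand-in extractor seed
print(toeplitz_extract(raw, 256, seed)[:16])
```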

  14. Implementing traceability using particle randomness-based textile printed tags

    Science.gov (United States)

    Agrawal, T. K.; Koehl, L.; Campagne, C.

    2017-10-01

    This article introduces a random particle-based traceability tag for textiles. The proposed tag not only acts as a unique signature for the corresponding textile product but is also easy to manufacture and hard to copy. It targets applications in brand authentication and traceability in the textile and clothing (T&C) supply chain. A prototype was developed by a screen printing process, in which micron-scale particles were mixed with the printing paste and printed on cotton fabrics to attain the required randomness. To encode the randomness, an image of the developed tag was taken and analyzed using image processing. The randomness of the particles acts as a product key or unique signature which is required to decode the tag. Finally, washing and abrasion resistance tests were conducted to check the durability of the printed tag.

  15. Retention of Statistical Concepts in a Preliminary Randomization-Based Introductory Statistics Curriculum

    Science.gov (United States)

    Tintle, Nathan; Topliff, Kylie; VanderStoep, Jill; Holmes, Vicki-Lynn; Swanson, Todd

    2012-01-01

    Previous research suggests that a randomization-based introductory statistics course may improve student learning compared to the consensus curriculum. However, it is unclear whether these gains are retained by students post-course. We compared the conceptual understanding of a cohort of students who took a randomization-based curriculum (n = 76)…

  16. An AES chip with DPA resistance using hardware-based random order execution

    International Nuclear Information System (INIS)

    Yu Bo; Li Xiangyu; Chen Cong; Sun Yihe; Wu Liji; Zhang Xiangmin

    2012-01-01

    This paper presents an AES (advanced encryption standard) chip that combats differential power analysis (DPA) side-channel attack through hardware-based random order execution. Both decryption and encryption procedures of AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plain texts (128-bit secret keys) is 19 nJ. The core area is 0.43 mm². A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that one byte of a secret key cannot be disclosed from our chip under random mode after 64000 power traces were used in the DPA attack. Compared with the corresponding fixed order execution, the hardware-based random order execution improves DPA resistance by at least 21 times. (semiconductor integrated circuits)

  17. An AES chip with DPA resistance using hardware-based random order execution

    Science.gov (United States)

    Bo, Yu; Xiangyu, Li; Cong, Chen; Yihe, Sun; Liji, Wu; Xiangmin, Zhang

    2012-06-01

    This paper presents an AES (advanced encryption standard) chip that combats differential power analysis (DPA) side-channel attack through hardware-based random order execution. Both decryption and encryption procedures of AES are implemented on the chip. A fine-grained dataflow architecture is proposed, which dynamically exploits intrinsic byte-level independence in the algorithm. A novel circuit called an HMF (Hold-Match-Fetch) unit is proposed for random control, which randomly sets execution orders for concurrent operations. The AES chip was manufactured in SMIC 0.18 μm technology. The average energy for encrypting one group of plain texts (128-bit secret keys) is 19 nJ. The core area is 0.43 mm². A sophisticated experimental setup was built to test the DPA resistance. Measurement-based experimental results show that one byte of a secret key cannot be disclosed from our chip under random mode after 64000 power traces were used in the DPA attack. Compared with the corresponding fixed order execution, the hardware-based random order execution improves DPA resistance by at least 21 times.
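    The countermeasure both records describe exploits the byte-level independence of AES operations. A toy software analogue (not the paper's HMF hardware unit; the stand-in S-box and 16-byte state handling are assumptions) shuffles the execution order of an order-independent step on every run, which is what decorrelates power traces from data:

```python
# Hedged toy of random order execution for an order-independent AES step.
import random

SBOX = list(range(256))             # stand-in table; real AES uses the Rijndael S-box
random.Random(42).shuffle(SBOX)

def subbytes_random_order(state):
    """Apply the S-box to all 16 state bytes in a fresh random order."""
    order = list(range(16))
    random.shuffle(order)           # a hardware RNG would reshuffle each execution
    for i in order:
        state[i] = SBOX[state[i]]
    return state

print(subbytes_random_order(bytearray(range(16))))
```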

  18. Strategies for implementing genomic selection in family-based aquaculture breeding schemes: double haploid sib test populations

    Directory of Open Access Journals (Sweden)

    Nirea Kahsay G

    2012-10-01

    Full Text Available Abstract Background Simulation studies have shown that accuracy and genetic gain are increased in genomic selection schemes compared to traditional aquaculture sib-based schemes. In genomic selection, accuracy of selection can be maximized by increasing the precision of the estimation of SNP effects and by maximizing the relationships between test sibs and candidate sibs. Another means of increasing the accuracy of the estimation of SNP effects is to create individuals in the test population with extreme genotypes. The latter approach was studied here with creation of double haploids and use of non-random mating designs. Methods Six alternative breeding schemes were simulated in which the design of the test population was varied: test sibs inherited maternal (Mat), paternal (Pat) or a mixture of maternal and paternal (MatPat) double haploid genomes, or test sibs were obtained by maximum coancestry mating (MaxC), minimum coancestry mating (MinC), or random (RAND) mating. Three thousand test sibs and 3000 candidate sibs were genotyped. The test sibs were recorded for a trait that could not be measured on the candidates and were used to estimate SNP effects. Selection was done by truncation on genome-wide estimated breeding values and 100 individuals were selected as parents each generation, equally divided between both sexes. Results Results showed a 7 to 19% increase in selection accuracy and a 6 to 22% increase in genetic gain in the MatPat scheme compared to the RAND scheme. These increases were greater with lower heritabilities. Among all other scenarios, i.e. Mat, Pat, MaxC, and MinC, no substantial differences in selection accuracy and genetic gain were observed. Conclusions In conclusion, a test population designed with a mixture of paternal and maternal double haploids, i.e. the MatPat scheme, substantially increases the accuracy of selection and genetic gain. This will be particularly interesting for traits that cannot be recorded on the

  19. The Impact of a School-Based Hygiene, Water Quality and Sanitation Intervention on Soil-Transmitted Helminth Reinfection: A Cluster-Randomized Trial

    Science.gov (United States)

    Freeman, Matthew C.; Clasen, Thomas; Brooker, Simon J.; Akoko, Daniel O.; Rheingans, Richard

    2013-01-01

    We conducted a cluster-randomized trial to assess the impact of a school-based water treatment, hygiene, and sanitation program on reducing infection with soil-transmitted helminths (STHs) after school-based deworming. We assessed infection with STHs at baseline and then at two follow-up rounds 8 and 10 months after deworming. Forty government primary schools in Nyanza Province, Kenya were randomly selected and assigned to intervention or control arms. The intervention reduced reinfection prevalence (odds ratio [OR] 0.56, 95% confidence interval [CI] 0.31–1.00) and egg count (rate ratio [RR] 0.34, CI 0.15–0.75) of Ascaris lumbricoides. We found no evidence of significant intervention effects on the overall prevalence and intensity of Trichuris trichiura, hookworm, or Schistosoma mansoni reinfection. Provision of school-based sanitation, water quality, and hygiene improvements may reduce reinfection of STHs after school-based deworming, but the magnitude of the effects may be sex- and helminth species-specific. PMID:24019429

  20. Integral Histogram with Random Projection for Pedestrian Detection.

    Directory of Open Access Journals (Sweden)

    Chang-Hua Liu

    Full Text Available In this paper, we present a systematic study that reports several deep insights into HOG, one of the most widely used features in modern computer vision and image processing applications. We first show that its gradient magnitudes can be randomly projected with a random matrix. To handle over-fitting, an integral histogram based on the differences of randomly selected blocks is proposed. The experiments show that both the random projection and the integral histogram clearly outperform the HOG feature. Finally, the two ideas are combined into a new descriptor termed IHRP, which outperforms the HOG feature with fewer dimensions and higher speed.
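    A minimal sketch of the two ingredients named above, random projection of HOG vectors and difference features over randomly selected block pairs; all dimensions (the standard 3780-length HOG of a 64x128 window, 105 blocks of 36 values) and the Gaussian projection matrix are illustrative assumptions:

```python
# Hedged sketch: random projection of HOG vectors plus random block differences.
import numpy as np

rng = np.random.default_rng(0)
hog = rng.random((500, 3780))                 # 500 windows, standard HOG length
k = 256
R = rng.normal(0, 1.0 / np.sqrt(k), (3780, k))
hog_rp = hog @ R                              # random projection to k dimensions

n_blocks, dim = 105, 36                       # HOG blocks per 64x128 window
blocks = hog.reshape(500, n_blocks, dim)
pairs = rng.integers(0, n_blocks, (64, 2))    # randomly selected block pairs
diff_feats = np.stack([blocks[:, a] - blocks[:, b] for a, b in pairs], axis=1)
print(hog_rp.shape, diff_feats.shape)
```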

  1. The Reliability of Randomly Generated Math Curriculum-Based Measurements

    Science.gov (United States)

    Strait, Gerald G.; Smith, Bradley H.; Pender, Carolyn; Malone, Patrick S.; Roberts, Jarod; Hall, John D.

    2015-01-01

    "Curriculum-Based Measurement" (CBM) is a direct method of academic assessment used to screen and evaluate students' skills and monitor their responses to academic instruction and intervention. Interventioncentral.org offers a math worksheet generator at no cost that creates randomly generated "math curriculum-based measures"…

  2. True random number generation from mobile telephone photo based on chaotic cryptography

    International Nuclear Information System (INIS)

    Zhao Liang; Liao Xiaofeng; Xiao Di; Xiang Tao; Zhou Qing; Duan Shukai

    2009-01-01

    A cheap, convenient and universal TRNG based on mobile telephone photos for producing random bit sequences is proposed. To address the problem of correlated sequential pixels and inter-image similarity, three chaos-based approaches are applied to post-process the generated binary image. The random numbers produced by three users are tested using the US NIST RNG statistical test software. The experimental results indicate that the Arnold cat map is the fastest way to generate a random bit sequence and can be accepted on a general PC. The 'MASK' algorithm also performs well. Finally, compared with the TRNG of Hu et al. [Hu Y, Liao X, Wong KW, Zhou Q. A true random number generator based on mouse movement and chaotic cryptography. Chaos, Solitons and Fractals 2007. doi:10.1016/j.chaos.2007.10.022], the proposed TRNG shows several merits.
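    Among the three chaos-based post-processing approaches, the Arnold cat map is the one the abstract singles out as fastest. A minimal sketch of the map applied to a square binary image (the image size and iteration count are assumptions): the map permutes pixel positions, breaking up the correlation between neighbouring camera pixels.

```python
# Hedged sketch: Arnold cat map scrambling of a square binary image.
import numpy as np

def arnold_cat(img, iterations=1):
    n = img.shape[0]                 # requires a square n x n image
    out = img
    for _ in range(iterations):
        nxt = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                # (x, y) -> (x + y, x + 2y) mod n: a determinant-1 torus map
                nxt[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = nxt
    return out

bits = (np.random.default_rng(0).random((64, 64)) > 0.5).astype(np.uint8)
scrambled = arnold_cat(bits, iterations=5)
print(scrambled.ravel()[:32])
```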

  3. Random number generation based on digital differential chaos

    KAUST Repository

    Zidan, Mohammed A.; Radwan, Ahmed G.; Salama, Khaled N.

    2012-01-01

    In this paper, we present a fully digital differential chaos-based random number generator. The output of the digital circuit is proved to be chaotic by calculating the maximum Lyapunov exponent of the output time series. We introduce a new post processing

  4. ANALYSIS OF FUZZY QUEUES: PARAMETRIC PROGRAMMING APPROACH BASED ON RANDOMNESS - FUZZINESS CONSISTENCY PRINCIPLE

    OpenAIRE

    Dhruba Das; Hemanta K. Baruah

    2015-01-01

    In this article, based on Zadeh's extension principle, we apply the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, following Baruah's Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. In this article, two fuzzy queues FM...

  5. The Random Walk Model Based on Bipartite Network

    Directory of Open Access Journals (Sweden)

    Zhang Man-Dun

    2016-01-01

    Full Text Available With the continuing development of electronic commerce and the growth of network information, citizens are increasingly likely to be overwhelmed by information. Although traditional information-retrieval technology can relieve this overload to some extent, it cannot offer a personalized service targeted at a user's interests and activities. In this context, recommendation algorithms arose. In this paper, building on conventional recommendation, we study the random walk scheme on a bipartite network and its application. We put forward a similarity measurement based on implicit feedback, in which a non-uniform character vector (the weight of each item in the system) is introduced. We put forward an improved random walk pattern which makes use of partial or incomplete neighbor information to create recommendations. Finally, in an experiment on a real data set, recommendation accuracy and practicality are improved, confirming the validity of the approach.
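    A minimal sketch of the underlying two-step random-walk (resource diffusion) scoring on a user-item bipartite network; the paper's item-weight vector and partial-neighbor variants are not reproduced, and the toy adjacency matrix is an assumption:

```python
# Hedged sketch: two-step resource diffusion on a user-item bipartite network.
import numpy as np

A = np.array([[1, 1, 0, 0],          # users x items adjacency (implicit feedback)
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)

def diffusion_scores(A, user):
    k_item = A.sum(axis=0)           # item degrees
    k_user = A.sum(axis=1)           # user degrees
    resource = A[user].copy()        # one unit on each item the user liked
    to_users = (A / k_item) @ resource         # items spread resource to users
    back_to_items = A.T @ (to_users / k_user)  # users spread it back to items
    back_to_items[A[user] > 0] = -np.inf       # do not re-recommend known items
    return back_to_items

print(np.argsort(diffusion_scores(A, user=0))[::-1])   # best items first
```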

  6. Supplier selection an MCDA-based approach

    CERN Document Server

    Mukherjee, Krishnendu

    2017-01-01

    The purpose of this book is to present a comprehensive review of the latest research and development trends at the international level for modeling and optimization of the supplier selection process for different industrial sectors. It is targeted to serve two audiences: the MBA and PhD student interested in procurement, and the practitioner who wishes to gain a deeper understanding of procurement analysis with multi-criteria based decision tools to avoid upstream risks to get better supply chain visibility. The book is expected to serve as a ready reference for supplier selection criteria and various multi-criteria based supplier’s evaluation methods for forward, reverse and mass customized supply chain. This book encompasses several criteria, methods for supplier selection in a systematic way based on extensive literature review from 1998 to 2012. It provides several case studies and some useful links which can serve as a starting point for interested researchers. In the appendix several computer code wri...

  7. Opportunistic Relay Selection with Cooperative Macro Diversity

    Directory of Open Access Journals (Sweden)

    Yu Chia-Hao

    2010-01-01

    Full Text Available We apply a fully opportunistic relay selection scheme to study cooperative diversity in a semianalytical manner. In our framework, idle Mobile Stations (MSs) are capable of being used as Relay Stations (RSs) and no relaying is required if the direct path is strong. Our relay selection scheme is fully selection based: either the direct path or one of the relaying paths is selected. Macro diversity, which is often ignored in analytical works, is taken into account together with micro diversity by using a complete channel model that includes both shadow fading and fast fading effects. The stochastic geometry of the network is taken into account by having a random number of randomly located MSs. The outage probability analysis of the selection differs from the case where only fast fading is considered. Under our framework, the distribution of the received power is formulated using different Channel State Information (CSI) assumptions to simulate both optimistic and practical environments. The results show that the relay selection gain can be significant given a suitable number of candidate RSs. Also, while relay selection according to incomplete CSI is diversity suboptimal compared to relay selection based on full CSI, the loss in average throughput is not too significant. This is a consequence of the dominance of geometry over fast fading.

  8. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbation and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC by 0.10, with first and third quartiles of (0.83, 0.89) compared to (0.81, 0.89). The derived optimal fusion set size, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection

  9. Age replacement policy based on imperfect repair with random probability

    International Nuclear Information System (INIS)

    Lim, J.H.; Qu, Jian; Zuo, Ming J.

    2016-01-01

    In most of the literature on age replacement policies, failures before the planned replacement age can be either minimally repaired or perfectly repaired, based on the type of failure, the cost of repairs, and so on. In this paper, we propose an age replacement policy based on imperfect repair with random probability. The proposed policy incorporates the case in which an intermittent failure can be either minimally repaired or perfectly repaired with random probabilities. The mathematical formulas of the expected cost rate per unit time are derived for both the infinite-horizon case and the one-replacement-cycle case. For each case, we show that the optimal replacement age exists and is finite. - Highlights: • We propose a new age replacement policy with random probability of perfect repair. • We develop the expected cost per unit time. • We discuss the optimal age for replacement minimizing the expected cost rate.
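    For orientation, the classical age-replacement cost rate that such policies generalize can be written as below; this is a baseline sketch only, since the paper's formulas additionally weight repairs at intermittent failures by the random probability of perfect repair:

```latex
% Classical age replacement: replace at failure (cost c_f) or at age T (cost c_p);
% F is the failure distribution and R = 1 - F the survival function.
\[
  C(T) \;=\; \frac{c_f\,F(T) + c_p\,R(T)}{\int_0^{T} R(t)\,dt},
  \qquad
  T^{*} \;=\; \arg\min_{T>0}\, C(T).
\]
```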

  10. Implications of structural genomics target selection strategies: Pfam5000, whole genome, and random approaches

    Energy Technology Data Exchange (ETDEWEB)

    Chandonia, John-Marc; Brenner, Steven E.

    2004-07-14

    The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy that is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 of eukaryotes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage which would allow accurate modeling of these proteins would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as long as secondary consideration is given to large families within Pfam, coverage results vary only slightly. In contrast, focusing structural genomics on a single tractable genome would have only a limited impact in structural knowledge of other proteomes: a significant fraction (about 30-40 percent of the proteins, and 40-60 percent of the residues) of each proteome is classified in small

  11. Selecting a risk-based tool to aid in decision making

    Energy Technology Data Exchange (ETDEWEB)

    Bendure, A.O.

    1995-03-01

    Selecting a risk-based tool to aid in decision making is as much of a challenge as properly using the tool once it has been selected. Failure to consider customer and stakeholder requirements and the technical bases and differences in risk-based decision-making tools will produce confounding and/or politically unacceptable results when the tool is used. Selecting a risk-based decision-making tool must therefore be undertaken with the same, if not greater, rigor than the use of the tool once it is selected. This paper presents a process for selecting a risk-based tool appropriate to a set of prioritization or resource allocation tasks, discusses the results of applying the process to four risk-based decision-making tools, and identifies the "musts" for successful selection and implementation of a risk-based tool to aid in decision making.

  12. Participant-selected music and physical activity in older adults following cardiac rehabilitation: a randomized controlled trial.

    Science.gov (United States)

    Clark, Imogen N; Baker, Felicity A; Peiris, Casey L; Shoebridge, Georgie; Taylor, Nicholas F

    2017-03-01

    To evaluate effects of participant-selected music on older adults' achievement of activity levels recommended in the physical activity guidelines following cardiac rehabilitation. A parallel group randomized controlled trial with measurements at Weeks 0, 6 and 26. A multisite outpatient rehabilitation programme of a publicly funded metropolitan health service. Adults aged 60 years and older who had completed a cardiac rehabilitation programme. Experimental participants selected music to support walking with guidance from a music therapist. Control participants received usual care only. The primary outcome was the proportion of participants achieving activity levels recommended in physical activity guidelines. Secondary outcomes compared amounts of physical activity, exercise capacity, cardiac risk factors, and exercise self-efficacy. A total of 56 participants, mean age 68.2 years (SD = 6.5), were randomized to the experimental (n = 28) and control (n = 28) groups. There were no differences between groups in proportions of participants achieving activity recommended in physical activity guidelines at Week 6 or 26. Secondary outcomes demonstrated between-group differences in male waist circumference at both measurements (Week 6 difference -2.0 cm, 95% CI -4.0 to 0; Week 26 difference -2.8 cm, 95% CI -5.4 to -0.1), and observed effect sizes favoured the experimental group for amounts of physical activity (d = 0.30), exercise capacity (d = 0.48), and blood pressure (d = -0.32). Participant-selected music did not increase the proportion of participants achieving recommended amounts of physical activity, but may have contributed to exercise-related benefits.

  13. Introducing two Random Forest based methods for cloud detection in remote sensing images

    Science.gov (United States)

    Ghasemian, Nafiseh; Akhoondzadeh, Mehdi

    2018-07-01

    Cloud detection is a necessary phase in satellite image processing to retrieve atmospheric and lithospheric parameters. Currently, some cloud detection methods based on the Random Forest (RF) model have been proposed, but they do not consider both the spectral and textural characteristics of the image. Furthermore, they have not been tested in the presence of snow/ice. In this paper, we introduce two RF-based algorithms, Feature Level Fusion Random Forest (FLFRF) and Decision Level Fusion Random Forest (DLFRF), which incorporate visible, infrared (IR) and thermal spectral and textural features (FLFRF), including Gray Level Co-occurrence Matrix (GLCM) and Robust Extended Local Binary Pattern (RELBP_CI), or visible, IR and thermal classifiers (DLFRF), for highly accurate cloud detection on remote sensing images. FLFRF first fuses the visible, IR and thermal features. Thereafter, it uses the RF model to classify pixels as cloud, snow/ice and background, or thick cloud, thin cloud and background. DLFRF considers visible, IR and thermal features (both spectral and textural) separately and inputs each set of features to an RF model. Then, it holds the vote matrix of each run of the model. Finally, it fuses the classifiers using the majority vote method. To demonstrate the effectiveness of the proposed algorithms, 10 Terra MODIS and 15 Landsat 8 OLI/TIRS images with different spatial resolutions are used in this paper. Quantitative analyses are based on manually selected ground truth data. Results show that after adding RELBP_CI to the input feature set, cloud detection accuracy improves. Also, the average cloud kappa values of FLFRF and DLFRF on MODIS images (1 and 0.99) are higher than those of other machine learning methods: Linear Discriminant Analysis (LDA), Classification And Regression Tree (CART), K Nearest Neighbor (KNN) and Support Vector Machine (SVM) (0.96). The average snow/ice kappa values of FLFRF and DLFRF on MODIS images (1 and 0.85) are higher than those of other traditional methods. The

  14. r2VIM: A new variable selection method for random forests in genome-wide association studies.

    Science.gov (United States)

    Szymczak, Silke; Holzinger, Emily; Dasgupta, Abhijit; Malley, James D; Molloy, Anne M; Mills, James L; Brody, Lawrence C; Stambolian, Dwight; Bailey-Wilson, Joan E

    2016-01-01

    Machine learning methods and in particular random forests (RFs) are a promising alternative to standard single SNP analyses in genome-wide association studies (GWAS). RFs provide variable importance measures (VIMs) to rank SNPs according to their predictive power. However, in contrast to the established genome-wide significance threshold, no clear criteria exist to determine how many SNPs should be selected for downstream analyses. We propose a new variable selection approach, recurrent relative variable importance measure (r2VIM). Importance values are calculated relative to an observed minimal importance score for several runs of RF and only SNPs with large relative VIMs in all of the runs are selected as important. Evaluations on simulated GWAS data show that the new method controls the number of false-positives under the null hypothesis. Under a simple alternative hypothesis with several independent main effects it is only slightly less powerful than logistic regression. In an experimental GWAS data set, the same strong signal is identified while the approach selects none of the SNPs in an underpowered GWAS. The novel variable selection method r2VIM is a promising extension to standard RF for objectively selecting relevant SNPs in GWAS while controlling the number of false-positive results.
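    A minimal sketch of the r2VIM selection rule: run the forest several times, divide each run's importances by the magnitude of that run's minimal importance, and keep only variables that clear a threshold in every run. Gini importances stand in here for the paper's permutation importances (whose minimum can be negative), and the data, threshold, and run count are assumptions:

```python
# Hedged sketch of recurrent relative variable importance (r2VIM-style) selection.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 3, (300, 200)).astype(float)   # 200 stand-in SNPs, 300 samples
y = (X[:, 0] + X[:, 1] + rng.normal(0, 1, 300) > 2).astype(int)

relative_vims = []
for run in range(5):                               # several independent RF runs
    rf = RandomForestClassifier(n_estimators=500, random_state=run).fit(X, y)
    imp = rf.feature_importances_
    floor = abs(imp[imp != 0].min())               # observed minimal importance
    relative_vims.append(imp / floor)

# Keep variables whose relative importance is large in ALL runs.
selected = np.where(np.min(relative_vims, axis=0) >= 3.0)[0]
print(selected)
```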

  15. Random-Profiles-Based 3D Face Recognition System

    Directory of Open Access Journals (Sweden)

    Joongrock Kim

    2014-03-01

    Full Text Available In this paper, a novel nonintrusive three-dimensional (3D) face modeling system for random-profile-based 3D face recognition is presented. Although recent two-dimensional (2D) face recognition systems can achieve a reliable recognition rate under certain conditions, their performance is limited by internal and external changes, such as illumination and pose variation. To address these issues, 3D face recognition, which uses 3D face data, has recently received much attention. However, the performance of 3D face recognition highly depends on the precision of acquired 3D face data, while also requiring more computational power and storage capacity than 2D face recognition systems. In this paper, we present a nonintrusive 3D face modeling system composed of a stereo vision system and an invisible near-infrared line laser, which can be directly applied to profile-based 3D face recognition. We further propose a novel random-profile-based 3D face recognition method that is memory-efficient and pose-invariant. The experimental results demonstrate that the reconstructed 3D face data consists of more than 50k 3D points and that a reliable recognition rate is achieved against pose variation.

  16. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are

  17. A new simple technique for improving the random properties of chaos-based cryptosystems

    Science.gov (United States)

    Garcia-Bosque, M.; Pérez-Resa, A.; Sánchez-Azqueta, C.; Celma, S.

    2018-03-01

    A new technique for improving the security of chaos-based stream ciphers has been proposed and tested experimentally. This technique manages to improve the randomness properties of the generated keystream by preventing the system from falling into short period cycles due to digitization. In order to test this technique, a stream cipher based on a Skew Tent Map algorithm has been implemented on a Virtex 7 FPGA. The randomness of the keystream generated by this system has been compared to the randomness of the keystream generated by the same system with the proposed randomness-enhancement technique. By subjecting both keystreams to the National Institute of Standards and Technology (NIST) tests, we have proved that our method can considerably improve the randomness of the generated keystreams. In order to incorporate our randomness-enhancement technique, only 41 extra slices have been needed, proving that, besides being effective, this method is also efficient in terms of area and hardware resources.
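    A toy sketch of the ingredients described: a skew tent map generating keystream bits, with a small periodic perturbation of the state so the digitized orbit cannot settle into a short cycle. The LFSR perturbation rule and all parameters are assumptions for illustration, not the paper's circuit:

```python
# Hedged sketch: skew-tent-map keystream with periodic state perturbation.
def skew_tent_keystream(x, p, n_bits, lfsr_seed=0xACE1):
    bits, lfsr = [], lfsr_seed
    for i in range(n_bits):
        x = x / p if x < p else (1 - x) / (1 - p)        # skew tent map on [0, 1]
        if i % 64 == 63:                                 # periodic perturbation step
            lfsr = (lfsr >> 1) ^ (-(lfsr & 1) & 0xB400)  # 16-bit Fibonacci LFSR
            x = (x + lfsr / 65536.0) % 1.0 or 1e-9       # avoid a stuck zero state
        bits.append(1 if x > 0.5 else 0)
    return bits

print(skew_tent_keystream(0.3141592, 0.499, 32))
```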

  18. A random spatial network model based on elementary postulates

    Science.gov (United States)

    Karlinger, Michael R.; Troutman, Brent M.

    1989-01-01

    A model for generating random spatial networks that is based on elementary postulates comparable to those of the random topology model is proposed. In contrast to the random topology model, this model ascribes a unique spatial specification to generated drainage networks, a distinguishing property of some network growth models. The simplicity of the postulates creates an opportunity for potential analytic investigations of the probabilistic structure of the drainage networks, while the spatial specification enables analyses of spatially dependent network properties. In the random topology model all drainage networks, conditioned on magnitude (number of first-order streams), are equally likely, whereas in this model all spanning trees of a grid, conditioned on area and drainage density, are equally likely. As a result, link lengths in the generated networks are not independent, as usually assumed in the random topology model. For a preliminary model evaluation, scale-dependent network characteristics, such as geometric diameter and link length properties, and topologic characteristics, such as bifurcation ratio, are computed for sets of drainage networks generated on square and rectangular grids. Statistics of the bifurcation and length ratios fall within the range of values reported for natural drainage networks, but geometric diameters tend to be relatively longer than those for natural networks.

  19. Two-year Randomized Clinical Trial of Self-etching Adhesives and Selective Enamel Etching.

    Science.gov (United States)

    Pena, C E; Rodrigues, J A; Ely, C; Giannini, M; Reis, A F

    2016-01-01

    The aim of this randomized, controlled prospective clinical trial was to evaluate the clinical effectiveness of restoring noncarious cervical lesions with two self-etching adhesive systems applied with or without selective enamel etching. A one-step self-etching adhesive (Xeno V(+)) and a two-step self-etching system (Clearfil SE Bond) were used. The effectiveness of phosphoric acid selective etching of enamel margins was also evaluated. Fifty-six cavities were restored with each adhesive system and divided into two subgroups (n=28; etch and non-etch). All 112 cavities were restored with the nanohybrid composite Esthet.X HD. The clinical effectiveness of restorations was recorded in terms of retention, marginal integrity, marginal staining, caries recurrence, and postoperative sensitivity after 3, 6, 12, 18, and 24 months (modified United States Public Health Service). The Friedman test detected significant differences only after 18 months for marginal staining in the groups Clearfil SE non-etch (p=0.009) and Xeno V(+) etch (p=0.004). One restoration was lost during the trial (Xeno V(+) etch; p>0.05). Although an increase in marginal staining was recorded for groups Clearfil SE non-etch and Xeno V(+) etch, the clinical effectiveness of restorations was considered acceptable for the single-step and two-step self-etching systems with or without selective enamel etching in this 24-month clinical trial.

  20. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  1. A randomized controlled trial of an electronic informed consent process.

    Science.gov (United States)

    Rothwell, Erin; Wong, Bob; Rose, Nancy C; Anderson, Rebecca; Fedor, Beth; Stark, Louisa A; Botkin, Jeffrey R

    2014-12-01

    A pilot study assessed an electronic informed consent model within a randomized controlled trial (RCT). Participants who were recruited for the parent RCT project were randomly selected and randomized to either an electronic consent group (n = 32) or a simplified paper-based consent group (n = 30). Results from the electronic consent group reported significantly higher understanding of the purpose of the study, alternatives to participation, and who to contact if they had questions or concerns about the study. However, participants in the paper-based control group reported higher mean scores on some survey items. This research suggests that an electronic informed consent presentation may improve participant understanding for some aspects of a research study. © The Author(s) 2014.

  2. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
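    A minimal sketch of the distance-based ranking described above, reduced to point-valued competences; the Matching Level Index and StaffDesigner internals are not reproduced, and the profiles are invented. Ideal-candidate competences evaluated only in part are simply skipped, as the abstract allows:

```python
# Hedged sketch: rank candidates by normalized Hamming distance to an ideal
# fuzzy competence profile (mean absolute difference of [0, 1] valuations).
def hamming_distance(profile, ideal):
    pairs = [(p, i) for p, i in zip(profile, ideal) if i is not None]
    return sum(abs(p - i) for p, i in pairs) / len(pairs)

ideal = [0.9, 0.8, None, 0.7]        # None: competence not yet evaluated
candidates = {"Ana": [0.8, 0.9, 0.4, 0.6],
              "Luis": [0.6, 0.7, 0.9, 0.9],
              "Mar": [0.9, 0.8, 0.1, 0.7]}

ranking = sorted(candidates, key=lambda c: hamming_distance(candidates[c], ideal))
print(ranking)                       # best match first
```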

  3. NetProt: Complex-based Feature Selection.

    Science.gov (United States)

    Goh, Wilson Wen Bin; Wong, Limsoon

    2017-08-04

    Protein complex-based feature selection (PCBFS) provides unparalleled reproducibility with high phenotypic relevance on proteomics data. Currently, there are five PCBFS paradigms, but not all representative methods have been implemented or made readily available. To allow general users to take advantage of these methods, we developed the R package NetProt, which provides implementations of representative feature-selection methods. NetProt also provides methods for generating simulated differential data and generating pseudocomplexes for complex-based performance benchmarking. The NetProt open source R package is available for download from https://github.com/gohwils/NetProt/releases/, and online documentation is available at http://rpubs.com/gohwils/204259.

  4. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic database generated

  5. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Full Text Available Long-range ground targets are difficult to detect in a noisy cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate with a high false alarm rate to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method by decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and low false alarm rate. The proposed method consists of individual detection, registration, and fusion architecture. This paper presents a single framework of a SAR and IR target detection method using modified Boolean map visual theory (modBMVT) and feature-selection based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics. One method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature selection-based decision fusion on a synthetic

  6. Sequence Based Prediction of Antioxidant Proteins Using a Classifier Selection Strategy.

    Directory of Open Access Journals (Sweden)

    Lina Zhang

    Full Text Available Antioxidant proteins perform significant functions in maintaining oxidation/antioxidation balance and have potential therapies for some diseases. Accurate identification of antioxidant proteins could contribute to revealing the physiological processes of oxidation/antioxidation balance and developing novel antioxidation-based drugs. In this study, an ensemble method is presented to predict antioxidant proteins with hybrid features, incorporating SSI (Secondary Structure Information), PSSM (Position Specific Scoring Matrix), RSA (Relative Solvent Accessibility), and CTD (Composition, Transition, Distribution). The prediction results of the ensemble predictor are determined by an average of the prediction results of multiple base classifiers. Based on a classifier selection strategy, we obtain an optimal ensemble classifier composed of RF (Random Forest), SMO (Sequential Minimal Optimization), NNA (Nearest Neighbor Algorithm), and J48, with an accuracy of 0.925. A Relief method combined with IFS (Incremental Feature Selection) is adopted to obtain optimal features from the hybrid features. With the optimal features, the ensemble method achieves improved performance with a sensitivity of 0.95, a specificity of 0.93, an accuracy of 0.94, and an MCC (Matthew's Correlation Coefficient) of 0.880, far better than the existing method. To evaluate the prediction performance objectively, the proposed method is compared with existing methods on the same independent testing dataset. Encouragingly, our method performs better than previous studies. In addition, our method achieves more balanced performance with a sensitivity of 0.878 and a specificity of 0.860. These results suggest that the proposed ensemble method can be a potential candidate for antioxidant protein prediction. For public access, we develop a user-friendly web server for antioxidant protein identification that is freely accessible at http://antioxidant.weka.cc.
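    A minimal sketch of the ensemble scheme described above, averaging class probabilities across heterogeneous base classifiers; scikit-learn models stand in for RF/SMO/NNA/J48 (SMO trains an SVM, NNA is approximated by 1-nearest-neighbour, J48 by a decision tree), and the random features replace the hybrid SSI/PSSM/RSA/CTD encoding:

```python
# Hedged sketch: probability-averaging ensemble of heterogeneous classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = rng.random((200, 40)), rng.integers(0, 2, 200)   # stand-in features/labels

bases = [RandomForestClassifier(n_estimators=100, random_state=0),
         SVC(probability=True, random_state=0),          # SMO is an SVM trainer
         KNeighborsClassifier(n_neighbors=1),            # NNA ~ 1-nearest-neighbour
         DecisionTreeClassifier(random_state=0)]         # J48 ~ a decision tree
for clf in bases:
    clf.fit(X[:150], y[:150])

avg_proba = np.mean([clf.predict_proba(X[150:]) for clf in bases], axis=0)
pred = avg_proba.argmax(axis=1)
print((pred == y[150:]).mean())
```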

  7. Why the null matters: statistical tests, random walks and evolution.

    Science.gov (United States)

    Sheets, H D; Mitchell, C E

    2001-01-01

    A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test' in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests show that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
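    The excursion-based testing principle lends itself to a short simulation: compare the observed maximum excursion of a time series to its null distribution under a drift-free walk with the same steps. The sign-randomization null and all parameters below are assumptions for illustration, not any of the cited tests:

```python
# Hedged sketch: excursion of an observed series vs. a random-walk null.
import numpy as np

rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(0.05, 1.0, 200))      # series with a weak trend

steps = np.diff(series)
null_excursions = [abs(np.cumsum(rng.permutation(steps) *
                                 rng.choice([-1, 1], steps.size))).max()
                   for _ in range(2000)]            # sign-randomized walks
observed = abs(series - series[0]).max()
# Small p: more excursion than a random walk, suggesting directional change.
p_value = (np.array(null_excursions) >= observed).mean()
print(f"observed excursion {observed:.1f}, p = {p_value:.3f}")
```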

  8. Lines of Descent Under Selection

    Science.gov (United States)

    Baake, Ellen; Wakolbinger, Anton

    2017-11-01

    We review recent progress on ancestral processes related to mutation-selection models, both in the deterministic and the stochastic setting. We mainly rely on two concepts, namely, the killed ancestral selection graph and the pruned lookdown ancestral selection graph. The killed ancestral selection graph gives a representation of the type of a random individual from a stationary population, based upon the individual's potential ancestry back until the mutations that define the individual's type. The pruned lookdown ancestral selection graph allows one to trace the ancestry of individuals from a stationary distribution back into the distant past, thus leading to the stationary distribution of ancestral types. We illustrate the results by applying them to a prototype model for the error threshold phenomenon.

  9. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed...... or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non...... Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchal structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part...

  10. Review of Random Phase Encoding in Volume Holographic Storage

    Directory of Open Access Journals (Sweden)

    Wei-Chia Su

    2012-09-01

    Full Text Available Random phase encoding is a unique technique for volume hologram which can be applied to various applications such as holographic multiplexing storage, image encryption, and optical sensing. In this review article, we first review and discuss diffraction selectivity of random phase encoding in volume holograms, which is the most important parameter related to multiplexing capacity of volume holographic storage. We then review an image encryption system based on random phase encoding. The alignment of phase key for decryption of the encoded image stored in holographic memory is analyzed and discussed. In the latter part of the review, an all-optical sensing system implemented by random phase encoding and holographic interconnection is presented.

  11. Competitive seeds-selection in complex networks

    Science.gov (United States)

    Zhao, Jiuhua; Liu, Qipeng; Wang, Lin; Wang, Xiaofan

    2017-02-01

    This paper investigates a competitive diffusion model where two competitors simultaneously select a set of nodes (seeds) in the network to influence. We focus on the problem of how to select these seeds such that, when the diffusion process terminates, a competitor can obtain more support than its opponent. Instead of studying this problem in the game-theoretic framework as in the existing work, in this paper we design several heuristic seed-selection strategies inspired by commonly used centrality measures: Betweenness Centrality (BC), Closeness Centrality (CC), Degree Centrality (DC), Eigenvector Centrality (EC), and K-shell Centrality (KS). We mainly compare three centrality-based strategies, which perform better in competing with the random selection strategy, through simulations on both real and artificial networks. Even though network structure varies across different networks, we find certain common trends appearing in all of these networks. Roughly speaking, the BC-based strategy and the DC-based strategy are better than the CC-based strategy. Moreover, if a competitor adopts the CC-based strategy, then the BC-based strategy is a better strategy than the DC-based strategy for its opponent, and the superiority of the BC-based strategy decreases as the heterogeneity of the network decreases.
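    A minimal sketch of the seed-selection strategies compared above, picking the top-k nodes under each centrality measure with networkx (the graph model and k are assumptions; the competitive diffusion itself is not simulated here):

```python
# Hedged sketch: top-k seed selection under the compared centrality measures.
import networkx as nx

G = nx.barabasi_albert_graph(500, 3, seed=1)   # stand-in scale-free network
k = 5

strategies = {
    "BC": nx.betweenness_centrality(G),
    "CC": nx.closeness_centrality(G),
    "DC": nx.degree_centrality(G),
    "EC": nx.eigenvector_centrality(G, max_iter=1000),
    "KS": nx.core_number(G),                   # k-shell index
}
for name, scores in strategies.items():
    seeds = sorted(scores, key=scores.get, reverse=True)[:k]
    print(name, seeds)
```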

  12. ANALYSIS OF FUZZY QUEUES: PARAMETRIC PROGRAMMING APPROACH BASED ON RANDOMNESS - FUZZINESS CONSISTENCY PRINCIPLE

    Directory of Open Access Journals (Sweden)

    Dhruba Das

    2015-04-01

    Full Text Available In this article, based on Zadeh's extension principle, we apply the parametric programming approach to construct the membership functions of the performance measures when the interarrival time and the service time are fuzzy numbers, following Baruah's Randomness-Fuzziness Consistency Principle. The Randomness-Fuzziness Consistency Principle leads to defining a normal law of fuzziness using two different laws of randomness. In this article, two fuzzy queues, FM/M/1 and M/FM/1, are studied and the membership functions of their system characteristics are constructed based on the aforesaid principle. The former represents a queue with fuzzy exponential arrivals and exponential service rate, while the latter represents a queue with exponential arrival rate and fuzzy exponential service rate.

  13. A comprehensive comparison of random forests and support vector machines for microarray-based cancer classification

    Directory of Open Access Journals (Sweden)

    Wang Lily

    2008-07-01

    Full Text Available Abstract Background Cancer diagnosis and clinical outcome prediction are among the most important emerging applications of gene expression microarray technology with several molecular signatures on their way toward clinical deployment. Use of the most accurate classification algorithms available for microarray gene expression data is a critical ingredient in order to develop the best possible molecular signatures for patient care. As suggested by a large body of literature to date, support vector machines can be considered "best of class" algorithms for classification of such data. Recent work, however, suggests that random forest classifiers may outperform support vector machines in this domain. Results In the present paper we identify methodological biases of prior work comparing random forests and support vector machines and conduct a new rigorous evaluation of the two algorithms that corrects these limitations. Our experiments use 22 diagnostic and prognostic datasets and show that support vector machines outperform random forests, often by a large margin. Our data also underlines the importance of sound research design in benchmarking and comparison of bioinformatics algorithms. Conclusion We found that both on average and in the majority of microarray datasets, random forests are outperformed by support vector machines both in the settings when no gene selection is performed and when several popular gene selection methods are used.

  14. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, J; Zhang, Qi; Fitzek, F H P

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...... reduction in the number of transmitted packets can be achieved. However, NC introduces additional computations and potentially a non-negligible transmission overhead, both of which depend on the chosen coding parameters. Therefore it is necessary to consider the trade-off that these coding parameters...... present in order to obtain the lowest energy consumption per transmitted bit. This problem is analyzed and suitable coding parameters are determined for the popular Tmote Sky platform. Compared to the use of traditional RLNC, these parameters enable a reduction in the energy spent per bit which grows...
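
    The sketch below shows the core RLNC encoding step over GF(2), where field size and generation size are exactly the kind of code parameters whose cost trade-off the paper studies; the sizes and values are illustrative, and the Tmote Sky energy model is not reproduced.

        import numpy as np

        rng = np.random.default_rng(1)
        g, packet_len = 16, 32                      # generation size, bytes per packet
        source = rng.integers(0, 256, (g, packet_len), dtype=np.uint8)

        def encode(source, rng):
            """One coded packet: a random XOR combination of the generation."""
            coeffs = rng.integers(0, 2, source.shape[0], dtype=np.uint8)  # GF(2) coeffs
            coded = np.zeros(source.shape[1], dtype=np.uint8)
            for c, pkt in zip(coeffs, source):
                if c:
                    coded ^= pkt                    # GF(2) addition is XOR
            return coeffs, coded

        coeffs, coded = encode(source, rng)
        print(coeffs, coded[:8])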

  15. Selective Distance-Based K+ Quantification on Paper-Based Microfluidics.

    Science.gov (United States)

    Gerold, Chase T; Bakker, Eric; Henry, Charles S

    2018-04-03

    In this study, paper-based microfluidic devices (μPADs) capable of K+ quantification in aqueous samples, as well as in human serum, using both colorimetric and distance-based methods are described. A lipophilic phase containing potassium ionophore I (valinomycin) was utilized to achieve highly selective quantification of K+ in the presence of Na+, Li+, and Mg2+ ions. Successful addition of a suspended lipophilic phase to a wax printed paper-based device is described and offers a solution to current approaches that rely on organic solvents, which damage wax barriers. The approach provides an avenue for future alkali/alkaline quantification utilizing μPADs. Colorimetric spot tests allowed for K+ quantification from 0.1-5.0 mM using only 3.00 μL of sample solution. Selective distance-based quantification required small sample volumes (6.00 μL) and gave responses sensitive enough to distinguish between 1.0 and 2.5 mM of sample K+. μPADs using distance-based methods were also capable of differentiating between 4.3 and 6.9 mM K+ in human serum samples. Distance-based methods required no digital analysis, electronic hardware, or pumps; any steps required for quantification could be carried out using the naked eye.

  16. A Bidirectional Generalized Synchronization Theorem-Based Chaotic Pseudo-random Number Generator

    Directory of Open Access Journals (Sweden)

    Han Shuangshuang

    2013-07-01

    Full Text Available Based on a bidirectional generalized synchronization theorem for discrete chaos systems, this paper introduces a new 5-dimensional bidirectional generalized chaos synchronization system (BGCSDS), whose prototype is a novel chaotic system introduced in [12]. Numerical simulation showed that two pairs of variables of the BGCSDS achieve generalized chaos synchronization via a transform H. A chaos-based pseudo-random number generator (CPNG) was designed from the new BGCSDS. The FIPS-140-2 tests issued by the National Institute of Standards and Technology (NIST) were used to verify the randomness of 1000 binary sequences generated via the CPNG and the RC4 algorithm, respectively. The results showed that all the tested sequences passed the FIPS-140-2 tests. Confidence interval analysis showed that the statistical randomness properties of the sequences generated via the CPNG and the RC4 algorithm do not differ significantly.
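
    As a toy stand-in for a chaos-based generator (a one-dimensional logistic map rather than the paper's 5-dimensional BGCSDS), the sketch below thresholds a chaotic orbit into bits and applies the FIPS 140-2 monobit criterion, which requires the number of ones in a 20,000-bit stream to lie strictly between 9,725 and 10,275.

        def logistic_bits(n, x=0.4, r=3.99):
            """Generate n bits by thresholding a logistic-map orbit at 0.5."""
            bits = []
            for _ in range(n):
                x = r * x * (1.0 - x)          # chaotic iteration
                bits.append(1 if x > 0.5 else 0)
            return bits

        bits = logistic_bits(20000)
        ones = sum(bits)
        print(ones, 9725 < ones < 10275)       # FIPS 140-2 monobit bounds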

  17. Application of random coherence order selection in gradient-enhanced multidimensional NMR

    International Nuclear Information System (INIS)

    Bostock, Mark J.; Nietlispach, Daniel

    2016-01-01

    Development of multidimensional NMR is essential to many applications, for example in high resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, adding a factor of 2 N -1 to the number of free- induction decays which must be acquired, where N is the number of dimensions. Compressed sensing (CS) ℓ 1 -norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor of two reduction in the number of points for each indirect dimension (Maciejewski et al. 2011 PNAS 108 16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude modulated data, the same principle is not easily extended to phase modulated (P-/N-type) experiments where data is acquired in the form exp (iωt) or exp (-iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ 1 -norm approach to allow random coherence order selection (RCS) for phase modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD). With this method, the power of RQD can be extended

  18. The role of repetition and reinforcement in school-based oral health education-a cluster randomized controlled trial.

    Science.gov (United States)

    Haleem, Abdul; Khan, Muhammad Khalil; Sufia, Shamta; Chaudhry, Saima; Siddiqui, Muhammad Irfanullah; Khan, Ayyaz Ali

    2016-01-04

    Repetition and reinforcement have been shown to play a crucial role in the sustainability of the effect of Oral Health Education (OHE) programs. However, its relevance to school-based OHE imparted by different personnel is not depicted by the existing dental literature. The present study was undertaken to determine the effectiveness of the repeated and reinforced OHE (RR-OHE) compared to one-time OHE intervention and to assess its role in school-based OHE imparted by dentist, teachers and peers. The study was a cluster randomized controlled trial that involved 935 adolescents aged 10-11 years. Twenty four boys' and girls' schools selected at random in two towns of Karachi, Pakistan were randomly assigned to three groups to receive OHE by dentist (DL), teachers (TL) and peer-leaders (PL). The groups received a single OHE session and were evaluated post-intervention and 6 months after. The three groups were then exposed to OHE for 6 months followed by 1 year of no OHE activity. Two further evaluations at 6-month and 12-month intervals were conducted. The data were collected by a self-administered questionnaire preceded by a structured interview and followed by oral examination of participants. The adolescents' oral health knowledge (OHK) in the DL and PL groups increased significantly by a single OHE session compared to their baseline knowledge (p strategy. Although the OHK scores of the DL and PL groups decreased significantly at 12-month evaluation of RR-OHE (p play a key role in school-based OHE irrespective of educators. The trained teachers and peers can play a complementary role in RR-OHE.

  19. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built....... The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...
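
    As a rough sketch of the genetic-search idea (the record is truncated, so the authors' fitness function and operators are assumptions), the loop below evolves binary feature masks and scores each mask with a cross-validated classifier on toy data.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 12))
        y = (X[:, 0] - X[:, 3] > 0).astype(int)       # only features 0 and 3 matter

        def fitness(mask):
            if not mask.any():
                return 0.0
            model = KNeighborsClassifier(n_neighbors=5)
            return cross_val_score(model, X[:, mask], y, cv=3).mean()

        pop = rng.integers(0, 2, (20, X.shape[1])).astype(bool)
        for _ in range(15):                           # generations
            scores = np.array([fitness(m) for m in pop])
            parents = pop[np.argsort(scores)[-10:]]   # truncation selection
            children = parents[rng.integers(0, 10, 10)].copy()
            children ^= rng.random(children.shape) < 0.1   # bit-flip mutation
            pop = np.vstack([parents, children])
        best = pop[np.argmax([fitness(m) for m in pop])]
        print(np.nonzero(best)[0])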

  20. Pseudo-random number generator for the Sigma 5 computer

    Science.gov (United States)

    Carroll, S. N.

    1983-01-01

    A technique is presented for developing a pseudo-random number generator based on the linear congruential form. The two numbers used for the generator are a prime number and a corresponding primitive root, where the prime is the largest prime number that can be accurately represented on a particular computer. The primitive root is selected by applying Marsaglia's lattice test. The technique presented was applied to write a random number program for the Sigma 5 computer. The new program, named S:RANDOM1, is judged to be superior to the older program named S:RANDOM. For applications requiring several independent random number generators, a table is included showing several acceptable primitive roots. The technique and programs described can be applied to any computer having word length different from that of the Sigma 5.
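
    A minimal sketch of such a generator: a multiplicative LCG with prime modulus and primitive-root multiplier. The Sigma 5 constants are not given in this record, so the classic pair m = 2^31 - 1 (a prime) and a = 16807 (a primitive root modulo m) stands in.

        def lcg(seed, a=16807, m=2**31 - 1):
            """Multiplicative LCG: x <- a*x mod m, yielding uniforms in (0, 1)."""
            x = seed
            while True:
                x = (a * x) % m
                yield x / m

        gen = lcg(seed=12345)
        print([round(next(gen), 6) for _ in range(3)])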

  1. On Random Numbers and Design

    Science.gov (United States)

    Ben-Ari, Morechai

    2004-01-01

    The term "random" is frequently used in discussion of the theory of evolution, even though the mathematical concept of randomness is problematic and of little relevance in the theory. Therefore, since the core concept of the theory of evolution is the non-random process of natural selection, the term random should not be used in teaching the…

  2. A randomized controlled trial investigating the use of a predictive nomogram for the selection of the FSH starting dose in IVF/ICSI cycles.

    Science.gov (United States)

    Allegra, Adolfo; Marino, Angelo; Volpes, Aldo; Coffaro, Francesco; Scaglione, Piero; Gullo, Salvatore; La Marca, Antonio

    2017-04-01

    The number of oocytes retrieved is a relevant intermediate outcome in women undergoing IVF/intracytoplasmic sperm injection (ICSI). This trial compared the efficiency of the selection of the FSH starting dose according to a nomogram based on multiple biomarkers (age, day 3 FSH, anti-Müllerian hormone) versus an age-based strategy. The primary outcome measure was the proportion of women with an optimal number of retrieved oocytes defined as 8-14. At their first IVF/ICSI cycle, 191 patients underwent a long gonadotrophin-releasing hormone agonist protocol and were randomized to receive a starting dose of recombinant (human) FSH, based on their age (150 IU if ≤35 years, 225 IU if >35 years) or based on the nomogram. Optimal response was observed in 58/92 patients (63%) in the nomogram group and in 42/99 (42%) in the control group (+21%, 95% CI = 0.07 to 0.35, P = 0.0037). No significant differences were found in the clinical pregnancy rate or the number of embryos cryopreserved per patient. The study showed that the FSH starting dose selected according to ovarian reserve is associated with an increase in the proportion of patients with an optimal response: large trials are recommended to investigate any possible effect on the live-birth rate. Copyright © 2017 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  3. Early selection in open-pollinated Eucalyptus families based on competition covariates

    Directory of Open Access Journals (Sweden)

    Bruno Ettore Pavan

    2014-06-01

    Full Text Available The objective of this work was to evaluate the influence of intergenotypic competition in open-pollinated families of Eucalyptus and its effects on early selection efficiency. Two experiments were carried out, in which the timber volume was evaluated at three ages, in a randomized complete block design. Data from the three years of evaluation (experiment 1, at 2, 4, and 7 years; and experiment 2, at 2, 5, and 7 years were analyzed using mixed models. The following were estimated: variance components, genetic parameters, selection gains, effective number, early selection efficiency, selection gain per unit time, and coincidence of selection with and without the use of competition covariates. Competition effect was nonsignificant for ages under three years, and adjustment using competition covariates was unnecessary. Early selection for families is effective; families that have a late growth spurt are more vulnerable to competition, which markedly impairs ranking at the end of the cycle. Early selection is efficient according to all adopted criteria, and the age of around three years is the most recommended, given the high efficiency and accuracy rate in the indication of trees and families. The addition of competition covariates at the end of the cycle improves early selection efficiency for almost all studied criteria.

  4. Selecting for Fast Protein-Protein Association As Demonstrated on a Random TEM1 Yeast Library Binding BLIP.

    Science.gov (United States)

    Cohen-Khait, Ruth; Schreiber, Gideon

    2018-04-27

    Protein-protein interactions mediate the vast majority of cellular processes. Though protein interactions obey the same basic chemical principles within the cell, the in vivo physiological environment may not allow equilibrium to be reached. Thus, thermodynamic affinity measured in vitro may not provide a complete picture of protein interactions in the biological context. Binding kinetics, composed of the association and dissociation rate constants, are relevant and important in the cell. Therefore, changes in protein-protein interaction kinetics have a significant impact on the in vivo activity of the proteins. The common protocol for the selection of tighter binders from a mutant library selects for protein complexes with slower dissociation rate constants. Here we describe a method to specifically select for variants with faster association rate constants by using pre-equilibrium selection, starting from a large random library. Toward this end, we refine the selection conditions of a TEM1-β-lactamase library against its natural nanomolar-affinity binder, β-lactamase inhibitor protein (BLIP). The optimal selection conditions depend on the ligand concentration and on the incubation time. In addition, we show that a second sort of the library helps to separate signal from noise, resulting in a higher percentage of faster binders in the selected library. Fast-associating protein variants are of particular interest for drug development and other biotechnological applications.

  5. Unwilling or Unable to Cheat? Evidence from a Randomized Tax Audit Experiment in Denmark

    DEFF Research Database (Denmark)

    Kleven, Henrik Jacobsen; Knudsen, Martin B.; Kreiner, Claus Thustrup

    2010-01-01

    This paper analyzes a randomized tax enforcement experiment in Denmark. In the base year, a stratified and representative sample of over 40,000 individual income tax filers was selected for the experiment. Half of the tax filers were randomly selected to be thoroughly audited, while the rest were...... deliberately not audited. The following year, "threat-of-audit" letters were randomly assigned and sent to tax filers in both groups. Using comprehensive administrative tax data, we present four main findings. First, we find that the tax evasion rate is very small (0.3%) for income subject to third...... impact on tax evasion, but that this effect is small in comparison to avoidance responses. Third, we find that prior audits substantially increase self-reported income, implying that individuals update their beliefs about detection probability based on experiencing an audit. Fourth, threat-of-audit...

  6. Cermet based solar selective absorbers : further selectivity improvement and developing new fabrication technique

    OpenAIRE

    Nejati, Mohammadreza

    2008-01-01

    Spectral selectivity of cermet based selective absorbers were increased by inducing surface roughness on the surface of the cermet layer using a roughening technique (deposition on hot substrates) or by micro-structuring the metallic substrates before deposition of the absorber coating using laser and imprint structuring techniques. Cu-Al2O3 cermet absorbers with very rough surfaces and excellent selectivity were obtained by employing a roughness template layer under the infrared reflective l...

  7. Performance-Based Technology Selection Filter description report

    International Nuclear Information System (INIS)

    O'Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL)

  8. Performance-Based Technology Selection Filter description report

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL).

  9. A Table-Based Random Sampling Simulation for Bioluminescence Tomography

    Directory of Open Access Journals (Sweden)

    Xiaomeng Zhang

    2006-01-01

    Full Text Available As a popular simulation of photon propagation in turbid media, the main drawback of the Monte Carlo (MC) method is its heavy computational cost. In this work a table-based random sampling simulation (TBRS) is proposed. The key idea of TBRS is to collapse multiple scattering steps into a single-step process through random table querying, thus greatly reducing the computational complexity of the conventional MC algorithm and expediting the computation. TBRS is a fast counterpart of the conventional MC simulation of photon propagation. It retains the flexibility and accuracy of the conventional MC method and adapts well to complex geometric media and various source shapes. Both MC simulations were conducted in a homogeneous medium in our work. We also present a reconstruction approach to estimate the position of the fluorescent source, based on trial and error, as a validation of the TBRS algorithm. Good agreement is found between the conventional MC simulation and the TBRS simulation.
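
    A generic illustration of the table-querying idea: precompute a lookup table for the inverse CDF of the scattering step-length distribution once, then draw samples by random indexing instead of re-evaluating the transform per photon step. The exponential free-path model and the table size are assumptions; TBRS's actual tables are not specified in this record.

        import numpy as np

        rng = np.random.default_rng(0)
        mu_t = 10.0                                   # illustrative attenuation coefficient
        u_grid = np.linspace(1e-6, 1 - 1e-6, 4096)    # table resolution
        step_table = -np.log(1.0 - u_grid) / mu_t     # inverse CDF of exponential free path

        def sample_steps(n):
            idx = rng.integers(0, len(step_table), n)  # random table query
            return step_table[idx]

        print(sample_steps(5))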

  10. Post-processing Free Quantum Random Number Generator Based on Avalanche Photodiode Array

    International Nuclear Information System (INIS)

    Li Yang; Liao Sheng-Kai; Liang Fu-Tian; Shen Qi; Liang Hao; Peng Cheng-Zhi

    2016-01-01

    Quantum random number generators adopting single photon detection have been restricted due to the non-negligible dead time of avalanche photodiodes (APDs). We propose a new approach based on an APD array to improve the generation rate of random numbers significantly. This method compares the detectors' responses to consecutive optical pulses and generates the random sequence. We implement a demonstration experiment to show its simplicity, compactness and scalability. The generated numbers are proved to be unbiased, post-processing free, ready to use, and their randomness is verified by using the national institute of standard technology statistical test suite. The random bit generation efficiency is as high as 32.8% and the potential generation rate adopting the 32 × 32 APD array is up to tens of Gbits/s. (paper)

  11. A true random number generator based on mouse movement and chaotic cryptography

    International Nuclear Information System (INIS)

    Hu Yue; Liao Xiaofeng; Wong, Kwok-wo; Zhou Qing

    2009-01-01

    True random number generators are in general more secure than pseudo random number generators. In this paper, we propose a novel true random number generator which generates a 256-bit random number by computer mouse movement. It is cheap, convenient and universal for personal computers. To eliminate the effect of similar movement patterns generated by the same user, three chaos-based approaches, namely, discretized 2D chaotic map permutation, spatiotemporal chaos and 'MASK' algorithm, are adopted to post-process the captured mouse movements. Random bits generated by three users are tested using NIST statistical tests. Both the spatiotemporal chaos approach and the 'MASK' algorithm pass the tests successfully. However, the latter has a better performance in terms of efficiency and effectiveness and so is more practical for common personal computer applications.

  12. Discrete least squares polynomial approximation with random evaluations - application to PDEs with Random parameters

    KAUST Repository

    Nobile, Fabio

    2015-01-07

    We consider a general problem F(u, y) = 0 where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth, but y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function, and y a distributed, space and/or time varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the output of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and in probability. We also present possible strategies to select, either a priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow one to reduce, and in some cases break, the curse of dimensionality.
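
    A one-dimensional toy of the method: fit a polynomial by discrete least squares to noisy evaluations of a smooth map u(y) at random points, then check the error on a held-out grid. The function, noise level, and degree are illustrative; the Hilbert-space-valued, high-dimensional setting is beyond this sketch.

        import numpy as np

        rng = np.random.default_rng(3)
        u = lambda y: np.exp(y) * np.sin(3 * y)       # smooth parameter-to-solution map
        y_train = rng.uniform(-1, 1, 200)             # random evaluation points
        obs = u(y_train) + rng.normal(0, 1e-3, y_train.size)   # noisy observations

        coeffs = np.polynomial.polynomial.polyfit(y_train, obs, deg=10)
        y_test = np.linspace(-1, 1, 1000)             # held-out grid
        approx = np.polynomial.polynomial.polyval(y_test, coeffs)
        print(np.max(np.abs(approx - u(y_test))))     # max approximation error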

  13. Direct random insertion mutagenesis of Helicobacter pylori

    NARCIS (Netherlands)

    de Jonge, Ramon; Bakker, Dennis; van Vliet, Arnoud H. M.; Kuipers, Ernst J.; Vandenbroucke-Grauls, Christina M. J. E.; Kusters, Johannes G.

    2003-01-01

    Random insertion mutagenesis is a widely used technique for the identification of bacterial virulence genes. Most strategies for random mutagenesis involve cloning in Escherichia coli for passage of plasmids or for phenotypic selection. This can result in biased selection due to restriction or

  14. Direct random insertion mutagenesis of Helicobacter pylori.

    NARCIS (Netherlands)

    Jonge, de R.; Bakker, D.; Vliet, van AH; Kuipers, E.J.; Vandenbroucke-Grauls, C.M.J.E.; Kusters, J.G.

    2003-01-01

    Random insertion mutagenesis is a widely used technique for the identification of bacterial virulence genes. Most strategies for random mutagenesis involve cloning in Escherichia coli for passage of plasmids or for phenotypic selection. This can result in biased selection due to restriction or

  15. Random fiber lasers based on artificially controlled backscattering fibers

    Science.gov (United States)

    Chen, Daru; Wang, Xiaoliang; She, Lijuan; Qiang, Zexuan; Yu, Zhangwei

    2017-10-01

    The random fiber laser (RFL), a milestone in laser physics and nonlinear optics, has attracted considerable attention recently. Most previous RFLs are based on distributed feedback of Rayleigh scattering amplified through the stimulated Raman/Brillouin scattering effect in single mode fibers, which requires long (tens of kilometers) single mode fibers and thresholds up to the watt level due to the extremely small Rayleigh scattering coefficient of the fiber. We proposed and demonstrated a half-open-cavity RFL based on a segment of artificially controlled backscattering single mode fiber (ACB-SMF) with a length of 210 m, 310 m or 390 m. A fiber Bragg grating with a central wavelength of 1530 nm and a segment of ACB-SMF form the half-open cavity. The proposed RFL achieves thresholds of 25 mW, 30 mW and 30 mW, respectively. Random lasing at 1530 nm with an extinction ratio of 50 dB is achieved when a segment of 5 m EDF is pumped by a 980 nm LD in the RFL. Another half-open-cavity RFL based on a segment of artificially controlled backscattering EDF (ACB-EDF) is also demonstrated without an ACB-SMF. The 3 m ACB-EDF is fabricated with a femtosecond laser at a pulse energy of 0.34 mJ, which introduces about 50 reflectors in the EDF. Random lasing at 1530 nm is achieved with an output power of 7.5 mW and an efficiency of 1.88%. Two novel RFLs with much shorter cavities have thus been achieved with low threshold and high efficiency.

  16. Prevalence of at-risk genotypes for genotoxic effects decreases with age in a randomly selected population in Flanders: a cross sectional study

    Directory of Open Access Journals (Sweden)

    van Delft Joost HM

    2011-10-01

    Full Text Available Abstract Background We hypothesized that in Flanders (Belgium), the prevalence of at-risk genotypes for genotoxic effects decreases with age due to morbidity and mortality resulting from chronic diseases. Rather than polymorphisms in single genes, the interaction of multiple genetic polymorphisms in low penetrance genes involved in genotoxic effects might be of relevance. Methods Genotyping was performed on 399 randomly selected adults (aged 50-65) and on 442 randomly selected adolescents. Based on their involvement in processes relevant to genotoxicity, 28 low penetrance polymorphisms affecting the phenotype in 19 genes were selected (xenobiotic metabolism, oxidative stress defense and DNA repair, respectively 13, 6 and 9 polymorphisms). Polymorphisms which, based on available literature, could not clearly be categorized a priori as leading to an 'increased risk' or a 'protective effect' were excluded. Results The mean number of risk alleles for all investigated polymorphisms was found to be lower in the 'elderly' (17.0 ± 2.9) than the 'adolescent' (17.6 ± 3.1) subpopulation (P = 0.002). These results were not affected by gender nor smoking. The prevalence of a high (> 17 = median) number of risk alleles was less frequent in the 'elderly' (40.6%) than the 'adolescent' (51.4%) subpopulation (P = 0.002). In particular for phase II enzymes, the mean number of risk alleles was lower in the 'elderly' (4.3 ± 1.6) than the 'adolescent' age group (4.8 ± 1.9). The prevalence of a high (> 4 = median) number of risk alleles was less frequent in the 'elderly' (41.3%) than the adolescent subpopulation (56.3%). The prevalence of a high (> 8 = median) number of risk alleles for DNA repair enzyme-coding genes was lower in the 'elderly' (37.3%) than the 'adolescent' subpopulation (45.6%, P = 0.017). Conclusions These observations are consistent with the hypothesis that, in Flanders, the prevalence of at-risk alleles in genes involved in genotoxic effects decreases with age, suggesting that persons carrying a higher number of

  17. Effectiveness of Wii-based rehabilitation in stroke: A randomized controlled study

    OpenAIRE

    Ayça Utkan Karasu; Elif Balevi Batur; Gülçin Kaymak Karataş

    2018-01-01

    Objective: To investigate the efficacy of Nintendo Wii Fit®-based balance rehabilitation as an adjunctive therapy to conventional rehabilitation in stroke patients. Methods: During the study period, 70 stroke patients were evaluated. Of these, 23 who met the study criteria were randomly assigned to either the experimental group (n = 12) or the control group (n = 11) by block randomization. Primary outcome measures were Berg Balance Scale, Functional Reach Test, Postural Asses...

  18. Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope

    Energy Technology Data Exchange (ETDEWEB)

    Quan, Wei; Lv, Lin, E-mail: lvlinlch1990@163.com; Liu, Baiqi [School of Instrument Science and Opto-Electronics Engineering, Beihang University, Beijing 100191 (China)

    2014-11-15

    In order to improve the atomic spin gyroscope's operational accuracy and compensate for the random error caused by the nonlinear and weakly stable character of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) and genetic programming (GP) + genetic algorithm (GA) techniques is established. The time series of random ASG drift is taken as the study object and is acquired by analyzing and preprocessing the measured data of the ASG. The linear section of the model is established with the AR technique. The nonlinear section is then built with the GP technique, and GA is used to optimize the coefficients of the mathematical expression produced by GP in order to obtain a more accurate model. The simulation results indicate that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique alone, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate for the ASG's random drift and improve the stability of the system.

  19. Selective oropharyngeal decontamination versus selective digestive decontamination in critically ill patients: a meta-analysis of randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhao D

    2015-07-01

    Full Text Available Di Zhao,1,* Jian Song,2,* Xuan Gao,3 Fei Gao,4 Yupeng Wu,2 Yingying Lu,5 Kai Hou1 1Department of Neurosurgery, The First Hospital of Hebei Medical University, 2Department of Neurosurgery, 3Department of Neurology, The Second Hospital of Hebei Medical University, 4Hebei Provincial Procurement Centers for Medical Drugs and Devices, 5Department of Neurosurgery, The Second Hospital of Hebei Medical University, Shijiazhuang, People’s Republic of China *These authors contributed equally to this work Background: Selective digestive decontamination (SDD) and selective oropharyngeal decontamination (SOD) are associated with reduced mortality and infection rates among patients in intensive care units (ICUs); however, whether SOD has a superior effect to SDD remains uncertain. Hence, we conducted a meta-analysis of randomized controlled trials (RCTs) to compare SOD with SDD in terms of clinical outcomes and antimicrobial resistance rates in patients who were critically ill. Methods: RCTs published in PubMed, Embase, and Web of Science were systematically reviewed to compare the effects of SOD and SDD in patients who were critically ill. Outcomes included day-28 mortality, length of ICU stay, length of hospital stay, duration of mechanical ventilation, ICU-acquired bacteremia, and prevalence of antibiotic-resistant Gram-negative bacteria. Results were expressed as risk ratios (RRs) with 95% confidence intervals (CIs), and weighted mean differences (WMDs) with 95% CIs. Pooled estimates were performed using a fixed-effects model or random-effects model, depending on the heterogeneity among studies. Results: A total of four RCTs involving 23,822 patients met the inclusion criteria and were included in this meta-analysis. Among patients whose admitting specialty was surgery, cardiothoracic surgery (57.3%) and neurosurgery (29.7%) were the two main types of surgery being performed. Pooled results showed that SOD had similar effects as SDD in day-28 mortality (RR =1

  20. Construction Tender Subcontract Selection using Case-based Reasoning

    Directory of Open Access Journals (Sweden)

    Due Luu

    2012-11-01

    Full Text Available Obtaining competitive quotations from suitably qualified subcontractors at tender time can significantly increase the chance of winning a construction project. Amidst an increasingly growing trend to subcontracting in Australia, selecting appropriate subcontractors for a construction project can be a daunting task requiring the analysis of complex and dynamic criteria such as past performance, suitable experience, track record of competitive pricing, financial stability and so on. Subcontractor selection is plagued with uncertainty and vagueness, and these conditions are difficult to represent in generalised sets of rules. Decisions pertaining to the selection of subcontractors at tender time are usually based on the intuition and past experience of construction estimators. Case-based reasoning (CBR) may be an appropriate method of addressing the challenges of selecting subcontractors because CBR is able to harness the experiential knowledge of practitioners. This paper reviews the practicality and suitability of a CBR approach for subcontractor tender selection through the development of a prototype CBR procurement advisory system. In this system, subcontractor selection cases are represented by a set of attributes elicited from experienced construction estimators. The results indicate that CBR can enhance the appropriateness of the selection of subcontractors for construction projects.

  1. Novel β-lactamase-random peptide fusion libraries for phage display selection of cancer cell-targeting agents suitable for enzyme prodrug therapy

    Science.gov (United States)

    Shukla, Girja S.; Krag, David N.

    2010-01-01

    Novel phage-displayed random linear dodecapeptide (X12) and cysteine-constrained decapeptide (CX10C) libraries constructed in fusion to the amino-terminus of P99 β-lactamase molecules were used for identifying β-lactamase-linked cancer cell-specific ligands. The size and quality of both libraries were comparable to the standards of other reported phage display systems. Using the single-round panning method based on phage DNA recovery, we identified several β-lactamase fusion peptides that specifically bind to live human breast cancer MDA-MB-361 cells. The β-lactamase fusion to the peptides helped in conducting the enzyme activity-based clone normalization and cell-binding screening in a very time- and cost-efficient manner. The methods were suitable for 96-well readout as well as microscopic imaging. The success of the biopanning was indicated by the presence of ~40% cancer cell-specific clones among recovered phages. One of the binding clones appeared multiple times. The cancer cell-binding fusion peptides also shared several significant motifs. This opens a new way of preparing and selecting phage display libraries. The cancer cell-specific β-lactamase-linked affinity reagents selected from these libraries can be used for any application that requires a reporter for tracking the ligand molecules. Furthermore, these affinity reagents have also a potential for their direct use in the targeted enzyme prodrug therapy of cancer. PMID:19751096

  2. Multiple Attribute Decision Making Based Relay Vehicle Selection for Electric Vehicle Communication

    Directory of Open Access Journals (Sweden)

    Zhao Qiang

    2015-01-01

    Full Text Available Large-scale integration of electric vehicles into the power grid, together with their random charging, can seriously impact normal grid operation. It is therefore necessary to control the charging behavior of electric vehicles, which makes information transmission for electric vehicles significant. Because vehicles are highly mobile, transferring information directly to the power grid might be infeasible. A relay vehicle (RV) can be used to support a multi-hop connection between the SV and the power grid. This paper proposes a multiple attribute decision making (MADM)-based RV selection algorithm, which considers multiple attributes, including data transfer rate, delay, and route duration. It takes the characteristics of electric vehicle communication into account and can safeguard the communication services for electric vehicle charging and discharging. Numerical results demonstrate that, compared to a previous algorithm, the proposed algorithm offers better performance in terms of throughput and transmission delay.
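
    A hedged sketch of a weighted-sum MADM score for relay selection: normalize benefit attributes to [0, 1], invert cost attributes, and pick the candidate with the best aggregate. The attributes follow the record (data transfer rate, delay, route duration), but the weights, values, and normalization scheme are illustrative assumptions.

        import numpy as np

        # rows: candidate relays; cols: data rate (Mb/s), delay (ms), route duration (s)
        A = np.array([[6.0, 40.0, 120.0],
                      [4.0, 25.0, 200.0],
                      [8.0, 60.0,  90.0]])
        benefit = np.array([True, False, True])   # delay is a cost attribute
        weights = np.array([0.4, 0.3, 0.3])       # illustrative attribute weights

        norm = A / A.max(axis=0)                  # benefit attributes: larger is better
        norm[:, ~benefit] = A[:, ~benefit].min(axis=0) / A[:, ~benefit]  # invert costs
        scores = norm @ weights                   # weighted-sum aggregate
        print("selected relay:", int(scores.argmax()), scores)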

  3. Cellular Automata-Based Parallel Random Number Generators Using FPGAs

    Directory of Open Access Journals (Sweden)

    David H. K. Hoe

    2012-01-01

    Full Text Available Cellular computing represents a new paradigm for implementing high-speed massively parallel machines. Cellular automata (CA), which consist of an array of locally connected processing elements, are a basic form of a cellular-based architecture. The use of field programmable gate arrays (FPGAs) for implementing CA accelerators has shown promising results. This paper investigates the design of CA-based pseudo-random number generators (PRNGs) using an FPGA platform. To improve the quality of the random numbers that are generated, the basic CA structure is enhanced in two ways. First, the addition of a superrule to each CA cell is considered. The resulting self-programmable CA (SPCA) uses the superrule to determine when to make a dynamic rule change in each CA cell. The superrule takes its inputs from neighboring cells and can be considered itself a second CA working in parallel with the main CA. When implemented on an FPGA, the use of lookup tables in each logic cell removes any restrictions on how the superrules should be defined. Second, a hybrid configuration is formed by combining a CA with a linear feedback shift register (LFSR). This is advantageous for FPGA designs due to the compactness of the LFSR implementations. A standard software package for statistically evaluating the quality of random number sequences, known as Diehard, is used to validate the results. Both the SPCA and the hybrid CA/LFSR were found to pass all the Diehard tests.
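
    For flavor, a minimal elementary-CA bit generator (rule 30 with periodic boundaries, emitting the center cell) is sketched below; the paper's self-programmable CA and CA/LFSR hybrid add machinery well beyond this.

        def rule30_bits(n, width=64, seed_cell=32):
            """Emit n bits from the center cell of a rule-30 elementary CA."""
            state = [0] * width
            state[seed_cell] = 1                    # single-seed initial condition
            out = []
            for _ in range(n):
                out.append(state[width // 2])       # tap the center cell
                state = [state[(i - 1) % width] ^ (state[i] | state[(i + 1) % width])
                         for i in range(width)]     # rule 30: left XOR (center OR right)
            return out

        print(rule30_bits(16))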

  4. Coordinate based random effect size meta-analysis of neuroimaging studies.

    Science.gov (United States)

    Tench, C R; Tanasescu, Radu; Constantinescu, C S; Auer, D P; Cottam, W J

    2017-06-01

    Low power in neuroimaging studies can make them difficult to interpret, and Coordinate based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both coordinates and reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by a random effects meta-analyses of reported effects performed cluster-wise using standard statistical methods and taking account of censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters that are expected under the null. Such control is necessary to avoid propagating and even amplifying the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely. Copyright © 2017 Elsevier Inc. All rights reserved.
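
    A sketch of the cluster-wise random effects step under standard assumptions: a DerSimonian-Laird pooled estimate from per-study effects and variances. ClusterZ's handling of censored summary results and its FCDR control are not reproduced; the numbers are toy values.

        import numpy as np

        def dersimonian_laird(effects, variances):
            """Random effects pooling: estimate between-study variance, then re-weight."""
            w = 1.0 / variances
            mu_fixed = np.sum(w * effects) / np.sum(w)
            Q = np.sum(w * (effects - mu_fixed) ** 2)       # heterogeneity statistic
            C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
            tau2 = max(0.0, (Q - (len(effects) - 1)) / C)   # between-study variance
            w_star = 1.0 / (variances + tau2)
            mu = np.sum(w_star * effects) / np.sum(w_star)
            se = np.sqrt(1.0 / np.sum(w_star))
            return mu, se, tau2

        effects = np.array([0.30, 0.45, 0.12, 0.50])        # toy standardized effects
        variances = np.array([0.04, 0.02, 0.05, 0.03])
        print(dersimonian_laird(effects, variances))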

  5. Mindfulness-based stress reduction for residents: A randomized controlled trial

    NARCIS (Netherlands)

    Verweij, H.; Ravesteijn, H.J. van; Hooff, M.L.M. van; Lagro-Janssen, A.L.M.; Speckens, A.E.M.

    2018-01-01

    Background: Burnout is highly prevalent in residents. No randomized controlled trials have been conducted measuring the effects of Mindfulness-Based Stress Reduction (MBSR) on burnout in residents. Objective: To determine the effectiveness of MBSR in reducing burnout in residents. Design: A

  6. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

    Vol. 19, No. 4225 (2006), pp. 569-577 ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Institutional research plan: CEZ:AV0Z10750506 Keywords: feature selection Subject RIV: BD - Theory of Information Impact factor: 0.402, year: 2005 http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf

  7. Random and non-random mating populations: Evolutionary dynamics in meiotic drive.

    Science.gov (United States)

    Sarkar, Bijan

    2016-01-01

    Game theoretic tools are utilized to analyze a one-locus continuous selection model of sex-specific meiotic drive by considering nonequivalence of the viabilities of reciprocal heterozygotes, as might be noticed at an imprinted locus. The model draws attention to the role of viability selections of different types in examining the stable nature of polymorphic equilibrium. A bridge between population genetics and evolutionary game theory has been built by applying the concept of the Fundamental Theorem of Natural Selection. In addition to pointing out the influences of male and female segregation ratios on selection, the configuration structure reveals some notable results, e.g., Hardy-Weinberg frequencies hold in replicator dynamics, faster evolution occurs at maximized variance fitness, mixed Evolutionarily Stable Strategies (ESS) exist in asymmetric games, and evolution tends to follow not only a 1:1 sex ratio but also a 1:1 allele ratio at a particular gene locus. Through construction of replicator dynamics in the group selection framework, our selection model redefines the basis of game theory to incorporate non-random mating, where a mating parameter associated with population structure depends on the social structure. The model also exposes the fact that the number of polymorphic equilibria will depend on the algebraic expression of population structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic algorithm based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic algorithm based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.
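
    A small sketch of the per-dimension information gain that drives the dynamic mutation probabilities: for a binary term-occurrence feature, IG is the class entropy minus the conditional entropy given the feature. The toy labels and feature values below are illustrative.

        import numpy as np

        def entropy(labels):
            _, counts = np.unique(labels, return_counts=True)
            p = counts / counts.sum()
            return -np.sum(p * np.log2(p))

        def information_gain(feature_present, labels):
            """IG(class; feature) for a binary feature over document labels."""
            H = entropy(labels)
            for v in (0, 1):
                mask = feature_present == v
                if mask.any():
                    H -= mask.mean() * entropy(labels[mask])
            return H

        y = np.array([0, 0, 1, 1, 1, 0])
        x = np.array([1, 1, 0, 0, 0, 1])    # term occurs in documents 0, 1, 5
        print(information_gain(x, y))       # perfectly informative: 1.0 bit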

  9. Sensitivity analysis for missing dichotomous outcome data in multi-visit randomized clinical trial with randomization-based covariance adjustment.

    Science.gov (United States)

    Li, Siying; Koch, Gary G; Preisser, John S; Lam, Diana; Sanchez-Kam, Matilde

    2017-01-01

    Dichotomous endpoints in clinical trials have only two possible outcomes, either directly or via categorization of an ordinal or continuous observation. It is common to have missing data for one or more visits during a multi-visit study. This paper presents a closed form method for sensitivity analysis of a randomized multi-visit clinical trial that possibly has missing not at random (MNAR) dichotomous data. Counts of missing data are redistributed to the favorable and unfavorable outcomes mathematically to address possibly informative missing data. Adjusted proportion estimates and their closed form covariance matrix estimates are provided. Treatment comparisons over time are addressed with Mantel-Haenszel adjustment for a stratification factor and/or randomization-based adjustment for baseline covariables. The application of such sensitivity analyses is illustrated with an example. An appendix outlines an extension of the methodology to ordinal endpoints.
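
    The redistribution step can be sketched in a few lines: missing counts are shifted to the favorable or unfavorable outcome under a chosen sensitivity parameter, and the adjusted proportions recomputed. The counts and the redistribution fraction below are toy values, not the paper's closed form covariance machinery.

        def adjusted_proportion(n_favorable, n_unfavorable, n_missing, frac_favorable):
            """Treat a fraction of missing outcomes as favorable, the rest as unfavorable."""
            fav = n_favorable + frac_favorable * n_missing
            total = n_favorable + n_unfavorable + n_missing
            return fav / total

        # sensitivity sweep: from all-missing-unfavorable to all-missing-favorable
        for frac in (0.0, 0.5, 1.0):
            print(frac, adjusted_proportion(60, 30, 10, frac))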

  10. An Improved Ensemble of Random Vector Functional Link Networks Based on Particle Swarm Optimization with Double Optimization Strategy.

    Science.gov (United States)

    Ling, Qing-Hua; Song, Yu-Qing; Han, Fei; Yang, Dan; Huang, De-Shuang

    2016-01-01

    For ensemble learning, how to select and combine the candidate classifiers are two key issues which influence the performance of the ensemble system dramatically. Random vector functional link networks (RVFL) without direct input-to-output links are suitable base classifiers for ensemble systems because of their fast learning speed, simple structure and good generalization performance. In this paper, to obtain a more compact ensemble system with improved convergence performance, an improved ensemble of RVFL based on attractive and repulsive particle swarm optimization (ARPSO) with a double optimization strategy is proposed. In the proposed method, ARPSO is applied to select and combine the candidate RVFL. When using ARPSO to select the optimal base RVFL, ARPSO considers both the convergence accuracy on the validation data and the diversity of the candidate ensemble system to build the RVFL ensembles. In the process of combining RVFL, the ensemble weights corresponding to the base RVFL are initialized by the minimum-norm least-squares method and then further optimized by ARPSO. Finally, a few redundant RVFL are pruned, and thus a more compact ensemble of RVFL is obtained. Moreover, theoretical analysis and justification of how to prune the base classifiers for classification problems are presented, and a simple and practically feasible strategy for pruning redundant base classifiers for both classification and regression problems is proposed. Since the double optimization is performed on the basis of the single optimization, the ensemble of RVFL built by the proposed method outperforms that built by some single optimization methods. Experimental results on function approximation and classification problems verify that the proposed method improves convergence accuracy as well as reducing the complexity of the ensemble system.
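
    A compact sketch of one base classifier under the record's description, an RVFL network without direct input-to-output links: fixed random hidden weights and minimum-norm least-squares output weights (here via the pseudoinverse). The data, sizes, and activation are illustrative; the ARPSO ensemble optimization is not reproduced.

        import numpy as np

        rng = np.random.default_rng(7)

        def rvfl_fit(X, y_onehot, hidden=50):
            W = rng.normal(size=(X.shape[1], hidden))   # fixed random input weights
            b = rng.normal(size=hidden)
            H = np.tanh(X @ W + b)                      # random enhancement nodes
            beta = np.linalg.pinv(H) @ y_onehot         # minimum-norm least squares
            return W, b, beta

        def rvfl_predict(X, W, b, beta):
            return (np.tanh(X @ W + b) @ beta).argmax(axis=1)

        X = rng.normal(size=(200, 4))
        y = (X[:, 0] + X[:, 1] > 0).astype(int)
        W, b, beta = rvfl_fit(X, np.eye(2)[y])
        print((rvfl_predict(X, W, b, beta) == y).mean())  # training accuracy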

  11. 47 CFR 1.1604 - Post-selection hearings.

    Science.gov (United States)

    2010-10-01

    47 CFR 1.1604, Telecommunication, Federal Communications Commission, General Practice and Procedure, Random Selection Procedures for Mass Media Services, General Procedures, § 1.1604 Post-selection hearings. (a) Following the random...

  12. Order-based representation in random networks of cortical neurons.

    Directory of Open Access Journals (Sweden)

    Goded Shahaf

    2008-11-01

    Full Text Available The wide range of time scales involved in neural excitability and synaptic transmission might lead to ongoing change in the temporal structure of responses to recurring stimulus presentations on a trial-to-trial basis. This is probably the most severe biophysical constraint on putative time-based primitives of stimulus representation in neuronal networks. Here we show that in spontaneously developing large-scale random networks of cortical neurons in vitro the order in which neurons are recruited following each stimulus is a naturally emerging representation primitive that is invariant to significant temporal changes in spike times. With a relatively small number of randomly sampled neurons, the information about stimulus position is fully retrievable from the recruitment order. The effective connectivity that makes order-based representation invariant to time warping is characterized by the existence of stations through which activity is required to pass in order to propagate further into the network. This study uncovers a simple invariant in a noisy biological network in vitro; its applicability under in vivo constraints remains to be seen.

  13. Random genetic drift, natural selection, and noise in human cranial evolution.

    Science.gov (United States)

    Roseman, Charles C

    2016-08-01

    This study assesses the extent to which relationships among groups complicate comparative studies of adaptation in recent human cranial variation and the extent to which departures from neutral additive models of evolution hinder the reconstruction of population relationships among groups using cranial morphology. Using a maximum likelihood evolutionary model fitting approach and a mixed population genomic and cranial data set, I evaluate the relative fits of several widely used models of human cranial evolution. Moreover, I compare the goodness of fit of models of cranial evolution constrained by genomic variation to test hypotheses about population specific departures from neutrality. Models from population genomics are much better fits to cranial variation than are traditional models from comparative human biology. There is not enough evolutionary information in the cranium to reconstruct much of recent human evolution but the influence of population history on cranial variation is strong enough to cause comparative studies of adaptation serious difficulties. Deviations from a model of random genetic drift along a tree-like population history show the importance of environmental effects, gene flow, and/or natural selection on human cranial variation. Moreover, there is a strong signal of the effect of natural selection or an environmental factor on a group of humans from Siberia. The evolution of the human cranium is complex and no one evolutionary process has prevailed at the expense of all others. A holistic unification of phenome, genome, and environmental context, gives us a strong point of purchase on these problems, which is unavailable to any one traditional approach alone. Am J Phys Anthropol 160:582-592, 2016. © 2016 Wiley Periodicals, Inc.

  14. SNRFCB: sub-network based random forest classifier for predicting chemotherapy benefit on survival for cancer treatment.

    Science.gov (United States)

    Shi, Mingguang; He, Jianmin

    2016-04-01

    Adjuvant chemotherapy (CTX) should be individualized to provide potential survival benefit and avoid potential harm to cancer patients. Our goal was to establish a computational approach for making personalized estimates of the survival benefit from adjuvant CTX. We developed the Sub-Network based Random Forest classifier for predicting Chemotherapy Benefit (SNRFCB) based on gene expression datasets of lung cancer. The SNRFCB approach was then validated in independent test cohorts for identifying chemotherapy responder cohorts and chemotherapy non-responder cohorts. SNRFCB involved the pre-selection of gene sub-network signatures based on the mutations and on protein-protein interaction data as well as the application of the random forest algorithm to gene expression datasets. Adjuvant CTX was significantly associated with the prolonged overall survival of lung cancer patients in the chemotherapy responder group (P = 0.008), but it was not beneficial to patients in the chemotherapy non-responder group (P = 0.657). Adjuvant CTX was significantly associated with the prolonged overall survival of lung cancer squamous cell carcinoma (SQCC) subtype patients in the chemotherapy responder cohorts (P = 0.024), but it was not beneficial to patients in the chemotherapy non-responder cohorts (P = 0.383). SNRFCB improved prediction performance as compared to the machine learning method, support vector machine (SVM). To test the general applicability of the predictive model, we further applied the SNRFCB approach to human breast cancer datasets and also observed superior performance. SNRFCB could provide recurrent probability for individual patients and identify which patients may benefit from adjuvant CTX in clinical trials.

  15. A Randomized Controlled Trial of COMPASS Web-Based and Face-to-Face Teacher Coaching in Autism

    Science.gov (United States)

    Ruble, Lisa A.; McGrew, John H.; Toland, Michael D.; Dalrymple, Nancy J.; Jung, Lee Ann

    2013-01-01

    Objective Most children with autism rely on schools as their primary source of intervention, yet research has suggested that teachers rarely use evidence-based practices. To address the need for improved educational outcomes, a previously tested consultation intervention called the Collaborative Model for Promoting Competence and Success (COMPASS; Ruble, Dalrymple, & McGrew, 2010; Ruble, Dalrymple, & McGrew, 2012) was evaluated in a 2nd randomized controlled trial, with the addition of a web-based group. Method Forty-nine teacher–child dyads were randomized into 1 of 3 groups: (1) a placebo control (PBO) group, (2) COMPASS followed by face-to-face (FF) coaching sessions, and (3) COMPASS followed by web-based (WEB) coaching sessions. Three individualized goals (social, communication, and independence skills) were selected for intervention for each child. The primary outcome of independent ratings of child goal attainment and several process measures (e.g., consultant and teacher fidelity) were evaluated. Results Using an intent-to-treat approach, findings replicated earlier results with a very large effect size (d = 1.41) for the FF group and a large effect size (d = 1.12) for the WEB group relative to the PBO group. There were no differences in overall change across goal domains between the FF and WEB groups, suggesting the efficacy of videoconferencing technology. Conclusions COMPASS is effective and results in improved educational outcomes for young children with autism. Videoconferencing technology, as a scalable tool, has promise for facilitating access to autism specialists and bridging the research-to-practice gap. PMID:23438314

  16. Impact of Menu Sequencing on Internet-Based Educational Module Selection

    Science.gov (United States)

    Bensley, Robert; Brusk, John J.; Rivas, Jason; Anderson, Judith V.

    2006-01-01

    Patterns of Internet-based menu item selection can occur for a number of reasons, many of which may not be based on interest in topic. It then becomes important to ensure menu order is devised in a way that ensures the greatest accuracy in matching user need with selection. This study examined the impact of menu rotation on the selection of…

  17. Feature-selective attention in healthy old age: a selective decline in selective attention?

    Science.gov (United States)

    Quigley, Cliodhna; Müller, Matthias M

    2014-02-12

    Deficient selection against irrelevant information has been proposed to underlie age-related cognitive decline. We recently reported evidence for maintained early sensory selection when older and younger adults used spatial selective attention to perform a challenging task. Here we explored age-related differences when spatial selection is not possible and feature-selective attention must be deployed. We additionally compared the integrity of feedforward processing by exploiting the well established phenomenon of suppression of visual cortical responses attributable to interstimulus competition. Electroencephalogram was measured while older and younger human adults responded to brief occurrences of coherent motion in an attended stimulus composed of randomly moving, orientation-defined, flickering bars. Attention was directed to horizontal or vertical bars by a pretrial cue, after which two orthogonally oriented, overlapping stimuli or a single stimulus were presented. Horizontal and vertical bars flickered at different frequencies and thereby elicited separable steady-state visual-evoked potentials, which were used to examine the effect of feature-based selection and the competitive influence of a second stimulus on ongoing visual processing. Age differences were found in feature-selective attentional modulation of visual responses: older adults did not show consistent modulation of magnitude or phase. In contrast, the suppressive effect of a second stimulus was robust and comparable in magnitude across age groups, suggesting that bottom-up processing of the current stimuli is essentially unchanged in healthy old age. Thus, it seems that visual processing per se is unchanged, but top-down attentional control is compromised in older adults when space cannot be used to guide selection.

  18. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used, or stepwise procedures are employed that iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Stream Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in
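
    The backward-elimination idea evaluated here can be sketched as follows; the data set and stopping point are synthetic, not the StreamCat predictors:

      # Illustrative backward elimination over random-forest importances.
      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier

      # synthetic stand-in for the stream-condition data (500 sites, 50 predictors)
      X, y = make_classification(n_samples=500, n_features=50, n_informative=8,
                                 random_state=1)
      keep = list(range(X.shape[1]))
      while len(keep) > 5:
          rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=1)
          rf.fit(X[:, keep], y)
          print(len(keep), "predictors, OOB accuracy:", round(rf.oob_score_, 3))
          # drop the least important remaining predictor
          keep.pop(int(np.argmin(rf.feature_importances_)))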

  19. A novel image encryption algorithm based on synchronized random bit generated in cascade-coupled chaotic semiconductor ring lasers

    Science.gov (United States)

    Li, Jiafu; Xiang, Shuiying; Wang, Haoning; Gong, Junkai; Wen, Aijun

    2018-03-01

    In this paper, a novel image encryption algorithm based on synchronization of physical random bits generated in a cascade-coupled semiconductor ring laser (CCSRL) system is proposed, and the security analysis is performed. In both the transmitter and receiver parts, the CCSRL system is a master-slave configuration consisting of a master semiconductor ring laser (M-SRL) with cross-feedback and a solitary SRL (S-SRL). The proposed image encryption algorithm includes image preprocessing based on conventional chaotic maps, pixel confusion based on a control matrix extracted from the physical random bits, and pixel diffusion based on a random bit stream extracted from the physical random bits. Firstly, the preprocessing method is used to eliminate the correlation between adjacent pixels. Secondly, physical random bits with verified randomness are generated based on chaos in the CCSRL system and are used to simultaneously generate the control matrix and the random bit stream. Finally, the control matrix and random bit stream are used in the encryption algorithm to change the positions and the values of pixels, respectively. Simulation results and security analysis demonstrate that the proposed algorithm is effective and able to resist various typical attacks, and thus is an excellent candidate for secure image communication applications.
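
    A minimal confusion/diffusion sketch of this kind of scheme; a seeded NumPy generator stands in for the verified physical random bits from the CCSRL system:

      import numpy as np

      def encrypt(img, key):
          rng = np.random.default_rng(key)
          flat = img.ravel()
          perm = rng.permutation(flat.size)                    # confusion: shuffle positions
          stream = rng.integers(0, 256, flat.size, dtype=np.uint8)
          return flat[perm] ^ stream                           # diffusion: mask values

      def decrypt(cipher, key):
          rng = np.random.default_rng(key)                     # regenerate the same key stream
          perm = rng.permutation(cipher.size)
          stream = rng.integers(0, 256, cipher.size, dtype=np.uint8)
          plain = np.empty_like(cipher)
          plain[perm] = cipher ^ stream                        # undo diffusion, then confusion
          return plain

      img = np.arange(16, dtype=np.uint8)
      assert (decrypt(encrypt(img, 42), 42) == img).all()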

  20. Community-based peer-led diabetes self-management: a randomized trial.

    Science.gov (United States)

    Lorig, Kate; Ritter, Philip L; Villa, Frank J; Armas, Jean

    2009-01-01

    The purpose of this study is to determine the effectiveness of a community-based diabetes self-management program comparing treatment participants to a randomized usual-care control group at 6 months. A total of 345 adults with type 2 diabetes but no criteria for high A1C were randomized to a usual-care control group or 6-week community-based, peer-led diabetes self-management program (DSMP). Randomized participants were compared at 6 months. The DSMP intervention participants were followed for an additional 6 months (12 months total). A1C and body mass index were measured at baseline, 6 months, and 12 months. All other data were collected by self-administered questionnaires. At 6 months, DSMP participants did not demonstrate improvements in A1C as compared with controls. Baseline A1C was much lower than in similar trials. Participants did have significant improvements in depression, symptoms of hypoglycemia, communication with physicians, healthy eating, and reading food labels (P < .01). They also had significant improvements in patient activation and self-efficacy. At 12 months, DSMP intervention participants continued to demonstrate improvements in depression, communication with physicians, healthy eating, patient activation, and self-efficacy (P < .01). There were no significant changes in utilization measures. These findings suggest that people with diabetes without elevated A1C can benefit from a community-based, peer-led diabetes program. Given the large number of people with diabetes and lack of low-cost diabetes education, the DSMP deserves consideration for implementation.

  1. When to base clinical policies on observational versus randomized trial data.

    Science.gov (United States)

    Hornberger, J; Wrone, E

    1997-10-15

    Physicians must decide when the evidence is sufficient to adopt a new clinical policy. Analysis of large clinical and administrative databases is becoming an important source of evidence for changing clinical policies. Because such analysis cannot control for the effects of all potential confounding variables, physicians risk drawing the wrong conclusion about the cause-and-effect relation between a change in clinical policy and outcomes. Randomized studies offer protection against drawing a conclusion that would lead to adoption of an inferior policy. However, a randomized study may be difficult to justify because of the extra costs of collecting data for a randomized study and concerns that a study will not directly benefit the patients enrolled in it. This article reviews the advantages and disadvantages of basing clinical policy on analysis of large databases compared with conducting a randomized study. A technique is described and illustrated for assessing the potential costs and benefits of conducting such a study. This type of analysis formed the basis for a physician-managed health care organization's decision to sponsor a randomized study among patients with end-stage renal disease as part of a quality-improvement initiative.

  2. A Heckman Selection-t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection arises often in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables. That is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data, such as income or expenditure data, often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This then allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performances of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.

  3. Cross-covariance functions for multivariate random fields based on latent dimensions

    KAUST Repository

    Apanasovich, T. V.; Genton, M. G.

    2010-01-01

    The problem of constructing valid parametric cross-covariance functions is challenging. We propose a simple methodology, based on latent dimensions and existing covariance models for univariate random fields, to develop flexible, interpretable

  4. A Data Management System Integrating Web-based Training and Randomized Trials: Requirements, Experiences and Recommendations.

    Science.gov (United States)

    Muroff, Jordana; Amodeo, Maryann; Larson, Mary Jo; Carey, Margaret; Loftin, Ralph D

    2011-01-01

    This article describes a data management system (DMS) developed to support a large-scale randomized study of an innovative web-course that was designed to improve substance abuse counselors' knowledge and skills in applying a substance abuse treatment method (i.e., cognitive behavioral therapy; CBT). The randomized trial compared the performance of web-course-trained participants (intervention group) and printed-manual-trained participants (comparison group) to determine the effectiveness of the web-course in teaching CBT skills. A single DMS was needed to support all aspects of the study: web-course delivery and management, as well as randomized trial management. The authors briefly reviewed several other systems that were described as built either to handle randomized trials or to deliver and evaluate web-based training. However it was clear that these systems fell short of meeting our needs for simultaneous, coordinated management of the web-course and the randomized trial. New England Research Institute's (NERI) proprietary Advanced Data Entry and Protocol Tracking (ADEPT) system was coupled with the web-programmed course and customized for our purposes. This article highlights the requirements for a DMS that operates at the intersection of web-based course management systems and randomized clinical trial systems, and the extent to which the coupled, customized ADEPT satisfied those requirements. Recommendations are included for institutions and individuals considering conducting randomized trials and web-based training programs, and seeking a DMS that can meet similar requirements.

  5. Vast Portfolio Selection with Gross-exposure Constraints.

    Science.gov (United States)

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that with the gross-exposure constraint the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvement are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000.
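
    A toy version of the constrained minimum-variance problem, assuming a given covariance estimate; the gross-exposure bound c = 1.6 and the SLSQP solver are illustrative choices:

      # Minimum-variance portfolio under a gross-exposure constraint ||w||_1 <= c
      # (c = 1 forbids short sales; larger c allows limited short positions).
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n = 10
      A = rng.normal(size=(n, n))
      sigma = A @ A.T / n + 0.1 * np.eye(n)          # toy covariance estimate

      cons = [{"type": "eq", "fun": lambda w: w.sum() - 1.0},            # fully invested
              {"type": "ineq", "fun": lambda w: 1.6 - np.abs(w).sum()}]  # gross exposure
      res = minimize(lambda w: w @ sigma @ w, np.ones(n) / n, constraints=cons)
      print("weights:", res.x.round(3), "variance:", round(res.fun, 4))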

  6. An automatic optimum number of well-distributed ground control lines selection procedure based on genetic algorithm

    Science.gov (United States)

    Yavari, Somayeh; Valadan Zoej, Mohammad Javad; Salehi, Bahram

    2018-05-01

    The procedure of selecting an optimum number and best distribution of ground control information is important in order to reach accurate and robust registration results. This paper proposes a new general procedure based on a Genetic Algorithm (GA) which is applicable to all kinds of features (point, line, and areal features). However, linear features, due to their unique characteristics, are of interest in this investigation. This method is called the Optimum number of Well-Distributed ground control Information Selection (OWDIS) procedure. Using this method, a population of binary chromosomes is randomly initialized. Ones indicate the presence of a pair of conjugate lines as a GCL and zeros specify their absence. The chromosome length is considered equal to the number of all conjugate lines. For each chromosome, the unknown parameters of a proper mathematical model can be calculated using the selected GCLs (the ones in each chromosome). Then, a limited number of Check Points (CPs) are used to evaluate the Root Mean Square Error (RMSE) of each chromosome as its fitness value. The procedure continues until a stopping criterion is reached. The number and position of ones in the best chromosome indicate the selected GCLs among all conjugate lines. To evaluate the proposed method, GeoEye and Ikonos images over different areas of Iran are used. Comparing the results obtained by the proposed method in a traditional RFM with conventional methods that use all conjugate lines as GCLs shows a fivefold accuracy improvement (pixel-level accuracy) as well as the strength of the proposed method. To prevent an over-parameterization error in a traditional RFM due to the selection of a high number of improper correlated terms, an optimized line-based RFM is also proposed. The results show the superiority of the combination of the proposed OWDIS method with an optimized line-based RFM in terms of increasing the accuracy to better than 0.7 pixel, reliability, and reducing systematic
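
    The chromosome/fitness loop can be sketched as follows; the fitness function is a placeholder (a real OWDIS run would fit the RFM on the selected GCLs and score RMSE on check points), so all names and numbers are illustrative:

      import numpy as np

      rng = np.random.default_rng(0)
      n_lines, pop_size, n_gen = 30, 40, 60

      def fitness(chrom):
          # placeholder RMSE: stands in for fitting the registration model
          # on the selected GCLs and evaluating it on check points
          idx = np.flatnonzero(chrom)
          if idx.size < 4:
              return 1e9
          return abs(idx.size - 10) * 0.1 + (0.5 - np.std(idx) / n_lines) ** 2

      pop = rng.integers(0, 2, (pop_size, n_lines))
      for _ in range(n_gen):
          pop = pop[np.argsort([fitness(c) for c in pop])]   # rank by fitness
          parents = pop[: pop_size // 2]
          cuts = rng.integers(1, n_lines, pop_size // 2)     # one-point crossover
          kids = np.array([np.r_[parents[i % len(parents)][:k],
                                 parents[(i + 1) % len(parents)][k:]]
                           for i, k in enumerate(cuts)])
          kids[rng.random(kids.shape) < 0.02] ^= 1           # bit-flip mutation
          pop = np.vstack([parents, kids])
      print("selected GCLs:", np.flatnonzero(pop[0]))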

  7. Random Valued Impulse Noise Removal Using Region Based Detection Approach

    Directory of Open Access Journals (Sweden)

    S. Banerjee

    2017-12-01

    Removal of random-valued noisy pixels is extremely challenging when the noise density is above 50%. The existing filters are generally not capable of eliminating such noise when the density is above 70%. In this paper a region-wise, density-based detection algorithm for random-valued impulse noise is proposed. On the basis of intensity values, the pixels of a particular window are sorted and then stored into four regions. The higher-density region is considered for stepwise detection of noisy pixels. As a result of this detection scheme a maximum of 75% of noisy pixels can be detected. For this purpose the paper also proposes a dedicated noise removal algorithm. Experiments show that the proposed algorithm not only performs exceptionally well in visual qualitative judgment of standard images but also outperforms existing algorithms in terms of MSE, PSNR and SSIM, even up to the 70% noise density level.
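
    A simplified rank-order impulse detector in the same spirit (this is not the paper's four-region scheme, just a generic sketch):

      import numpy as np

      def denoise(img, k=2):
          # flag a pixel as impulse noise if it sits among the k smallest or
          # k largest values of its 3x3 window, then replace it by the median
          out = img.copy()
          for i in range(1, img.shape[0] - 1):
              for j in range(1, img.shape[1] - 1):
                  win = np.sort(img[i-1:i+2, j-1:j+2].ravel())
                  if img[i, j] <= win[k - 1] or img[i, j] >= win[-k]:
                      out[i, j] = win[4]               # median of the 9 values
          return out

      rng = np.random.default_rng(0)
      img = rng.integers(80, 120, (32, 32)).astype(np.uint8)
      noisy = img.copy()
      mask = rng.random(img.shape) < 0.3               # 30% random-valued impulses
      noisy[mask] = rng.integers(0, 256, mask.sum())
      print("restored MSE:", ((denoise(noisy).astype(float) - img) ** 2).mean())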

  8. High-speed true random number generation based on paired memristors for security electronics

    Science.gov (United States)

    Zhang, Teng; Yin, Minghui; Xu, Changmin; Lu, Xiayan; Sun, Xinhao; Yang, Yuchao; Huang, Ru

    2017-11-01

    A true random number generator (TRNG) is a critical component in hardware security that is increasingly important in the era of mobile computing and the Internet of Things. Here we demonstrate a TRNG that uses the intrinsic variation of memristors as a natural source of entropy that is otherwise undesirable in most applications. The random bits were produced by cyclically switching a pair of tantalum-oxide-based memristors and comparing their resistance values in the off state, taking advantage of the more pronounced resistance variation compared with that in the on state. Using an alternating read scheme in the designed TRNG circuit, the unbiasedness of the random numbers was significantly improved, and the bitstream passed standard randomness tests. The Pt/TaOx/Ta memristors fabricated in this work have fast programming/erasing speeds of ~30 ns, suggesting a high random number throughput. The approach proposed here thus holds great promise for physically implemented random number generation.
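
    A behavioural sketch of the paired-memristor comparison and of why an alternating read removes residual bias; the device statistics are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(7)
      n = 10000
      r_a = rng.normal(1.00e6, 0.05e6, n)   # off-state resistance, device A (ohms)
      r_b = rng.normal(1.02e6, 0.05e6, n)   # device B, slightly mismatched

      raw = (r_a > r_b).astype(int)         # biased if the devices are mismatched
      alt = raw.copy()
      alt[1::2] ^= 1                        # alternating read: invert every other bit
      print("raw bias:", raw.mean(), "after alternating read:", alt.mean())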

  9. Preferential selection based on strategy persistence and memory promotes cooperation in evolutionary prisoner's dilemma games

    Science.gov (United States)

    Liu, Yuanming; Huang, Changwei; Dai, Qionglin

    2018-06-01

    Strategy imitation plays a crucial role in evolutionary dynamics when we investigate the spontaneous emergence of cooperation under the framework of evolutionary game theory. Generally, when an individual updates his strategy, he needs to choose a role model whom he will learn from. In previous studies, individuals choose role models randomly from their neighbors. In recent works, researchers have considered that individuals choose role models according to neighbors' attractiveness, characterized by the present network topology or historical payoffs. Here, we associate an individual's attractiveness with his strategy persistence, which characterizes how frequently he changes his strategy. We introduce into the evolutionary games a preferential parameter α, describing the nonlinear correlation between the selection probability and the strategy persistence, and a memory length M for individuals. We investigate the effects of α and M on cooperation. Our results show that cooperation could be promoted when α > 0 and at the same time M > 1, which corresponds to the situation where individuals are inclined to select neighbors with relatively higher persistence levels during the evolution. Moreover, we find that the cooperation level could reach its maximum at an optimal memory length when α > 0. Our work sheds light on how to promote cooperation through preferential selection based on strategy persistence and a limited memory length.
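
    The selection rule itself is compact; the sketch below assumes selection probability proportional to persistence raised to the power α, with illustrative names and values:

      import numpy as np

      rng = np.random.default_rng(3)

      def pick_role_model(neighbours, persistence, alpha=1.5):
          # probability of choosing neighbour j is proportional to
          # persistence[j] ** alpha (alpha > 0 favours persistent players)
          w = persistence[neighbours] ** alpha
          return rng.choice(neighbours, p=w / w.sum())

      neighbours = np.array([2, 5, 9])
      persistence = np.zeros(10)
      persistence[[2, 5, 9]] = [1, 4, 2]    # steps since each last switched strategy
      print(pick_role_model(neighbours, persistence))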

  10. Synthesis and characterization of sugar-based methacrylates and their random copolymers by ATRP

    Directory of Open Access Journals (Sweden)

    G. Acik

    2017-10-01

    Various sugar-based methacrylate monomers have been prepared and randomly copolymerized with methyl methacrylate (MMA) using classical atom transfer radical polymerization (ATRP). First, four different sugar-based methacrylates were synthesized by a two-step method: (i) etherification of protected monosaccharides with epichlorohydrin and (ii) ring-opening of the obtained epoxides with methacrylic acid (MAA) in the presence of triethylamine. Next, these monomers were copolymerized with MMA via ATRP at 90 °C to obtain the corresponding random copolymers. The molecular weights of the copolymers were determined by both GPC (gel permeation chromatography) and 1H-NMR (nuclear magnetic resonance spectroscopy) analyses and found to be 10600~16800 and 12200~18500 g/mol, respectively. Moreover, the copolymer compositions were also determined by 1H-NMR analysis using characteristic signals of the monomers and found to be about 94.1~97.8%, in good agreement with the feeding ratios. In addition, the glass transition temperatures of the copolymers were found to be 101.2~102.9 °C, depending on the type and composition of the sugar-based methacrylate monomer. Overall, a series of well-defined random copolymers comprising different sugar-based methacrylates and methyl methacrylate were successfully synthesized by the classical ATRP method.

  11. Effects of Video Game Training on Measures of Selective Attention and Working Memory in Older Adults: Results from a Randomized Controlled Trial

    Science.gov (United States)

    Ballesteros, Soledad; Mayas, Julia; Prieto, Antonio; Ruiz-Marquez, Eloísa; Toril, Pilar; Reales, José M.

    2017-01-01

    Video game training with older adults potentially enhances aspects of cognition that decline with aging and could therefore offer a promising training approach. Although previously published studies suggest that training can produce transfer, many of them have certain shortcomings. This randomized controlled trial (RCT; Clinicaltrials.gov ID: NCT02796508) tried to overcome some of these limitations by incorporating an active control group and the assessment of motivation and expectations. Seventy-five older volunteers were randomly assigned to the experimental group trained for 16 sessions with non-action video games from Lumosity, a commercial platform (http://www.lumosity.com/), or to an active control group trained for the same number of sessions with simulation strategy games. The final sample included 55 older adults (30 in the experimental group and 25 in the active control group). Participants were tested individually before and after training to assess working memory (WM) and selective attention and also reported their perceived improvement, motivation and engagement. The results showed improved performance across the training sessions. The main results were: (1) the experimental group did not show greater improvements in measures of selective attention and working memory than the active control group (the opposite occurred in the oddball task); (2) a marginal training effect was observed for the N-back task, but not for the Stroop task, while both groups improved in the Corsi Blocks task. Based on these results, one can conclude that training with non-action games provides modest benefits for untrained tasks. The effect is not specific to that kind of training, as a similar effect was observed for strategy video games. Groups did not differ in motivation, engagement or expectations. PMID:29163136

  12. Effects of Video Game Training on Measures of Selective Attention and Working Memory in Older Adults: Results from a Randomized Controlled Trial

    Directory of Open Access Journals (Sweden)

    Soledad Ballesteros

    2017-11-01

    Video game training with older adults potentially enhances aspects of cognition that decline with aging and could therefore offer a promising training approach. Although previously published studies suggest that training can produce transfer, many of them have certain shortcomings. This randomized controlled trial (RCT; Clinicaltrials.gov ID: NCT02796508) tried to overcome some of these limitations by incorporating an active control group and the assessment of motivation and expectations. Seventy-five older volunteers were randomly assigned to the experimental group trained for 16 sessions with non-action video games from Lumosity, a commercial platform (http://www.lumosity.com/), or to an active control group trained for the same number of sessions with simulation strategy games. The final sample included 55 older adults (30 in the experimental group and 25 in the active control group). Participants were tested individually before and after training to assess working memory (WM) and selective attention and also reported their perceived improvement, motivation and engagement. The results showed improved performance across the training sessions. The main results were: (1) the experimental group did not show greater improvements in measures of selective attention and working memory than the active control group (the opposite occurred in the oddball task); (2) a marginal training effect was observed for the N-back task, but not for the Stroop task, while both groups improved in the Corsi Blocks task. Based on these results, one can conclude that training with non-action games provides modest benefits for untrained tasks. The effect is not specific to that kind of training, as a similar effect was observed for strategy video games. Groups did not differ in motivation, engagement or expectations.

  14. Norm based Threshold Selection for Fault Detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Henrik

    1998-01-01

    The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm based point of view. An analysis of norm based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem and the uncertain FDI problem are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. The performance index also allows us to optimize the structure of the fault detection filter directly.

  15. A Community-Based Randomized Trial of Hepatitis B Screening Among High-Risk Vietnamese Americans.

    Science.gov (United States)

    Ma, Grace X; Fang, Carolyn Y; Seals, Brenda; Feng, Ziding; Tan, Yin; Siu, Philip; Yeh, Ming Chin; Golub, Sarit A; Nguyen, Minhhuyen T; Tran, Tam; Wang, Minqi

    2017-03-01

    To evaluate the effectiveness of a community-based liver cancer prevention program on hepatitis B virus (HBV) screening among low-income, underserved Vietnamese Americans at high risk. We conducted a cluster randomized trial involving 36 Vietnamese community-based organizations and 2337 participants in Pennsylvania, New Jersey, and New York City between 2009 and 2014. We randomly assigned 18 community-based organizations to a community-based multilevel HBV screening intervention (n = 1131). We randomly assigned the remaining 18 community-based organizations to a general cancer education program (n = 1206), which included information about HBV-related liver cancer prevention. We assessed HBV screening rates at 6-month follow-up. Intervention participants were significantly more likely to have undergone HBV screening (88.1%) than were control group participants (4.6%). In a Cochran-Mantel-Haenszel analysis, the intervention effect on screening outcomes remained statistically significant after adjustment for demographic and health care access variables, including income, having health insurance, having a regular health provider, and English proficiency. A community-based, culturally appropriate, multilevel HBV screening intervention effectively increases screening rates in a high-risk, hard-to-reach Vietnamese American population.

  16. Selection of Shared and Neoantigen-Reactive T Cells for Adoptive Cell Therapy Based on CD137 Separation

    Directory of Open Access Journals (Sweden)

    Sivan Seliktar-Ofir

    2017-10-01

    Adoptive cell therapy (ACT) of autologous tumor-infiltrating lymphocytes (TIL) is an effective immunotherapy for patients with solid tumors, yielding objective response rates of around 40% in refractory patients with metastatic melanoma. Most clinical centers utilize bulk, randomly isolated TIL from the tumor tissue for ex vivo expansion and infusion. Only a minor fraction of the administered T cells recognizes tumor antigens, such as shared and mutation-derived neoantigens, and consequently eliminates the tumor. Thus, there are many ongoing efforts to identify and select tumor-specific TIL for therapy; however, those approaches are very costly and require months, which is unreasonable for most metastatic patients. CD137 (4-1BB) has been identified as a co-stimulatory marker, which is induced upon the specific interaction of T cells with their target cell. Therefore, CD137 can be a useful biomarker and an important tool for the selection of tumor-reactive T cells. Here, we developed and validated a simple and time-efficient method for the selection of CD137-expressing T cells for therapy based on magnetic bead separation. CD137 selection was performed with clinical-grade compliant reagents, and TIL were expanded in a large-scale manner to meet the cell numbers required for the patient setting in a GMP facility. For the first time, the methodology was designed to comply with both clinical needs and limitations, and its feasibility was assessed. CD137-selected TIL demonstrated significantly increased antitumor reactivity and were enriched for T cells recognizing neoantigens as well as shared tumor antigens. CD137-based selection enabled the enrichment of tumor-reactive T cells without the necessity of knowing the epitope specificity or the antigen type. The direct implementation of the CD137 separation method to the cell production of TIL may provide a simple way to improve the clinical efficiency of TIL ACT.

  17. A data based random number generator for a multivariate distribution (using stochastic interpolation)

    Science.gov (United States)

    Thompson, J. R.; Taylor, M. S.

    1982-01-01

    Let X be a K-dimensional random variable serving as input for a system with output Y (not necessarily of dimension K). Given X, an outcome Y or a distribution of outcomes G(Y|X) may be obtained either explicitly or implicitly. The situation is considered in which there is a real-world data set {X_j}, j = 1, …, n, and a means of simulating an outcome Y. A method for empirical random number generation based on the sample of observations of the random variable X, without estimating the underlying density, is discussed.
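
    A sketch in the spirit of stochastic interpolation: resample a data point's neighbourhood directly, with no density estimate. The weighting scheme below (uniform weights around 1/m, renormalized) is a simplification of the published method:

      import numpy as np

      def generate(data, m=5, rng=np.random.default_rng(0)):
          # pick a random observation and its m nearest neighbours
          i = rng.integers(len(data))
          d = np.linalg.norm(data - data[i], axis=1)
          nn = data[np.argsort(d)[:m]]
          # random weights fluctuating around 1/m (simplified scheme)
          u = rng.uniform(0.0, 2.0 / m, m)
          u /= u.sum()
          return u @ nn                     # a point "interpolated" in the neighbourhood

      data = np.random.default_rng(1).normal(size=(200, 2))
      print(generate(data))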

  18. Fiber-Type Random Laser Based on a Cylindrical Waveguide with a Disordered Cladding Layer.

    Science.gov (United States)

    Zhang, Wei Li; Zheng, Meng Ya; Ma, Rui; Gong, Chao Yang; Yang, Zhao Ji; Peng, Gang Ding; Rao, Yun Jiang

    2016-05-25

    This letter reports a fiber-type random laser (RL) made from a capillary coated with a disordered layer at its internal surface and filled with a gain (laser dye) solution in the core region. This fiber-type optical structure, with the disordered layer providing randomly scattered light into the gain region and the cylindrical waveguide providing confinement of light, assists the formation of random lasing modes and enables a flexible and efficient way of making random lasers. We found that the RL is sensitive to the laser dye concentration in the core region and that there exists a fine exponential relationship between the lasing intensity and the particle concentration in the gain solution. The proposed structure could be a fine platform for realizing random lasing and random-lasing-based sensing.

  19. Randomness in the network inhibits cooperation based on the bounded rational collective altruistic decision

    International Nuclear Information System (INIS)

    Ohdaira, Tetsushi

    2014-01-01

    Previous studies discussing cooperation employ the best decision, in which every player knows all information regarding the payoff matrix and selects the strategy of the highest payoff. Therefore, they do not discuss cooperation based on the altruistic decision with limited information (the bounded rational altruistic decision). In addition, they do not cover the case where every player can submit his/her strategy several times in a match of the game. This paper is based on Ohdaira's reconsideration of the bounded rational altruistic decision, and also employs the framework of the prisoner's dilemma game (PDG) with sequential strategy. The distinction between this study and Ohdaira's reconsideration is that the former covers the model of multiple groups, but the latter deals with the model of only two groups. Ohdaira's reconsideration shows that the bounded rational altruistic decision facilitates much more cooperation in the PDG with sequential strategy than Ohdaira and Terano's bounded rational second-best decision does. However, the detail of cooperation of multiple groups based on the bounded rational altruistic decision has not been resolved yet. This study, therefore, shows how randomness in the network composed of multiple groups affects the increase of the average frequency of mutual cooperation (cooperation between groups) based on the bounded rational altruistic decision of multiple groups. We also discuss the results of the model in comparison with related studies which employ the best decision.

  1. ANALYTIC WORD RECOGNITION WITHOUT SEGMENTATION BASED ON MARKOV RANDOM FIELDS

    NARCIS (Netherlands)

    Coisy, C.; Belaid, A.

    2004-01-01

    In this paper, a method for analytic handwritten word recognition based on causal Markov random fields is described. The word models are HMMs where each state corresponds to a letter; each letter is modelled by an NSHP-HMM (Markov field). Global models are built dynamically and used for recognition.

  2. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    -aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also

  3. School-Based Smoking Prevention Programs for Middle School Students in Nowshahr- Iran: a Randomized Controlled Trial

    Directory of Open Access Journals (Sweden)

    Maryam Khazaee-Pool

    2016-11-01

    Background: Smoking among youths is a main public health concern, and detecting predictors of smoking is essential for designing preventive programs. Any interventional program should be planned with an emphasis on behavioral change models and based on effective interventional programs. This study therefore aimed to investigate school-based smoking prevention programs for middle school students in Nowshahr, Iran. Materials and Methods: A quasi-experimental study was performed with 280 male students aged 15-17 years selected by multistage sampling. For this purpose, 6 middle schools were randomly recruited from male students in Nowshahr, Iran. Then, 140 students were randomly chosen for each of the experimental and control groups. After a pretest, an educational program based on the Health Belief Model was delivered to the experimental group. A post-test was applied four months after the intervention in both the experimental and control groups. Results: Based on the results, the prevalence of smoking was higher at age 14 in both the experimental (38.7%) and control (30%) groups. About 35% of participants in the experimental group and 33.6% in the control group had a smoker father; additionally, 10% in the experimental group and 7.8% in the control group had a smoker mother. The most commonly reported reason for smoking, in 57.9% of the experimental group and 52.63% of the control group, was reducing anxiety. Results also showed that after the educational program there was a significant difference between students in the experimental and control groups in the mean scores of perceived susceptibility, perceived severity, perceived benefits, perceived barriers, perceived self-efficacy, and preventive behaviors of smoking (P < 0.05). Conclusion: After the educational program, the prevalence of cigarette smoking decreased in the intervention group. So, with a better understanding of factors affecting this complex behavior (cigarette smoking), it can be a valuable phase to

  4. Annotation-Based Whole Genomic Prediction and Selection

    DEFF Research Database (Denmark)

    Kadarmideen, Haja; Do, Duy Ngoc; Janss, Luc

    Genomic selection is widely used in both animal and plant species; however, it is performed with no input from the known genomic or biological role of genetic variants and is therefore a black-box approach in the genomic era. This study investigated the role of different genomic regions and detected QTLs in their contribution to estimated genomic variances and in prediction of genomic breeding values, by applying SNP annotation approaches to feed efficiency. The Ensembl Variant Predictor (EVP) and the Pig QTL database were used as the source of genomic annotation for the 60K chip. Genomic prediction was performed using the Bayes… classes. Predictive accuracy was 0.531, 0.532, 0.302, and 0.344 for DFI, RFI, ADG and BF, respectively. The contribution per SNP to total genomic variance was similar among annotated classes across different traits. Predictive performance of SNP classes did not significantly differ from randomized SNP

  5. Enhancing Security of Double Random Phase Encoding Based on Random S-Box

    Science.gov (United States)

    Girija, R.; Singh, Hukum

    2018-06-01

    In this paper, we propose a novel asymmetric cryptosystem for double random phase encoding (DRPE) using a random S-Box. Utilising an S-Box separately is not reliable, and DRPE does not provide non-linearity, so our system unites the effectiveness of the S-Box with an asymmetric DRPE system (through the Fourier transform). The uniqueness of the proposed cryptosystem lies in employing a highly sensitive dynamic S-Box in our DRPE system. The randomness and scalability achieved by the applied technique are an additional feature of the proposed solution. The strength of the random S-Box is investigated in terms of performance parameters such as non-linearity, the strict avalanche criterion, the bit independence criterion, linear and differential approximation probabilities, etc. S-Boxes convey non-linearity to cryptosystems, which is a significant parameter and essential for DRPE. The strength of the proposed cryptosystem has been analysed using various parameters such as MSE, PSNR, correlation coefficient analysis, noise analysis, SVD analysis, etc. Experimental results are presented in detail to show that the proposed cryptosystem is highly secure.
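
    For reference, classical DRPE itself is only a few lines of NumPy; the random S-Box stage proposed in the paper is not reproduced here:

      # Double random phase encoding: phase mask in the input plane, second
      # mask in the Fourier plane; decryption conjugates both masks.
      import numpy as np

      rng = np.random.default_rng(0)
      img = rng.random((64, 64))                            # stand-in plaintext image

      phase1 = np.exp(2j * np.pi * rng.random(img.shape))   # input-plane mask
      phase2 = np.exp(2j * np.pi * rng.random(img.shape))   # Fourier-plane mask

      cipher = np.fft.ifft2(np.fft.fft2(img * phase1) * phase2)
      recovered = np.fft.ifft2(np.fft.fft2(cipher) * np.conj(phase2)) * np.conj(phase1)
      print(np.allclose(recovered.real, img))               # True: image restored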

  6. Exploring pseudo- and chaotic random Monte Carlo simulations

    Science.gov (United States)

    Blais, J. A. Rod; Zhang, Zhan

    2011-07-01

    Computer simulations are an increasingly important area of geoscience research and development. At the core of stochastic or Monte Carlo simulations are the random number sequences that are assumed to be distributed with specific characteristics. Computer-generated random numbers, uniformly distributed on (0, 1), can be very different depending on the selection of pseudo-random number (PRN) or chaotic random number (CRN) generators. In the evaluation of some definite integrals, the resulting error variances can even be of different orders of magnitude. Furthermore, practical techniques for variance reduction such as importance sampling and stratified sampling can be applied in most Monte Carlo simulations and significantly improve the results. A comparative analysis of these strategies has been carried out for computational applications in planar and spatial contexts. Based on these experiments, and on some practical examples of geodetic direct and inverse problems, conclusions and recommendations concerning their performance and general applicability are included.
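
    A minimal comparison of the two generator types on a known integral (here the integral of x^2 over (0,1), which equals 1/3); the arcsine transform is needed because raw logistic-map iterates are not uniformly distributed:

      import numpy as np

      n = 100000
      prn = np.random.default_rng(0).random(n)      # pseudo-random numbers

      crn = np.empty(n)                             # chaotic random numbers
      x = 0.613
      for i in range(n):
          x = 4.0 * x * (1.0 - x)                   # logistic map at r = 4
          crn[i] = x
      u = np.arcsin(np.sqrt(crn)) * 2 / np.pi       # map to ~uniform(0, 1)

      f = lambda t: t * t
      print("PRN estimate:", f(prn).mean(), "CRN estimate:", f(u).mean())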

  7. Random a-adic groups and random net fractals

    Energy Technology Data Exchange (ETDEWEB)

    Li Yin [Department of Mathematics, Nanjing University, Nanjing 210093 (China)], E-mail: Lyjerry7788@hotmail.com; Su Weiyi [Department of Mathematics, Nanjing University, Nanjing 210093 (China)], E-mail: suqiu@nju.edu.cn

    2008-08-15

    Based on random a-adic groups, this paper investigates the relationship between the existence conditions of a positive flow in a random network and the estimation of the Hausdorff dimension of a proper random net fractal. Subsequently we describe some particular random fractals to which our results can be applied. Finally, the Mauldin and Williams theorem is shown to be a very important example of a random Cantor set with applications in physics, as in E-infinity theory.

  8. Research on machine learning framework based on random forest algorithm

    Science.gov (United States)

    Ren, Qiong; Cheng, Hui; Han, Hai

    2017-03-01

    With the continuous development of machine learning, industry and academia have released many machine learning frameworks based on distributed computing platforms, and these have been widely used. However, existing machine learning frameworks are limited by the machine learning algorithms themselves, for example by sensitivity to parameter choices, interference from noise, and a high barrier to use. This paper introduces the research background of machine learning frameworks and, building on the random forest algorithm commonly used for classification in machine learning, sets out the research objectives and content, proposes an improved adaptive random forest algorithm (referred to as ARF), and, on the basis of ARF, designs and implements a machine learning framework.

  9. The Hidden Flow Structure and Metric Space of Network Embedding Algorithms Based on Random Walks.

    Science.gov (United States)

    Gu, Weiwei; Gong, Li; Lou, Xiaodan; Zhang, Jiang

    2017-10-13

    Network embedding, which encodes all vertices in a network as a set of numerical vectors in accordance with its local and global structures, has drawn widespread attention. Network embedding not only learns significant features of a network, such as clustering and link prediction, but also learns the latent vector representation of the nodes, which provides theoretical support for a variety of applications, such as visualization, link prediction, node classification, and recommendation. As the latest progress of the research, several algorithms based on random walks have been devised. Although those algorithms have drawn much attention for their high scores in learning efficiency and accuracy, there is still a lack of theoretical explanation, and the transparency of those algorithms has been doubted. Here, we propose an approach based on the open-flow network model to reveal the underlying flow structure and its hidden metric space of different random walk strategies on networks. We show that the essence of embedding based on random walks is the latent metric structure defined on the open-flow network. This not only deepens our understanding of random-walk-based embedding algorithms but also helps in finding new potential applications in network embedding.
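
    The corpus-generation step common to these random-walk embedding algorithms can be sketched as follows (the downstream skip-gram training that turns walks into vectors is omitted):

      # DeepWalk-style truncated random walks over a toy adjacency-list graph.
      import random

      graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}

      def random_walks(graph, num_walks=4, walk_len=6, seed=0):
          rnd = random.Random(seed)
          walks = []
          for _ in range(num_walks):
              for start in graph:
                  walk = [start]
                  while len(walk) < walk_len:
                      walk.append(rnd.choice(graph[walk[-1]]))  # uniform next hop
                  walks.append(walk)
          return walks

      for w in random_walks(graph)[:3]:
          print(w)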

  10. Fully digital jerk-based chaotic oscillators for high throughput pseudo-random number generators up to 8.77Gbits/s

    KAUST Repository

    Mansingka, Abhinav S.

    2014-06-18

    This paper introduces fully digital implementations of four different systems in the 3rd-order jerk-equation based chaotic family using the Euler approximation. The digitization approach enables controllable chaotic systems that reliably provide sinusoidal or chaotic output based on a selection input. New systems are introduced, derived using logical and arithmetic operations between two system implementations of different bus widths, with up to 100x higher maximum Lyapunov exponent than the original jerk-equation based chaotic systems. The resulting chaotic output is shown to pass the NIST SP 800-22 statistical test suite for pseudo-random number generators without post-processing by only eliminating the statistically defective bits. The systems are designed in Verilog HDL and experimentally verified on a Xilinx Virtex 4 FPGA for a maximum throughput of 15.59 Gbits/s for the native chaotic output and 8.77 Gbits/s for the resulting pseudo-random number generators.
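
    An illustrative Euler digitisation of one jerk-family flow (the system x''' = -0.6 x'' - x' + |x| - 1, often attributed to Linz and Sprott; not necessarily one of the four systems implemented on the FPGA). Step size and initial state are arbitrary choices:

      # Forward-Euler iteration of a 3rd-order jerk flow; the state is
      # (x, x', x'') and the jerk equation supplies x'''.
      h = 0.01
      x, dx, ddx = 0.5, 0.0, 0.0
      for i in range(100000):
          jerk = -0.6 * ddx - dx + abs(x) - 1.0
          x, dx, ddx = x + h * dx, dx + h * ddx, ddx + h * jerk
          if i % 20000 == 0:
              print(round(x, 4), round(dx, 4), round(ddx, 4))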

  11. [Electroencephalogram Feature Selection Based on Correlation Coefficient Analysis].

    Science.gov (United States)

    Zhou, Jinzhi; Tang, Xiaofang

    2015-08-01

    In order to improve the accuracy of classification with a small amount of motor imagery training data in the development of brain-computer interface (BCI) systems, we proposed an analysis method to automatically select characteristic parameters based on correlation coefficient analysis. Using the five sample data sets of dataset IVa from the 2005 BCI Competition, we utilized short-time Fourier transform (STFT) and correlation coefficient calculation to reduce the dimension of the primitive electroencephalogram, then introduced feature extraction based on common spatial patterns (CSP) and classified by linear discriminant analysis (LDA). Simulation results showed that the average classification accuracy could be improved by using the correlation coefficient feature selection method compared with not using this algorithm. Compared with the support vector machine (SVM) feature optimization algorithm, correlation coefficient analysis can select better parameters to improve the accuracy of classification.
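
    The correlation-based pre-selection step can be sketched as follows, with synthetic trials standing in for the EEG features (the STFT, CSP, and LDA stages are omitted):

      # Keep the features whose absolute correlation with the class label is
      # largest; the informative features here are planted for illustration.
      import numpy as np

      rng = np.random.default_rng(0)
      X = rng.normal(size=(80, 100))            # trials x features
      y = rng.integers(0, 2, 80)                # motor-imagery class labels
      X[:, :5] += y[:, None] * 1.5              # make the first 5 features informative

      r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
      selected = np.argsort(r)[::-1][:10]
      print("selected features:", sorted(selected.tolist()))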

  12. Antenna Selection for Full-Duplex MIMO Two-Way Communication Systems

    KAUST Repository

    Wilson-Nunn, Daniel; Chaaban, Anas; Sezgin, Aydin; Alouini, Mohamed-Slim

    2017-01-01

    Antenna selection for full-duplex communication between two nodes, each equipped with a predefined number of antennae and transmit/receive chains, is studied. Selection algorithms are proposed based on magnitude, orthogonality, and determinant criteria. The algorithms are compared to optimal selection obtained by exhaustive search as well as random selection, and are shown to yield performance fairly close to optimal at a much lower complexity. Performance comparison for a Rayleigh fading symmetric channel reveals that selecting a single transmit antenna is best at low signal-to-noise ratio (SNR), while selecting an equal number of transmit and receive antennae is best at high SNR.
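
    One plausible reading of the determinant criterion is a greedy log-det selection, sketched below; the channel, SNR, and subset size are illustrative, not the paper's setup:

      # Greedily add the transmit antenna (column of H) that most increases
      # log det(I + (SNR/k) * H_S H_S^H) for the selected set S.
      import numpy as np

      rng = np.random.default_rng(0)
      H = (rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8))) / np.sqrt(2)
      snr, k = 10.0, 3                          # select k of 8 transmit antennae

      chosen = []
      for _ in range(k):
          best, best_val = None, -np.inf
          for j in range(H.shape[1]):
              if j in chosen:
                  continue
              Hs = H[:, chosen + [j]]
              val = np.linalg.slogdet(np.eye(4) + snr / k * Hs @ Hs.conj().T)[1]
              if val > best_val:
                  best, best_val = j, val
          chosen.append(best)
      print("selected antennae:", chosen, "log-det metric:", round(best_val, 3))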

  14. Response to family selection and genetic parameters in Japanese quail selected for four week breast weight

    DEFF Research Database (Denmark)

    Khaldari, Majid; Yeganeh, Hassan Mehrabani; Pakdel, Abbas

    2011-01-01

    An experiment was conducted to investigate the effect of short-term selection for 4-week breast weight (4wk BRW) and to estimate genetic parameters of body weight and carcass traits. A selection (S) line and a control (C) line were randomly selected from a base population. Data were collected over … was 0.35±0.06. There was a significant difference between lines for BW and carcass weights, but not for carcass component percentages (P … carcass and leg weights were 0.46, 0.41 and 0.47, and 13.2, 16.2, 4.4%, respectively… The genetic correlations of BRW with BW, carcass, leg, and back weights were 0.85, 0.88 and 0.72, respectively. Selection for 4wk BRW improved feed conversion ratio (FCR) by about 0.19 units over the selection period. Inbreeding caused an insignificant decline in the mean of some traits. Results from…

  15. Nurse-Moderated Internet-Based Support for New Mothers: Non-Inferiority, Randomized Controlled Trial.

    Science.gov (United States)

    Sawyer, Michael G; Reece, Christy E; Bowering, Kerrie; Jeffs, Debra; Sawyer, Alyssa C P; Mittinty, Murthy; Lynch, John W

    2017-07-24

    Internet-based interventions moderated by community nurses have the potential to improve the support offered to new mothers, many of whom now make extensive use of the Internet to obtain information about infant care. However, evidence from population-based randomized controlled trials is lacking. The aim of this study was to test the non-inferiority of outcomes for mothers and infants who received a clinic-based postnatal health check plus nurse-moderated, Internet-based group support when infants were aged 1-7 months, as compared with outcomes for those who received standard care consisting of postnatal home-based support provided by a community nurse. The design of the study was a pragmatic, preference, non-inferiority randomized controlled trial. Participants were recruited from mothers contacted for their postnatal health check, which is offered to all mothers in South Australia. Mothers were either (1) assigned on the basis of their preference to the clinic+Internet or home-based support groups (n=328), or (2) randomly assigned to the clinic+Internet or home-based groups if they declared no strong preference (n=491). The overall response rate was 44.8% (819/1827). The primary outcome was parenting self-competence, as measured by the Parenting Stress Index (PSI) Competence subscale and the Karitane Parenting Confidence Scale scores. Secondary outcome measures included PSI Isolation, Interpersonal Support Evaluation List-Short Form, Maternal Support Scale, Ages and Stages Questionnaire-Social-Emotional and MacArthur Communicative Development Inventory (MCDI) scores. Assessments were completed offline via self-assessment questionnaires at enrolment (mean child age=4.1 weeks, SD 1.3) and again when infants were aged 9, 15, and 21 months. Generalized estimating equations adjusting for post-randomization baseline imbalances showed that differences in outcomes between mothers in the clinic+Internet and home-based support groups did not exceed the pre-specified margin of

  16. Utility and Cost-Effectiveness of Motivational Messaging to Increase Survey Response in Physicians: A Randomized Controlled Trial

    Science.gov (United States)

    Chan, Randolph C. H.; Mak, Winnie W. S.; Pang, Ingrid H. Y.; Wong, Samuel Y. S.; Tang, Wai Kwong; Lau, Joseph T. F.; Woo, Jean; Lee, Diana T. F.; Cheung, Fanny M.

    2018-01-01

    The present study examined whether, when, and how motivational messaging can boost the response rate of postal surveys for physicians, based on Higgins' regulatory focus theory and accounting for cost-effectiveness. A three-arm, blinded, randomized controlled design was used. A total of 3,270 doctors were randomly selected from the registration…

  17. Multispectral iris recognition based on group selection and game theory

    Science.gov (United States)

    Ahmad, Foysal; Roy, Kaushik

    2017-05-01

    A commercially available iris recognition system uses only a narrow band of the near infrared spectrum (700-900 nm) while iris images captured in the wide range of 405 nm to 1550 nm offer potential benefits to enhance recognition performance of an iris biometric system. The novelty of this research is that a group selection algorithm based on coalition game theory is explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution in different groups. Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition
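
    Shapley values for small coalition games can be computed exactly by enumerating permutations; the value function below is a toy stand-in for patch-group contributions:

      # Exact Shapley values: average each player's marginal contribution
      # over all orderings of the players (feasible only for small games).
      from itertools import permutations

      players = [0, 1, 2]
      value = {(): 0, (0,): 2, (1,): 3, (2,): 1,
               (0, 1): 7, (0, 2): 4, (1, 2): 5, (0, 1, 2): 10}

      def v(coalition):
          return value[tuple(sorted(coalition))]

      shapley = {p: 0.0 for p in players}
      perms = list(permutations(players))
      for order in perms:
          seen = []
          for p in order:
              shapley[p] += (v(seen + [p]) - v(seen)) / len(perms)
              seen.append(p)
      print(shapley)    # contributions sum to v(grand coalition) = 10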

  18. Mobile access to virtual randomization for investigator-initiated trials.

    Science.gov (United States)

    Deserno, Thomas M; Keszei, András P

    2017-08-01

    Background/aims Randomization is indispensable in clinical trials in order to provide unbiased treatment allocation and a valid statistical inference. Improper handling of allocation lists can be avoided using central systems, for example, human-based services. However, central systems are unaffordable for investigator-initiated trials and might be inaccessible from some places, where study subjects need allocations. We propose mobile access to virtual randomization, where the randomization lists are non-existent and the appropriate allocation is computed on demand. Methods The core of the system architecture is an electronic data capture system or a clinical trial management system, which is extended by an R interface connecting the R server using the Java R Interface. Mobile devices communicate via the representational state transfer web services. Furthermore, a simple web-based setup allows configuring the appropriate statistics by non-statisticians. Our comprehensive R script supports simple randomization, restricted randomization using a random allocation rule, block randomization, and stratified randomization for un-blinded, single-blinded, and double-blinded trials. For each trial, the electronic data capture system or the clinical trial management system stores the randomization parameters and the subject assignments. Results Apps are provided for iOS and Android and subjects are randomized using smartphones. After logging onto the system, the user selects the trial and the subject, and the allocation number and treatment arm are displayed instantaneously and stored in the core system. So far, 156 subjects have been allocated from mobile devices serving five investigator-initiated trials. Conclusion Transforming pre-printed allocation lists into virtual ones ensures the correct conduct of trials and guarantees a strictly sequential processing in all trial sites. Covering 88% of all randomization models that are used in recent trials, virtual randomization
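
    As an illustration of one scheme the script covers, permuted-block randomization can be sketched in a few lines. This is a minimal Python sketch under stated assumptions (two arms, fixed block size; function names are hypothetical), not the authors' R implementation:

        import random

        def block_randomization(n_subjects, arms=("A", "B"), block_size=4, seed=42):
            """Allocate subjects to arms in randomly permuted blocks, so that
            group sizes never differ by more than block_size / len(arms)."""
            assert block_size % len(arms) == 0, "block size must be a multiple of the number of arms"
            rng = random.Random(seed)
            allocations = []
            while len(allocations) < n_subjects:
                block = list(arms) * (block_size // len(arms))
                rng.shuffle(block)  # permute assignments within the block
                allocations.extend(block)
            return allocations[:n_subjects]

        print(block_randomization(10))

    Computing allocations on demand, as the paper proposes for virtual randomization, amounts to evaluating such a seeded function at the next sequence position rather than storing a pre-printed list.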

  19. Controlling Chronic Diseases Through Evidence-Based Decision Making: A Group-Randomized Trial.

    Science.gov (United States)

    Brownson, Ross C; Allen, Peg; Jacob, Rebekah R; deRuyter, Anna; Lakshman, Meenakshi; Reis, Rodrigo S; Yan, Yan

    2017-11-30

    Although practitioners in state health departments are ideally positioned to implement evidence-based interventions, few studies have examined how to build their capacity to do so. The objective of this study was to explore how to increase the use of evidence-based decision-making processes at both the individual and organization levels. We conducted a 2-arm, group-randomized trial with baseline data collection and follow-up at 18 to 24 months. Twelve state health departments were paired and randomly assigned to intervention or control condition. In the 6 intervention states, a multiday training on evidence-based decision making was conducted from March 2014 through March 2015, along with a set of supplemental capacity-building activities. Individual-level outcomes were the evidence-based decision-making skills of public health practitioners; organization-level outcomes were access to research evidence and participatory decision making. Mixed analysis-of-covariance models were used to evaluate the intervention effect, accounting for the cluster randomized trial design. Analysis was performed from March through May 2017. Participation 18 to 24 months after initial training was 73.5%. In mixed models adjusted for participant and state characteristics, the intervention group improved significantly in the overall skill gap (P = .01) and in 6 skill areas. Among the 4 organizational variables, only access to evidence and skilled staff showed an intervention effect (P = .04). Tailored and active strategies are needed to build capacity at the individual and organization levels for evidence-based decision making. Our study suggests several dissemination interventions for consideration by leaders seeking to improve public health practice.

  20. Evolving artificial metalloenzymes via random mutagenesis

    Science.gov (United States)

    Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.

    2018-03-01

    Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.

  1. Development of Base Transceiver Station Selection Algorithm for ...

    African Journals Online (AJOL)

    TEMS) equipment was carried out on the existing BTSs, and a linear algorithm optimization program based on the spectral link efficiency of each BTS was developed; the output of this site optimization gives the selected number of base station sites ...

  2. Transportability of an Evidence-Based Early Childhood Intervention in a Low-Income African Country: Results of a Cluster Randomized Controlled Study.

    Science.gov (United States)

    Huang, Keng-Yen; Nakigudde, Janet; Rhule, Dana; Gumikiriza-Onoria, Joy Louise; Abura, Gloria; Kolawole, Bukky; Ndyanabangi, Sheila; Kim, Sharon; Seidman, Edward; Ogedegbe, Gbenga; Brotman, Laurie Miller

    2017-11-01

    Children in Sub-Saharan Africa (SSA) are burdened by significant unmet mental health needs. Despite the successes of numerous school-based interventions for promoting child mental health, most evidence-based interventions (EBIs) are not available in SSA. This study investigated the implementation quality and effectiveness of one component of an EBI from a developed country (USA) in a SSA country (Uganda). The EBI component, Professional Development, was provided by trained Ugandan mental health professionals to Ugandan primary school teachers. It included large-group experiential training and small-group coaching to introduce and support a range of evidence-based practices (EBPs) to create nurturing and predictable classroom experiences. The study was guided by the Consolidated Framework for Implementation Research, the Teacher Training Implementation Model, and the RE-AIM evaluation framework. Effectiveness outcomes were studied using a cluster randomized design, in which 10 schools were randomized to intervention and wait-list control conditions. A total of 79 early childhood teachers participated. Teacher knowledge and the use of EBPs were assessed at baseline and immediately post-intervention (4-5 months later). A sample of 154 parents was randomly selected to report on child behavior at baseline and post-intervention. Linear mixed effect modeling was applied to examine effectiveness outcomes. Findings support the feasibility of training Ugandan mental health professionals to provide Professional Development for Ugandan teachers. Professional Development was delivered with high levels of fidelity and resulted in improved teacher EBP knowledge and the use of EBPs in the classroom, and child social competence.

  3. A Simple K-Map Based Variable Selection Scheme in the Direct ...

    African Journals Online (AJOL)

    A multiplexer with (n-1) data select inputs can realise directly a function of n variables. In this paper, a simple K-map-based variable selection scheme is proposed such that an n-variable logic function can be synthesised using a multiplexer with (n-q) data input variables and q data select variables. The procedure is based on ...
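
    The construction behind the first sentence can be made concrete: drive the select lines with the first n-1 variables and wire each data input to the residue of the function in the remaining variable, which is always one of 0, 1, x_n, or x_n'. A minimal Python sketch of this baseline scheme (illustrative only; the paper's K-map-based choice of q select variables is not reproduced here):

        from itertools import product

        def mux_synthesis(truth_table, n):
            """Realize an n-variable function with a 2^(n-1):1 multiplexer:
            the first n-1 variables drive the select lines; each data input
            is the residue of f in the last variable."""
            residues = []
            for sel in product((0, 1), repeat=n - 1):
                f0 = truth_table[sel + (0,)]  # f with x_n = 0
                f1 = truth_table[sel + (1,)]  # f with x_n = 1
                residues.append({(0, 0): "0", (1, 1): "1",
                                 (0, 1): "xn", (1, 0): "xn'"}[(f0, f1)])
            return residues

        # example: 3-variable majority function, select lines (a, b)
        tt = {bits: int(sum(bits) >= 2) for bits in product((0, 1), repeat=3)}
        print(mux_synthesis(tt, 3))  # -> ['0', 'xn', 'xn', '1']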

  4. Gene selection and classification for cancer microarray data based on machine learning and similarity measures

    Directory of Open Access Journals (Sweden)

    Liu Qingzhong

    2011-12-01

    Full Text Available Abstract Background Microarray data have a high dimension of variables and a small sample size. In microarray data analyses, two important issues are how to choose genes that provide reliable and good prediction of disease status, and how to determine the final gene set that is best for classification. Associations among genetic markers mean one can exploit information redundancy to potentially reduce classification cost in terms of time and money. Results To deal with redundant information and improve classification, we propose a gene selection method, Recursive Feature Addition, which combines supervised learning and statistical similarity measures. To determine the final optimal gene set for prediction and classification, we propose an algorithm, Lagging Prediction Peephole Optimization. By using six benchmark microarray gene expression data sets, we compared Recursive Feature Addition with recently developed gene selection methods: Support Vector Machine Recursive Feature Elimination, Leave-One-Out Calculation Sequential Forward Selection and several others. Conclusions On average, with the use of popular learning machines including Nearest Mean Scaled Classifier, Support Vector Machine, Naive Bayes Classifier and Random Forest, Recursive Feature Addition outperformed other methods. Our studies also showed that Lagging Prediction Peephole Optimization is superior to a random strategy; Recursive Feature Addition with Lagging Prediction Peephole Optimization obtained better testing accuracies than the gene selection method varSelRF.
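
    A schematic of the forward-addition idea in Python, assuming a cross-validated classifier score and Pearson correlation as the similarity measure (a minimal sketch; the authors' Recursive Feature Addition and Lagging Prediction Peephole Optimization involve details not shown here):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import GaussianNB

        def recursive_feature_addition(X, y, n_features=5, corr_threshold=0.9):
            """Greedy forward selection: grow the gene set one feature at a
            time, preferring candidates that improve cross-validated accuracy
            and are not highly correlated with features already chosen."""
            selected, remaining = [], list(range(X.shape[1]))
            while remaining and len(selected) < n_features:
                best_feat, best_score = None, -np.inf
                for j in remaining:
                    if selected:
                        redundancy = max(abs(np.corrcoef(X[:, j], X[:, k])[0, 1])
                                         for k in selected)
                        if redundancy > corr_threshold:
                            continue  # skip redundant candidates
                    score = cross_val_score(GaussianNB(), X[:, selected + [j]], y, cv=3).mean()
                    if score > best_score:
                        best_feat, best_score = j, score
                if best_feat is None:
                    break
                selected.append(best_feat)
                remaining.remove(best_feat)
            return selected

        X, y = make_classification(n_samples=100, n_features=50, n_informative=5, random_state=0)
        print(recursive_feature_addition(X, y))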

  5. Vast Portfolio Selection with Gross-exposure Constraints*

    Science.gov (United States)

    Fan, Jianqing; Zhang, Jingjin; Yu, Ke

    2012-01-01

    We introduce large portfolio selection using gross-exposure constraints. We show that, with the gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices have performance similar to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvements are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
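
    The optimization in question is minimum-variance portfolio selection with a cap c on gross exposure: minimize w'Σw subject to Σ w_i = 1 and Σ |w_i| <= c, where c = 1 recovers the no-short-sale portfolio. A rough numerical sketch in Python (a naive general-purpose solver standing in for the paper's specialized algorithms; the gross limit of 1.6 is purely illustrative):

        import numpy as np
        from scipy.optimize import minimize

        def min_variance_portfolio(cov, gross_limit=1.6):
            """Minimum-variance weights under sum(w) = 1 and sum(|w|) <= c.
            The absolute value makes the constraint non-smooth, which SLSQP
            tolerates only approximately -- adequate for a toy example."""
            p = cov.shape[0]
            w0 = np.full(p, 1.0 / p)
            constraints = [
                {"type": "eq", "fun": lambda w: w.sum() - 1.0},
                {"type": "ineq", "fun": lambda w: gross_limit - np.abs(w).sum()},
            ]
            res = minimize(lambda w: w @ cov @ w, w0,
                           constraints=constraints, method="SLSQP")
            return res.x

        rng = np.random.default_rng(0)
        returns = rng.standard_normal((60, 5))       # toy return history
        w = min_variance_portfolio(np.cov(returns, rowvar=False))
        print(w.round(3), abs(w).sum().round(3))     # weights and gross exposure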

  6. A note on mate allocation for dominance handling in genomic selection

    Directory of Open Access Journals (Sweden)

    Toro Miguel A

    2010-08-01

    Full Text Available Abstract Estimation of non-additive genetic effects in animal breeding is important because it increases the accuracy of breeding value prediction and the value of mate allocation procedures. With the advent of genomic selection these ideas should be revisited. The objective of this study was to quantify the efficiency of including dominance effects and practising mating allocation under a whole-genome evaluation scenario. Four strategies of selection, carried out during five generations, were compared by simulation techniques. In the first scenario (MS), individuals were selected based on their own phenotypic information. In the second (GSA), they were selected based on the prediction generated by the Bayes A method of whole-genome evaluation under an additive model. In the third (GSD), the model was expanded to include dominance effects. These three scenarios used random mating to construct future generations, whereas in the fourth one (GSD + MA), matings were optimized by simulated annealing. The advantage of GSD over GSA ranges from 9 to 14% of the expected response and, in addition, using mate allocation (GSD + MA) provides an additional response ranging from 6% to 22%. However, mate selection can improve the expected genetic response over random mating only in the first generation of selection. Furthermore, the efficiency of genomic selection is eroded after a few generations of selection; thus, a continued collection of phenotypic data and re-evaluation will be required.

  7. Goal selection versus process control while learning to use a brain-computer interface

    Science.gov (United States)

    Royer, Audrey S.; Rose, Minn L.; He, Bin

    2011-06-01

    A brain-computer interface (BCI) can be used to accomplish a task without requiring motor output. Two major control strategies used by BCIs during task completion are process control and goal selection. In process control, the user exerts continuous control and independently executes the given task. In goal selection, the user communicates their goal to the BCI and then receives assistance executing the task. A previous study has shown that goal selection is more accurate and faster in use. An unanswered question is, which control strategy is easier to learn? This study directly compares goal selection and process control while learning to use a sensorimotor rhythm-based BCI. Twenty young healthy human subjects were randomly assigned either to a goal selection or a process control-based paradigm for eight sessions. At the end of the study, the best user from each paradigm completed two additional sessions using all paradigms randomly mixed. The results of this study were that goal selection required a shorter training period for increased speed, accuracy, and information transfer over process control. These results held for the best subjects as well as in the general subject population. The demonstrated characteristics of goal selection make it a promising option to increase the utility of BCIs intended for both disabled and able-bodied users.

  8. Influence of Maximum Inbreeding Avoidance under BLUP EBV Selection on Pinzgau Population Diversity

    Directory of Open Access Journals (Sweden)

    Radovan Kasarda

    2011-05-01

    Full Text Available The effect of mating strategy (random mating vs. maximum avoidance of inbreeding) under a BLUP EBV selection strategy was evaluated. The existing population structure was analyzed by Monte Carlo stochastic simulation with the aim of minimizing the increase of inbreeding. Averaged over 10 generations of development, maximum avoidance of inbreeding under BLUP selection resulted in an increase of inbreeding comparable to that under random mating. After 10 generations of the simulated mating strategy, the observed rates were ΔF = 6.51% (2 sires), 5.20% (3 sires), 3.22% (4 sires), and 2.94% (5 sires). As the number of selected sires increased, the increase of inbreeding declined. With 4 or 5 sires, the increase of inbreeding was comparable to that under random mating with phenotypic selection. To preserve genetic diversity and prevent population decline, it is important to minimize the increase of inbreeding in small populations. The classical approach was based on balancing the ratio of sires and dams in the mating program. By contrast, in most commercial populations a small number of sires is used with a high mating ratio.
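
    As a point of reference (a standard population-genetics expectation, not a result derived in this abstract), the rate of inbreeding under random mating with N_m sires and N_f dams is approximately

        \Delta F \approx \frac{1}{8 N_m} + \frac{1}{8 N_f}

    which makes explicit why increasing the number of sires lowers \Delta F so strongly when N_m is much smaller than N_f.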

  9. Random fiber laser based on artificially controlled backscattering fibers.

    Science.gov (United States)

    Wang, Xiaoliang; Chen, Daru; Li, Haitao; She, Lijuan; Wu, Qiong

    2018-01-10

    The random fiber laser (RFL), which is a milestone in laser physics and nonlinear optics, has attracted considerable attention recently. Most previously reported RFLs are based on distributed feedback of Rayleigh scattering amplified through the stimulated Raman-Brillouin scattering effect in single-mode fibers, which requires long-distance (tens of kilometers) single-mode fibers and a high threshold, up to the watt level, due to the extremely small Rayleigh scattering coefficient of the fiber. We proposed and demonstrated a half-open-cavity RFL based on a segment of an artificially controlled backscattering single-mode fiber with a length of 210 m, 310 m, or 390 m. A fiber Bragg grating with a central wavelength of 1530 nm and a segment of artificially controlled backscattering single-mode fiber fabricated by using a femtosecond laser form the half-open cavity. The proposed RFL achieves thresholds of 25 mW, 30 mW, and 30 mW for the three lengths, respectively. Random lasing at a wavelength of 1530 nm and an extinction ratio of 50 dB is achieved when a segment of 5 m erbium-doped fiber is pumped by a 980 nm laser diode in the RFL. A novel RFL with many short cavities has been achieved with low threshold.

  10. Multiple ECG Fiducial Points-Based Random Binary Sequence Generation for Securing Wireless Body Area Networks.

    Science.gov (United States)

    Zheng, Guanglou; Fang, Gengfa; Shankaran, Rajan; Orgun, Mehmet A; Zhou, Jie; Qiao, Li; Saleem, Kashif

    2017-05-01

    Generating random binary sequences (BSes) is a fundamental requirement in cryptography. A BS is a sequence of N bits, and each bit has a value of 0 or 1. For securing sensors within wireless body area networks (WBANs), electrocardiogram (ECG)-based BS generation methods have been widely investigated in which interpulse intervals (IPIs) from each heartbeat cycle are processed to produce BSes. Using these IPI-based methods to generate a 128-bit BS in real time normally takes around half a minute. In order to improve the time efficiency of such methods, this paper presents an ECG multiple fiducial-points based binary sequence generation (MFBSG) algorithm. The technique of discrete wavelet transforms is employed to detect arrival time of these fiducial points, such as P, Q, R, S, and T peaks. Time intervals between them, including RR, RQ, RS, RP, and RT intervals, are then calculated based on this arrival time, and are used as ECG features to generate random BSes with low latency. According to our analysis on real ECG data, these ECG feature values exhibit the property of randomness and, thus, can be utilized to generate random BSes. Compared with the schemes that solely rely on IPIs to generate BSes, this MFBSG algorithm uses five feature values from one heart beat cycle, and can be up to five times faster than the solely IPI-based methods. So, it achieves a design goal of low latency. According to our analysis, the complexity of the algorithm is comparable to that of fast Fourier transforms. These randomly generated ECG BSes can be used as security keys for encryption or authentication in a WBAN system.
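
    The interval-to-bits step common to such schemes can be sketched briefly. The sketch below assumes the fiducial-point arrival times have already been detected (the paper uses discrete wavelet transforms for that step) and simply keeps the least-significant bits of each interval, where most of the physiological variability resides; it is an illustration, not the MFBSG algorithm itself:

        import numpy as np

        def intervals_to_bits(peak_times_ms, bits_per_interval=4):
            """Turn fiducial-point intervals (e.g., RR, RQ, RS, RP, RT in ms)
            into a binary sequence by keeping the least-significant bits of
            each interval value."""
            intervals = np.diff(np.asarray(peak_times_ms)).astype(int)
            bits = []
            for iv in intervals:
                bits.extend((iv >> k) & 1 for k in range(bits_per_interval))
            return bits

        # hypothetical R-peak arrival times in milliseconds
        r_peaks = [812, 1645, 2431, 3267, 4050, 4889]
        print(intervals_to_bits(r_peaks))

    Using five fiducial points per beat instead of one multiplies the bits harvested per heartbeat, which is the source of the reported five-fold latency reduction.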

  11. An AUC-based permutation variable importance measure for random forests.

    Science.gov (United States)

    Janitza, Silke; Strobl, Carolin; Boulesteix, Anne-Laure

    2013-04-05

    The random forest (RF) method is a commonly used tool for classification with high-dimensional data as well as for ranking candidate predictors based on the so-called random forest variable importance measures (VIMs). However, the classification performance of RF is known to be suboptimal in the case of strongly unbalanced data, i.e., data where response class sizes differ considerably. Suggestions were made to obtain better classification performance based either on sampling procedures or on cost-sensitivity analyses. However, to our knowledge, the performance of the VIMs has not yet been examined in the case of unbalanced response classes. In this paper we explore the performance of the permutation VIM for unbalanced data settings and introduce an alternative permutation VIM based on the area under the curve (AUC) that is expected to be more robust towards class imbalance. We investigated the performance of the standard permutation VIM and of our novel AUC-based permutation VIM for different class imbalance levels using simulated data and real data. The results suggest that the new AUC-based permutation VIM outperforms the standard permutation VIM for unbalanced data settings, while both permutation VIMs have equal performance for balanced data settings. The standard permutation VIM loses its ability to discriminate between associated predictors and predictors not associated with the response for increasing class imbalance. It is outperformed by our new AUC-based permutation VIM for unbalanced data settings, while the performance of both VIMs is very similar in the case of balanced classes. The new AUC-based VIM is implemented in the R package party for the unbiased RF variant based on conditional inference trees. The codes implementing our study are available from the companion website: http://www.ibe.med.uni-muenchen.de/organisation/mitarbeiter/070_drittmittel/janitza/index.html.
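
    The idea of an AUC-based permutation VIM can be sketched as follows, using a held-out test set for brevity (the paper computes the measure from out-of-bag samples within each tree of the forest):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        def auc_permutation_importance(model, X_test, y_test, rng):
            """Importance of feature j = drop in AUC after permuting column j,
            which breaks the feature-response association."""
            base_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
            importances = []
            for j in range(X_test.shape[1]):
                Xp = X_test.copy()
                rng.shuffle(Xp[:, j])  # permute one feature in place
                auc = roc_auc_score(y_test, model.predict_proba(Xp)[:, 1])
                importances.append(base_auc - auc)
            return np.array(importances)

        # strongly unbalanced toy data (90% / 10% classes)
        X, y = make_classification(n_samples=400, n_features=8,
                                   weights=[0.9, 0.1], random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        print(auc_permutation_importance(rf, X_te, y_te, np.random.default_rng(0)).round(3))

    Because the AUC is insensitive to the class ratio, the resulting ranking degrades more slowly with imbalance than the error-rate-based standard VIM.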

  12. Ray tracing method for simulation of laser beam interaction with random packings of powders

    Science.gov (United States)

    Kovalev, O. B.; Kovaleva, I. O.; Belyaev, V. V.

    2018-03-01

    Selective laser sintering is a rapid-manufacturing technology in which a free-form solid object is created by selectively fusing successive layers of powder with a laser. This study is motivated by the currently insufficient understanding of the processes and phenomena of selective laser melting of powders, whose time scales differ by orders of magnitude. To construct random packings from mono- and polydispersed solid spheres, a generation algorithm based on the discrete element method is used. A numerical ray tracing method is proposed to simulate the interaction of laser radiation with a random bulk packing of spherical particles and to predict the optical properties of the granular layer, namely the extinction and absorption coefficients, depending on the optical properties of the powder material.
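
    The geometric core of such a tracer is a ray-sphere intersection test applied to every sphere in the packing. A minimal vectorized Python sketch (geometry only; the full method additionally tracks reflection and absorption at each hit):

        import numpy as np

        def ray_sphere_hits(origin, direction, centers, radius):
            """Distances t > 0 at which the ray origin + t*d first hits each
            sphere (NaN where a sphere is missed)."""
            d = direction / np.linalg.norm(direction)
            oc = centers - origin                    # origin -> center vectors
            t_ca = oc @ d                            # projection onto the ray
            d2 = (oc * oc).sum(axis=1) - t_ca ** 2   # squared ray-center distance
            inside = d2 <= radius ** 2
            t_hc = np.sqrt(np.where(inside, radius ** 2 - d2, np.nan))
            t_hit = t_ca - t_hc                      # nearer intersection point
            return np.where(inside & (t_hit > 0), t_hit, np.nan)

        # toy packing: 500 sphere centers in a 10x10x10 box
        centers = np.random.default_rng(1).uniform(0, 10, size=(500, 3))
        t = ray_sphere_hits(np.zeros(3), np.array([1.0, 1.0, 1.0]), centers, 0.4)
        print(np.nanmin(t))  # first surface the ray strikes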

  13. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2002-05-01

    Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log-normal frailty models for survival data (including models with time-dependent covariates), with associated fixed or random effects. In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for the prediction error variance, the accuracy of selection, and the expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general, the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model, or a generalised version of heritability) plays a central role in these formulas.
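
    The central ratio referred to above is the usual narrow-sense heritability. In the simplest Gaussian animal model, with additive genetic variance \sigma_A^2 and residual variance \sigma_E^2,

        h^2 = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_E^2}

    and the generalised versions discussed in the paper replace the denominator by the total variance of the Gaussian part of the hierarchical model.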

  14. A simple highly sensitive and selective aptamer-based colorimetric sensor for environmental toxins microcystin-LR in water samples.

    Science.gov (United States)

    Li, Xiuyan; Cheng, Ruojie; Shi, Huijie; Tang, Bo; Xiao, Hanshuang; Zhao, Guohua

    2016-03-05

    A simple and highly sensitive aptamer-based colorimetric sensor was developed for the selective detection of microcystin-LR (MC-LR). The aptamer (ABA) was employed as the recognition element, which binds MC-LR with high affinity, while gold nanoparticles (AuNPs) served as the sensing material, whose plasmon resonance absorption peak red-shifts upon binding of the targets at a high concentration of sodium chloride. With the addition of MC-LR, the random-coil aptamer adsorbed on the AuNPs folded into an ordered structure, forming MC-LR-aptamer complexes that detached from the surface of the AuNPs; this led to the aggregation of the AuNPs, and the color changed from red to blue due to interparticle plasmon coupling. Results showed that the aptamer-based colorimetric sensor exhibited rapid and sensitive detection of MC-LR, with a linear range from 0.5 nM to 7.5 μM and a detection limit of 0.37 nM. Meanwhile, the pollutants usually coexisting with MC-LR in polluted water samples did not interfere with the detection of MC-LR. A mechanism was also proposed, suggesting that the high-affinity interaction between the aptamer and MC-LR significantly enhanced the sensitivity and selectivity of MC-LR detection. The established method was also used to analyze real water samples, where excellent sensitivity and selectivity were likewise obtained. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Pigmented skin lesion detection using random forest and wavelet-based texture

    Science.gov (United States)

    Hu, Ping; Yang, Tie-jun

    2016-10-01

    The incidence of cutaneous malignant melanoma, a disease of worldwide distribution and the deadliest form of skin cancer, has been rapidly increasing over the last few decades. Because advanced cutaneous melanoma is still incurable, early detection is an important step toward a reduction in mortality. Dermoscopy photographs are commonly used in melanoma diagnosis and can capture detailed features of a lesion. Great variability exists in the visual appearance of pigmented skin lesions. Therefore, in order to minimize the diagnostic errors that result from the difficulty and subjectivity of visual interpretation, an automatic detection approach is required. The objectives of this paper were to propose a hybrid method using random forest and Gabor wavelet transformation to accurately differentiate lesion from non-lesion regions in dermoscopy photographs, and to analyze segmentation accuracy. A random forest classifier consisting of a set of decision trees was used for classification. The Gabor wavelet transformation is a mathematical model of the visual cortical cells of the mammalian brain, and an image can be decomposed into multiple scales and multiple orientations by using it. The Gabor function has been recognized as a very useful tool in texture analysis, due to its optimal localization properties in both the spatial and frequency domains. Texture features based on the Gabor wavelet transformation are computed from the Gabor-filtered image. Experimental results indicate the following: (1) the proposed algorithm based on random forest outperformed the state of the art in pigmented skin lesion detection, and (2) the inclusion of Gabor-wavelet-based texture features improved segmentation accuracy significantly.
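
    The pipeline pairs per-pixel Gabor texture features with a random forest pixel classifier. A toy Python sketch under stated assumptions (a random image and a square mask stand in for a labeled dermoscopy photograph; the filter-bank parameters are illustrative):

        import numpy as np
        from skimage.filters import gabor
        from sklearn.ensemble import RandomForestClassifier

        def gabor_feature_stack(image, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
            """Per-pixel texture features: magnitudes of Gabor responses over
            several scales (frequencies) and orientations."""
            feats = []
            for f in frequencies:
                for k in range(n_orientations):
                    real, imag = gabor(image, frequency=f,
                                       theta=k * np.pi / n_orientations)
                    feats.append(np.hypot(real, imag))
            return np.stack(feats, axis=-1).reshape(-1, len(feats))

        rng = np.random.default_rng(0)
        img = rng.random((64, 64))                  # stand-in image
        mask = np.zeros((64, 64), dtype=int)
        mask[20:40, 20:40] = 1                      # stand-in lesion labels

        X = gabor_feature_stack(img)
        clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, mask.ravel())
        pred_mask = clf.predict(X).reshape(img.shape)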

  16. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    Full Text Available This paper describes a case study of supplier selection based on the multi-criterial Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an “unprejudiced” conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
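
    The computational core of AHP is extracting a priority vector from a pairwise-comparison matrix and checking the consistency of the judgments. A minimal Python sketch (the criteria and comparison values below are hypothetical, not those of the case study):

        import numpy as np

        def ahp_priorities(pairwise):
            """Priority vector = normalized principal eigenvector of the
            pairwise-comparison matrix; CR checks judgment consistency."""
            A = np.asarray(pairwise, dtype=float)
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()
            n = A.shape[0]
            ci = (eigvals[k].real - n) / (n - 1)   # consistency index
            ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]    # Saaty's random index
            return w, ci / ri                      # weights, consistency ratio

        # hypothetical 3-criteria comparison: price vs. quality vs. delivery
        A = [[1,   3,   5],
             [1/3, 1,   2],
             [1/5, 1/2, 1]]
        w, cr = ahp_priorities(A)
        print(w.round(3), round(cr, 3))  # CR below 0.1 indicates consistency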

  17. Experimental study of a quantum random-number generator based on two independent lasers

    Science.gov (United States)

    Sun, Shi-Hai; Xu, Feihu

    2017-12-01

    A quantum random-number generator (QRNG) can produce true randomness by utilizing the inherent probabilistic nature of quantum mechanics. Recently, the spontaneous-emission quantum phase noise of the laser has been widely deployed for quantum random-number generation, due to its high rate, its low cost, and the feasibility of chip-scale integration. Here, we perform a comprehensive experimental study of a phase-noise-based QRNG with two independent lasers, each of which operates in either continuous-wave (CW) or pulsed mode. We implement the QRNG by operating the two lasers in three configurations, namely, CW + CW, CW + pulsed, and pulsed + pulsed, and demonstrate their trade-offs, strengths, and weaknesses.

  18. Vector control of wind turbine on the basis of the fuzzy selective neural net*

    Science.gov (United States)

    Engel, E. A.; Kovalev, I. V.; Engel, N. E.

    2016-04-01

    This article describes vector control of a wind turbine based on a fuzzy selective neural net. Based on the wind turbine system's state, the fuzzy selective neural net tracks the maximum power point under random perturbations. Numerical simulations are carried out to clarify the applicability and advantages of the proposed vector control of the wind turbine on the basis of the fuzzy selective neural net. The simulation results show that the proposed intelligent control of the wind turbine achieves real-time control speed and competitive performance compared to a classical control model with PID controllers based on a traditional maximum torque control strategy.

  19. Selective mutism.

    Science.gov (United States)

    Hua, Alexandra; Major, Nili

    2016-02-01

    Selective mutism is a disorder in which an individual fails to speak in certain social situations though speaks normally in other settings. Most commonly, this disorder initially manifests when children fail to speak in school. Selective mutism results in significant social and academic impairment in those affected by it. This review will summarize the current understanding of selective mutism with regard to diagnosis, epidemiology, cause, prognosis, and treatment. Studies over the past 20 years have consistently demonstrated a strong relationship between selective mutism and anxiety, most notably social phobia. These findings have led to the recent reclassification of selective mutism as an anxiety disorder in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition. In addition to anxiety, several other factors have been implicated in the development of selective mutism, including communication delays and immigration/bilingualism, adding to the complexity of the disorder. In the past few years, several randomized studies have supported the efficacy of psychosocial interventions based on a graduated exposure to situations requiring verbal communication. Less data are available regarding the use of pharmacologic treatment, though there are some studies that suggest a potential benefit. Selective mutism is a disorder that typically emerges in early childhood and is currently conceptualized as an anxiety disorder. The development of selective mutism appears to result from the interplay of a variety of genetic, temperamental, environmental, and developmental factors. Although little has been published about selective mutism in the general pediatric literature, pediatric clinicians are in a position to play an important role in the early diagnosis and treatment of this debilitating condition.

  20. Knowledge based expert system approach to instrumentation selection (INSEL)

    Directory of Open Access Journals (Sweden)

    S. Barai

    2004-08-01

    Full Text Available The selection of appropriate instrumentation for any structural measurement of a civil engineering structure is a complex task. Recent developments in Artificial Intelligence (AI) can help in an organized use of the experiential knowledge available on instrumentation for laboratory and in-situ measurement. Usually, the instrumentation decision is based on the experience and judgment of experimentalists. The heuristic knowledge available for different types of measurement is domain dependent, and the information is scattered across varied knowledge sources. Knowledge engineering techniques can help in capturing this experiential knowledge. This paper demonstrates a prototype knowledge-based system for INstrument SELection (INSEL) assistant, where the experiential knowledge for various structural domains can be captured and utilized for making instrumentation decisions. In particular, this Knowledge Based Expert System (KBES) encodes the heuristics on measurement and demonstrates the instrument selection process with reference to steel bridges. INSEL runs on a microcomputer and uses the INSIGHT 2+ environment.

  1. Diversified models for portfolio selection based on uncertain semivariance

    Science.gov (United States)

    Chen, Lin; Peng, Jin; Zhang, Bo; Rosyida, Isnaini

    2017-02-01

    Since financial markets are complex, the future security returns are sometimes represented mainly by experts' estimations due to a lack of historical data. This paper proposes a semivariance method for diversified portfolio selection, in which the security returns are given by experts' estimations and depicted as uncertain variables. In the paper, three properties of the semivariance of uncertain variables are verified. Based on the concept of the semivariance of uncertain variables, two types of mean-semivariance diversified models for uncertain portfolio selection are proposed. Since the models are complex, a hybrid intelligent algorithm based on the 99-method and a genetic algorithm is designed to solve them. In this hybrid intelligent algorithm, the 99-method is applied to compute the expected value and semivariance of uncertain variables, and the genetic algorithm is employed to seek the best allocation plan for portfolio selection. At last, several numerical examples are presented to illustrate the modelling idea and the effectiveness of the algorithm.

  2. Can group-based reassuring information alter low back pain behavior? A cluster-randomized controlled trial

    DEFF Research Database (Denmark)

    Frederiksen, Pernille; Indahl, Aage; Andersen, Lars L

    2017-01-01

    -randomized controlled trial. METHODS: Publicly employed workers (n = 505) from 11 Danish municipality centers were randomized at center level (cluster) to either intervention (two 1-hour group-based talks at the workplace) or control. The talks provided reassuring information together with a simple non

  3. 44 CFR 321.2 - Selection of the mobilization base.

    Science.gov (United States)

    2010-10-01

    ... base. 321.2 Section 321.2 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY PREPAREDNESS MAINTENANCE OF THE MOBILIZATION BASE (DEPARTMENT OF DEFENSE, DEPARTMENT OF ENERGY, MARITIME ADMINISTRATION) § 321.2 Selection of the mobilization base. (a) The Department...

  4. SHER: A Colored Petri Net Based Random Mobility Model for Wireless Communications

    Science.gov (United States)

    Khan, Naeem Akhtar; Ahmad, Farooq; Khan, Sher Afzal

    2015-01-01

    In wireless network research, simulation is the most important technique for investigating network behavior and validation. Wireless networks typically consist of mobile hosts; therefore, the degree of validation is influenced by the underlying mobility model, and synthetic models are implemented in simulators because real-life traces are not widely available. In wireless communications, mobility is an integral part, and the key role of a mobility model is to mimic the real-life traveling patterns under study. The performance of routing protocols and mobility management strategies, e.g. paging, registration and handoff, is highly dependent on the selected mobility model. In this paper, we devise and evaluate the Show Home and Exclusive Regions (SHER) model, a novel two-dimensional (2-D) Colored Petri net (CPN) based formal random mobility model, which exhibits the sociological behavior of a user. The model captures hotspots where a user frequently visits and spends time. Our solution eliminates six key issues of random mobility models, i.e., sudden stops, memoryless movements, border effect, temporal dependency of velocity, pause time dependency, and speed decay, in a single model. The proposed model is able to predict the future location of a mobile user and ultimately improves the performance of wireless communication networks. The model follows a uniform nodal distribution and is a mini simulator, which exhibits interesting mobility patterns. The model is also helpful to those who are not familiar with formal modeling, and users can extract meaningful information with a single mouse-click. It is noteworthy that capturing dynamic mobility patterns through CPN is the most challenging and demanding activity of the presented research. Statistical and reachability analysis techniques are presented to elucidate and validate the performance of our proposed mobility model. The state space methods allow us to algorithmically derive the system behavior and rectify the

  5. Attentional Selection of Feature Conjunctions Is Accomplished by Parallel and Independent Selection of Single Features.

    Science.gov (United States)

    Andersen, Søren K; Müller, Matthias M; Hillyard, Steven A

    2015-07-08

    Experiments that study feature-based attention have often examined situations in which selection is based on a single feature (e.g., the color red). However, in more complex situations relevant stimuli may not be set apart from other stimuli by a single defining property but by a specific combination of features. Here, we examined sustained attentional selection of stimuli defined by conjunctions of color and orientation. Human observers attended to one out of four concurrently presented superimposed fields of randomly moving horizontal or vertical bars of red or blue color to detect brief intervals of coherent motion. Selective stimulus processing in early visual cortex was assessed by recordings of steady-state visual evoked potentials (SSVEPs) elicited by each of the flickering fields of stimuli. We directly contrasted attentional selection of single features and feature conjunctions and found that SSVEP amplitudes on conditions in which selection was based on a single feature only (color or orientation) exactly predicted the magnitude of attentional enhancement of SSVEPs when attending to a conjunction of both features. Furthermore, enhanced SSVEP amplitudes elicited by attended stimuli were accompanied by equivalent reductions of SSVEP amplitudes elicited by unattended stimuli in all cases. We conclude that attentional selection of a feature-conjunction stimulus is accomplished by the parallel and independent facilitation of its constituent feature dimensions in early visual cortex. The ability to perceive the world is limited by the brain's processing capacity. Attention affords adaptive behavior by selectively prioritizing processing of relevant stimuli based on their features (location, color, orientation, etc.). We found that attentional mechanisms for selection of different features belonging to the same object operate independently and in parallel: concurrent attentional selection of two stimulus features is simply the sum of attending to each of those

  6. Water-based vs. non-water-based physiotherapy for rehabilitation of postural deformities in Parkinson's disease: a randomized controlled pilot study.

    Science.gov (United States)

    Volpe, Daniele; Giantin, Maria Giulia; Manuela, Pilleri; Filippetto, Consuelo; Pelosin, Elisa; Abbruzzese, Giovanni; Antonini, Angelo

    2017-08-01

    To compare the efficacy of two physiotherapy protocols (water-based vs. non-water-based) on postural deformities of patients with Parkinson's disease. A single-blind, randomized controlled pilot study. Inpatient (Rehabilitative Department). A total of 30 patients with idiopathic Parkinson's disease. Participants were randomly assigned to one of two eight-week treatment groups: water-based (n = 15) or non-water-based physiotherapy exercises (n = 15). Changes in the degree of cervical and dorsal flexion and in the angle of lateral inclination of the trunk (evaluated by means of a posturographic system) were used as primary outcomes. Unified Parkinson Disease Rating Scale section III, Timed Up and Go Test, Berg Balance Scale, Activities-specific Balance Confidence, Falls Efficacy Scale and the Parkinson's disease quality of life questionnaire (39 items) were the secondary outcomes. All outcomes were assessed at baseline, at the end of training and eight weeks after treatment. Patients were always tested at the time of their optimal antiparkinsonian medication ('on' phase). After the treatment, only Parkinson's disease subjects randomized to water-based treatment showed a significant improvement of trunk posture, with a significant reduction of cervical flexion (water-based group: -65.2°; non-water-based group: +1.7°), dorsal flexion (water-based group: -22.5°; non-water-based group: -6.5°) and lateral inclination of the trunk (water-based group: -2.3°; non-water-based group: +0.3°). Both groups presented significant improvements in the secondary clinical outcomes without between-group differences. Our results show that water-based physiotherapy was effective for improving postural deformities in patients with Parkinson's disease.

  7. Object width modulates object-based attentional selection.

    Science.gov (United States)

    Nah, Joseph C; Neppi-Modona, Marco; Strother, Lars; Behrmann, Marlene; Shomstein, Sarah

    2018-04-24

    Visual input typically includes a myriad of objects, some of which are selected for further processing. While these objects vary in shape and size, most evidence supporting object-based guidance of attention is drawn from paradigms employing two identical objects. Importantly, object size is a readily perceived stimulus dimension, and whether it modulates the distribution of attention remains an open question. Across four experiments, the size of the objects in the display was manipulated in a modified version of the two-rectangle paradigm. In Experiment 1, two identical parallel rectangles of two sizes (thin or thick) were presented. Experiments 2-4 employed identical trapezoids (each having a thin and thick end), inverted in orientation. In the experiments, one end of an object was cued and participants performed either a T/L discrimination or a simple target-detection task. Combined results show that, in addition to the standard object-based attentional advantage, there was a further attentional benefit for processing information contained in the thick versus thin end of objects. Additionally, eye-tracking measures demonstrated increased saccade precision towards thick object ends, suggesting that Fitts's Law may play a role in object-based attentional shifts. Taken together, these results suggest that object-based attentional selection is modulated by object width.

  8. Variable Selection in Time Series Forecasting Using Random Forests

    Directory of Open Access Journals (Sweden)

    Hristos Tyralis

    2017-10-01

    Full Text Available Time series forecasting using machine learning algorithms has gained popularity recently. Random forest is a machine learning algorithm implemented in time series forecasting; however, most of its forecasting properties have remained unexplored. Here we focus on assessing the performance of random forests in one-step forecasting using two large datasets of short time series, with the aim to suggest an optimal set of predictor variables. Furthermore, we compare its performance to benchmarking methods. The first dataset is composed of 16,000 simulated time series from a variety of Autoregressive Fractionally Integrated Moving Average (ARFIMA) models. The second dataset consists of 135 mean annual temperature time series. The highest predictive performance of RF is observed when using a low number of recent lagged predictor variables. This outcome could be useful in relevant future applications, with the prospect of achieving higher predictive accuracy.
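
    One-step random forest forecasting with lagged predictor variables, as assessed in the paper, can be sketched as follows (a toy autoregressive series stands in for the datasets; the number of lags is the tuning choice the paper's finding speaks to):

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        def one_step_rf_forecast(series, n_lags=3, n_trees=100):
            """Fit an RF on lagged values and forecast one step ahead; the
            paper's finding favors a small number of recent lags."""
            y = np.asarray(series, dtype=float)
            # row t holds (y[t-n_lags], ..., y[t-1]); the target is y[t]
            X = np.column_stack([y[i:len(y) - n_lags + i] for i in range(n_lags)])
            rf = RandomForestRegressor(n_estimators=n_trees, random_state=0)
            rf.fit(X, y[n_lags:])
            return rf.predict(y[-n_lags:].reshape(1, -1))[0]

        # toy AR(1)-like series standing in for an annual temperature record
        rng = np.random.default_rng(0)
        s = [0.0]
        for _ in range(199):
            s.append(0.7 * s[-1] + rng.normal())
        print(one_step_rf_forecast(s))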

  9. Optimal Site Selection of Electric Vehicle Charging Stations Based on a Cloud Model and the PROMETHEE Method

    Directory of Open Access Journals (Sweden)

    Yunna Wu

    2016-03-01

    Full Text Available The task of site selection for electric vehicle charging stations (EVCS) is hugely important from the perspective of harmonious and sustainable development. However, flaws and inadequacies in the currently used multi-criteria decision-making methods can result in inaccurate and irrational decision results. First of all, the uncertainty of the information cannot be fully described in the evaluation of EVCS site selection. Secondly, rigorous consideration of the mutual influence between the various criteria is lacking, in two respects: the correlations between criteria are ignored, and the measurements of them are unreasonable. Last but not least, the ranking method adopted in previous studies is not very appropriate for evaluating the EVCS site selection problem. As a result of the above analysis, a Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE) method-based decision system combined with the cloud model is proposed in this paper for EVCS site selection. Firstly, the use of the PROMETHEE method can bolster the confidence and visibility of decisions for decision makers. Secondly, the cloud model is recommended to describe the fuzziness and randomness of linguistic terms integrally and accurately. Finally, the Analytical Network Process (ANP) method is adopted to measure the correlation of the indicators, with a greatly simplified calculation of the parameters and the steps required.

  10. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed; Kong, Xin-Bing; Liu, Zhi; Jing, Bing-Yi; Gao, Xin

    2013-01-01

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert knowledge remains the method of choice for determining how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed-number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method are available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
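
    The B-H step-up rule at the heart of the approach is easy to state: given m p-values sorted in ascending order, keep the first k, where k is the largest index with p_(k) <= (k/m)·alpha. A minimal Python sketch (the peak-to-p-value conversion step is not shown):

        import numpy as np

        def benjamini_hochberg_select(pvalues, alpha=0.05):
            """Indices of predictions kept by the B-H step-up rule, which
            controls the false discovery rate at level alpha."""
            p = np.asarray(pvalues)
            order = np.argsort(p)
            m = len(p)
            below = p[order] <= alpha * (np.arange(1, m + 1) / m)
            if not below.any():
                return np.array([], dtype=int)
            k = np.max(np.nonzero(below)[0])  # largest passing rank
            return order[: k + 1]

        # hypothetical p-values derived from sorted peak intensities
        pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.6, 0.74, 0.9]
        print(benjamini_hochberg_select(pvals))  # -> [0 1]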

  12. The Long-Term Effectiveness of a Selective, Personality-Targeted Prevention Program in Reducing Alcohol Use and Related Harms: A Cluster Randomized Controlled Trial

    Science.gov (United States)

    Newton, Nicola C.; Conrod, Patricia J.; Slade, Tim; Carragher, Natacha; Champion, Katrina E.; Barrett, Emma L.; Kelly, Erin V.; Nair, Natasha K.; Stapinski, Lexine; Teesson, Maree

    2016-01-01

    Background: This study investigated the long-term effectiveness of Preventure, a selective personality-targeted prevention program, in reducing the uptake of alcohol, harmful use of alcohol, and alcohol-related harms over a 3-year period. Methods: A cluster randomized controlled trial was conducted to assess the effectiveness of Preventure.…

  13. Implementation of evidence-based antenatal care in Mozambique: a cluster randomized controlled trial: study protocol.

    Science.gov (United States)

    Chavane, Leonardo; Merialdi, Mario; Betrán, Ana Pilar; Requejo-Harris, Jennifer; Bergel, Eduardo; Aleman, Alicia; Colomar, Mercedes; Cafferata, Maria Luisa; Carbonell, Alicia; Crahay, Beatrice; Delvaux, Therese; Geelhoed, Diederike; Gülmezoglu, Metin; Malapende, Celsa Regina; Melo, Armando; Nguyen, My Huong; Osman, Nafissa Bique; Widmer, Mariana; Temmerman, Marleen; Althabe, Fernando

    2014-05-21

    Antenatal care (ANC) reduces maternal and perinatal morbidity and mortality directly through the detection and treatment of pregnancy-related illnesses, and indirectly through the detection of women at increased risk of delivery complications. The potential benefits of quality antenatal care services are most significant in low-resource countries where morbidity and mortality levels among women of reproductive age and neonates are higher. WHO developed an ANC model that recommended the delivery of services scientifically proven to improve maternal, perinatal and neonatal outcomes. The aim of this study is to determine the effect of an intervention designed to increase the use of the package of evidence-based services included in the WHO ANC model in Mozambique. The primary hypothesis is that the intervention will increase the use of evidence-based practices during ANC visits in comparison to the standard dissemination channels currently used in the country. This is a demonstration project to be developed through a facility-based cluster randomized controlled trial with a stepped wedge design. The intervention was tailored, based on formative research findings, to be readily applicable to local prenatal care services and acceptable to local pregnant women and health providers. The intervention includes four components: the provision of kits with all necessary medicines and laboratory supplies for ANC (medical and non-medical equipment), a storage system, a tracking system, and training sessions for health care providers. Ten clinics were selected and will start receiving the intervention in a random order. Outcomes will be computed at each time point when a new clinic starts the intervention. The primary outcomes are the delivery of selected health care practices to women attending the first ANC visit, and secondary outcomes are the delivery of selected health care practices to women attending second and higher ANC visits as well as the attitude of midwives in

  14. Registry-based randomized controlled trials merged the strength of randomized controlled trials and observational studies and give rise to more pragmatic trials.

    Science.gov (United States)

    Mathes, Tim; Buehn, Stefanie; Prengel, Peggy; Pieper, Dawid

    2018-01-01

    The objective of this study was to analyze the features of registry-based randomized trials (rRCTs). We systematically searched PubMed for rRCTs. Study selection was performed independently by two reviewers. We extracted all data in standardized tables and prepared descriptive summary statistics. The search resulted in 1,202 hits. We included 71 rRCTs. Most rRCTs were from Denmark and Sweden. Chronic conditions were considered in 82.2%. A preventive intervention was examined in 45.1%. The median number of included patients was 2,000 (range: 69-246,079). The definition of the study population was mostly broad. Study procedures were generally little standardized. The number of included and analyzed patients was the same in 82.1%. In half of the rRCTs, more than one registry was utilized. Various linkage techniques were used. A median of two outcomes was collected from the registry or registries. The median follow-up of the rRCTs was 5.3 years (range: 6 weeks to 27 years). Information on the quality of registry data was reported in 11.3%. rRCTs can provide valid (randomization, low lost-to-follow-up rates, generalizable) patient-important long-term comparative-effectiveness data for relatively little effort. Researchers planning an RCT should always check whether existing registries can be used for data collection. Reporting on data quality must be improved for use in evidence synthesis. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Selective Sequential Zero-Base Budgeting Procedures Based on Total Factor Productivity Indicators

    OpenAIRE

    A. Ishikawa; E. F. Sudit

    1981-01-01

    The authors' purpose in this paper is to develop productivity-based sequential budgeting procedures designed to expedite identification of major problem areas in budgetary performance, as well as to reduce the costs associated with comprehensive zero-base analyses. The concept of total factor productivity is reviewed and its relations to ordinary and zero-based budgeting are discussed in detail. An outline for a selective sequential analysis based on monitoring of three key indicators of (a) i...

  16. Assessment of fracture risk: value of random population-based samples--the Geelong Osteoporosis Study.

    Science.gov (United States)

    Henry, M J; Pasco, J A; Seeman, E; Nicholson, G C; Sanders, K M; Kotowicz, M A

    2001-01-01

    Fracture risk is determined by bone mineral density (BMD). The T-score, a measure of fracture risk, is the position of an individual's BMD in relation to a reference range. The aim of this study was to determine the magnitude of change in the T-score when different sampling techniques were used to produce the reference range. Reference ranges were derived from three samples, drawn from the same region: (1) an age-stratified population-based random sample, (2) unselected volunteers, and (3) a selected healthy subset of the population-based sample with no diseases or drugs known to affect bone. T-scores were calculated using the three reference ranges for a cohort of women who had sustained a fracture and as a group had a low mean BMD (ages 35-72 yr; n = 484). For most comparisons, the T-scores for the fracture cohort were more negative using the population reference range. The difference in T-scores reached 1.0 SD. The proportion of the fracture cohort classified as having osteoporosis at the spine was 26, 14, and 23% when the population, volunteer, and healthy reference ranges were applied, respectively. The use of inappropriate reference ranges results in substantial changes to T-scores and may lead to inappropriate management.
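
    For reference, the T-score underlying these comparisons is

        T = \frac{\mathrm{BMD} - \mu_{\mathrm{ref}}}{\sigma_{\mathrm{ref}}}

    where \mu_ref and \sigma_ref are the mean and standard deviation of the reference range, so a shift of one reference SD in \mu_ref shifts every T-score by 1.0, the magnitude of the largest difference reported above.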

  17. Address-based versus random-digit-dial surveys: comparison of key health and risk indicators.

    Science.gov (United States)

    Link, Michael W; Battaglia, Michael P; Frankel, Martin R; Osborn, Larry; Mokdad, Ali H

    2006-11-15

    Use of random-digit dialing (RDD) for conducting health surveys is increasingly problematic because of declining participation rates and eroding frame coverage. Alternative survey modes and sampling frames may improve response rates and increase the validity of survey estimates. In a 2005 pilot study conducted in six states as part of the Behavioral Risk Factor Surveillance System, the authors administered a mail survey to selected household members sampled from addresses in a US Postal Service database. The authors compared estimates based on data from the completed mail surveys (n = 3,010) with those from the Behavioral Risk Factor Surveillance System telephone surveys (n = 18,780). The mail survey data appeared reasonably complete, and estimates based on data from the two survey modes were largely equivalent. Differences found, such as differences in the estimated prevalences of binge drinking (mail = 20.3%, telephone = 13.1%) or behaviors linked to human immunodeficiency virus transmission (mail = 7.1%, telephone = 4.2%), were consistent with previous research showing that, for questions about sensitive behaviors, self-administered surveys generally produce higher estimates than interviewer-administered surveys. The mail survey also provided access to cell-phone-only households and households without telephones, which cannot be reached by means of standard RDD surveys.

  18. Random number generation based on digital differential chaos

    KAUST Repository

    Zidan, Mohammed A.

    2012-07-29

    In this paper, we present a fully digital differential chaos based random number generator. The output of the digital circuit is proved to be chaotic by calculating the output time series maximum Lyapunov exponent. We introduce a new post processing technique to improve the distribution and statistical properties of the generated data. The post-processed output passes the NIST Sp. 800-22 statistical tests. The system is written in Verilog HDL and realized on a Xilinx Virtex® FPGA. The generator can fit into a very small area and has a maximum throughput of 2.1 Gb/s.

  19. Selection of 3013 Containers for Field Surveillance

    International Nuclear Information System (INIS)

    Larry Peppers; Elizabeth Kelly; James McClard; Gary Friday; Theodore Venetz; Jerry Stakebade

    2007-01-01

    This report revises and combines three earlier reports dealing with the binning, statistical sampling, and sample selection of 3013 containers for field surveillance. It includes changes to the binning specification resulting from completion of the Savannah River Site packaging campaign and new information from the shelf-life program and field surveillance activities. The revised bin assignments result in changes to the random sample specification. These changes are necessary to meet the statistical requirements of the surveillance program. This report will be reviewed regularly and revised as needed. Section 1 of this report summarizes the results of an extensive effort to assign all of the current and projected 3013 containers in the Department of Energy (DOE) inventory to one of three bins (Innocuous, Pressure and Corrosion, or Pressure) based on potential failure mechanisms. Grouping containers into bins provides a framework to make a statistical selection of individual containers from the entire population for destructive and nondestructive field surveillance. The binning process consisted of three main steps. First, the packaged containers were binned using information in the Integrated Surveillance Program database and a decision tree. The second task was to assign those containers that could not be binned using the decision tree to a specific bin using container-by-container engineering review. The final task was to evaluate containers not yet packaged and assign them to bins using process knowledge. The technical basis for the decisions made during the binning process is included in Section 1. A composite decision tree and a summary table show all of the containers projected to be in the DOE inventory at the conclusion of packaging at all sites. Decision trees that provide an overview of the binning process and logic are included for each site. Section 2 of this report describes the approach to the statistical selection of containers for surveillance and

  20. Condition-Based Conveyor Belt Replacement Strategy in Lignite Mines with Random Belt Deterioration

    Science.gov (United States)

    Blazej, Ryszard; Jurdziak, Leszek

    2017-12-01

    In Polish lignite surface mines, condition-based belt replacement strategies are applied in order to assure profitable refurbishment of worn out belts performed by external firms specializing in belt maintenance. In two of three lignite mines, staff assess belt condition subjectively during visual inspections. Only one mine applies a specialized diagnostic device (HRDS) allowing objective magnetic evaluation of belt core condition in order to choose the most profitable moment for the dismantling of worn out belt segments from conveyors and sending them to the maintenance firm which provides their refurbishment. This article describes the advantages of a new diagnostic device called DiagBelt. It was developed at the Faculty of Geoengineering, Mining and Geology, Wroclaw University of Science and Technology. Economic gains from its application are calculated for the lignite mine and for the belt maintenance firm, taking into account the random life (durability) of new and reconditioned belts (after the 1st and the 2nd refurbishment). Recursive calculations for subsequent years allow the estimation of the length and costs of replaced, reconditioned and purchased belts on an annual basis, while the use of the Monte Carlo method allows the estimation of their variability caused by random deterioration of belts. Savings are obtained due to better selection of moments (times) for the replacement of belt segments and due to the possibility to qualify worn out belts for refurbishment without the need to remove their covers. In effect, increased belt durability and a lowered share of waste belts (which were not qualified for reconditioning) create savings which can quickly cover expenditures on new diagnostic tools and regular belt inspections in the mine.
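
    The Monte Carlo element described above is straightforward to sketch. Below is a minimal, hedged illustration of estimating belt-segment consumption under random lifetimes; the lifetime distributions and the new-to-refurbished cycle parameters are assumptions for illustration, not the mine's actual data:

```python
import random

# Hypothetical mean lives (years) for new and reconditioned belt segments.
LIFE_MEAN = {"new": 4.0, "refurb1": 2.5, "refurb2": 1.5}
NEXT_STATE = {"new": "refurb1", "refurb1": "refurb2", "refurb2": "new"}

def segments_consumed(horizon_years):
    """Count belt segments consumed at one conveyor position over the horizon."""
    t, consumed, state = 0.0, 0, "new"
    while t < horizon_years:
        # Random life drawn from a gamma distribution with the state's mean.
        t += random.gammavariate(4.0, LIFE_MEAN[state] / 4.0)
        consumed += 1
        state = NEXT_STATE[state]  # worn belt goes to refurbishment (or scrap)
    return consumed

runs = [segments_consumed(20) for _ in range(10_000)]
print(sum(runs) / len(runs))  # Monte Carlo mean over a 20-year horizon
```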

  1. Generating equilateral random polygons in confinement III

    International Nuclear Information System (INIS)

    Diao, Y; Ernst, C; Montemayor, A; Ziegler, U

    2012-01-01

    In this paper we continue our earlier studies (Diao et al 2011 J. Phys. A: Math. Theor. 44 405202, Diao et al J. Phys. A: Math. Theor. 45 275203) on the generation methods of random equilateral polygons confined in a sphere. The first half of this paper is concerned with the generation of confined equilateral random walks. We show that if the selection of a vertex is uniform subject to the position of its previous vertex and the confining condition, then the distributions of the vertices are not uniform, although there exists a distribution such that if the initial vertex is selected following this distribution, then all vertices of the random walk follow this same distribution. Thus in order to generate a confined equilateral random walk, the selection of a vertex cannot be uniform subject to the position of its previous vertex and the confining condition. We provide a simple algorithm capable of generating confined equilateral random walks whose vertex distribution is almost uniform in the confinement sphere. In the second half of this paper we show that any process generating confined equilateral random walks can be turned into a process generating confined equilateral random polygons with the property that the vertex distribution of the polygons approaches the vertex distribution of the walks as the polygons get longer and longer. In our earlier studies, the starting point of the confined polygon is fixed at the center of the sphere. The new approach here allows us to move the starting point of the confined polygon off the center of the sphere. (paper)
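
    As a concrete reference point, the naive generator analyzed in the first half of the paper can be sketched as follows: draw each unit step uniformly and reject directions that leave the confinement sphere. This is only the construction whose vertex distribution the authors show to be non-uniform, not their corrected algorithm; the radius must satisfy R >= 1 for a step from the center to be possible.

```python
import numpy as np

def confined_walk(n_steps, R, rng=np.random.default_rng()):
    """Equilateral random walk of unit steps confined to a sphere of radius R."""
    v = np.zeros(3)  # start at the center of the confinement sphere
    path = [v]
    for _ in range(n_steps):
        while True:
            step = rng.normal(size=3)
            step /= np.linalg.norm(step)        # uniform random unit direction
            if np.linalg.norm(v + step) <= R:   # accept only confined moves
                break
        v = v + step
        path.append(v)
    return np.array(path)

print(confined_walk(50, R=2.0).shape)  # (51, 3)
```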

  2. Comparing Effectiveness of Mindfulness-Based Relapse Prevention with Treatment as Usual on Impulsivity and Relapse for Methadone-Treated Patients: A Randomized Clinical Trial.

    Science.gov (United States)

    Yaghubi, Mehdi; Zargar, Fatemeh; Akbari, Hossein

    2017-07-01

    Impulsivity is one of the causes of relapse that can affect treatment outcomes. Studies have shown that addiction treatments can reduce impulsivity in drug-dependent individuals. Studies also have suggested that mindfulness is associated with impulsivity. However, no study has investigated the effectiveness of a mindfulness-based intervention on impulsivity in opioid-dependent individuals. This study aimed to compare the effectiveness of mindfulness-based relapse prevention (MBRP) with treatment as usual (TAU) in terms of impulsivity and relapse for methadone-treated patients. The present randomized controlled clinical trial was performed in Kashan, Iran, in 2015. The study population was opioid-dependent patients referred to Maintenance Treatment Centers. Seventy patients were selected by random sampling and were randomly assigned to two groups (MBRP and TAU). The participants of both groups completed the Barratt Impulsiveness Scale (BIS-11) as a pre-test, 8 weeks later as a post-test, and 2 months later as a follow-up. Both groups received methadone therapy. The MBRP group received 8 sessions of group therapy, while the control group did not receive any group psychotherapy session. Finally, data from 60 patients were analyzed statistically. The MBRP group showed significant decreases in impulsivity, relapse frequency, and relapse probability. These findings suggest that MBRP is useful for opioid-dependent individuals with high levels of impulsivity and for relapse prevention.

  3. Affinity selection of Nipah and Hendra virus-related vaccine candidates from a complex random peptide library displayed on bacteriophage virus-like particles

    Energy Technology Data Exchange (ETDEWEB)

    Peabody, David S.; Chackerian, Bryce; Ashley, Carlee; Carnes, Eric; Negrete, Oscar

    2017-01-24

    The invention relates to virus-like particles of bacteriophage MS2 (MS2 VLPs) displaying peptide epitopes or peptide mimics of epitopes of the Nipah Virus envelope glycoprotein that elicit an immune response against Nipah Virus upon vaccination of humans or animals. Affinity selection on Nipah Virus-neutralizing monoclonal antibodies using random sequence peptide libraries on MS2 VLPs selected peptides with sequence similarity to peptide sequences found within the envelope glycoprotein of Nipah itself, thus identifying the epitopes the antibodies recognize. The selected peptide sequences themselves are not necessarily identical in all respects to a sequence within the Nipah Virus glycoprotein, and therefore may be referred to as epitope mimics. VLPs displaying these epitope mimics can serve as a vaccine. On the other hand, display of the corresponding wild-type sequence derived from Nipah Virus, corresponding to the epitope mapped by affinity selection, may also be used as a vaccine.

  4. SADA: Ecological Risk Based Decision Support System for Selective Remediation

    Science.gov (United States)

    Spatial Analysis and Decision Assistance (SADA) is freeware that implements terrestrial ecological risk assessment and yields a selective remediation design using its integral geographical information system, based on ecological and risk assessment inputs. Selective remediation ...

  5. A novel attack method about double-random-phase-encoding-based image hiding method

    Science.gov (United States)

    Xu, Hongsheng; Xiao, Zhijun; Zhu, Xianchen

    2018-03-01

    By using optical image processing techniques, a novel text encryption and hiding method based on the double-random phase-encoding technique is proposed in this paper. The first step is that the secret message is transformed into a 2-dimensional array. The higher bits of the elements in the array are used to carry the bit stream of the secret text, while the lower bits store fixed values. Then, the transformed array is encoded by the double random phase encoding technique. Finally, the encoded array is embedded on a public host image to obtain the image embedded with hidden text. The performance of the proposed technique is tested via analytical modeling and test data streams. Experimental results show that the secret text can be recovered accurately or almost accurately, while maintaining the quality of the host image embedded with hidden data, by properly selecting the method of transforming the secret text into an array and the superimposition coefficient.
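
    The bit-packing step described above (message bits in the higher bits, fixed values in the lower bits) can be sketched in a few lines. This illustrates only that step; the double-random-phase encoding and host-image superposition are separate stages not shown, and the array shape and filler pattern are assumptions:

```python
import numpy as np

def text_to_array(text, shape):
    """Pack message bits into the top bit of each cell; low bits hold a filler."""
    bits = np.unpackbits(np.frombuffer(text.encode(), dtype=np.uint8))
    cells = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    cells[: bits.size] = bits
    return ((cells << 7) | 0b0101).reshape(shape)  # 0b0101 = assumed filler

def array_to_text(arr, n_chars):
    """Recover the message by reading back the top bit of each cell."""
    bits = (arr.ravel() >> 7).astype(np.uint8)[: n_chars * 8]
    return np.packbits(bits).tobytes().decode()

arr = text_to_array("secret", (8, 8))
print(array_to_text(arr, 6))  # -> "secret"
```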

  6. Mobile phone based imaging system for selected tele-healthcare applications

    OpenAIRE

    Condominas Guàrdia, Jordi

    2013-01-01

    A mobile phone based telemedicine study is developed to see how feasible phone usage is in selected health care applications. The research is divided into three different objectives. The first objective is to compile the technical characteristics of selected mobile phones from telemedicine perspective. The second objective is to develop techniques to acquire quality images of skin with mobile phones. Finally a smartphone based telemedicine application will be developed to asses...

  7. On the benefits of location-based relay selection in mobile wireless networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2016-01-01

    We consider infrastructure-based mobile networks that are assisted by a single relay transmission where both the downstream destination and relay nodes are mobile. Selecting the optimal transmission path for a destination node requires up-to-date link quality estimates of all relevant links. If the relay selection is based on link quality measurements, the number of links to update grows quadratically with the number of nodes, and measurements need to be updated frequently when nodes are mobile. In this paper, we consider a location-based relay selection scheme where link qualities are estimated from node positions; in the scenario of a node-based location system such as GPS, the location-based approach reduces signaling overhead, which in this case only grows linearly with the number of nodes. This paper studies these two relay selection approaches and investigates how they are affected...

  8. Pseudo-Random Number Generator Based on Coupled Map Lattices

    Science.gov (United States)

    Lü, Huaping; Wang, Shihong; Hu, Gang

    A one-way coupled chaotic map lattice is used for generating pseudo-random numbers. It is shown that with suitable cooperative applications of both chaotic and conventional approaches, the output of the spatiotemporally chaotic system can easily meet the practical requirements of random numbers, i.e., excellent random statistical properties, long periodicity of computer realizations, and fast speed of random number generations. This pseudo-random number generator system can be used as ideal synchronous and self-synchronizing stream cipher systems for secure communications.
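
    A minimal sketch of the idea, using a one-way coupled lattice of fully chaotic logistic maps with a threshold read-out; the lattice size, coupling strength, and bit-extraction rule are illustrative assumptions, not the authors' exact design:

```python
L, EPS = 8, 0.95                      # lattice size and coupling (assumed)
f = lambda x: 4.0 * x * (1.0 - x)     # fully chaotic logistic map

def cml_bits(n_bits, state):
    """Update the coupled map lattice and threshold one site into output bits."""
    out = []
    for _ in range(n_bits):
        fx = [f(x) for x in state]
        # one-way coupling: each site is driven by its left neighbour (wrapping)
        state = [(1 - EPS) * fx[i] + EPS * fx[i - 1] for i in range(L)]
        out.append(1 if state[-1] >= 0.5 else 0)
    return out, state

bits, _ = cml_bits(32, state=[0.1 + 0.07 * i for i in range(L)])
print(bits)
```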

  9. A universal algorithm to generate pseudo-random numbers based on uniform mapping as homeomorphism

    International Nuclear Information System (INIS)

    Fu-Lai, Wang

    2010-01-01

    A specific uniform map is constructed as a homeomorphism mapping chaotic time series into [0,1] to obtain sequences of standard uniform distribution. With the uniform map, a chaotic orbit and the sequence orbit obtained are topologically equivalent to each other, so the map preserves most dynamic properties of chaotic systems, such as permutation entropy. Based on the uniform map, a universal algorithm to generate pseudo random numbers is proposed, and the pseudo random series is tested to follow the standard 0–1 random distribution both theoretically and experimentally. The algorithm is not complex, does not impose high requirements on computer hardware, and thus computation speed is fast. The method not only extends the parameter spaces but also avoids the drawback of a small function space caused by constraints on the chaotic maps used to generate pseudo random numbers. The algorithm can be applied to any chaotic system and can produce pseudo random sequences of high quality, thus serving as a good universal pseudo random number generator. (general)

  10. A universal algorithm to generate pseudo-random numbers based on uniform mapping as homeomorphism

    Science.gov (United States)

    Wang, Fu-Lai

    2010-09-01

    A specific uniform map is constructed as a homeomorphism mapping chaotic time series into [0,1] to obtain sequences of standard uniform distribution. With the uniform map, a chaotic orbit and the sequence orbit obtained are topologically equivalent to each other, so the map preserves most dynamic properties of chaotic systems, such as permutation entropy. Based on the uniform map, a universal algorithm to generate pseudo random numbers is proposed, and the pseudo random series is tested to follow the standard 0-1 random distribution both theoretically and experimentally. The algorithm is not complex, does not impose high requirements on computer hardware, and thus computation speed is fast. The method not only extends the parameter spaces but also avoids the drawback of a small function space caused by constraints on the chaotic maps used to generate pseudo random numbers. The algorithm can be applied to any chaotic system and can produce pseudo random sequences of high quality, thus serving as a good universal pseudo random number generator.
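
    One classical concrete instance of such a uniformizing homeomorphism: for the fully chaotic logistic map, u = (2/π)·arcsin(√x) maps the invariant density 1/(π√(x(1−x))) to the uniform density on [0,1], so the transformed orbit behaves like standard-uniform draws. The paper's construction is more general; this sketch only illustrates the principle:

```python
import math

def uniform_from_logistic(n, x=0.123456789):
    """Iterate the logistic map and uniformize each iterate (finite-precision
    caveats of chaotic iteration aside)."""
    out = []
    for _ in range(n):
        x = 4.0 * x * (1.0 - x)                               # chaotic orbit
        out.append(2.0 / math.pi * math.asin(math.sqrt(x)))   # homeomorphism
    return out

sample = uniform_from_logistic(10_000)
print(sum(sample) / len(sample))  # close to 0.5 for a standard-uniform sequence
```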

  11. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, and it may be difficult to achieve personnel-job matching that satisfies each party. Therefore, this paper puts forward a new personnel selection method based on bilateral matching. Starting from the employment principle of "satisfaction", satisfaction evaluation indicator systems for each party are constructed. The multi-objective optimization model is given according to ...

  12. Feature selection and classification of mechanical fault of an induction motor using random forest classifier

    OpenAIRE

    Patel, Raj Kumar; Giri, V.K.

    2016-01-01

    Fault detection and diagnosis is the most important technology in condition-based maintenance (CBM) systems for rotating machinery. This paper experimentally explores the development of a random forest (RF) classifier, a recently emerged machine learning technique, for multi-class mechanical fault diagnosis in the bearing of an induction motor. Firstly, the vibration signals are collected from the bearing using an accelerometer sensor. Parameters from the vibration signal are extracted in the form of...

  13. Random Forest Application for NEXRAD Radar Data Quality Control

    Science.gov (United States)

    Keem, M.; Seo, B. C.; Krajewski, W. F.

    2017-12-01

    Identification and elimination of non-meteorological radar echoes (e.g., returns from ground, wind turbines, and biological targets) are the basic data quality control steps before radar data use in quantitative applications (e.g., precipitation estimation). Although the WSR-88Ds' recent upgrade to dual-polarization has enhanced this quality control and echo classification, there are still challenges in detecting some non-meteorological echoes that show precipitation-like characteristics (e.g., wind turbine or anomalous propagation clutter embedded in rain). With this in mind, a new quality control method using Random Forest is proposed in this study. This classification algorithm is known to produce reliable results with less uncertainty. The method introduces randomness into sampling and feature selection and integrates the resulting multiple decision trees. The multidimensional structure of the trees can characterize the statistical interactions of the multiple features involved in complex situations. The authors explore the performance of the Random Forest method for NEXRAD radar data quality control. Training datasets are selected using several clear cases of precipitation and non-precipitation (but with some non-meteorological echoes). The model is structured using available candidate features (from the NEXRAD data) such as horizontal reflectivity, differential reflectivity, differential phase shift, copolar correlation coefficient, and their horizontal textures (e.g., local standard deviation). The influence of each feature on classification results is quantified by variable importance measures that are automatically estimated by the Random Forest algorithm. Therefore, the number and types of features in the final forest can be examined based on the classification accuracy. The authors demonstrate the capability of the proposed approach using several cases ranging from distinct to complex rain/no-rain events and compare the performance with the existing algorithms (e
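
    The core of the proposed approach, training a Random Forest on dual-polarization features and reading off variable importances, can be sketched with scikit-learn. The synthetic feature values below are stand-ins for labelled precipitation and clutter training data, and the feature list simply mirrors the candidates named above:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = ["Zh", "Zdr", "PhiDP", "RhoHV", "sd_Zh"]  # candidate features (assumed)
# Synthetic stand-ins: meteorological echoes vs. non-meteorological clutter.
X_rain = rng.normal([30, 0.5, 90, 0.98, 2], [8, 0.5, 30, 0.01, 1], (500, 5))
X_clut = rng.normal([25, 2.0, 90, 0.80, 8], [10, 1.5, 60, 0.10, 3], (500, 5))
X = np.vstack([X_rain, X_clut])
y = np.array([1] * 500 + [0] * 500)  # 1 = precipitation echo, 0 = clutter

rf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
rf.fit(X, y)
print(rf.oob_score_)  # out-of-bag accuracy of the forest
print(dict(zip(features, rf.feature_importances_.round(3))))  # variable importance
```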

  14. Gene-Based Multiclass Cancer Diagnosis with Class-Selective Rejections

    Science.gov (United States)

    Jrad, Nisrine; Grall-Maës, Edith; Beauseroy, Pierre

    2009-01-01

    Supervised learning of microarray data is receiving much attention in recent years. Multiclass cancer diagnosis based on selected gene profiles is used as an adjunct to clinical diagnosis. However, supervised diagnosis may hinder patient care, add expense, or confound a result. To avoid such misleading outcomes, a multiclass cancer diagnosis with class-selective rejection is proposed. It rejects some patients from one, some, or all classes in order to ensure a higher reliability while reducing time and expense costs. Moreover, this classifier takes into account asymmetric penalties dependent on each class and on each wrong or partially correct decision. It is based on ν-1-SVM coupled with its regularization path and minimizes a general loss function defined in the class-selective rejection scheme. The state-of-the-art multiclass algorithms can be considered as a particular case of the proposed algorithm where the number of decisions is given by the classes and the loss function is defined by the Bayesian risk. Two experiments are carried out in the Bayesian and the class-selective rejection frameworks. Five datasets with selected genes are used to assess the performance of the proposed method. Results are discussed and accuracies are compared with those computed by the Naive Bayes, Nearest Neighbor, Linear Perceptron, Multilayer Perceptron, and Support Vector Machines classifiers. PMID:19584932

  15. Reinforcing Sampling Distributions through a Randomization-Based Activity for Introducing ANOVA

    Science.gov (United States)

    Taylor, Laura; Doehler, Kirsten

    2015-01-01

    This paper examines the use of a randomization-based activity to introduce the ANOVA F-test to students. The two main goals of this activity are to successfully teach students to comprehend ANOVA F-tests and to increase student comprehension of sampling distributions. Four sections of students in an advanced introductory statistics course…
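
    The randomization logic behind such an activity is easy to make concrete: compare the observed F statistic with F statistics computed after repeatedly shuffling the group labels. A minimal sketch (the data are made up):

```python
import numpy as np
from scipy import stats

def randomization_anova(groups, n_perm=5000, seed=0):
    """Randomization-based p-value for the one-way ANOVA F statistic."""
    rng = np.random.default_rng(seed)
    f_obs = stats.f_oneway(*groups).statistic
    pooled = np.concatenate(groups)
    sizes = np.cumsum([len(g) for g in groups])[:-1]
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)               # break any real group structure
        parts = np.split(pooled, sizes)
        if stats.f_oneway(*parts).statistic >= f_obs:
            exceed += 1
    return f_obs, exceed / n_perm

a = np.array([24., 26., 30., 29.])
b = np.array([28., 31., 33., 35.])
c = np.array([20., 22., 25., 21.])
print(randomization_anova([a, b, c]))  # (observed F, randomization p-value)
```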

  16. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    Science.gov (United States)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided can be given a physically meaningful interpretation.

  17. A Robust Service Selection Method Based on Uncertain QoS

    Directory of Open Access Journals (Sweden)

    Yanping Chen

    2016-01-01

    Nowadays, the number of Web services on the Internet is quickly increasing. Meanwhile, different service providers offer numerous services with similar functions. Quality of Service (QoS) has become an important factor used to select the most appropriate service for users. The most prominent QoS-based service selection models only take certain (deterministic) attributes into account, which is an idealized assumption. In the real world, there are a large number of uncertain factors. In particular, at runtime, QoS may become very poor or unacceptable. In order to solve this problem, a global service selection model based on uncertain QoS is proposed, including the corresponding normalization and aggregation functions; a robust optimization model is then adopted to transform the model. Experimental results show that the proposed method can effectively select services with high robustness and optimality.

  18. An empirical investigation on factors influencing customer selection of ADSL services

    OpenAIRE

    Naser Azad; K. Darabi

    2013-01-01

    In this paper, we present an empirical investigation on various factors affecting ADSL service selection in the city of Tehran, Iran. The proposed model of this paper uses a standard questionnaire and distributes it among randomly selected customers who have some experience with internet-based ADSL products. The study implements factor analysis as well as a weighted regression technique. There are eight hypotheses associated with the proposed study of this paper, which indicates...

  19. A random network based, node attraction facilitated network evolution method

    Directory of Open Access Journals (Sweden)

    WenJun Zhang

    2016-03-01

    In the present study, I present a method of network evolution that is based on a random network and facilitated by node attraction. In this method, I assume that the initial network is a random network or a given initial network. When a node is ready to connect, it tends to link to the node already owning the most connections, which coincides with the general rule of node connection (Barabasi and Albert, 1999). In addition, a node may randomly disconnect a connection, i.e., the addition of connections in the network is accompanied by the pruning of some connections. The dynamics of network evolution are determined by the attraction factor Lambda of nodes, the probability of node connection, the probability of node disconnection, and the expected initial connectance. The attraction factor of nodes, the probability of node connection, and the probability of node disconnection are time- and node-varying. Various dynamics can be achieved by adjusting these parameters. Effects of simplified parameters on network evolution are analyzed. The changes of the attraction factor Lambda can reflect various effects of the node degree on the connection mechanism. Changing Lambda alone will generate various networks, from random to complex. Therefore, the present algorithm can be treated as a general model for network evolution. Modeling results show that to generate a power-law type of network, the likelihood of a node attracting connections must depend on a higher-order power function of the node's degree. Matlab codes for a simplified version of the method are provided.
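
    A minimal sketch of this evolution rule in Python (parameter values are illustrative, and the attraction factor is simplified here to "always attach to the highest-degree node"):

```python
import random

def evolve(n=50, p0=0.04, p_conn=0.6, p_disc=0.2, steps=2000):
    """Random initial network, attraction-driven connection, random pruning."""
    edges = {frozenset((i, j)) for i in range(n) for j in range(i + 1, n)
             if random.random() < p0}           # random (Erdos-Renyi-like) start
    degree = lambda v: sum(v in e for e in edges)
    for _ in range(steps):
        v = random.randrange(n)
        if random.random() < p_conn:            # node attraction: link to the hub
            target = max((u for u in range(n) if u != v), key=degree)
            edges.add(frozenset((v, target)))
        if edges and random.random() < p_disc:  # pruning of a random connection
            edges.discard(random.choice(list(edges)))
    return edges

print(len(evolve()))  # number of edges after evolution
```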

  20. A Simple Density with Distance Based Initial Seed Selection Technique for K Means Algorithm

    Directory of Open Access Journals (Sweden)

    Sajidha Syed Azimuddin

    2017-01-01

    Open issues with respect to the K-means algorithm include identifying the number of clusters, initial seed concept selection, clustering tendency, handling empty clusters, identifying outliers, etc. In this paper we propose a novel and simple technique considering both density and distance of the concepts in a dataset to identify initial seed concepts for clustering. Many authors have proposed different techniques to identify initial seed concepts, but our method ensures that the initial seed concepts are chosen from different clusters that are to be generated by the clustering solution. The hallmark of our algorithm is that it is a single-pass algorithm that does not require any extra parameters to be estimated. Further, our seed concepts are among the actual concepts and not the means of representative concepts, as is the case in many other algorithms. We have implemented our proposed algorithm and compared the results with the interval-based technique of Fouad Khan. We see that our method outperforms the interval-based method. We have also compared our method with the original random K-means and K-means++ algorithms.
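
    A density-with-distance seeding rule in this spirit can be sketched as follows; this is an illustrative variant, not the authors' exact algorithm, and the neighbourhood radius is an assumed parameter:

```python
import numpy as np

def density_distance_seeds(X, k, radius):
    """First seed = densest point; later seeds maximize density x distance."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise distances
    density = (d < radius).sum(axis=1)           # neighbours within the radius
    seeds = [int(np.argmax(density))]
    for _ in range(k - 1):
        dist_to_seeds = d[:, seeds].min(axis=1)  # distance to nearest chosen seed
        seeds.append(int(np.argmax(density * dist_to_seeds)))
    return X[seeds]  # actual data points, not averaged representatives

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(size=(100, 2)), rng.normal(size=(100, 2)) + 6])
print(density_distance_seeds(X, k=2, radius=1.0))  # one seed per true cluster
```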

  1. Risk-Controlled Multiobjective Portfolio Selection Problem Using a Principle of Compromise

    Directory of Open Access Journals (Sweden)

    Takashi Hasuike

    2014-01-01

    This paper proposes a multiobjective portfolio selection problem with the most probable random distribution derived from current market data, together with other random distributions for boom and recession, under risk-control parameters determined by an investor. The current market data and information include not only historical data but also interpretations of economists' oral and linguistic information, and hence booms and recessions are often driven by these nonnumeric data. Therefore, investors need to consider several situations, from the most probable condition to boom and recession, and to avoid the risk of falling below the target return in each situation. Furthermore, it is generally difficult to set the random distributions of these cases exactly. Therefore, a robust approach for portfolio selection problems using only the mean values and variances of securities is proposed as a multiobjective programming problem. In addition, an exact algorithm is developed to obtain an explicit optimal portfolio using a principle of compromise.

  2. Portfolios with fuzzy returns: Selection strategies based on semi-infinite programming

    Science.gov (United States)

    Vercher, Enriqueta

    2008-08-01

    This paper provides new models for portfolio selection in which the returns on securities are considered fuzzy numbers rather than random variables. The investor's problem is to find the portfolio that minimizes the risk of achieving a return that is not less than the return of a riskless asset. The corresponding optimal portfolio is derived using semi-infinite programming in a soft framework. The return on each asset and their membership functions are described using historical data. The investment risk is approximated by mean intervals which evaluate the downside risk for a given fuzzy portfolio. This approach is illustrated with a numerical example.

  3. A robust random number generator based on differential comparison of chaotic laser signals.

    Science.gov (United States)

    Zhang, Jianzhong; Wang, Yuncai; Liu, Ming; Xue, Lugang; Li, Pu; Wang, Anbang; Zhang, Mingjiang

    2012-03-26

    We experimentally realize a robust real-time random number generator by differentially comparing the signal from a chaotic semiconductor laser and its delayed signal through a 1-bit analog-to-digital converter. The probability density distribution of the output chaotic signal based on the differential comparison method possesses an extremely small coefficient of Pearson's median skewness (1.5 × 10⁻⁶), which can yield a balanced random sequence much more easily than the previously reported method that compares the signal from the chaotic laser with a certain threshold value. Moreover, we experimentally demonstrate that our method can stably generate good random numbers at rates of 1.44 Gbit/s with excellent immunity from external perturbations, while the previously reported method fails.
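
    The differential-comparison read-out itself reduces to one line: compare the waveform with a delayed copy of itself and keep the sign. The sketch below uses a synthetic stand-in for the chaotic laser trace and an arbitrary delay, so it only illustrates why the scheme needs no hand-tuned threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
signal = np.cumsum(rng.normal(size=100_000))  # stand-in for the chaotic trace
delay = 37                                    # delay in samples (assumed)

# 1-bit "ADC": sign of the difference between the signal and its delayed copy.
bits = (signal[delay:] > signal[:-delay]).astype(np.uint8)
print(bits[:16], bits.mean())  # mean near 0.5, with no threshold to calibrate
```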

  4. A Cancer Gene Selection Algorithm Based on the K-S Test and CFS

    Directory of Open Access Journals (Sweden)

    Qiang Su

    2017-01-01

    Background. To address the challenging problem of selecting distinguished genes from cancer gene expression datasets, this paper presents a gene subset selection algorithm based on the Kolmogorov-Smirnov (K-S) test and correlation-based feature selection (CFS) principles. The algorithm selects distinguished genes first using the K-S test, and then uses CFS to select genes from those selected by the K-S test. Results. We adopted support vector machines (SVM) as the classification tool and used accuracy as the criterion to evaluate the performance of the classifiers on the selected gene subsets. We compared the proposed gene subset selection algorithm with the K-S test, CFS, minimum-redundancy maximum-relevancy (mRMR), and ReliefF algorithms. The average experimental results of the aforementioned gene selection algorithms on 5 gene expression datasets demonstrate that, based on accuracy, the performance of the new K-S and CFS-based algorithm is better than those of the K-S test, CFS, mRMR, and ReliefF algorithms. Conclusions. The experimental results show that the K-S test-CFS gene selection algorithm is a very effective and promising approach compared to the K-S test, CFS, mRMR, and ReliefF algorithms.
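
    A hedged sketch of the two-stage filter: rank genes by a two-sample K-S test between the classes, then make a simple CFS-style greedy pass that keeps genes correlated with the class but not with genes already kept (the thresholds are illustrative, and full CFS uses a merit function rather than this pairwise cutoff):

```python
import numpy as np
from scipy.stats import ks_2samp

def ks_cfs_select(X, y, ks_alpha=0.01, redundancy=0.8, max_genes=20):
    # Stage 1: keep genes whose class-conditional distributions differ (K-S test).
    keep = [g for g in range(X.shape[1])
            if ks_2samp(X[y == 0, g], X[y == 1, g]).pvalue < ks_alpha]
    # Order the survivors by relevance to the class label.
    keep.sort(key=lambda g: abs(np.corrcoef(X[:, g], y)[0, 1]), reverse=True)
    chosen = []
    for g in keep:  # Stage 2: greedy CFS-style pass (relevant, not redundant).
        if all(abs(np.corrcoef(X[:, g], X[:, h])[0, 1]) < redundancy for h in chosen):
            chosen.append(g)
        if len(chosen) == max_genes:
            break
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 200))
y = np.array([0] * 30 + [1] * 30)
X[y == 1, :5] += 1.5                 # make the first five genes informative
print(ks_cfs_select(X, y))           # should recover genes 0-4
```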

  5. Multi-Objective Particle Swarm Optimization Approach for Cost-Based Feature Selection in Classification.

    Science.gov (United States)

    Zhang, Yong; Gong, Dun-Wei; Cheng, Jian

    2017-01-01

    Feature selection is an important data-preprocessing technique in classification problems such as bioinformatics and signal processing. Generally, there are some situations where a user is interested in not only maximizing the classification performance but also minimizing the cost that may be associated with features. This kind of problem is called cost-based feature selection. However, most existing feature selection approaches treat this task as a single-objective optimization problem. This paper presents the first study of multi-objective particle swarm optimization (PSO) for cost-based feature selection problems. The task of this paper is to generate a Pareto front of nondominated solutions, that is, feature subsets, to meet different requirements of decision-makers in real-world applications. In order to enhance the search capability of the proposed algorithm, a probability-based encoding technology and an effective hybrid operator, together with the ideas of the crowding distance, the external archive, and the Pareto domination relationship, are applied to PSO. The proposed PSO-based multi-objective feature selection algorithm is compared with several multi-objective feature selection algorithms on five benchmark datasets. Experimental results show that the proposed algorithm can automatically evolve a set of nondominated solutions, and it is a highly competitive feature selection method for solving cost-based feature selection problems.

  6. Potential effect of physical activity based menu labels on the calorie content of selected fast food meals.

    Science.gov (United States)

    Dowray, Sunaina; Swartz, Jonas J; Braxton, Danielle; Viera, Anthony J

    2013-03-01

    In this study we examined the effect of physical activity based labels on the calorie content of meals selected from a sample fast food menu. Using a web-based survey, participants were randomly assigned to one of four menus which differed only in their labeling schemes (n=802): (1) a menu with no nutritional information, (2) a menu with calorie information, (3) a menu with calorie information and minutes to walk to burn those calories, or (4) a menu with calorie information and miles to walk to burn those calories. There was a significant difference in the mean number of calories ordered based on menu type (p=0.02), with an average of 1020 calories ordered from a menu with no nutritional information, 927 calories ordered from a menu with only calorie information, 916 calories ordered from a menu with both calorie information and minutes to walk to burn those calories, and 826 calories ordered from the menu with calorie information and the number of miles to walk to burn those calories. The menu with calories and the number of miles to walk to burn those calories appeared the most effective in influencing the selection of lower calorie meals (p=0.0007) when compared to the menu with no nutritional information provided. The majority of participants (82%) reported a preference for physical activity based menu labels over labels with calorie information alone and no nutritional information. Whether these labels are effective in real-life scenarios remains to be tested.

  7. ARTIFICIAL NEURAL NETWORKS BASED GEARS MATERIAL SELECTION HYBRID INTELLIGENT SYSTEM

    Institute of Scientific and Technical Information of China (English)

    X.C. Li; W.X. Zhu; G. Chen; D.S. Mei; J. Zhang; K.M. Chen

    2003-01-01

    An artificial neural network (ANN)-based gear material selection hybrid intelligent system is established by analyzing the individual advantages and weaknesses of expert systems (ES) and ANNs and their applications in material selection. The system mainly consists of two parts: ES and ANNs. By being trained with many data samples, the back-propagation (BP) ANN acquires the knowledge of gear material selection and is able to make inferences according to user input. The system realizes the complementarity of ANNs and ES. Using this system, engineers without material selection experience can conveniently deal with gear material selection.

  8. BWIP-RANDOM-SAMPLING, Random Sample Generation for Nuclear Waste Disposal

    International Nuclear Information System (INIS)

    Sagar, B.

    1989-01-01

    1 - Description of program or function: Random samples for different distribution types are generated. Distribution types as required for performance assessment modeling of geologic nuclear waste disposal are provided. These are: - Uniform, - Log-uniform (base 10 or natural), - Normal, - Lognormal (base 10 or natural), - Exponential, - Bernoulli, - User defined continuous distribution. 2 - Method of solution: A linear congruential generator is used for uniform random numbers. A set of functions is used to transform the uniform distribution to the other distributions. Stratified, rather than random, sampling can be chosen. Truncated limits can be specified on many distributions, whose usual definition has an infinite support. 3 - Restrictions on the complexity of the problem: Generation of correlated random variables is not included
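
    The two building blocks named above are a linear congruential generator for uniforms and transforms to the other distributions. A minimal sketch (the LCG constants are the classic Numerical Recipes values, an assumption rather than the program's actual choice):

```python
import math

M, A, C = 2**32, 1664525, 1013904223  # Numerical Recipes LCG constants (assumed)

def lcg(seed):
    """Yield uniform variates on [0, 1) from a linear congruential generator."""
    while True:
        seed = (A * seed + C) % M
        yield seed / M

u = lcg(42)
uniform     = next(u)
exponential = -math.log(1.0 - next(u))  # inverse transform, rate 1
log_uniform = 10 ** (3 * next(u))       # log-uniform (base 10) on [1, 1000]
bernoulli   = int(next(u) < 0.3)        # Bernoulli with p = 0.3
print(uniform, exponential, log_uniform, bernoulli)
```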

  9. Calculating radiotherapy margins based on Bayesian modelling of patient specific random errors

    International Nuclear Information System (INIS)

    Herschtal, A; Te Marvelde, L; Mengersen, K; Foroudi, F; Ball, D; Devereux, T; Pham, D; Greer, P B; Pichler, P; Eade, T; Kneebone, A; Bell, L; Caine, H; Hindson, B; Kron, T; Hosseinifard, Z

    2015-01-01

    Collected real-life clinical target volume (CTV) displacement data show that some patients undergoing external beam radiotherapy (EBRT) demonstrate significantly more fraction-to-fraction variability in their displacement (‘random error’) than others. This contrasts with the common assumption made by historical recipes for margin estimation for EBRT, that the random error is constant across patients. In this work we present statistical models of CTV displacements in which random errors are characterised by an inverse gamma (IG) distribution in order to assess the impact of random error variability on CTV-to-PTV margin widths, for eight real world patient cohorts from four institutions, and for different sites of malignancy. We considered a variety of clinical treatment requirements and penumbral widths. The eight cohorts consisted of a total of 874 patients and 27 391 treatment sessions. Compared to a traditional margin recipe that assumes constant random errors across patients, for a typical 4 mm penumbral width, the IG-based margin model mandates that in order to satisfy the common clinical requirement that 90% of patients receive at least 95% of prescribed RT dose to the entire CTV, margins be increased by a median of 10% (range over the eight cohorts −19% to +35%). This substantially reduces the proportion of patients for whom margins are too small to satisfy clinical requirements. (paper)

  10. Post-stratification based on a choice of a randomization device

    Directory of Open Access Journals (Sweden)

    Sarjinder Singh

    2014-06-01

    In this paper, we use the idea of post-stratification based on the respondents’ choice of a particular randomization device in order to estimate the population proportion of a sensitive characteristic. The proposed idea gives full freedom to the respondents and is expected to result in greater cooperation from them as well as to provide some increase in the relative efficiency of the newly proposed estimator.

  11. There is room for selection in a small local pig breed when using optimum contribution selection: A simulation study

    DEFF Research Database (Denmark)

    Gourdine, Jean-Luc; Sørensen, Anders Christian; Rydhmer, Lotta

    2012-01-01

    at random. With optimum contribution selection, genetic progress can be achieved that is almost as great as that with truncation selection based on BLUP breeding values (0.2 to 0.5 vs. 0.3 to 0.5 genetic SD, P

  12. Varying levels of difficulty index of skills-test items randomly selected by examinees on the Korean emergency medical technician licensing examination.

    Science.gov (United States)

    Koh, Bongyeun; Hong, Sunggi; Kim, Soon-Sim; Hyun, Jin-Sook; Baek, Milye; Moon, Jundong; Kwon, Hayran; Kim, Gyoungyong; Min, Seonggi; Kang, Gu-Hyun

    2016-01-01

    The goal of this study was to characterize the difficulty index of the items in the skills test components of the class I and II Korean emergency medical technician licensing examination (KEMTLE), which requires examinees to select items randomly. The results of 1,309 class I KEMTLE examinations and 1,801 class II KEMTLE examinations in 2013 were subjected to analysis. Items from the basic and advanced skills test sections of the KEMTLE were compared to determine whether some were significantly more difficult than others. In the class I KEMTLE, all 4 of the items on the basic skills test showed significant variation in difficulty index (P<0.01), as well as 4 of the 5 items on the advanced skills test (P<0.05). In the class II KEMTLE, 4 of the 5 items on the basic skills test showed significantly different difficulty index (P<0.01), as well as all 3 of the advanced skills test items (P<0.01). In the skills test components of the class I and II KEMTLE, the procedure in which examinees randomly select questions should be revised to require examinees to respond to a set of fixed items in order to improve the reliability of the national licensing examination.

  13. Varying levels of difficulty index of skills-test items randomly selected by examinees on the Korean emergency medical technician licensing examination

    Directory of Open Access Journals (Sweden)

    Bongyeun Koh

    2016-01-01

    Purpose: The goal of this study was to characterize the difficulty index of the items in the skills test components of the class I and II Korean emergency medical technician licensing examination (KEMTLE), which requires examinees to select items randomly. Methods: The results of 1,309 class I KEMTLE examinations and 1,801 class II KEMTLE examinations in 2013 were subjected to analysis. Items from the basic and advanced skills test sections of the KEMTLE were compared to determine whether some were significantly more difficult than others. Results: In the class I KEMTLE, all 4 of the items on the basic skills test showed significant variation in difficulty index (P<0.01), as well as 4 of the 5 items on the advanced skills test (P<0.05). In the class II KEMTLE, 4 of the 5 items on the basic skills test showed significantly different difficulty index (P<0.01), as well as all 3 of the advanced skills test items (P<0.01). Conclusion: In the skills test components of the class I and II KEMTLE, the procedure in which examinees randomly select questions should be revised to require examinees to respond to a set of fixed items in order to improve the reliability of the national licensing examination.

  14. Evaporation rate-based selection of supramolecular chirality.

    Science.gov (United States)

    Hattori, Shingo; Vandendriessche, Stefaan; Koeckelberghs, Guy; Verbiest, Thierry; Ishii, Kazuyuki

    2017-03-09

    We demonstrate the evaporation rate-based selection of supramolecular chirality for the first time. P-type aggregates prepared by fast evaporation and M-type aggregates prepared by slow evaporation are, respectively, kinetic and thermodynamic products under dynamic reaction conditions. These findings provide a novel solution reaction chemistry under dynamic reaction conditions.

  15. Random Access for Machine-Type Communication based on Bloom Filtering

    DEFF Research Database (Denmark)

    Pratas, Nuno; Stefanovic, Cedomir; Madueño, Germán Corrales

    2016-01-01

    We present a random access method inspired by Bloom filters that is suited for Machine-Type Communications (MTC). Each accessing device sends a signature during the contention process. A signature is constructed using the Bloom filtering method and contains information on the device identity and the connection establishment cause. We instantiate the proposed method over the current LTE-A access protocol. However, the method is applicable to a more general class of random access protocols that use preambles or other reservation sequences, as expected to be the case in 5G systems. We show that our method utilizes the system resources more efficiently and achieves similar or lower latency of connection establishment in the case of synchronous arrivals, compared to the variant of the LTE-A access protocol that is optimized for MTC traffic. A dividend of the proposed method is that it allows the base station (BS)...
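
    The signature construction lends itself to a short sketch: each device hashes its identity and establishment cause into K positions of an M-slot frame, and the base station tests candidate (identity, cause) pairs against the observed superposition of slots. M, K, and the hashing scheme below are illustrative assumptions:

```python
import hashlib

M, K = 64, 3  # frame slots and number of hash functions (assumed)

def signature(device_id, cause):
    """Bloom-filter signature: K slot indices derived from identity and cause."""
    key = f"{device_id}:{cause}".encode()
    return {int.from_bytes(hashlib.sha256(key + bytes([i])).digest()[:4], "big") % M
            for i in range(K)}

# The base station observes the union (superposition) of all sent signatures.
active = signature("dev17", "data") | signature("dev42", "alarm")

for dev in ("dev17", "dev23", "dev42"):
    for cause in ("data", "alarm"):
        if signature(dev, cause) <= active:   # all K slots occupied?
            print(dev, cause, "-> detected (false positives are possible)")
```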

  16. Multi-parameter sensor based on random fiber lasers

    Directory of Open Access Journals (Sweden)

    Yanping Xu

    2016-09-01

    We demonstrate a concept of utilizing random fiber lasers to achieve multi-parameter sensing. The proposed random fiber ring laser consists of an erbium-doped fiber as the gain medium and a random fiber grating as the feedback. The random feedback is effectively realized by a large number of reflections from around 50,000 femtosecond-laser-induced refractive index modulation regions over a 10 cm standard single-mode fiber. Numerous polarization-dependent spectral filters are formed and superimposed to provide multiple lasing lines with a high signal-to-noise ratio of up to 40 dB, which provides access to a high-fidelity multi-parameter sensing scheme. The number of sensing parameters can be controlled by the number of lasing lines via input polarizations, and the wavelength shifts of each peak can be exploited for simultaneous multi-parameter sensing with one sensing probe. In addition, the random grating induced coupling between core and cladding modes can potentially be used for liquid medical sample sensing in medical diagnostics, biology, and remote sensing in hostile environments.

  17. Simulation-based camera navigation training in laparoscopy-a randomized trial

    DEFF Research Database (Denmark)

    Nilsson, Cecilia; Sørensen, Jette Led; Konge, Lars

    2017-01-01

    patient safety. The objectives of this trial were to examine how to train laparoscopic camera navigation and to explore the transfer of skills to the operating room. MATERIALS AND METHODS: A randomized, single-center superiority trial with three groups: The first group practiced simulation-based camera navigation tasks (camera group), the second group practiced performing a simulation-based cholecystectomy (procedure group), and the third group received no training (control group). Participants were surgical novices without prior laparoscopic experience. The primary outcome was assessment of camera navigation … (P = .033), had a higher score. CONCLUSIONS: Simulation-based training improves the technical skills required for camera navigation, regardless of practicing camera navigation or the procedure itself. Transfer to the clinical setting could, however, not be demonstrated. The control group demonstrated higher…

  18. Rank-based model selection for multiple ions quantum tomography

    International Nuclear Information System (INIS)

    Guţă, Mădălin; Kypraios, Theodore; Dryden, Ian

    2012-01-01

    The statistical analysis of measurement data has become a key component of many quantum engineering experiments. As standard full state tomography becomes unfeasible for large dimensional quantum systems, one needs to exploit prior information and the ‘sparsity’ properties of the experimental state in order to reduce the dimensionality of the estimation problem. In this paper we propose model selection as a general principle for finding the simplest, or most parsimonious, explanation of the data, by fitting different models and choosing the estimator with the best trade-off between likelihood fit and model complexity. We apply two well established model selection methods—the Akaike information criterion (AIC) and the Bayesian information criterion (BIC)—to models consisting of states of fixed rank and to datasets such as are currently produced in multiple-ion experiments. We test the performance of AIC and BIC on randomly chosen low rank states of four ions, and study the dependence of the selected rank on the number of measurement repetitions for one-ion states. We then apply the methods to real data from a four-ion experiment aimed at creating a Smolin state of rank 4. By applying the two methods together with the Pearson χ2 test we conclude that the data can be suitably described with a model whose rank is between 7 and 9. Additionally we find that the mean square error of the maximum likelihood estimator for pure states is close to that of the optimal over all possible measurements. (paper)
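
    Both criteria trade likelihood fit against a parameter-count penalty; for a rank-r density matrix in dimension d, the number of free real parameters is r(2d − r) − 1. A minimal sketch with hypothetical log-likelihoods for four ions (d = 16):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return k * math.log(n) - 2 * log_lik

d, n = 16, 10_000                    # Hilbert-space dimension, measurement count
for r, log_lik in [(4, -12450.0), (8, -12410.0)]:  # hypothetical fitted models
    k = r * (2 * d - r) - 1          # free parameters of a rank-r state
    print(r, aic(log_lik, k), bic(log_lik, k, n))  # lower value = preferred model
```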

  19. Random Forest-Based Recognition of Isolated Sign Language Subwords Using Data from Accelerometers and Surface Electromyographic Sensors.

    Science.gov (United States)

    Su, Ruiliang; Chen, Xiang; Cao, Shuai; Zhang, Xu

    2016-01-14

    Sign language recognition (SLR) has been widely used for communication amongst the hearing-impaired and non-verbal community. This paper proposes an accurate and robust SLR framework using an improved decision tree as the base classifier of random forests. This framework was used to recognize Chinese sign language subwords using recordings from a pair of portable devices worn on both arms consisting of accelerometers (ACC) and surface electromyography (sEMG) sensors. The experimental results demonstrated the validity of the proposed random forest-based method for recognition of Chinese sign language (CSL) subwords. With the proposed method, 98.25% average accuracy was obtained for the classification of a list of 121 frequently used CSL subwords. Moreover, the random forests method demonstrated a superior performance in resisting the impact of bad training samples. When the proportion of bad samples in the training set reached 50%, the recognition error rate of the random forest-based method was only 10.67%, while that of a single decision tree adopted in our previous work was almost 27.5%. Our study offers a practical way of realizing a robust and wearable EMG-ACC-based SLR system.

  20. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    OpenAIRE

    Qin, Xiwen; Li, Qiaoling; Dong, Xiaogang; Lv, Siqi

    2017-01-01

    Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet meth...

  1. Self-Powered Random Number Generator Based on Coupled Triboelectric and Electrostatic Induction Effects at the Liquid-Dielectric Interface.

    Science.gov (United States)

    Yu, Aifang; Chen, Xiangyu; Cui, Haotian; Chen, Libo; Luo, Jianjun; Tang, Wei; Peng, Mingzeng; Zhang, Yang; Zhai, Junyi; Wang, Zhong Lin

    2016-12-27

    Modern cryptography increasingly employs random numbers generated from physical sources in lieu of conventional software-based pseudorandom numbers, primarily owing to the great demand for unpredictable, indecipherable cryptographic keys from true random numbers for information security. Thus far, the sole demonstrations of true random numbers have been generated through thermal noise and/or quantum effects, which require expensive and complex equipment. In this paper, we demonstrate a method for self-powered creation of true random numbers by using triboelectric technology to collect random signals from nature. This random number generator based on coupled triboelectric and electrostatic induction effects at the liquid-dielectric interface includes an elaborately designed triboelectric generator (TENG) with an irregular grating structure, an electronic-optical device, and an optical-electronic device. The random characteristics of raindrops are harvested through the TENG and subsequently transformed and converted by the electronic-optical device and the optical-electronic device with a nonlinear characteristic. The cooperation of the mechanical, electrical, and optical signals ensures that the generator possesses complex nonlinear input-output behavior and contributes to increased randomness. The random number sequences are deduced from the final electrical signals received by the optical-electronic device using a familiar algorithm. The obtained random number sequences exhibit good statistical characteristics, unpredictability, and unrepeatability. Our study supplies a simple, practical, and effective method to generate true random numbers, which can be widely used in cryptographic protocols, digital signatures, authentication, identification, and other information security fields.

  2. Selecting the optimum plot size for a California design-based stream and wetland mapping program.

    Science.gov (United States)

    Lackey, Leila G; Stein, Eric D

    2014-04-01

    Accurate estimates of the extent and distribution of wetlands and streams are the foundation of wetland monitoring, management, restoration, and regulatory programs. Traditionally, these estimates have relied on comprehensive mapping. However, this approach is prohibitively resource-intensive over large areas, making it both impractical and statistically unreliable. Probabilistic (design-based) approaches to evaluating status and trends provide a more cost-effective alternative because, compared with comprehensive mapping, overall extent is inferred from mapping a statistically representative, randomly selected subset of the target area. In this type of design, the size of sample plots has a significant impact on program costs and on statistical precision and accuracy; however, no consensus exists on the appropriate plot size for remote monitoring of stream and wetland extent. This study utilized simulated sampling to assess the performance of four plot sizes (1, 4, 9, and 16 km²) for three geographic regions of California. Simulation results showed smaller plot sizes (1 and 4 km²) were most efficient for achieving desired levels of statistical accuracy and precision. However, larger plot sizes were more likely to contain rare and spatially limited wetland subtypes. Balancing these considerations led to selection of 4 km² for the California status and trends program.

  3. ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    C. Li

    2012-07-01

    Full Text Available POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, but it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute INS for MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and how to set initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for vanishing point coordinates and their error distributions are shown and analyzed.
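
    One way to realize the Monte Carlo step, assuming square pixels and a known principal point, uses the fact that for three mutually orthogonal directions the principal point is the orthocenter of the vanishing point triangle; VZ then follows from two linear altitude constraints. The nominal points and error ellipses below are hypothetical stand-ins.

        import numpy as np

        rng = np.random.default_rng(1)

        def third_vp(vx, vy, pp):
            """VZ such that pp is the orthocenter of triangle (VX, VY, VZ):
            (pp - vx).(vy - vz) = 0 and (pp - vy).(vx - vz) = 0."""
            a, b = pp - vx, pp - vy
            return np.linalg.solve(np.array([a, b]), np.array([a @ vy, b @ vx]))

        pp = np.array([0.0, 0.0])              # assumed known principal point (pixels)
        vx0 = np.array([2500.0, 100.0])        # nominal vanishing points (hypothetical)
        vy0 = np.array([-1800.0, 250.0])
        cov = np.diag([30.0 ** 2, 20.0 ** 2])  # stand-in error ellipses for VX and VY

        vz = np.array([third_vp(rng.multivariate_normal(vx0, cov),
                                rng.multivariate_normal(vy0, cov), pp)
                       for _ in range(20000)])
        print("VZ mean:", vz.mean(axis=0))
        print("VZ covariance:\n", np.cov(vz.T))  # empirical error distribution of VZ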

  4. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    Science.gov (United States)

    Li, C.

    2012-07-01

    POS, integrated by GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have system error, but it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute INS for MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY), and how to set initial weights for the adjustment solution of single-image vanishing points is presented. Vanishing points are solved and their error distributions estimated based on an iteration method with variable weights, the co-factor matrix, and error ellipse theory. Thirdly, under the condition of known error ellipses of the two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for vanishing point coordinates and their error distributions are shown and analyzed.

  5. Performance analysis of a threshold-based parallel multiple beam selection scheme for WDM-based systems for Gamma-Gamma distributions

    KAUST Repository

    Nam, Sung Sik

    2017-03-02

    In this paper, we statistically analyze the performance of a threshold-based parallel multiple beam selection scheme (TPMBS) for free-space optical (FSO) based systems with wavelength division multiplexing (WDM), in cases where a pointing error has occurred, for practical consideration over independent identically distributed (i.i.d.) Gamma-Gamma fading conditions. Specifically, we statistically analyze the operating characteristics under the conventional heterodyne detection (HD) scheme for both the adaptive modulation (AM) case and the non-AM case (i.e., coherent/non-coherent binary modulation). Then, based on the statistically derived results, we evaluate the outage probability (CDF) of a selected beam, the average spectral efficiency (ASE), the average number of selected beams (ANSB), and the average bit error rate (BER). Selected results show that we can obtain higher spectral efficiency while simultaneously reducing the potential increase in implementation complexity caused by applying the selection-based beam selection scheme, without considerable performance loss.

  6. Detailed RIF decomposition with selection : the gender pay gap in Italy

    OpenAIRE

    Töpfer, Marina

    2017-01-01

    In this paper, we estimate the gender pay gap along the wage distribution using a detailed decomposition approach based on unconditional quantile regressions. Non-randomness of the sample leads to biased and inconsistent estimates of the wage equation as well as of the components of the wage gap. Therefore, the method is extended to account for sample selection problems. The decomposition is conducted by using Italian microdata. Accounting for labor market selection may be particularly rele...

  7. Day-roost tree selection by northern long-eared bats - What do non-roost tree comparisons and one year of data really tell us?

    Science.gov (United States)

    Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.

    2015-01-01

    Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.

  8. Day-roost tree selection by northern long-eared bats—What do non-roost tree comparisons and one year of data really tell us?

    Science.gov (United States)

    Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.

    2015-01-01

    Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.

  9. A Unified 3D Mesh Segmentation Framework Based on Markov Random Field

    OpenAIRE

    Z.F. Shi; L.Y. Lu; D. Le; X.M. Niu

    2012-01-01

    3D mesh segmentation has become an important research field in computer graphics during the past decades. Many geometry-based and semantics-oriented approaches for 3D mesh segmentation have been presented. In this paper, we present a definition of mesh segmentation as a labeling problem. Inspired by Markov Random Field (MRF) based image segmentation, we propose a new framework for 3D mesh segmentation based on MRF and use graph cuts to solve it. Any features of a 3D mesh can be integra...

  10. Comparison of address-based sampling and random-digit dialing methods for recruiting young men as controls in a case-control study of testicular cancer susceptibility.

    Science.gov (United States)

    Clagett, Bartholt; Nathanson, Katherine L; Ciosek, Stephanie L; McDermoth, Monique; Vaughn, David J; Mitra, Nandita; Weiss, Andrew; Martonik, Rachel; Kanetsky, Peter A

    2013-12-01

    Random-digit dialing (RDD) using landline telephone numbers is the historical gold standard for control recruitment in population-based epidemiologic research. However, increasing cell-phone usage and diminishing response rates suggest that the effectiveness of RDD in recruiting a random sample of the general population, particularly for younger target populations, is decreasing. In this study, we compared landline RDD with alternative methods of control recruitment, including RDD using cell-phone numbers and address-based sampling (ABS), to recruit primarily white men aged 18-55 years into a study of testicular cancer susceptibility conducted in the Philadelphia, Pennsylvania, metropolitan area between 2009 and 2012. With few exceptions, eligible and enrolled controls recruited by means of RDD and ABS were similar with regard to characteristics for which data were collected on the screening survey. While we find ABS to be a comparably effective method of recruiting young males compared with landline RDD, we acknowledge the potential impact that selection bias may have had on our results because of poor overall response rates, which ranged from 11.4% for landline RDD to 1.7% for ABS.

  11. Random survival forests for competing risks

    DEFF Research Database (Denmark)

    Ishwaran, Hemant; Gerds, Thomas A; Kogalur, Udaya B

    2014-01-01

    We introduce a new approach to competing risks using random forests. Our method is fully non-parametric and can be used for selecting event-specific variables and for estimating the cumulative incidence function. We show that the method is highly effective for both prediction and variable selection...

  12. Comparative studies of praseodymium(III) selective sensors based on newly synthesized Schiff's bases

    International Nuclear Information System (INIS)

    Gupta, Vinod K.; Goyal, Rajendra N.; Pal, Manoj K.; Sharma, Ram A.

    2009-01-01

    Praseodymium ion selective polyvinyl chloride (PVC) membrane sensors, based on two new Schiff's bases, 1,3-diphenylpropane-1,3-diylidenebis(azan-1-ylidene)diphenol (M1) and N,N'-bis(pyridoxylideneiminato)ethylene (M2), have been developed and studied. The sensor having a membrane composition of PVC:o-NPOE:ionophore (M1):NaTPB (w/w; mg) of 150:300:8:5 showed the best performance in comparison to M2-based membranes. The sensor based on M1 exhibits a working concentration range of 1.0 × 10⁻⁸ to 1.0 × 10⁻² M with a detection limit of 5.0 × 10⁻⁹ M and a Nernstian slope of 20.0 ± 0.3 mV decade⁻¹ of activity. It exhibited a quick response time of <8 s, and its potential responses were pH independent across the range of 3.5-8.5. The influence of the membrane composition and of possible interfering ions on the response properties of the electrode has also been investigated. The sensor has been found to work satisfactorily in partially non-aqueous media up to 15% (v/v) content of methanol, ethanol or acetonitrile and could be used for a period of 3 months. The selectivity coefficients determined by using the fixed interference method (FIM) indicate high selectivity for praseodymium(III) ions over a wide variety of other cations. To assess its analytical applicability, the prepared sensor was successfully applied for the determination of praseodymium(III) in spiked water samples.

  13. Using Random Numbers in Science Research Activities.

    Science.gov (United States)

    Schlenker, Richard M.; And Others

    1996-01-01

    Discusses the importance of science process skills and describes ways to select sets of random numbers for selection of subjects for a research study in an unbiased manner. Presents an activity appropriate for grades 5-12. (JRH)

  14. Physically transient photonics: random versus distributed feedback lasing based on nanoimprinted DNA.

    Science.gov (United States)

    Camposeo, Andrea; Del Carro, Pompilio; Persano, Luana; Cyprych, Konrad; Szukalski, Adam; Sznitko, Lech; Mysliwiec, Jaroslaw; Pisignano, Dario

    2014-10-28

    Room-temperature nanoimprinted, DNA-based distributed feedback (DFB) laser operation at 605 nm is reported. The laser is made of a pure DNA host matrix doped with gain dyes. At high excitation densities, the emission of the untextured dye-doped DNA films is characterized by a broad emission peak with an overall line width of 12 nm and superimposed narrow peaks, characteristic of random lasing. Moreover, direct patterning of the DNA films is demonstrated with a resolution down to 100 nm, enabling the realization of both surface-emitting and edge-emitting DFB lasers with a typical line width of <0.3 nm. The resulting emission is polarized, with a ratio between the TE- and TM-polarized intensities exceeding 30. In addition, the nanopatterned devices dissolve in water within less than 2 min. These results demonstrate the possibility of realizing various physically transient nanophotonics and laser architectures, including random lasing and nanoimprinted devices, based on natural biopolymers.

  15. Characterization of geometrical random uncertainty distribution for a group of patients in radiotherapy

    International Nuclear Information System (INIS)

    Munoz Montplet, C.; Jurado Bruggeman, D.

    2010-01-01

    Geometrical random uncertainty in radiotherapy is usually characterized by a single value for each group of patients. We propose a novel approach based on a statistically accurate characterization of the uncertainty distribution, thus reducing the risk of obtaining potentially unsafe results in CTV-PTV margins or in the selection of correction protocols.

  16. Random numbers from vacuum fluctuations

    International Nuclear Information System (INIS)

    Shi, Yicheng; Kurtsiefer, Christian; Chng, Brenda

    2016-01-01

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.
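
    A toy software analogue of such an LFSR-based extractor (not the paper's hardware design; the 16-bit mask, seed and biased source are illustrative) folds each raw bit into the feedback of a Galois LFSR and emits the register's low bit:

        import random

        def lfsr_whiten(raw_bits, mask=0xB400, seed=0xACE1):
            """Toy 16-bit Galois LFSR whitener: each raw bit perturbs the
            feedback decision; output is the register's low bit."""
            state = seed
            out = []
            for rb in raw_bits:
                lsb = state & 1
                state >>= 1
                if lsb ^ rb:          # fold the physical bit into the feedback
                    state ^= mask
                out.append(lsb)
            return out

        biased = [1 if random.random() < 0.6 else 0 for _ in range(100000)]
        white = lfsr_whiten(biased)
        print(sum(white) / len(white))   # close to 0.5 after whitening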

  17. Random numbers from vacuum fluctuations

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Yicheng; Kurtsiefer, Christian, E-mail: christian.kurtsiefer@gmail.com [Department of Physics, National University of Singapore, 2 Science Drive 3, Singapore 117542 (Singapore); Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543 (Singapore); Chng, Brenda [Center for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore 117543 (Singapore)

    2016-07-25

    We implement a quantum random number generator based on a balanced homodyne measurement of vacuum fluctuations of the electromagnetic field. The digitized signal is directly processed with a fast randomness extraction scheme based on a linear feedback shift register. The random bit stream is continuously read in a computer at a rate of about 480 Mbit/s and passes an extended test suite for random numbers.

  18. FiGS: a filter-based gene selection workbench for microarray data

    Directory of Open Access Journals (Sweden)

    Yun Taegyun

    2010-01-01

    Full Text Available Abstract Background The selection of genes that discriminate disease classes from microarray data is widely used for the identification of diagnostic biomarkers. Although various gene selection methods are currently available and some of them have shown excellent performance, no single method can retain the best performance for all types of microarray datasets. It is desirable to use a comparative approach to find the best gene selection result after rigorous testing of different methodological strategies for a given microarray dataset. Results FiGS is a web-based workbench that automatically compares various gene selection procedures and provides the optimal gene selection result for an input microarray dataset. FiGS builds up diverse gene selection procedures by aligning different feature selection techniques and classifiers. In addition to the highly reputed techniques, FiGS diversifies the gene selection procedures by incorporating gene clustering options in the feature selection step and different data pre-processing options in the classifier training step. All candidate gene selection procedures are evaluated by the .632+ bootstrap errors and listed with their classification accuracies and selected gene sets. FiGS runs on parallelized computing nodes that support heavy computations. FiGS is freely accessible at http://gexp.kaist.ac.kr/figs. Conclusion FiGS is a web-based application that automates an extensive search for the optimal gene selection analysis for a microarray dataset in a parallel computing environment. FiGS will provide both an efficient and comprehensive means of acquiring optimal gene sets that discriminate disease states from microarray datasets.
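
    As a small stand-in for the kind of procedure comparison FiGS automates (plain 5-fold cross-validation here instead of the .632+ bootstrap, and synthetic data in place of a real microarray), the sketch below keeps the selection step inside the pipeline so each fold re-selects genes on its own training split:

        from sklearn.datasets import make_classification
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import Pipeline
        from sklearn.svm import SVC

        # Synthetic stand-in for a microarray matrix: 100 samples x 2000 genes.
        X, y = make_classification(n_samples=100, n_features=2000,
                                   n_informative=20, random_state=0)

        # Selection sits inside the pipeline, so every CV fold re-selects genes
        # on its own training split and the comparison stays unbiased.
        for k in (10, 50, 200):
            pipe = Pipeline([("select", SelectKBest(f_classif, k=k)),
                             ("clf", SVC(kernel="linear"))])
            scores = cross_val_score(pipe, X, y, cv=5)
            print(f"top-{k:3d} genes: {scores.mean():.3f} +/- {scores.std():.3f}")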

  19. Different cortical mechanisms for spatial vs. feature-based attentional selection in visual working memory

    Directory of Open Access Journals (Sweden)

    Anna Heuer

    2016-08-01

    Full Text Available The limited capacity of visual working memory necessitates attentional mechanisms that selectively update and maintain only the most task-relevant content. Psychophysical experiments have shown that the retroactive selection of memory content can be based on visual properties such as location or shape, but the neural basis for such differential selection is unknown. For example, it is not known if there are different cortical modules specialized for spatial versus feature-based mnemonic attention, in the same way that has been demonstrated for attention to perceptual input. Here, we used transcranial magnetic stimulation (TMS) to identify areas in human parietal and occipital cortex involved in the selection of objects from memory based on cues to their location (spatial information) or their shape (featural information). We found that TMS over the supramarginal gyrus (SMG) selectively facilitated spatial selection, whereas TMS over the lateral occipital cortex selectively enhanced feature-based selection for remembered objects in the contralateral visual field. Thus, different cortical regions are responsible for spatial vs. feature-based selection of working memory representations. Since the same regions are involved in attention to external events, these new findings indicate overlapping mechanisms for attentional control over perceptual input and mnemonic representations.

  20. DNA based random key generation and management for OTP encryption.

    Science.gov (United States)

    Zhang, Yunpeng; Liu, Xin; Sun, Manhui

    2017-09-01

    One-time pad (OTP) is a principle of key generation applied to the stream ciphering method which offers total privacy. The OTP encryption scheme has proved to be unbreakable in theory, but difficult to realize in practical applications. Because OTP encryption specifically requires the absolute randomness of the key, its development has suffered from dense constraints. DNA cryptography is a new and promising technology in the field of information security. The storage capabilities of DNA chromosomes can be used as one-time pad structures, with pseudo-random number generation and indexing, in order to encrypt plaintext messages. In this paper, we present a feasible solution to the OTP symmetric key generation and transmission problem with DNA at the molecular level. Through recombinant DNA technology, by using only sender-receiver-known restriction enzymes to combine the secure key represented by a DNA sequence and the T vector, we generate the DNA bio-hiding secure key and then place the recombinant plasmid in implanted bacteria for secure key transmission. The designed bio-experiments and simulation results show that the security of the transmission of the key is further improved and the environmental requirements of key transmission are reduced. Analysis has demonstrated that the proposed DNA-based random key generation and management solutions are marked by high security and usability. Published by Elsevier B.V.

  1. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnoses and classification. However, these data are usually redundant and noisy, and only a subset of them present distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to classification. In the proposed algorithm, firstly, the Fisher-Markov selector is used to choose a fixed number of genes. Secondly, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Then, discrete biogeography based optimization, which we call DBBO, is proposed by integrating the discrete migration model and the discrete mutation model. Finally, the DBBO method is used for feature selection, and three classifiers are used with the 10-fold cross-validation method. In order to show the effectiveness and efficiency of the algorithm, the proposed algorithm is tested on four breast cancer dataset benchmarks. In comparison with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. An effective approach to attenuate random noise based on compressive sensing and curvelet transform

    International Nuclear Information System (INIS)

    Liu, Wei; Cao, Siyuan; Zu, Shaohuan; Chen, Yangkang

    2016-01-01

    Random noise attenuation is an important step in seismic data processing. In this paper, we propose a novel denoising approach based on compressive sensing and the curvelet transform. We formulate the random noise attenuation problem as an L1-norm regularized optimization problem. We propose to use the curvelet transform as the sparse transform in the optimization problem to regularize the sparse coefficients in order to separate signal and noise, and to use the gradient projection for sparse reconstruction (GPSR) algorithm to solve the formulated optimization problem, which has an easy implementation and fast convergence. We tested the performance of our proposed approach on both synthetic and field seismic data. Numerical results show that the proposed approach can effectively suppress the distortion near the edges of seismic events during the noise attenuation process and has high computational efficiency compared with the traditional curvelet thresholding and iterative soft thresholding based denoising methods. In addition, compared with f-x deconvolution, the proposed denoising method is capable of eliminating the random noise more effectively while preserving more useful signal. (paper)
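
    A minimal sketch of transform-domain soft thresholding, with an orthonormal DCT standing in for the curvelet frame (for an orthonormal transform the L1 problem has a one-step closed form; GPSR-style iteration is needed for redundant frames such as curvelets). The synthetic trace and threshold are hypothetical:

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(2)

        # Synthetic "seismic trace": two smooth events plus white noise.
        t = np.linspace(0, 1, 512)
        clean = np.sin(2 * np.pi * 12 * t) * np.exp(-((t - 0.3) / 0.05) ** 2) \
              + np.sin(2 * np.pi * 30 * t) * np.exp(-((t - 0.7) / 0.04) ** 2)
        noisy = clean + 0.3 * rng.standard_normal(t.size)

        def soft(x, lam):
            """Soft-threshold operator, the proximal map of the L1 norm."""
            return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

        # For an orthonormal transform the L1-regularized problem is solved by
        # x = T^-1( soft(T y, lambda) ); GPSR handles the redundant-frame case.
        coeffs = dct(noisy, norm="ortho")
        denoised = idct(soft(coeffs, 0.5), norm="ortho")
        print("residual std before/after:",
              np.std(noisy - clean).round(3), np.std(denoised - clean).round(3))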

  3. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available The use of the component-based software engineering (CBSE) approach for the development of software systems has grown tremendously. The selection of the best-suited COTS components that fulfil the necessary requirements for the development of software has become a major challenge for software developers. The complexity of the optimal selection problem increases with an increase in alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selection of Data Base Management System (DBMS) components is modeled as a multi-criteria decision-making problem. A ‘Fuzzy Distance Based Approach’ (FDBA) method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e. ‘Vendor Capabilities’, ‘Business Issues’ and ‘Cost’. The results of this method are compared with those of the Analytic Hierarchy Process (AHP), a typical multi-criteria decision-making approach. The proposed methodology is explained with an illustrative example.

  4. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machine (SVM) has been proved to be a powerful tool for face recognition. The generalization capacity of SVM depends on a model with optimal hyperparameters, and the computational cost of SVM model selection makes its application to face recognition difficult. In order to overcome this shortcoming, we utilize the advantages of uniform design (space-filling designs with good uniform scattering properties) to seek optimal SVM hyperparameters. We then propose a face recognition scheme based on SVM with the optimal model, obtained by replacing grid and gradient-based methods with uniform design. The experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
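
    A rough sketch of uniform-design-style model selection, assuming a small scattered table of (C, gamma) level pairs in place of the full grid; the design table, the levels, and the digits data standing in for a face database are all illustrative:

        import numpy as np
        from sklearn.datasets import load_digits  # stand-in for a face database
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        X, y = load_digits(return_X_y=True)

        # Log-spaced candidate levels for C and gamma.
        C_levels = np.logspace(-1, 3, 6)
        g_levels = np.logspace(-5, -1, 6)

        # A uniform-design-style table visits 6 scattered (C, gamma) cells
        # instead of all 36 grid points (1-indexed level pairs, assumed here).
        ud_pairs = [(1, 2), (2, 4), (3, 6), (4, 1), (5, 3), (6, 5)]

        best = max(
            (cross_val_score(SVC(C=C_levels[i - 1], gamma=g_levels[j - 1]),
                             X, y, cv=3).mean(), C_levels[i - 1], g_levels[j - 1])
            for i, j in ud_pairs)
        print("best CV accuracy %.3f at C=%.2g gamma=%.2g" % best)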

  5. COMPARISON OF EIGENMODE-BASED AND RANDOM FIELD-BASED IMPERFECTION MODELING FOR THE STOCHASTIC BUCKLING ANALYSIS OF I-SECTION BEAM–COLUMNS

    KAUST Repository

    STAVREV, A.

    2013-03-01

    The uncertainty of geometric imperfections in a series of nominally equal I-beams leads to a variability of corresponding buckling loads. Its analysis requires a stochastic imperfection model, which can be derived either by the simple variation of the critical eigenmode with a scalar random variable, or with the help of the more advanced theory of random fields. The present paper first provides a concise review of the two different modeling approaches, covering theoretical background, assumptions and calibration, and illustrates their integration into commercial finite element software to conduct stochastic buckling analyses with the Monte-Carlo method. The stochastic buckling behavior of an example beam is then simulated with both stochastic models, calibrated from corresponding imperfection measurements. The simulation results show that for different load cases, the response statistics of the buckling load obtained with the eigenmode-based and the random field-based models agree very well. A comparison of our simulation results with corresponding Eurocode 3 limit loads indicates that the design standard is very conservative for compression dominated load cases. © 2013 World Scientific Publishing Company.

  6. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward a new material system (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available in the market with a variety of properties suitable... for different engineering applications, and few of those are available in a not yet polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufacturers in the current scenario, and a special mathematical tool would... be beneficial. In this paper, the authors introduce a new decision making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today’s market. An illustrative example—resin selection

  7. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    Science.gov (United States)

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. A kernel generalization of it, namely kernel CCA, has been proposed to describe nonlinear relationships between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional data feature selection problems, it also suffers from the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed, (2) the theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number, and (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results demonstrate the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
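
    The core solver is the Strohmer-Vershynin randomized Kaczmarz iteration, sketched below on a plain consistent linear system (the paper embeds it in kernel CCA, which is not reproduced here):

        import numpy as np

        rng = np.random.default_rng(3)

        def randomized_kaczmarz(A, b, iters=5000):
            """Randomized Kaczmarz for a consistent system Ax = b.
            Rows are sampled with probability proportional to their squared norm."""
            m, n = A.shape
            row_norms = np.sum(A ** 2, axis=1)
            probs = row_norms / row_norms.sum()
            x = np.zeros(n)
            for _ in range(iters):
                i = rng.choice(m, p=probs)
                # Project the current iterate onto the hyperplane a_i . x = b_i.
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]
            return x

        A = rng.standard_normal((200, 20))
        x_true = rng.standard_normal(20)
        x_hat = randomized_kaczmarz(A, A @ x_true)
        print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))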

  8. Robot soccer action selection based on Q learning

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper investigates robot soccer action selection based on Q learning. The robots learn to activate particular behaviors given their current situation and reward signal. We adopt neural networks to implement Q learning, for their generalization properties and limited computer memory requirements.
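
    The underlying update is the standard Q-learning backup, shown here in tabular form on a toy environment (the paper approximates the table with a neural network; the states, actions and rewards below are hypothetical):

        import random

        # Tabular sketch of the Q update the paper realizes with a neural network.
        states = range(5)
        actions = ("shoot", "pass", "dribble", "defend")
        Q = {(s, a): 0.0 for s in states for a in actions}
        alpha, gamma, epsilon = 0.1, 0.9, 0.2

        def step(s, a):
            """Toy environment: random next situation, reward 1 for 'shoot' in state 4."""
            return random.randrange(5), 1.0 if (s == 4 and a == "shoot") else 0.0

        s = 0
        for _ in range(20000):
            a = random.choice(actions) if random.random() < epsilon \
                else max(actions, key=lambda a_: Q[(s, a_)])
            s2, r = step(s, a)
            # Q-learning backup: move Q(s,a) toward r + gamma * max_a' Q(s',a').
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, a2)] for a2 in actions) - Q[(s, a)])
            s = s2

        print(max(actions, key=lambda a_: Q[(4, a_)]))   # should learn 'shoot' in state 4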

  9. Correlates of smoking with socioeconomic status, leisure time physical activity and alcohol consumption among Polish adults from randomly selected regions.

    Science.gov (United States)

    Woitas-Slubowska, Donata; Hurnik, Elzbieta; Skarpańska-Stejnborn, Anna

    2010-12-01

    To determine the association between smoking status and leisure time physical activity (LTPA), alcohol consumption, and socioeconomic status (SES) among Polish adults. 466 randomly selected men and women (aged 18-66 years) responded to an anonymous questionnaire regarding smoking, alcohol consumption, LTPA, and SES. Multiple logistic regression was used to examine the association of smoking status with six socioeconomic measures, level of LTPA, and frequency and type of alcohol consumed. Smokers were defined as individuals smoking occasionally or daily. The odds of being a smoker were 9 times (men) and 27 times (women) higher among respondents who drink alcohol several times a week or every day in comparison to non-drinkers. Among men, the odds of smoking were higher among those with low educational attainment compared to those with high educational attainment (p = 0.007). Among women we observed that students were the most frequent smokers. Female students were almost three times more likely to smoke than non-professional women, and two times more likely than physical workers (p = 0.018). The findings of this study indicated that among randomly selected Polish men and women aged 18-66, smoking and alcohol consumption tended to cluster. These results imply that intervention strategies need to target multiple risk factors simultaneously. The highest risk of smoking was observed among low educated men, female students, and both men and women drinking alcohol several times a week or every day. Information on subgroups with a high risk of smoking will help in planning future preventive strategies.

  10. Space-based visual attention: a marker of immature selective attention in toddlers?

    Science.gov (United States)

    Rivière, James; Brisson, Julie

    2014-11-01

    Various studies suggested that attentional difficulties cause toddlers' failure in some spatial search tasks. However, attention is not a unitary construct and this study investigated two attentional mechanisms: location selection (space-based attention) and object selection (object-based attention). We investigated how toddlers' attention is distributed in the visual field during a manual search task for objects moving out of sight, namely the moving boxes task. Results show that 2.5-year-olds who failed this task allocated more attention to the location of the relevant object than to the object itself. These findings suggest that in some manual search tasks the primacy of space-based attention over object-based attention could be a marker of immature selective attention in toddlers. © 2014 Wiley Periodicals, Inc.

  11. Pseudo random number generator based on quantum chaotic map

    Science.gov (United States)

    Akhshani, A.; Akhavan, A.; Mobaraki, A.; Lim, S.-C.; Hassan, Z.

    2014-01-01

    For many years dissipative quantum maps have been widely used as informative models of quantum chaos. In this paper, a new scheme for a pseudo-random number generator (PRNG) based on the quantum logistic map is proposed. Note that the PRNG relies merely on the equations used in the quantum chaotic map. The algorithm is not complex, imposes no high requirements on computer hardware, and thus computation is fast. In order to face the challenge of using the proposed PRNG in quantum cryptography and other practical applications, the proposed PRNG is subjected to statistical tests using well-known test suites such as NIST, DIEHARD, ENT and TestU01. The results of the statistical tests were promising, as the proposed PRNG successfully passed all these tests. Moreover, the degree of non-periodicity of the chaotic sequences of the quantum map is investigated through the scale index technique. The obtained result shows that the sequence is highly non-periodic. From these results it can be concluded that the new scheme can generate a high percentage of usable pseudo-random numbers for simulation and other applications in scientific computing.
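
    For illustration only, a classical logistic map bit generator conveys the flavor of chaos-based PRNGs; the paper's quantum logistic map adds dissipation terms that are not reproduced here, and the seed and parameter below are arbitrary:

        def logistic_bits(n, x=0.345678, r=3.99):
            """Toy chaotic PRNG: iterate the *classical* logistic map and emit one
            bit per iteration (a simplified stand-in for the quantum map)."""
            out = []
            for _ in range(n):
                x = r * x * (1.0 - x)
                out.append(1 if x > 0.5 else 0)
            return out

        bits = logistic_bits(100000)
        print(sum(bits) / len(bits))   # rough balance check; real use demands NIST/DIEHARD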

  12. Validation and selection of ODE based systems biology models: how to arrive at more reliable decisions.

    Science.gov (United States)

    Hasdemir, Dicle; Hoefsloot, Huub C J; Smilde, Age K

    2015-07-08

    Most ordinary differential equation (ODE) based modeling studies in systems biology involve a hold-out validation step for model validation. In this framework a pre-determined part of the data is used as validation data and therefore is not used for estimating the parameters of the model. The model is assumed to be validated if the model predictions on the validation dataset show good agreement with the data. Model selection between alternative model structures can also be performed in the same setting, based on the predictive power of the model structures on the validation dataset. However, the drawbacks associated with this approach are usually under-estimated. We have carried out simulations by using a recently published High Osmolarity Glycerol (HOG) pathway model from S. cerevisiae to demonstrate these drawbacks. We have shown that it is very important how the data is partitioned and which part of the data is used for validation purposes. The hold-out validation strategy leads to biased conclusions, since it can lead to different validation and selection decisions when different partitioning schemes are used. Furthermore, finding sensible partitioning schemes that would lead to reliable decisions is heavily dependent on the biology and unknown model parameters, which turns the problem into a paradox. This brings the need for alternative validation approaches that offer flexible partitioning of the data. For this purpose, we have introduced a stratified random cross-validation (SRCV) approach that successfully overcomes these limitations. SRCV leads to more stable decisions for both validation and selection which are not biased by underlying biological phenomena. Furthermore, it is less dependent on the specific noise realization in the data. Therefore, it proves to be a promising alternative to the standard hold-out validation strategy.

  13. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition.

    Science.gov (United States)

    Zhang, Jianhai; Chen, Ming; Zhao, Shaokai; Hu, Sanqing; Shi, Zhiguo; Cao, Yu

    2016-09-22

    Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, a support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods, and a comparison with similar strategies, based on the F-score, is given. Strategies that evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels' weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy of 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a contribution to the

  14. ReliefF-Based EEG Sensor Selection Methods for Emotion Recognition

    Directory of Open Access Journals (Sweden)

    Jianhai Zhang

    2016-09-01

    Full Text Available Electroencephalogram (EEG) signals recorded from sensor electrodes on the scalp can directly detect the brain dynamics in response to different emotional states. Emotion recognition from EEG signals has attracted broad attention, partly due to the rapid development of wearable computing and the needs of a more immersive human-computer interface (HCI) environment. To improve the recognition performance, multi-channel EEG signals are usually used. A large set of EEG sensor channels will add to the computational complexity and cause users inconvenience. ReliefF-based channel selection methods were systematically investigated for EEG-based emotion recognition on a database for emotion analysis using physiological signals (DEAP). Three strategies were employed to select the best channels in classifying four emotional states (joy, fear, sadness and relaxation). Furthermore, a support vector machine (SVM) was used as a classifier to validate the performance of the channel selection results. The experimental results showed the effectiveness of our methods, and a comparison with similar strategies, based on the F-score, is given. Strategies that evaluate a channel as a unity gave better performance in channel reduction with an acceptable loss of accuracy. In the third strategy, after adjusting channels’ weights according to their contribution to the classification accuracy, the number of channels was reduced to eight with a slight loss of accuracy (58.51% ± 10.05% versus the best classification accuracy of 59.13% ± 11.00% using 19 channels). In addition, the study of selecting subject-independent channels, related to emotion processing, was also implemented. The sensors, selected subject-independently from frontal, parietal lobes, have been identified to provide more discriminative information associated with emotion processing, and are distributed symmetrically over the scalp, which is consistent with the existing literature. The results will make a

  15. Highly selective gas sensor arrays based on thermally reduced graphene oxide.

    Science.gov (United States)

    Lipatov, Alexey; Varezhnikov, Alexey; Wilson, Peter; Sysoev, Victor; Kolmakov, Andrei; Sinitskii, Alexander

    2013-06-21

    The electrical properties of reduced graphene oxide (rGO) have been previously shown to be very sensitive to surface adsorbates, thus making rGO a very promising platform for highly sensitive gas sensors. However, poor selectivity of rGO-based gas sensors remains a major problem for their practical use. In this paper, we address the selectivity problem by employing an array of rGO-based integrated sensors instead of focusing on the performance of a single sensing element. Each rGO-based device in such an array has a unique sensor response due to the irregular structure of rGO films at different levels of organization, ranging from nanoscale to macroscale. The resulting rGO-based gas sensing system could reliably recognize analytes of nearly the same chemical nature. In our experiments rGO-based sensor arrays demonstrated a high selectivity that was sufficient to discriminate between different alcohols, such as methanol, ethanol and isopropanol, at a 100% success rate. We also discuss a possible sensing mechanism that provides the basis for analyte differentiation.

  16. Holographic memories with encryption-selectable function

    Science.gov (United States)

    Su, Wei-Chia; Lee, Xuan-Hao

    2006-03-01

    Volume holographic storage has received increasing attention owing to its potential high storage capacity and access rate. Meanwhile, encrypted holographic memory using the random phase encoding technique is attractive to the optical community due to the growing demand for protection of information. In this paper, encryption-selectable holographic storage algorithms in LiNbO₃ using angular multiplexing are proposed and demonstrated. Encryption-selectable holographic memory is an advanced concept of secure storage for content protection. It offers more flexibility to optionally encrypt the data or not during the recording processes. In our system design, the function of encryption and non-encryption storage is switched by a random phase pattern and a uniform phase pattern. Based on a 90-degree geometry, the input patterns, including the encryption and non-encryption storage, are stored via angular multiplexing with reference plane waves at different incident angles. An image is optionally encrypted by sliding the ground glass into one of the recording waves or removing it away in each exposure. The ground glass is a key for encryption. Besides, it is also an important key available for the authorized user to decrypt the encrypted information.

  17. Characterizing the Optical Variability of Bright Blazars: Variability-based Selection of Fermi Active Galactic Nuclei

    Science.gov (United States)

    Ruan, John J.; Anderson, Scott F.; MacLeod, Chelsea L.; Becker, Andrew C.; Burnett, T. H.; Davenport, James R. A.; Ivezić, Željko; Kochanek, Christopher S.; Plotkin, Richard M.; Sesar, Branimir; Stuart, J. Scott

    2012-11-01

    We investigate the use of optical photometric variability to select and identify blazars in large-scale time-domain surveys, in part to aid in the identification of blazar counterparts to the ~30% of γ-ray sources in the Fermi 2FGL catalog still lacking reliable associations. Using data from the optical LINEAR asteroid survey, we characterize the optical variability of blazars by fitting a damped random walk model to individual light curves with two main model parameters, the characteristic timescale of variability τ and the driving amplitude on short timescales σ̂. Imposing cuts on minimum τ and σ̂ allows for blazar selection with high efficiency E and completeness C. To test the efficacy of this approach, we apply this method to optically variable LINEAR objects that fall within the several-arcminute error ellipses of γ-ray sources in the Fermi 2FGL catalog. Despite the extreme stellar contamination at the shallow depth of the LINEAR survey, we are able to recover previously associated optical counterparts to Fermi active galactic nuclei with E >= 88% and C = 88% in Fermi 95% confidence error ellipses having small semimajor axis r. After correcting for beaming, we estimate that the characteristic timescale of blazar variability is ~3 years in the rest frame of the jet, in contrast with the ~320 day disk flux timescale observed in quasars. The variability-based selection method presented will be useful for blazar identification in time-domain optical surveys and is also a probe of jet physics.
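
    The damped random walk is an Ornstein-Uhlenbeck process, so a light curve with given τ and σ̂ can be simulated with its exact discretization; the parameter values below are hypothetical:

        import numpy as np

        rng = np.random.default_rng(4)

        def simulate_drw(n_days, tau=200.0, sigma_hat=0.1, mean_mag=18.0, dt=1.0):
            """Exact-discretization damped random walk (OU) light curve.
            tau: characteristic timescale [days]; sigma_hat: short-timescale
            driving amplitude [mag/sqrt(day)], the two fitted parameters."""
            a = np.exp(-dt / tau)
            sd_step = sigma_hat * np.sqrt(tau / 2.0 * (1.0 - a * a))
            mags = np.empty(n_days)
            x = 0.0
            for k in range(n_days):
                x = a * x + sd_step * rng.standard_normal()
                mags[k] = mean_mag + x
            return mags

        lc = simulate_drw(2000)
        print("asymptotic scatter ~", lc.std().round(3),
              "expected", round(0.1 * np.sqrt(200 / 2), 3))  # sigma_hat * sqrt(tau/2)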

  18. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    Science.gov (United States)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of the metal end products. Controlling manufacturing process induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and improve the surface integrity of the manufactured parts by utilizing physics-based process simulations and other computational methods. Two different (both conventional and advanced) manufacturing processes, i.e. machining of Titanium and Nickel-based alloys and selective laser melting of Nickel-based powder alloys, are studied. 3D Finite Element (FE) process simulations are developed, and experimental data that validate these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining induced microstructure, including: i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of

  19. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik

    2017-07-20

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without considerable performance loss. Here, to accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with the heterodyne detection technique over independent and identically distributed Log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency by adopting adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.
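
    A small simulation sketch of one plausible threshold rule (keep every beam above the threshold, else fall back to the single best beam); the parameters and the dB-domain turbulence model are illustrative, not the paper's exact analysis:

        import numpy as np

        rng = np.random.default_rng(5)

        def tmos_select(snrs_db, threshold_db):
            """Keep every beam whose SNR clears the threshold; if none do,
            fall back to the single best beam (one common TMOS-style rule)."""
            above = np.flatnonzero(snrs_db >= threshold_db)
            return above if above.size else np.array([np.argmax(snrs_db)])

        L, trials, thr = 8, 100000, 3.0
        sigma_db = 4.0                 # normal in dB = log-normal in linear scale
        counts = np.empty(trials)
        for k in range(trials):
            snrs = 5.0 + sigma_db * rng.standard_normal(L)   # i.i.d. beams
            counts[k] = tmos_select(snrs, thr).size

        print("average number of selected beams:", counts.mean().round(2))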

  20. A threshold-based multiple optical signal selection scheme for WDM FSO systems

    KAUST Repository

    Nam, Sung Sik; Alouini, Mohamed-Slim; Ko, Young-Chai; Cho, Sung Ho

    2017-01-01

    In this paper, we propose a threshold-based multiple optical signal selection scheme (TMOS) for free-space optical systems based on wavelength division multiplexing. With the proposed TMOS, we can obtain higher spectral efficiency while reducing the potential increase in implementation complexity caused by applying a selection-based beam selection scheme, without considerable performance loss. Here, to accurately characterize the performance of the proposed TMOS, we statistically analyze its characteristics with the heterodyne detection technique over independent and identically distributed Log-normal turbulence conditions, taking into consideration the impact of pointing error. Specifically, we derive exact closed-form expressions for the average bit error rate and the average spectral efficiency by adopting adaptive modulation. Selected results show that the average spectral efficiency can be increased with TMOS while the system requirement is satisfied.

  1. Mapping Deforestation in North Korea Using Phenology-Based Multi-Index and Random Forest

    Directory of Open Access Journals (Sweden)

    Yihua Jin

    2016-12-01

    Full Text Available Phenology-based multi-index with the random forest (RF) algorithm can be used to overcome the shortcomings of traditional deforestation mapping that involves pixel-based classification, such as ISODATA or decision trees, and single images. The purpose of this study was to investigate methods to identify specific types of deforestation in North Korea, and to increase the accuracy of classification, using phenological characteristics extracted with multi-index and random forest algorithms. The mapping of deforestation area based on RF was carried out by merging phenology-based multi-indices (i.e., normalized difference vegetation index (NDVI), normalized difference water index (NDWI), and normalized difference soil index (NDSI)) derived from MODIS (Moderate Resolution Imaging Spectroradiometer) products and topographical variables. Our results showed an overall classification accuracy of 89.38%, with a corresponding kappa coefficient of 0.87. In particular, for forest and farmland categories with similar phenological characteristics (e.g., paddy, plateau vegetation, unstocked forest, hillside field), this approach improved the classification accuracy in comparison with pixel-based methods and other classes. The deforestation types were identified by incorporating point data from high-resolution imagery, outcomes of image classification, and slope data. Our study demonstrated that the proposed methodology could be used for deciding on restoration priorities and monitoring the expansion of deforestation areas.
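
    A schematic sketch of the multi-index feature construction feeding a random forest, with synthetic reflectances and toy labels standing in for MODIS composites and reference data (index definitions, especially NDSI, vary by author):

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(6)

        # Hypothetical per-pixel reflectances for 12 monthly composites.
        n_pix, n_dates = 3000, 12
        red = rng.uniform(0.02, 0.30, (n_pix, n_dates))
        nir = rng.uniform(0.10, 0.50, (n_pix, n_dates))
        swir = rng.uniform(0.05, 0.40, (n_pix, n_dates))

        ndvi = (nir - red) / (nir + red)
        ndwi = (nir - swir) / (nir + swir)
        ndsi = (swir - nir) / (swir + nir)       # soil index; definitions vary
        slope = rng.uniform(0, 40, (n_pix, 1))   # topographic variable

        X = np.hstack([ndvi, ndwi, ndsi, slope])
        # Toy labels driven by mean NDVI, standing in for land-cover classes.
        y = np.digitize(ndvi.mean(axis=1), [0.25, 0.40])

        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
        print("held-out accuracy:", round(clf.score(Xte, yte), 3))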

  2. Reliability-based decision making for selection of ready-mix concrete supply using stochastic superiority and inferiority ranking method

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Ongkowijoyo, Citra Satria

    2015-01-01

    Corporate competitiveness is heavily influenced by the information acquired, processed, utilized and transferred by professional staff involved in the supply chain. This paper develops a decision aid for selecting the on-site ready-mix concrete (RMC) unloading type in decision making situations involving multiple stakeholders and evaluation criteria. The uncertainty of criteria weights set by expert judgment can be transformed in random ways based on the probabilistic virtual-scale method within a prioritization matrix. The ranking is performed by grey relational grade systems considering stochastic criteria weights based on individual preference. Application of the decision aiding model to an actual RMC case confirms that the method provides a robust and effective tool for facilitating decision making under uncertainty. - Highlights: • This study models a decision aiding method to assess ready-mix concrete unloading type. • Applying Monte Carlo simulation to the virtual-scale method achieves a reliable process. • Individual preference ranking method enhances the quality of global decision making. • Robust stochastic superiority and inferiority ranking obtains reasonable results

  3. An efficient ERP-based brain-computer interface using random set presentation and face familiarity.

    Directory of Open Access Journals (Sweden)

    Seul-Ki Yeom

    Full Text Available Event-related potential (ERP)-based P300 spellers are commonly used in the field of brain-computer interfaces as an alternative channel of communication for people with severe neuro-muscular diseases. This study introduces a novel P300-based brain-computer interface (BCI) stimulus paradigm using a random set presentation pattern and exploiting the effects of face familiarity. The effect of face familiarity is widely studied in the cognitive neurosciences and has recently been addressed for the purpose of BCI. In this study we compare the P300-based BCI performance of a conventional row-column (RC)-based paradigm with our approach, which combines a random set presentation paradigm with (non-)self-face stimuli. Our experimental results indicate stronger deflections of the ERPs in response to face stimuli, which are further enhanced when using the self-face images, thereby improving P300-based spelling performance. This led to a significant reduction in the stimulus sequences required for correct character classification. These findings demonstrate a promising new approach for improving the speed and thus fluency of BCI-enhanced communication with the widely used P300-based BCI setup.

  4. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, single transmit antenna which has the highest shadowing coefficient is selected. By the proposed technique, usage of the feedback channel and channel estimation complexity at the receiver can be reduced. We consider independent but not identically distributed Generalized-K composite fading model, which is a general composite fading & shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.

  6. SRAM-PUF Based on Selective Power-Up and Non-Destructive Scheme

    OpenAIRE

    Mispan, Mohd; Zwolinski, Mark; Halak, Basel

    2016-01-01

    Research in hardware security, particularly on Physical Unclonable Functions (PUFs), has attracted a lot of attention in recent years. PUFs provide primitives for implementing encryption/decryption and device fingerprinting. Though a wide range of solutions exists for PUF-based CMOS devices, the most investigated solutions today for weak PUF implementations are based on the use of the random start-up values of SRAM, which offers the advantage of reusing memories that already exist in many ...

  7. Feature Selection and Blind Source Separation in an EEG-Based Brain-Computer Interface

    Directory of Open Access Journals (Sweden)

    Michael H. Thaut

    2005-11-01

    Full Text Available Most EEG-based BCI systems make use of well-studied patterns of brain activity. However, those systems involve tasks that indirectly map to simple binary commands such as “yes” or “no” or require many weeks of biofeedback training. We hypothesized that signal processing and machine learning methods can be used to discriminate EEG in a direct “yes”/“no” BCI from a single session. Blind source separation (BSS) and spectral transformations of the EEG produced a 180-dimensional feature space. We used a modified genetic algorithm (GA) wrapped around a support vector machine (SVM) classifier to search the space of feature subsets. The GA-based search found feature subsets that outperform full feature sets and random feature subsets. Also, BSS transformations of the EEG outperformed the original time series, particularly in conjunction with a subset search of both spaces. The results suggest that BSS and feature selection can be used to improve the performance of even a “direct,” single-session BCI.
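
    The wrapper idea, a genetic algorithm searching binary feature masks with an SVM's cross-validated accuracy as fitness, can be sketched in a few lines. This is a generic toy version, not the authors' modified GA; the synthetic data, population size and mutation rate are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(42)
    X, y = make_classification(n_samples=120, n_features=40, n_informative=6,
                               random_state=0)       # stand-in for the EEG feature space

    def fitness(mask):
        # Fitness of a feature subset = cross-validated SVM accuracy on those columns.
        if not mask.any():
            return 0.0
        return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

    pop = rng.random((20, X.shape[1])) < 0.5         # population of binary feature masks
    for _ in range(15):
        scores = np.array([fitness(ind) for ind in pop])
        best = pop[np.argmax(scores)].copy()
        # Tournament selection: each slot gets the fitter of two random individuals.
        idx = rng.integers(0, len(pop), size=(len(pop), 2))
        winners = np.where((scores[idx[:, 0]] >= scores[idx[:, 1]])[:, None],
                           pop[idx[:, 0]], pop[idx[:, 1]])
        # Uniform crossover with a shifted copy, then bit-flip mutation.
        cross = rng.random(winners.shape) < 0.5
        pop = np.where(cross, winners, np.roll(winners, 1, axis=0))
        pop ^= rng.random(pop.shape) < 0.02
        pop[0] = best                                # elitism

    print("selected features:", np.flatnonzero(pop[0]), "CV accuracy:", fitness(pop[0]))
    ```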

  8. Effects of one versus two bouts of moderate intensity physical activity on selective attention during a school morning in Dutch primary schoolchildren: A randomized controlled trial.

    Science.gov (United States)

    Altenburg, Teatske M; Chinapaw, Mai J M; Singh, Amika S

    2016-10-01

    Evidence suggests that physical activity is positively related to several aspects of cognitive functioning in children, among which is selective attention. To date, no information is available on the optimal frequency of physical activity on cognitive functioning in children. The current study examined the acute effects of one and two bouts of moderate-intensity physical activity on children's selective attention. Randomized controlled trial (ISRCTN97975679). Thirty boys and twenty-six girls, aged 10-13 years, were randomly assigned to three conditions: (A) sitting all morning working on simulated school tasks; (B) one 20-min physical activity bout after 90 min; and (C) two 20-min physical activity bouts, i.e. at the start and after 90 min. Selective attention was assessed at five time points during the morning (i.e. at baseline and after 20, 110, 130 and 220 min), using the 'Sky Search' subtest of the 'Test of Selective Attention in Children'. We used GEE analysis to examine differences in Sky Search scores between the three experimental conditions, adjusting for school, baseline scores, self-reported screen time and time spent in sports. Children who performed two 20-min bouts of moderate-intensity physical activity had significantly better Sky Search scores compared to children who performed one physical activity bout or remained seated the whole morning (B=-0.26; 95% CI=[-0.52; -0.00]). Our findings support the importance of repeated physical activity during the school day for beneficial effects on selective attention in children. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  9. An Improved Fast Compressive Tracking Algorithm Based on Online Random Forest Classifier

    Directory of Open Access Journals (Sweden)

    Xiong Jintao

    2016-01-01

    Full Text Available The fast compressive tracking (FCT) algorithm is a simple and efficient algorithm proposed in recent years. However, it has difficulty coping with factors such as occlusion, appearance changes, and pose variation. There are two reasons for this. First, even though the naive Bayes classifier is fast to train, it is not robust to noise. Second, the parameters must be tuned to each particular environment for accurate tracking. In this paper, we propose an improved fast compressive tracking algorithm based on an online random forest (FCT-ORF) for robust visual tracking. First, we combine ideas from adaptive compressive sensing theory regarding the weighted random projection to exploit both local and discriminative information of the object. Second, we adopt an online random forest classifier for online tracking, which is more robust to noise and computationally efficient. The experimental results show that the proposed algorithm performs better under occlusion, appearance changes, and pose variation than the original fast compressive tracking algorithm.

  10. Auditory detection of an increment in the rate of a random process

    International Nuclear Information System (INIS)

    Brown, W.S.; Emmerich, D.S.

    1994-01-01

    Recent experiments have presented listeners with complex tonal stimuli consisting of components with values (i.e., intensities or frequencies) randomly sampled from probability distributions [e.g., R. A. Lutfi, J. Acoust. Soc. Am. 86, 934-944 (1989)]. In the present experiment, brief tones were presented at intervals corresponding to the intensity of a random process. Specifically, the intervals between tones were randomly selected from exponential probability functions. Listeners were asked to decide whether tones presented during a defined observation interval represented a "noise" process alone or the "noise" with a "signal" process added to it. The number of tones occurring in any observation interval is a Poisson variable; receiver operating characteristics (ROCs) arising from Poisson processes have been considered by Egan [Signal Detection Theory and ROC Analysis (Academic, New York, 1975)]. Several sets of noise and signal intensities and observation interval durations were selected which were expected to yield equivalent performance. Rating ROCs were generated based on subjects' responses in a single-interval, yes-no task. The performance levels achieved by listeners and the effects of intensity and duration are compared to those predicted for an ideal observer.
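
    The ideal-observer setup described here, Poisson tone counts under noise versus noise-plus-signal, is straightforward to simulate. The rates below are assumptions chosen only to show how sweeping a count criterion traces out a Poisson ROC.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    N = 100000
    noise_rate, signal_rate = 5.0, 3.0    # assumed tone rates per observation interval

    # Tone counts in the observation interval are Poisson under each hypothesis.
    n_counts = rng.poisson(noise_rate, N)                 # noise process alone
    sn_counts = rng.poisson(noise_rate + signal_rate, N)  # noise + signal processes

    # Sweep a count criterion k: "respond signal if at least k tones occurred".
    for k in range(15):
        fa = np.mean(n_counts >= k)       # false-alarm rate
        hit = np.mean(sn_counts >= k)     # hit rate
        print(f"k={k:2d}  P(FA)={fa:.3f}  P(hit)={hit:.3f}")
    ```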

  11. Visual search in scenes involves selective and non-selective pathways

    Science.gov (United States)

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global/statistical information. PMID:21227734

  12. Classification of high resolution remote sensing image based on geo-ontology and conditional random fields

    Science.gov (United States)

    Hong, Liang

    2013-10-01

    The availability of high spatial resolution remote sensing data provides new opportunities for urban land-cover classification. More geometric details can be observed in high resolution remote sensing images, and ground objects display rich texture, structure, shape, and hierarchical semantic characteristics, with more landscape elements represented by small groups of pixels. In recent years, object-based remote sensing analysis has become widely accepted and applied in high resolution remote sensing image processing. This paper presents a classification method based on geo-ontology and conditional random fields, made up of four blocks: (1) a hierarchical ground-object semantic framework is constructed based on geo-ontology; (2) image objects are generated by mean-shift segmentation, which yields boundary-preserving and spectrally homogeneous over-segmented regions; (3) the relations between the hierarchical ground-object semantics and the over-segmented regions are defined within a conditional random fields framework; and (4) the hierarchical classification results are obtained based on geo-ontology and conditional random fields. Finally, GeoEye high-resolution remote sensing imagery is used to test the performance of the presented method. The experimental results show the superiority of this method over the eCognition method in both effectiveness and accuracy, which implies it is suitable for the classification of high resolution remote sensing images.

  13. Optimistic Selection Rule Better Than Majority Voting System

    Science.gov (United States)

    Sugiyama, Takuya; Obata, Takuya; Hoki, Kunihito; Ito, Takeshi

    A recently proposed ensemble approach to game-tree search has attracted a great deal of attention. The ensemble system consists of M computer players, where each player uses a different series of pseudo-random numbers. A combination of multiple players under a majority voting system improves the performance of a Shogi-playing computer. We present a new move-selection strategy based on the search values of a number of players. The move decision is made by selecting one player from all M players, with each move selected by referring to the evaluation value of each player's tree search. The performance and mechanism of the strategy are examined. We show that the optimistic selection rule, which selects the player that yields the highest evaluation value, outperforms the majority voting system. By straightforwardly grouping 16 or more computer players, the winning rate of the strongest Shogi programs increases from 50% to 60% or even higher.
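
    The two decision rules are easy to contrast in a toy simulation: M players produce noisy evaluations of the same candidate moves, majority voting takes the most common per-player argmax, and the optimistic rule trusts the single highest evaluation anywhere. The move values and noise model below are assumptions; which rule wins depends on the noise model, so this illustrates only the mechanics, not the paper's Shogi result.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    M, n_moves, trials = 16, 5, 20000
    true_value = np.array([1.0, 0.8, 0.6, 0.4, 0.2])   # move 0 is objectively best (assumed)

    correct_vote = correct_opt = 0
    for _ in range(trials):
        # Each player's tree search returns noisy evaluations (different PRNG seeds).
        evals = true_value + rng.normal(0.0, 0.5, size=(M, n_moves))
        votes = evals.argmax(axis=1)
        majority = np.bincount(votes, minlength=n_moves).argmax()
        # Optimistic rule: adopt the move of the player reporting the highest value.
        optimistic = np.unravel_index(evals.argmax(), evals.shape)[1]
        correct_vote += (majority == 0)
        correct_opt += (optimistic == 0)

    print("majority voting picks the best move:", correct_vote / trials)
    print("optimistic selection picks the best move:", correct_opt / trials)
    ```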

  14. Mirnacle: machine learning with SMOTE and random forest for improving selectivity in pre-miRNA ab initio prediction.

    Science.gov (United States)

    Marques, Yuri Bento; de Paiva Oliveira, Alcione; Ribeiro Vasconcelos, Ana Tereza; Cerqueira, Fabio Ribeiro

    2016-12-15

    MicroRNAs (miRNAs) are key gene expression regulators in plants and animals. Therefore, miRNAs are involved in several biological processes, making the study of these molecules one of the most relevant topics of molecular biology nowadays. However, characterizing miRNAs in vivo is still a complex task. As a consequence, in silico methods have been developed to predict miRNA loci. A common ab initio strategy to find miRNAs in genomic data is to search for sequences that can fold into the typical hairpin structure of miRNA precursors (pre-miRNAs). The current ab initio approaches, however, have selectivity issues, i.e., a high number of false positives is reported, which can lead to laborious and costly attempts at biological validation. This study presents an extension of the ab initio method miRNAFold, with the aim of improving selectivity through machine learning techniques, namely random forest combined with the SMOTE procedure, which copes with imbalanced datasets. By comparing our method, termed Mirnacle, with other important approaches in the literature, we demonstrate that Mirnacle substantially improves selectivity without compromising sensitivity. For the three datasets used in our experiments, our method achieved at least 97% sensitivity and delivered two-fold, 20-fold, and 6-fold increases in selectivity, respectively, compared with the best results of current computational tools. The extension of miRNAFold by the introduction of machine learning techniques significantly increases selectivity in pre-miRNA ab initio prediction, which contributes to advanced studies on miRNAs, as the need for biological validation is diminished. Hopefully, new research, such as studies of severe diseases caused by miRNA malfunction, will benefit from the proposed computational tool.
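
    The SMOTE-plus-random-forest combination can be sketched with scikit-learn and the imbalanced-learn package. Keeping SMOTE inside a pipeline ensures oversampling happens only on each training fold, the standard way to avoid optimistic bias; the synthetic data and class ratio are assumptions, not the paper's pre-miRNA datasets.

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_validate
    from imblearn.over_sampling import SMOTE          # pip install imbalanced-learn
    from imblearn.pipeline import Pipeline

    # Imbalanced stand-in for pre-miRNA data: few true precursors, many pseudo-hairpins.
    X, y = make_classification(n_samples=3000, n_features=30, weights=[0.95, 0.05],
                               random_state=0)

    # SMOTE inside the pipeline so oversampling happens only on each training fold.
    clf = Pipeline([("smote", SMOTE(random_state=0)),
                    ("rf", RandomForestClassifier(n_estimators=200, random_state=0))])

    scores = cross_validate(clf, X, y, cv=5, scoring=["recall", "precision"])
    print("sensitivity (recall):   ", scores["test_recall"].mean())
    print("selectivity (precision):", scores["test_precision"].mean())
    ```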

  15. A mindfulness-based stress prevention training for medical students (MediMind): study protocol for a randomized controlled trial.

    Science.gov (United States)

    Kuhlmann, Sophie Merle; Bürger, Arne; Esser, Günter; Hammerle, Florian

    2015-02-08

    Medical training is very demanding and associated with a high prevalence of psychological distress. Compared to the general population, medical students are at a greater risk of developing a psychological disorder. Various attempts of stress management training in medical school have achieved positive results on minimizing psychological distress; however, there are often limitations. Therefore, the use of a rigorous scientific method is needed. The present study protocol describes a randomized controlled trial to examine the effectiveness of a specifically developed mindfulness-based stress prevention training for medical students that includes selected elements of cognitive behavioral strategies (MediMind). This study protocol presents a prospective randomized controlled trial, involving four assessment time points: baseline, post-intervention, one-year follow-up and five-year follow-up. The aims include evaluating the effect on stress, coping, psychological morbidity and personality traits with validated measures. Participants are allocated randomly to one of three conditions: MediMind, Autogenic Training or control group. Eligible participants are medical or dental students in the second or eighth semester of a German university. They form a population of approximately 420 students in each academic term. A final total sample size of 126 (at five-year follow-up) is targeted. The trainings (MediMind and Autogenic Training) comprise five weekly sessions lasting 90 minutes each. MediMind will be offered to participants of the control group once the five-year follow-up is completed. The allotment is randomized with a stratified allocation ratio by course of studies, semester, and gender. After descriptive statistics have been evaluated, inferential statistical analysis will be carried out with a repeated measures ANOVA-design with interactions between time and group. Effect sizes will be calculated using partial η-square values. Potential limitations of this study

  16. Randomized controlled trial comparing tailoring methods of multimedia-based fall prevention education for community-dwelling older adults.

    Science.gov (United States)

    Schepens, Stacey L; Panzer, Victoria; Goldberg, Allon

    2011-01-01

    We attempted to determine whether multimedia fall prevention education using different instructional strategies increases older adults' knowledge of fall threats and their fall prevention behaviors. Fifty-three community-dwelling older adults were randomized to two educational groups or a control group. The multimedia-based educational interventions to increase fall-threat knowledge and encourage fall prevention behaviors had two tailoring strategies: (1) improve content realism for individual learners (authenticity group) and (2) highlight program goals and benefits while using participants' content selections (motivation group). Knowledge was measured at baseline and at 1-mo follow-up. Participants recorded prevention behaviors for 1 mo. Intervention group participants showed greater knowledge gains and posttest knowledge than did control group participants. The motivation group engaged in more prevention behaviors over 1 mo than did the other groups. Tailoring fall prevention education by addressing authenticity and motivation successfully improved fall-threat knowledge. Combining motivational strategies with multimedia education increased the effectiveness of the intervention in encouraging fall prevention behaviors.

  17. Web-Based and Mobile Stress Management Intervention for Employees: A Randomized Controlled Trial

    OpenAIRE

    Heber, Elena; Lehr, Dirk; Ebert, David Daniel; Berking, Matthias; Riper, Heleen

    2016-01-01

    Background: Work-related stress is highly prevalent among employees and is associated with adverse mental health consequences. Web-based interventions offer the opportunity to deliver effective solutions on a large scale; however, the evidence is limited and the results conflicting. Objective: This randomized controlled trial evaluated the efficacy of guided Web- and mobile-based stress management training for employees. Methods: A total of 264 employees with elevated symptoms of stress (Perce...

  18. The basic science and mathematics of random mutation and natural selection.

    Science.gov (United States)

    Kleinman, Alan

    2014-12-20

    The mutation and natural selection phenomenon can and often does cause the failure of antimicrobial, herbicidal, pesticidal, and cancer treatments, which act as selection pressures. This phenomenon operates in a mathematically predictable way which, when understood, leads to approaches for reducing and preventing the failure of these selection pressures. The mathematical behavior of mutation and selection is derived using the principles given by probability theory. The derivation of the equations describing the mutation and selection phenomenon is carried out in the context of an empirical example. Copyright © 2014 John Wiley & Sons, Ltd.
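
    A small calculation conveys the flavor of the mathematics: under the multiplication rule, the probability that a single lineage acquires two required mutations is vastly smaller than the probability of acquiring either one, which is why combination selection pressures fail far less often. The mutation rate and replication count below are assumed for illustration; this is not the paper's derivation.

    ```python
    import math

    # Simplified illustration of the multiplication rule (assumed numbers, not the
    # paper's derivation): a beneficial-mutation rate mu per replication, and n
    # replications occurring under the selection pressure.
    mu, n = 1e-9, 1e12

    # expm1/log1p keep the tiny probabilities numerically stable.
    p_one = -math.expm1(n * math.log1p(-mu))       # resistance to one drug arises
    p_two = -math.expm1(n * math.log1p(-mu ** 2))  # both required mutations in one lineage
    print(f"one selection pressure:  P ~ {p_one:.3g}")
    print(f"two joint pressures:     P ~ {p_two:.3g}")
    ```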

  19. Development of radiopharmaceuticals based on aptamers: selection and characterization of DNA aptamers for CEA

    International Nuclear Information System (INIS)

    Correa, C.R.; Andrade, A.S.R.; Augusto-Pinto, L.; Goes, A.M.

    2011-01-01

    Colorectal cancer is among the top four causes of cancer deaths worldwide. Carcinoembryonic antigen (CEA) is a complex intracellular glycoprotein produced by about 90% of colorectal cancers. CEA has been identified as an attractive target for cancer research because of its pattern of expression on the cell surface and its likely functional role in tumorigenesis. Research on the rapid selection of ligands based on SELEX (systematic evolution of ligands by exponential enrichment) forms the basis for the development of high-affinity and high-specificity molecules that can bind to surface determinants of tumour cells, like CEA. The oligonucleotide ligands generated by this technique are called aptamers. Aptamers can potentially find applications as therapeutic or diagnostic tools for many kinds of diseases, such as tumours. Aptamers offer low immunogenicity, good tumour penetration, rapid uptake and fast systemic clearance, which favour their application as effective vehicles for radiotherapy. In addition, aptamers can be labeled with different radioactive isotopes. The aim of this work was to select aptamers binding to the CEA tumour marker. The aptamers were obtained by SELEX, in which aptamers are selected from a library of random synthetic DNA sequences by repetitive binding of the oligonucleotides to the target molecule (CEA). The secondary structures of the aptamers were analysed using the mfold tool. Three aptamers were selected for binding assays with target cells. These aptamers were confirmed to have affinity and specific binding for the T84 cell line (target cells), as shown by confocal imaging. We are currently studying the potential efficacy of these aptamers as targeted radiopharmaceuticals, for use as imaging agents or in therapeutic applications. The development of aptamers specific to CEA opens new perspectives for colorectal cancer diagnosis and treatment. Acknowledgments: This investigation was supported by the Centro de Desenvolvimento da

  1. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics human-human interaction mechanisms based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact over an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.
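
    As a hedged illustration of the general idea, agents holding binary feature-subset "opinions" that drift toward fitter interaction partners, here is a toy version with a fully mixed population and a synthetic objective. The interaction model, drift and flip rates, and fitness function are all assumptions; the paper's method uses an explicit interaction network and additional mechanisms.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    d, n_agents, steps = 30, 24, 200
    informative = np.zeros(d, dtype=bool)
    informative[:6] = True                # hidden "good" features for the toy objective

    def fitness(mask):
        # Toy objective: reward informative features, penalize uninformative ones.
        return (mask & informative).sum() - 0.3 * (mask & ~informative).sum()

    opinions = rng.random((n_agents, d)) < 0.5    # each opinion encodes a feature subset
    for _ in range(steps):
        scores = np.array([fitness(o) for o in opinions])
        for i in range(n_agents):
            j = rng.integers(n_agents)            # random interaction partner
            if scores[j] > scores[i]:
                # Drift toward the more persuasive (fitter) agent on a few random bits.
                bits = rng.random(d) < 0.2
                opinions[i, bits] = opinions[j, bits]
            # Small random opinion flips maintain diversity (helps escape local minima).
            opinions[i] ^= rng.random(d) < 0.01

    best = opinions[np.argmax([fitness(o) for o in opinions])]
    print("selected features:", np.flatnonzero(best))
    ```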

  2. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    Science.gov (United States)

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as in the construction and development of the smart city. Meanwhile, numerous cloud services appear on cloud-based platforms. Therefore, how to select trustworthy cloud services remains a significant problem on such platforms, and the question has been extensively investigated owing to the ever-growing needs of users. However, trust relationships in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. First, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Second, we estimate the overall similarity by combining the experience usability, measured based on Jaccard's coefficient, and the numerical distance, computed by the Pearson correlation coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results via adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
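
    The similarity construction described, Jaccard overlap for experience usability and Pearson correlation for numerical distance, modulated by a trust degree, can be sketched as follows. All data and combination weights are assumptions for illustration; the paper's exact formulas may differ.

    ```python
    import numpy as np

    def jaccard(a, b):
        # Experience usability: overlap of the sets of services two users invoked.
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def pearson(x, y):
        # Numerical closeness of QoS ratings on co-invoked services.
        return float(np.corrcoef(x, y)[0, 1])

    # Hypothetical users: invoked service IDs, QoS ratings on the co-invoked services
    # (2, 3 and 5), and a trust degree in [0, 1] estimated from interaction frequency.
    invoked_u, invoked_v = [1, 2, 3, 5], [2, 3, 5, 8]
    qos_u, qos_v = [0.9, 0.7, 0.8], [0.85, 0.6, 0.9]
    trust_uv = 0.6

    base_sim = 0.5 * jaccard(invoked_u, invoked_v) + 0.5 * pearson(qos_u, qos_v)
    alpha = 0.4   # assumed weight of trust in the final similarity
    enhanced = (1 - alpha) * base_sim + alpha * trust_uv
    print(f"basic similarity {base_sim:.3f} -> trust-enhanced {enhanced:.3f}")
    ```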

  3. RANDOM WALK HYPOTHESIS IN FINANCIAL MARKETS

    Directory of Open Access Journals (Sweden)

    Nicolae-Marius JULA

    2017-05-01

    Full Text Available The random walk hypothesis states that stock market prices do not follow a predictable trajectory but are simply random. If one is trying to predict a random set of data, one should first test for randomness, because, despite the power and complexity of the models used, the results cannot otherwise be trusted. There are several methods for testing these hypotheses, and the computational power provided by the R environment makes the researcher's work easier and cost-effective. The increasing power of computing and the continuous development of econometric tests should give potential investors new tools for selecting commodities and investing in efficient markets.
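
    A common way to test the random-walk null in practice is a unit-root test such as the augmented Dickey-Fuller test. The sketch below uses Python's statsmodels (rather than the R environment mentioned in the abstract) and runs the test on a simulated random walk, for which the null should not be rejected.

    ```python
    import numpy as np
    from statsmodels.tsa.stattools import adfuller   # pip install statsmodels

    rng = np.random.default_rng(11)
    log_price = np.cumsum(rng.normal(0.0, 0.01, size=1000))   # simulated random walk

    # Augmented Dickey-Fuller test: H0 = unit root, i.e. consistent with a random walk.
    stat, pvalue, *_ = adfuller(log_price)
    print(f"ADF statistic = {stat:.2f}, p-value = {pvalue:.3f}")
    # A large p-value means the random-walk null cannot be rejected for this series.
    ```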

  4. Selective epidemic vaccination under the performant routing algorithms

    Science.gov (United States)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result unexpectedly shows that these algorithms favor virus spreading more than the shortest-path-based algorithm does. In this work, we studied virus spreading in a complex network using the efficient-path and global dynamic routing algorithms, compared with the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing traffic transport efficiency. This work proposes a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeded in eradicating the virus better than a pure random intervention for the performant routing algorithm strategies.

  5. A web-based appointment system to reduce waiting for outpatients: a retrospective study.

    Science.gov (United States)

    Cao, Wenjun; Wan, Yi; Tu, Haibo; Shang, Fujun; Liu, Danhong; Tan, Zhijun; Sun, Caihong; Ye, Qing; Xu, Yongyong

    2011-11-22

    Long waiting times for registration to see a doctor are problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients. Data from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site. A total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.05). Being ignorant of online registration, not trusting the internet, and a lack of ability to use a computer were three main reasons given for not using the web-based appointment system. The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among hospital departments, days of the week, and times of the day (P<0.05). The web-based appointment system could significantly increase patients' satisfaction with registration and reduce total waiting time effectively. However, further improvements are needed for broad use of the system.

  6. Dynamic Output Feedback Control for Nonlinear Networked Control Systems with Random Packet Dropout and Random Delay

    Directory of Open Access Journals (Sweden)

    Shuiqing Yu

    2013-01-01

    Full Text Available This paper investigates the dynamic output feedback control for nonlinear networked control systems with both random packet dropout and random delay. Random packet dropout and random delay are modeled as two independent random variables. An observer-based dynamic output feedback controller is designed based upon the Lyapunov theory. The quantitative relationship of the dropout rate, transition probability matrix, and nonlinear level is derived by solving a set of linear matrix inequalities. Finally, an example is presented to illustrate the effectiveness of the proposed method.

  7. Innovation in values based public health nursing student selection: A qualitative evaluation of candidate and selection panel member perspectives.

    Science.gov (United States)

    McGraw, Caroline; Abbott, Stephen; Brook, Judy

    2018-02-19

    Values based recruitment emerges from the premise that a high degree of value congruence, or the extent to which an individual's values are similar to those of the health organization in which they work, leads to organizational effectiveness. The aim of this evaluation was to explore how candidates and selection panel members experienced and perceived innovative methods of values based public health nursing student selection. The evaluation was framed by a qualitative exploratory design involving semi-structured interviews and a group exercise. Data were thematically analyzed. Eight semi-structured interviews were conducted with selection panel members. Twenty-two successful candidates took part in a group exercise. The use of photo elicitation interviews and situational judgment questions in the context of selection to a university-run public health nursing educational program was explored. While candidates were ambivalent about the use of photo elicitation interviews, with some misunderstanding the task, selection panel members saw the benefits for improving candidate expression and reducing gaming and deception. Situational interview questions were endorsed by candidates and selection panel members due to their fidelity to real-life problems and the ability of panel members to discern value congruence from candidates' responses. Both techniques offered innovative solutions to candidate selection for entry to the public health nursing education program. © 2018 Wiley Periodicals, Inc.

  8. Assessing the Effectiveness of Case-Based Collaborative Learning via Randomized Controlled Trial.

    Science.gov (United States)

    Krupat, Edward; Richards, Jeremy B; Sullivan, Amy M; Fleenor, Thomas J; Schwartzstein, Richard M

    2016-05-01

    Case-based collaborative learning (CBCL) is a novel small-group approach that borrows from team-based learning principles and incorporates elements of problem-based learning (PBL) and case-based learning. CBCL includes a preclass readiness assurance process and case-based in-class activities in which students respond to focused, open-ended questions individually, discuss their answers in groups of 4, and then reach consensus in larger groups of 16. This study introduces CBCL and assesses its effectiveness in one course at Harvard Medical School. In a 2013 randomized controlled trial, 64 medical and dental student volunteers were assigned randomly to one of four 8-person PBL tutorial groups (control; n = 32) or one of two 16-person CBCL tutorial groups (experimental condition; n = 32) as part of a required first-year physiology course. Outcomes for the PBL and CBCL groups were compared using final exam scores, student responses to a postcourse survey, and behavioral coding of portions of video-recorded class sessions. Overall, the course final exam scores for CBCL and PBL students were not significantly different. However, CBCL students whose mean exam performance in prior courses was below the participant median scored significantly higher than their PBL counterparts on the physiology course final exam. The most common adjectives students used to describe CBCL were "engaging," "fun," and "thought-provoking." Coding of observed behaviors indicated that individual affect was significantly higher in the CBCL groups than in the PBL groups. CBCL is a viable, engaging, active learning method. It may particularly benefit students with lower academic performance.

  9. The Goodness of Covariance Selection Problem from AUC Bounds

    OpenAIRE

    Khajavi, Navid Tafaghodi; Kuh, Anthony

    2016-01-01

    We conduct a study of graphical models and discuss the quality of model selection approximation by formulating the problem as a detection problem and examining the area under the curve (AUC). We specifically look at the model selection problem for jointly Gaussian random vectors. For Gaussian random vectors, this problem simplifies to the covariance selection problem, which is widely discussed in the literature by Dempster [1]. In this paper, we give the definition for the correlation appro...

  10. Data Visualization and Feature Selection Methods in Gel-based Proteomics

    DEFF Research Database (Denmark)

    Silva, Tomé Santos; Richard, Nadege; Dias, Jorge P.

    2014-01-01

    This work reviews data visualization and feature selection methods in gel-based proteomics, summarizing the current state of research within this field. Particular focus is given to discussing the usefulness of available multivariate analysis tools for both data visualization and feature selection purposes. Visual examples are given using a real gel-based proteomic dataset as a basis.

  11. Design Guidelines for a Content-Based Image Retrieval Color-Selection Interface

    NARCIS (Netherlands)

    Eggen, Berry; van den Broek, Egon; van der Veer, Gerrit C.; Kisters, Peter M.F.; Willems, Rob; Vuurpijl, Louis G.

    2004-01-01

    In Content-Based Image Retrieval (CBIR), two query methods exist: query-by-example and query-by-memory. The user either selects an example image or selects image features retrieved from memory (such as color, texture, spatial attributes, and shape) to define his query. Hitherto, research on CBIR

  12. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    Science.gov (United States)

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used for reducing the dimensionality of features to speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measure of document frequency and term frequency. One potential drawback of these methods is that they treat features separately. Hence, first we design a similarity measure between the context information to take word cooccurrences and phrase chunks around the features into account. Then we introduce the similarity of context information to the importance measure of the features to substitute the document and term frequency. Hence we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  13. A cluster randomized controlled trial testing the effectiveness of Houvast: A strengths-based intervention for homeless young adults

    NARCIS (Netherlands)

    Krabbenborg, M.A.M.; Boersma, S.N.; Veld, W.M. van der; Hulst, B. van; Vollebergh, W.A.M.; Wolf, J.R.L.M.

    2017-01-01

    Objective: To test the effectiveness of Houvast: a strengths-based intervention for homeless young adults. Method: A cluster randomized controlled trial was conducted with 10 Dutch shelter facilities randomly allocated to an intervention and a control group. Homeless young adults were interviewed

  14. Inverse probability weighting for covariate adjustment in randomized studies.

    Science.gov (United States)

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions that enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more so, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. An application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.
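
    The two-stage logic, fitting a treatment-assignment model on baseline covariates without looking at the outcome and then estimating the effect with inverse-probability-weighted means, can be sketched generically. This is a standard IPW illustration on simulated data, not the authors' exact estimator.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 500
    x = rng.normal(size=(n, 3))                       # baseline covariates
    t = rng.integers(0, 2, size=n)                    # randomized treatment assignment
    y = 1.0 * t + x @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n)  # true effect = 1

    # Stage 1 (outcome-blind): model P(T=1 | X) to capture chance covariate imbalance.
    ps = LogisticRegression().fit(x, t).predict_proba(x)[:, 1]
    w = t / ps + (1 - t) / (1 - ps)                   # inverse probability weights

    # Stage 2: weighted difference in means.
    mu1 = np.sum(w * t * y) / np.sum(w * t)
    mu0 = np.sum(w * (1 - t) * y) / np.sum(w * (1 - t))
    print("unadjusted estimate: ", y[t == 1].mean() - y[t == 0].mean())
    print("IPW-adjusted estimate:", mu1 - mu0)
    ```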

  15. Adaptation and micro-structure of Co-Cr alloy maxillary complete denture base plates fabricated by selective laser melting technique.

    Science.gov (United States)

    Ye, Ye; Jiao, Ting; Zhu, Jiarui; Sun, Jian

    2018-01-24

    The purpose of the study was to evaluate the adaptation and micro-structure of Co-Cr alloy maxillary complete denture base plates fabricated by the selective laser melting (SLM) technique. Twenty pairs of edentulous casts were randomly and evenly divided into two groups, and the Co-Cr alloy maxillary complete denture bases were manufactured either by the SLM technique or by the conventional casting method. The base-cast sets were transversally sectioned into three sections: at the distal canines, at the mesial of the first molars, and at the posterior palatal zone. The gap between the metal base and the cast was measured in these three sections with a stereoscopic microscope, and the data were analysed using t tests. A total of five specimens of 5 mm diameter were fabricated from the Co-Cr alloy by SLM and by traditional casting technology. A scanning electron microscope (SEM) was used to evaluate the differences in microstructure between these specimens. There was no statistical difference between the three sections in any of the four groups (P > 0.05). At the region of the canines, the clearance value for the SLM Co-Cr alloy group was larger than that of the conventional method group (P < 0.05). The SLM Co-Cr alloy has a denser microstructure and fewer casting defects than the cast Co-Cr alloy. The SLM technique showed initial feasibility for the manufacture of complete denture bases, but large-sample studies are needed to prove its reliability in clinical applications. The mechanical properties and microstructure of the denture frameworks prepared by selective laser melting indicate that these dentures are appropriate for clinical use.

  16. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. The sequence relay selection strategy based on stochastic dynamic programming

    Science.gov (United States)

    Zhu, Rui; Chen, Xihao; Huang, Yangchao

    2017-07-01

    Relay-assisted (RA) networks with relay node selection are an effective way to improve channel capacity and convergence performance. However, most existing research on relay selection does not consider the statistical channel state information or the selection cost. This shortcoming limits the performance and application of RA networks in practical scenarios. To overcome this drawback, a sequence relay selection strategy (SRSS) is proposed, and the performance upper bound of SRSS is analyzed in this paper. Furthermore, in order to make SRSS more practical, a novel threshold determination algorithm based on stochastic dynamic programming (SDP) is given to work with SRSS. Numerical results are also presented to exhibit the performance of SRSS with SDP.
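
    One way to make the threshold idea concrete is a finite-horizon optimal-stopping model solved by backward induction: relays are probed one at a time at a fixed cost, and the current relay is accepted when its quality exceeds the value of continuing. The uniform quality model, horizon and probe cost below are assumptions; the paper's SDP formulation is more elaborate.

    ```python
    import numpy as np

    # Relays are probed sequentially; each probe costs c and relay quality ~ U(0, 1).
    # Backward induction yields the continuation value V[k] at stage k; the optimal
    # rule accepts relay k as soon as its observed quality exceeds V[k + 1].
    n, c = 8, 0.02                        # number of relays and per-probe cost (assumed)

    V = np.zeros(n + 1)                   # V[n] = 0: nothing left to probe
    for k in range(n - 1, -1, -1):
        t = V[k + 1]                      # threshold at stage k = value of continuing
        # E[max(q, t)] for q ~ U(0, 1) equals t^2 + (1 - t^2) / 2.
        V[k] = t * t + (1.0 - t * t) / 2.0 - c

    print("acceptance thresholds per stage:", np.round(V[1:], 3))
    ```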

  18. Misuse of randomization

    DEFF Research Database (Denmark)

    Liu, Jianping; Kjaergard, Lise Lotte; Gluud, Christian

    2002-01-01

    The quality of randomization of Chinese randomized trials on herbal medicines for hepatitis B was assessed. The search strategy and inclusion criteria were based on the published protocol. One hundred and seventy-six randomized clinical trials (RCTs) involving 20,452 patients with chronic hepatitis B virus (HBV) infection that tested Chinese medicinal herbs were identified. They were published in 49 Chinese journals. Only 10% (18/176) of the studies reported the method by which they randomized patients. Only two reported allocation concealment and were considered adequate. Twenty percent (30...

  19. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

    Directory of Open Access Journals (Sweden)

    ISLAM, M. R.

    2009-02-01

    Full Text Available An energy-efficient cooperative technique is proposed for IEEE 1451 based Wireless Sensor Networks. A selected number of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected with a Network Capable Application Processor (NCAP). The energy efficiency and delay of the proposed architecture are derived for different combinations of cluster size and number of selected WTIMs. Optimized constellation parameters are used for evaluating the derived parameters. The results show that the selected MISO structure outperforms the unselected MISO structure, and that it is more energy efficient than a SISO structure beyond a certain distance.

  20. An Entropy-based gene selection method for cancer classification using microarray data

    Directory of Open Access Journals (Sweden)

    Krishnan Arun

    2005-03-01

    Full Text Available Background: Accurate diagnosis of cancer subtypes remains a challenging problem. Building classifiers based on gene expression data is a promising approach; yet the selection of non-redundant but relevant genes is difficult. The selected gene set should be small enough to allow diagnosis even in regular clinical laboratories and ideally identify genes involved in cancer-specific regulatory pathways. Here an entropy-based method is proposed that selects genes related to the different cancer classes while at the same time reducing the redundancy among the genes. Results: The present study identifies a subset of features by maximizing the relevance and minimizing the redundancy of the selected genes. A merit called normalized mutual information is employed to measure the relevance and the redundancy of the genes. In order to find a more representative subset of features, an iterative procedure is adopted that incorporates an initial clustering followed by data partitioning and the application of the algorithm to each of the partitions. A leave-one-out approach then selects the most commonly selected genes across all the different runs, and the gene selection algorithm is applied again to pare down the list of selected genes until a minimal subset is obtained that gives a satisfactory accuracy of classification. The algorithm was applied to three different data sets and the results obtained were compared to work done by others using the same data sets. Conclusion: This study presents an entropy-based iterative algorithm for selecting genes from microarray data that are able to classify various cancer sub-types with high accuracy. In addition, the feature set obtained is very compact; that is, the redundancy between genes is reduced to a large extent. This implies that classifiers can be built with a smaller subset of genes.
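
    The relevance-minus-redundancy criterion based on normalized mutual information can be sketched as a greedy selector. This simplified version omits the paper's clustering, partitioning and leave-one-out stages; the synthetic data, bin count and target subset size are assumptions.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.metrics import normalized_mutual_info_score as nmi
    from sklearn.preprocessing import KBinsDiscretizer

    X, y = make_classification(n_samples=200, n_features=50, n_informative=8,
                               random_state=0)        # stand-in for microarray data
    # Discretize expression values so mutual information is well defined.
    Xd = KBinsDiscretizer(n_bins=5, encode="ordinal",
                          strategy="quantile").fit_transform(X)

    relevance = np.array([nmi(Xd[:, j], y) for j in range(Xd.shape[1])])
    selected = [int(np.argmax(relevance))]            # start from the most relevant gene
    while len(selected) < 10:                         # assumed target subset size
        best_j, best_score = -1, -np.inf
        for j in range(Xd.shape[1]):
            if j in selected:
                continue
            redundancy = np.mean([nmi(Xd[:, j], Xd[:, s]) for s in selected])
            score = relevance[j] - redundancy         # max relevance, min redundancy
            if score > best_score:
                best_j, best_score = j, score
        selected.append(best_j)

    print("selected genes:", selected)
    ```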

  1. Interaction-based evolution: how natural selection and nonrandom mutation work together

    Science.gov (United States)

    2013-01-01

    Background: The modern evolutionary synthesis leaves unresolved some of the most fundamental, long-standing questions in evolutionary biology: What is the role of sex in evolution? How does complex adaptation evolve? How can selection operate effectively on genetic interactions? More recently, the molecular biology and genomics revolutions have raised a host of critical new questions, through empirical findings that the modern synthesis fails to explain: for example, the discovery of de novo genes; the immense constructive role of transposable elements in evolution; genetic variance and biochemical activity that go far beyond what traditional natural selection can maintain; perplexing cases of molecular parallelism; and more. Presentation of the hypothesis: Here I address these questions from a unified perspective, by means of a new mechanistic view of evolution that offers a novel connection between selection on the phenotype and genetic evolutionary change (while relying, like the traditional theory, on natural selection as the only source of feedback on the fit between an organism and its environment). I hypothesize that the mutation that is of relevance for the evolution of complex adaptation—while not Lamarckian, or “directed” to increase fitness—is not random, but is instead the outcome of a complex and continually evolving biological process that combines information from multiple loci into one. This allows selection on a fleeting combination of interacting alleles at different loci to have a hereditary effect according to the combination’s fitness. Testing and implications of the hypothesis: This proposed mechanism addresses the problem of how beneficial genetic interactions can evolve under selection, and also offers an intuitive explanation for the role of sex in evolution, which focuses on sex as the generator of genetic combinations. Importantly, it also implies that genetic variation that has appeared neutral through the lens of traditional

  2. A Bayesian way to solve the pdf selection problem an application in geotechnical analysis

    NARCIS (Netherlands)

    Kerstens, J.G.M.

    2007-01-01

    Single probability density functions (pdfs) are often selected based on a best-fit approach or on the minimum description length (MDL) theorem, which is grounded in Kolmogorov's theory of complexity. However, it often occurs that random data cannot be accurately described by any of the

  3. Allowing Physicians to Choose the Value of Compensation for Participation in a Web-Based Survey: Randomized Controlled Trial.

    Science.gov (United States)

    Turnbull, Alison E; O'Connor, Cristi L; Lau, Bryan; Halpern, Scott D; Needham, Dale M

    2015-07-29

    Survey response rates among physicians are declining, and determining an appropriate level of compensation to motivate participation poses a major challenge. To estimate the effect of permitting intensive care physicians to select their preferred level of compensation for completing a short Web-based survey on physician (1) response rate, (2) survey completion rate, (3) time to response, and (4) time spent completing the survey. A total of 1850 US intensivists from an existing database were randomized to receive a survey invitation email with or without an Amazon.com incentive available to the first 100 respondents. The incentive could be instantly redeemed for an amount chosen by the respondent, up to a maximum of US $50. The overall response rate was 35.90% (630/1755). Among the 35.4% (111/314) of eligible participants choosing the incentive, 80.2% (89/111) selected the maximum value. Among intensivists offered an incentive, the response was 6.0% higher (95% CI 1.5-10.5, P=.01), survey completion was marginally greater (807/859, 94.0% vs 892/991, 90.0%; P=.06), and the median number of days to survey response was shorter (0.8, interquartile range [IQR] 0.2-14.4 vs 6.6, IQR 0.3-22.3; P=.001), with no difference in time spent completing the survey. Permitting intensive care physicians to determine compensation level for completing a short Web-based survey modestly increased response rate and substantially decreased response time without decreasing the time spent on survey completion.

  4. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision-making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytic hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision-making method is used to obtain the financial quality score of each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India to demonstrate the effectiveness of the proposed methodology.

  5. Design and usability evaluation of user-centered and visual-based aids for dietary food measurement on mobile devices in a randomized controlled trial.

    Science.gov (United States)

    Liu, Ying-Chieh; Chen, Chien-Hung; Lee, Chien-Wei; Lin, Yu-Sheng; Chen, Hsin-Yun; Yeh, Jou-Yin; Chiu, Sherry Yueh-Hsia

    2016-12-01

    We designed and developed two interactive app interfaces for dietary food measurement on mobile devices. The user-centered designs of both the IPI (interactive photo interface) and the SBI (sketching-based interface) were evaluated. Four types of outcomes were assessed to evaluate the usability of mobile devices for dietary measurement, including accuracy, absolute weight differences, and response time, to determine the efficacy of food measurements. The IPI presented users with images of pre-determined portion sizes of a specific food and allowed users to scan and then select the image that best matched the food they were measuring. The SBI required users to relate the food shape to a readily available comparator (e.g., a credit card) and scribble to shade in the appropriate area. A randomized controlled trial was conducted to evaluate their usability. A total of 108 participants were randomly assigned to one of three groups: the IPI (n=36) and SBI (n=38) experimental groups, and the traditional life-size photo (TLP) group as the control. A total of 18 types of food items with 3-4 different weights were randomly selected for assessment of each type. The independent chi-square test and t-test were performed for the dichotomous and continuous variable analyses, respectively. The total accuracy rates were 66.98%, 44.15%, and 72.06% for the IPI, SBI, and TLP, respectively. No significant difference was observed between the IPI and TLP, regardless of the accuracy proportion or weight differences. The SBI accuracy rates were significantly lower than the IPI and TLP accuracy rates, especially for several spooned, square cube, and sliced pie food items. The time needed to complete the operation assessment was significantly lower for the IPI than for the SBI. Our study corroborates that the user-centered, visual-based design of the IPI on a mobile device is comparable to the TLP in terms of usability for dietary food measurements.

  6. Random-walk simulation of selected aspects of dissipative collisions

    International Nuclear Information System (INIS)

    Toeke, J.; Gobbi, A.; Matulewicz, T.

    1984-11-01

    Internuclear thermal equilibrium effects and shell structure effects in dissipative collisions are studied numerically within the framework of the model of stochastic exchanges by applying the random-walk technique. Effective blocking of the drift through the mass flux induced by the temperature difference, while leaving the variances of the mass distributions unaltered is found possible, provided an internuclear potential barrier is present. Presence of the shell structure is found to lead to characteristic correlations between the consecutive exchanges. Experimental evidence for the predicted effects is discussed. (orig.)

  7. Random number generation and creativity.

    Science.gov (United States)

    Bains, William

    2008-01-01

    A previous paper suggested that humans can generate genuinely random numbers. I tested this hypothesis by repeating the experiment with a larger number of highly numerate subjects, asking them to call out a sequence of digits selected from 0 through 9. The resulting sequences were substantially non-random, with an excess of sequential pairs of numbers and a deficit of repeats of the same number, in line with previous literature. However, the previous literature suggests that humans generate random numbers with substantial conscious effort, and distractions which reduce that effort reduce the randomness of the numbers. I reduced my subjects' concentration by asking them to call out in another language, and with alcohol - neither affected the randomness of their responses. This suggests that the ability to generate random numbers is a 'basic' function of the human mind, even if those numbers are not mathematically 'random'. I hypothesise that there is a 'creativity' mechanism which, while not truly random, provides novelty as part of the mind's defence against closed programming loops, and that testing for the effects seen here in people more or less familiar with numbers or with spontaneous creativity could identify more features of this process. It is possible that training to perform better at simple random generation tasks could help to increase creativity, through training people to reduce the conscious mind's suppression of the 'spontaneous', creative response to new questions.
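
    The two departures from randomness reported here, an excess of sequential pairs and a deficit of immediate repeats, are straightforward to quantify. A minimal sketch, assuming the called digits are available as a Python list (the example sequence is made up):

        def sequence_stats(digits):
            """Compare observed pair frequencies against the uniform expectation.

            For truly random digits, repeats (d, d) and ascending neighbours
            (d, d+1 mod 10) should each make up 10% of consecutive pairs."""
            pairs = list(zip(digits, digits[1:]))
            n = len(pairs)
            repeats = sum(1 for a, b in pairs if a == b)
            ascending = sum(1 for a, b in pairs if b == (a + 1) % 10)
            return {"repeat_rate": repeats / n,
                    "ascending_rate": ascending / n,
                    "expected_rate": 0.10}

        print(sequence_stats([3, 7, 8, 1, 1, 5, 6, 0, 9, 2, 3, 4]))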

  8. Mathematical Optimization Algorithm for Minimizing the Cost Function of GHG Emission in AS/RS Using Positive Selection Based Clonal Selection Principle

    Science.gov (United States)

    Mahalakshmi; Murugesan, R.

    2018-04-01

    This paper addresses the minimization of the total cost of Greenhouse Gas (GHG) emission in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost and discount cost of GHG emission of the AS/RS. A two-stage algorithm, namely the positive selection based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle is used to reduce the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle is used to generate the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results.
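
    The two-stage flow can be sketched in a few lines. In the toy below, cost is a stand-in for the paper's GHG cost model (the tax, penalty and discount terms are not reproduced), and the clonal stage is a bare-bones clone-mutate-reselect loop:

        import random

        def cost(x):
            # Stand-in for the paper's GHG cost model (tax + penalty - discount).
            return (x - 3.7) ** 2 + 1.0

        def psbcsp(pop_size=50, threshold=5.0, generations=30, clones=5):
            random.seed(0)
            pop = [random.uniform(0, 10) for _ in range(pop_size)]
            # Stage 1: positive selection keeps only candidates below a cost
            # threshold, shrinking the search space before the clonal stage.
            pop = [x for x in pop if cost(x) < threshold] or pop[:5]
            # Stage 2: clonal selection clones the best, mutates, reselects.
            for _ in range(generations):
                pop.sort(key=cost)
                elite = pop[: max(2, len(pop) // 5)]
                offspring = [x + random.gauss(0, 0.1)
                             for x in elite for _ in range(clones)]
                pop = sorted(elite + offspring, key=cost)[:pop_size]
            return pop[0]

        best = psbcsp()
        print(best, cost(best))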

  9. Subsampling-based compression and flow visualization

    Energy Technology Data Exchange (ETDEWEB)

    Agranovsky, Alexy; Camp, David; Joy, I; Childs, Hank

    2016-01-19

    As computational capabilities increasingly outpace disk speeds on leading supercomputers, scientists will, in turn, be increasingly unable to save their simulation data at its native resolution. One solution to this problem is to compress these data sets as they are generated and visualize the compressed results afterwards. We explore this approach, specifically subsampling velocity data and the resulting errors for particle advection-based flow visualization. We compare three techniques: random selection of subsamples, selection at regular locations corresponding to multi-resolution reduction, and a novel technique we introduce for informed selection of subsamples. Furthermore, we explore an adaptive system which exchanges the subsampling budget over parallel tasks, to ensure that subsampling occurs at the highest rate in the areas that need it most. We perform supercomputing runs to measure the effectiveness of the selection and adaptation techniques. Overall, we find that adaptation is very effective and, among selection techniques, our informed selection provides the most accurate results, followed by the multi-resolution selection, with the worst accuracy coming from random subsamples.
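
    Random and regular (multi-resolution style) subsampling are easy to state precisely; the informed and adaptive variants are data-dependent and not reproduced here. A sketch over a synthetic 2-D velocity magnitude field:

        import numpy as np

        rng = np.random.default_rng(0)
        ny, nx = 64, 64
        # Synthetic velocity magnitude field standing in for simulation output.
        y, x = np.mgrid[0:ny, 0:nx]
        field = np.sin(x / 7.0) * np.cos(y / 5.0)

        budget = 256  # number of samples we can afford to store

        # Random selection: uniform draw of flat indices without replacement.
        flat = rng.choice(field.size, size=budget, replace=False)
        random_samples = field.ravel()[flat]

        # Regular selection: keep every k-th point in each dimension,
        # mimicking one level of multi-resolution reduction.
        k = int(np.sqrt(field.size / budget))
        regular_samples = field[::k, ::k]

        print(random_samples.shape, regular_samples.shape)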

  10. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data.

    Science.gov (United States)

    Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia

    2017-07-28

    Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB), which consists of mainly categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consists of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.

  11. A comparison of the conditional inference survival forest model to random survival forests based on a simulation study as well as on two applications with time-to-event data

    Directory of Open Access Journals (Sweden)

    Justine B. Nasejje

    2017-07-01

    Full Text Available Abstract Background Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model in analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests for time-to-event data have been suggested. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the procedure for the best covariate to split on from that of the best split point search for the selected covariate. Methods In this study, we compare the random survival forest model to the conditional inference forest model (CIF) using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and it consists of categorical covariates with most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extremely drug resistant tuberculosis (XDR TB), which consists of mainly categorical covariates with two levels (few split-points). Results The study findings indicate that the conditional inference forest model is superior to random survival forest models in analysing time-to-event data that consists of covariates with many split-points, based on the values of the bootstrap cross-validated estimates for integrated Brier scores. However, conditional inference forests perform comparably to random survival forest models in analysing time-to-event data consisting of covariates with fewer split-points. Conclusion Although survival forests are promising methods in analysing time-to-event data, it is important to identify the best forest model for analysis based on the nature of the covariates of the dataset in question.

  12. Selection and characterization of DNA aptamers

    NARCIS (Netherlands)

    Ruigrok, V.J.B.

    2013-01-01

    This thesis focusses on the selection and characterisation of DNA aptamers and the various aspects related to their selection from large pools of randomized oligonucleotides. Aptamers are affinity tools that can specifically recognize and bind predefined target molecules; this ability, however,

  13. Infrared face recognition based on LBP histogram and KW feature selection

    Science.gov (United States)

    Xie, Zhihua

    2014-07-01

    The conventional LBP-based feature, as represented by the local binary pattern (LBP) histogram, still has room for performance improvements. This paper focuses on the dimension reduction of LBP micro-patterns and proposes an improved infrared face recognition method based on LBP histogram representation. To extract local robust features in infrared face images, LBP is chosen to capture the composition of micro-patterns in sub-blocks. Based on statistical test theory, a Kruskal-Wallis (KW) feature selection method is proposed to obtain the LBP patterns which are suitable for infrared face recognition. The experimental results show that the combination of LBP and KW feature selection improves the performance of infrared face recognition; the proposed method outperforms the traditional methods based on the LBP histogram, discrete cosine transform (DCT) or principal component analysis (PCA).
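
    A sketch of the selection step, assuming LBP histograms have already been extracted per image (scikit-image's local_binary_pattern could supply them): scipy's Kruskal-Wallis test scores each histogram bin across subject classes, and only the top-scoring bins are retained as features.

        import numpy as np
        from scipy.stats import kruskal

        def kw_select(features, labels, keep=20):
            """Rank feature columns by the Kruskal-Wallis H statistic across classes.

            features: (n_samples, n_bins) LBP histogram matrix
            labels:   (n_samples,) subject identities"""
            scores = []
            for j in range(features.shape[1]):
                groups = [features[labels == c, j] for c in np.unique(labels)]
                h, _ = kruskal(*groups)
                scores.append(h)
            return np.argsort(scores)[::-1][:keep]

        rng = np.random.default_rng(0)
        X = rng.random((60, 59))          # e.g. 59-bin uniform LBP histograms
        y = np.repeat(np.arange(6), 10)   # six subjects, ten images each
        X[y == 0, 3] += 0.5               # make bin 3 discriminative
        print(kw_select(X, y, keep=5))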

  14. Threshold-Based Random Charging Scheme for Decentralized PEV Charging Operation in a Smart Grid.

    Science.gov (United States)

    Kwon, Ojin; Kim, Pilkee; Yoon, Yong-Jin

    2016-12-26

    Smart grids have been introduced to replace conventional power distribution systems without real-time monitoring for accommodating the future market penetration of plug-in electric vehicles (PEVs). When a large number of PEVs require simultaneous battery charging, charging coordination techniques become one of the most critical factors in optimizing the PEV charging performance and the conventional distribution system. In this case, considerable computational complexity at a central controller and exchange of real-time information among PEVs may occur. To alleviate these problems, a novel threshold-based random charging (TBRC) operation for a decentralized charging system is proposed. Using PEV charging thresholds and random access rates, the PEVs themselves decide whether to participate in the charging requests. As PEVs with a high battery state do not transmit charging requests to the central controller, the complexity of the central controller decreases due to the reduction of the charging requests. In addition, both the charging threshold and the random access rate are statistically calculated based on the average supply power of the PEV charging system, so they do not require a real-time update. By using the proposed TBRC with a tolerable PEV charging degradation, a 51% reduction of the PEV charging requests is achieved.
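
    A toy round of the request-thinning idea, with invented numbers: only PEVs whose battery state sits below the charging threshold contend at all, and each of those transmits with a fixed random access rate, so most requests never reach the central controller.

        import random

        def tbrc_round(n_pev=1000, soc_threshold=0.6, access_rate=0.7, seed=1):
            """One charging round of threshold-based random charging (toy model)."""
            random.seed(seed)
            soc = [random.random() for _ in range(n_pev)]  # battery states in [0, 1]
            requests = [i for i, s in enumerate(soc)
                        if s < soc_threshold                 # threshold rule
                        and random.random() < access_rate]   # random access rule
            return len(requests)

        sent = tbrc_round()
        print(f"{sent} of 1000 PEVs sent requests "
              f"({100 * (1 - sent / 1000):.0f}% reduction at the controller)")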

  15. Combining rational and random strategies in β-glucosidase Zm-p60.1 protein library construction.

    Directory of Open Access Journals (Sweden)

    Dušan Turek

    Full Text Available Saturation mutagenesis is a cornerstone technique in protein engineering because of its utility (in conjunction with appropriate analytical techniques) for assessing effects of varying residues at selected positions on proteins' structures and functions. Site-directed mutagenesis with degenerate primers is the simplest and most rapid saturation mutagenesis technique. Thus, it is highly appropriate for assessing whether or not variation at certain sites is permissible, but not necessarily the most time- and cost-effective technique for detailed assessment of variations' effects. Thus, in the presented study we applied the technique to randomize position W373 in β-glucosidase Zm-p60.1, which is highly conserved among β-glucosidases. Unexpectedly, β-glucosidase activity screening of the generated variants showed that most variants were active, although they generally had significantly lower activity than the wild type enzyme. Further characterization of the library led us to conclude that a carefully selected combination of randomized codon-based saturation mutagenesis and site-directed mutagenesis may be most efficient, particularly when constructing and investigating randomized libraries with high fractions of positive hits.

  16. Combining rational and random strategies in β-glucosidase Zm-p60.1 protein library construction.

    Science.gov (United States)

    Turek, Dušan; Klimeš, Pavel; Mazura, Pavel; Brzobohatý, Břetislav

    2014-01-01

    Saturation mutagenesis is a cornerstone technique in protein engineering because of its utility (in conjunction with appropriate analytical techniques) for assessing effects of varying residues at selected positions on proteins' structures and functions. Site-directed mutagenesis with degenerate primers is the simplest and most rapid saturation mutagenesis technique. Thus, it is highly appropriate for assessing whether or not variation at certain sites is permissible, but not necessarily the most time- and cost-effective technique for detailed assessment of variations' effects. Thus, in the presented study we applied the technique to randomize position W373 in β-glucosidase Zm-p60.1, which is highly conserved among β-glucosidases. Unexpectedly, β-glucosidase activity screening of the generated variants showed that most variants were active, although they generally had significantly lower activity than the wild type enzyme. Further characterization of the library led us to conclude that a carefully selected combination of randomized codon-based saturation mutagenesis and site-directed mutagenesis may be most efficient, particularly when constructing and investigating randomized libraries with high fractions of positive hits.
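
    The degenerate-primer step has a compact computational analogue: enumerating the codons a degenerate codon can produce at the randomized position and the amino acids they encode. The sketch below uses NNK, a common saturation-mutagenesis degeneracy chosen here for illustration; the abstract does not state which scheme was used at W373.

        from itertools import product

        # Standard codon table (first/second/third base ordered T, C, A, G).
        CODON_TABLE = {
            "".join(c): aa
            for c, aa in zip(
                product("TCAG", repeat=3),
                "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG",
            )
        }

        # NNK degeneracy: N = A/C/G/T at positions 1-2, K = G/T at position 3.
        nnk_codons = ["".join(c) for c in product("ACGT", "ACGT", "GT")]
        encoded = sorted({CODON_TABLE[c] for c in nnk_codons} - {"*"})
        print(len(nnk_codons), "codons covering", len(encoded),
              "amino acids:", encoded)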

  17. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Full Text Available Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (the RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA), without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant in terms of Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria in terms of Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables with reproducibility between 99.3% and 100%. However, apart from the popularity of PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings
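
    A minimal sketch of the LASSO-with-bootstrap idea (scikit-learn stands in for whatever implementation the authors used, and the data are synthetic): refit on resampled datasets and report, per variable, the fraction of fits in which its coefficient is non-zero.

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p = 80, 200                      # metabolomics-like: p >> n
        X = rng.standard_normal((n, p))
        y = X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(n)

        def bootstrap_lasso(X, y, alpha=0.1, n_boot=100):
            """Selection frequency of each variable over bootstrap resamples."""
            counts = np.zeros(X.shape[1])
            for _ in range(n_boot):
                idx = rng.integers(0, len(y), len(y))  # resample with replacement
                model = Lasso(alpha=alpha, max_iter=5000).fit(X[idx], y[idx])
                counts += model.coef_ != 0
            return counts / n_boot

        freq = bootstrap_lasso(X, y)
        print("stable picks:", np.where(freq > 0.95)[0])  # reproducibly selected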

  18. Case studies combined with or without concept maps improve critical thinking in hospital-based nurses: a randomized-controlled trial.

    Science.gov (United States)

    Huang, Yu-Chuan; Chen, Hsing-Hsia; Yeh, Mei-Ling; Chung, Yu-Chu

    2012-06-01

    Critical thinking (CT) is essential to the exercise of professional judgment. As nurses face increasingly complex health-care situations, critical thinking can promote appropriate clinical decision-making and improve the quality of nursing care. This study aimed to evaluate the effects of a program of case studies, alone (CS) or combined with concept maps (CSCM), on improving CT in clinical nurses. The study was a randomized controlled trial with a multistage randomization process used to select and assign participants, ultimately resulting in 67 nurses in each group. The experimental group participated in a 16-week CSCM program, whereas the control group participated in a CS program of equal duration. Data were collected before and after the program using the California Critical Thinking Skill Test (CCTST) and the California Critical Thinking Disposition Inventory (CCTDI). After the programs, there were significant differences between the two groups in the critical thinking skills of analysis, evaluation, inference, deduction, and induction. There was also an overall significant difference, and a significant difference in the specific disposition of open-mindedness. This study supports the application of case studies combined with concept maps as a hospital-based teaching strategy to promote the development of critical thinking skills and dispositions in nurses. The CSCM resulted in greater improvements in all critical thinking skills, as well as in the overall and open-mindedness affective dispositions toward critical thinking, compared with case studies alone. A notable improvement among the CSCM participants was in analytic skill and disposition. Further longitudinal studies and data collection from multisite evaluations in a range of geographic locales are warranted. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.

    Science.gov (United States)

    Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku

    2017-07-01

    Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Pseudo-Random Number Generators

    Science.gov (United States)

    Howell, L. W.; Rheinfurth, M. H.

    1984-01-01

    The package features a comprehensive selection of probabilistic distributions. Monte Carlo simulations are resorted to whenever the systems under study are not amenable to deterministic analyses or when direct experimentation is not feasible. Random numbers having certain specified distribution characteristics are an integral part of such simulations. The package consists of a collection of "pseudorandom" number generators for use in Monte Carlo simulations.
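
    The core trick such packages build on is inverse-transform sampling: pass uniform pseudorandom numbers through the inverse CDF of the target distribution. A sketch for the exponential distribution, assuming nothing about the package's own interfaces:

        import math
        import random

        def exponential_variates(rate, n, seed=42):
            """Inverse-transform sampling: if U ~ Uniform(0,1), then
            -ln(1 - U) / rate follows an Exponential(rate) distribution."""
            rng = random.Random(seed)
            return [-math.log(1.0 - rng.random()) / rate for _ in range(n)]

        samples = exponential_variates(rate=2.0, n=10000)
        print(sum(samples) / len(samples))  # should be near 1/rate = 0.5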

  1. Two-stage atlas subset selection in multi-atlas based image segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2015-06-15

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors
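
    The control flow of the two-stage scheme can be sketched independently of any registration toolkit; cheap_score and refined_score below are hypothetical stand-ins for the preliminary (low-cost registration) and refined (full-fledged registration) relevance metrics.

        def two_stage_selection(atlases, target, cheap_score, refined_score,
                                augmented_size=20, fusion_size=5):
            """Stage 1: rank all atlases with a cheap metric and keep an augmented
            subset large enough that the truly relevant atlases survive.
            Stage 2: re-rank only that subset with the expensive metric."""
            stage1 = sorted(atlases, key=lambda a: cheap_score(a, target),
                            reverse=True)[:augmented_size]
            stage2 = sorted(stage1, key=lambda a: refined_score(a, target),
                            reverse=True)[:fusion_size]
            return stage2

        # Toy usage: atlases and the target are numbers; scores penalize distance.
        atlases = list(range(100))
        target = 37.3
        cheap = lambda a, t: -abs(a - round(t))     # coarse but cheap
        refined = lambda a, t: -abs(a - t)          # accurate but expensive
        print(two_stage_selection(atlases, target, cheap, refined))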

  2. Two-stage atlas subset selection in multi-atlas based image segmentation.

    Science.gov (United States)

    Zhao, Tingting; Ruan, Dan

    2015-06-01

    Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. The authors have developed a novel two-stage atlas

  3. Two-stage atlas subset selection in multi-atlas based image segmentation

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2015-01-01

    Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of a large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, the proposed scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance. Conclusions: The authors

  4. Index Fund Selections with GAs and Classifications Based on Turnover

    Science.gov (United States)

    Orito, Yukiko; Motoyama, Takaaki; Yamazaki, Genji

    It is well known that index fund selection is important for hedging investment risk in a stock market. The 'selection' means that, for 'stock index futures', n companies are selected out of all those in the market. For index fund selection, Orito et al.(6) proposed a method consisting of the following two steps: Step 1 is to select N companies in the market with a heuristic rule based on the coefficient of determination between the return rate of each company in the market and the increasing rate of the stock price index. Step 2 is to construct a group of n companies by applying genetic algorithms to the set of N companies. We note that the rule of Step 1 is not unique. The accuracy of the results using their method depends on the length of the time data (price data) in the experiments. The main purpose of this paper is to introduce a more 'effective rule' for Step 1. The rule is based on turnover. The method consisting of Step 1 based on turnover and Step 2 is examined with numerical experiments for the 1st Section of the Tokyo Stock Exchange. The results show that with our method, it is possible to construct a more effective index fund than with the method of Orito et al.(6). The accuracy of the results using our method depends little on the length of the time data (turnover data). The method works especially well when the increasing rate of the stock price index over a period can be viewed as linear time series data.
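
    A compact sketch of Step 2 under stated assumptions: fitness is a made-up tracking-error proxy against an index series, and the GA is a plain select-crossover-mutate loop over fixed-size company subsets (the turnover-based pre-selection of Step 1 is assumed to have produced the candidate set).

        import random

        def ga_index_fund(returns, index, n=5, pop_size=30, gens=50, seed=0):
            """Evolve an n-company subset whose equal-weighted return tracks
            the index; returns is {company: [period returns]}."""
            rnd = random.Random(seed)
            names = list(returns)

            def fitness(subset):  # negative squared tracking error
                port = [sum(returns[c][t] for c in subset) / n
                        for t in range(len(index))]
                return -sum((p - i) ** 2 for p, i in zip(port, index))

            population = [rnd.sample(names, n) for _ in range(pop_size)]
            for _ in range(gens):
                population.sort(key=fitness, reverse=True)
                parents = population[: pop_size // 2]
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = rnd.sample(parents, 2)
                    child = rnd.sample(list(set(a) | set(b)), n)  # crossover
                    if rnd.random() < 0.2:  # mutation: swap in an outside company
                        child[rnd.randrange(n)] = rnd.choice(
                            [c for c in names if c not in child])
                    children.append(child)
                population = parents + children
            return max(population, key=fitness)

        demo = random.Random(1)
        index = [demo.gauss(0, 1) for _ in range(60)]
        returns = {f"c{i}": [x + demo.gauss(0, 0.5) for x in index]
                   for i in range(30)}
        print(sorted(ga_index_fund(returns, index)))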

  5. Modal Analysis Based on the Random Decrement Transform

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune; Ibrahim, S. R.

    During the last years several papers utilizing the Random Decrement transform as a basis for extraction of modal parameters from the response of linear systems subjected to unknown ambient loads have been presented. Although the Random Decrement technique was developed in a decade starting from its introduction in 1968, the technique still seems attractive. This is probably due to the simplicity and the speed of the algorithm and the fact that the theory of the technique has been extended by introducing statistical measures such as correlation functions or spectral densities. The purpose of this paper is to present a state-of-the-art description of the Random Decrement technique, where the statistical theory is outlined and examples are given. New results, such as estimation of frequency response functions and quality assessment, are also introduced. Special attention is given...
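
    A minimal sketch of the Random Decrement signature itself, assuming a simple level up-crossing triggering condition: segments of the measured response following every trigger are averaged, and under ambient loading the average approximates a free decay of the system. The toy signal below stands in for a measured ambient response.

        import numpy as np

        def random_decrement(x, level, length):
            """Average of signal segments starting at up-crossings of level."""
            starts = np.where((x[:-1] < level) & (x[1:] >= level))[0] + 1
            starts = starts[starts + length <= len(x)]
            segments = np.stack([x[s:s + length] for s in starts])
            return segments.mean(axis=0), len(starts)

        # Toy ambient response: damped oscillator driven by white noise.
        rng = np.random.default_rng(0)
        dt, w, zeta = 0.01, 2 * np.pi, 0.02
        x = np.zeros(20000)
        v = 0.0
        for i in range(1, len(x)):
            a = -2 * zeta * w * v - w**2 * x[i - 1] + 50 * rng.standard_normal()
            v += a * dt
            x[i] = x[i - 1] + v * dt

        signature, n_triggers = random_decrement(x, level=x.std(), length=400)
        print(n_triggers, signature[:5])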

  6. Dissociating action-effect activation and effect-based response selection.

    Science.gov (United States)

    Schwarz, Katharina A; Pfister, Roland; Wirth, Robert; Kunde, Wilfried

    2018-05-25

    Anticipated action effects have been shown to govern action selection and initiation, as described in ideomotor theory, and they have also been demonstrated to determine crosstalk between different tasks in multitasking studies. Such effect-based crosstalk was observed not only in a forward manner (with a first task influencing performance in a following second task) but also in a backward manner (the second task influencing the preceding first task), suggesting that action effect codes can become activated prior to a capacity-limited processing stage often denoted as response selection. The process of effect-based response production, by contrast, has been proposed to be capacity-limited. These observations jointly suggest that effect code activation can occur independently of effect-based response production, though this theoretical implication has not been tested directly at present. We tested this hypothesis by employing a dual-task set-up in which we manipulated the ease of effect-based response production (via response-effect compatibility) in an experimental design that allows for observing forward and backward crosstalk. We observed robust crosstalk effects and response-effect compatibility effects alike, but no interaction between both effects. These results indicate that effect activation can occur in parallel for several tasks, independently of effect-based response production, which is confined to one task at a time. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Excellent nonlinearity of a selection device based on anti-series connected Zener diodes for ultrahigh-density bipolar RRAM arrays

    International Nuclear Information System (INIS)

    Li, Yingtao; Li, Rongrong; Wang, Yang; Tao, Chunlan; Fu, Liping; Gao, Xiaoping

    2015-01-01

    A crossbar array is usually used for the high-density application of resistive random access memory (RRAM) devices. However, cross-talk interference limits the increase in integration density. In this paper, anti-series connected Zener diodes are proposed as a selection device for bipolar RRAM arrays. Simulation results show that, by using the anti-series connected Zener diodes as a selection device, the readout margin is sufficiently improved compared to that obtained without a selection device or with anti-parallel connected diodes as the selection device. Theoretical simulation estimates that the maximum size of crossbar arrays with anti-series connected Zener diodes as a selection device exceeds 1 TB. In addition, the feasibility of using the anti-series connected Zener diodes as a selection device for bipolar RRAM is demonstrated experimentally. These results indicate that anti-series connected Zener diodes as a selection device open up great opportunities to realize ultrahigh-density bipolar RRAM arrays. (paper)

  8. BIM-Based Decision Support System for Material Selection Based on Supplier Rating

    Directory of Open Access Journals (Sweden)

    Abiola Akanmu

    2015-12-01

    Full Text Available Material selection is a delicate process, typically hinging on a number of factors that can be either cost- or environment-related. This process becomes more complicated when designers are faced with several material options for building elements, and each option can be supplied by different suppliers whose selection criteria may affect the budgetary and environmental requirements of the project. This paper presents the development of a decision support system based on the integration of building information models, a modified harmony search algorithm and supplier performance rating. The system is capable of producing the cost and environmental implications of different material combinations or building designs. A case study is presented to illustrate the functionality of the developed system.

  9. The adverse effect of selective cyclooxygenase-2 inhibitor on random skin flap survival in rats.

    Directory of Open Access Journals (Sweden)

    Haiyong Ren

    Full Text Available BACKGROUND: Cyclooxygenase-2 (COX-2) inhibitors provide desired analgesic effects after injury or surgery, but evidence suggests they also attenuate wound healing. This study investigates the effect of a COX-2 inhibitor on random skin flap survival. METHODS: The McFarlane flap model was established in 40 rats evaluated in two groups; each group was given the same volume of parecoxib or saline injection for 7 days. The necrotic area of the flap was measured, and specimens of the flap were stained with haematoxylin-eosin (HE) for histologic analysis. Immunohistochemical staining was performed to analyse the levels of VEGF and COX-2. RESULTS: 7 days after operation, the flap necrotic area ratio in the study group (66.65 ± 2.81%) was significantly enlarged compared with that of the control group (48.81 ± 2.33%) (P < 0.01). Histological analysis demonstrated angiogenesis, with the mean vessel density per mm² being lower in the study group (15.4 ± 4.4) than in the control group (27.2 ± 4.1) (P < 0.05). The expression of COX-2 and VEGF protein in intermediate area II in the two groups was evaluated by immunohistochemistry. The expression of COX-2 in the study group was (1022.45 ± 153.1) and in the control group was (2638.05 ± 132.2) (P < 0.01). The expression of VEGF in the study and control groups was (2779.45 ± 472.0) vs (4938.05 ± 123.6) (P < 0.01). In the COX-2 inhibitor group, the expressions of COX-2 and VEGF protein were remarkably down-regulated compared with the control group. CONCLUSION: The selective COX-2 inhibitor had an adverse effect on random skin flap survival. Suppression of neovascularization induced by a low level of VEGF is supposed to be the underlying biological mechanism.

  10. Designing a smoking cessation intervention for the unique needs of homeless persons: a community-based randomized clinical trial.

    Science.gov (United States)

    Goldade, Kate; Whembolua, Guy-Lucien; Thomas, Janet; Eischen, Sara; Guo, Hongfei; Connett, John; Des Jarlais, Don; Resnicow, Ken; Gelberg, Lillian; Owen, Greg; Grant, Jon; Ahluwalia, Jasjit S; Okuyemi, Kolawole S

    2011-12-01

    Although smoking prevalence remains strikingly high in homeless populations (~70% and three times the US national average), smoking cessation studies usually exclude homeless persons. Novel evidence-based interventions are needed for this high-risk subpopulation of smokers. To describe the aims and design of a first-ever smoking cessation clinical trial in the homeless population. The study was a two-group randomized community-based trial that enrolled participants (n = 430) residing across eight homeless shelters and transitional housing units in Minnesota. The study objective was to test the efficacy of motivational interviewing (MI) for enhancing adherence to nicotine replacement therapy (NRT; nicotine patch) and smoking cessation outcomes. Participants were randomized to one of the two groups: active (8 weeks of NRT + 6 sessions of MI) or control (NRT + standard care). Participants attended six in-person assessment sessions and eight retention visits at a location of their choice over 6 months. Nicotine patch in 2-week doses was administered at four visits over the first 8 weeks of the 26-week trial. The primary outcome was cotinine-verified 7-day point-prevalence abstinence at 6 months. Secondary outcomes included adherence to nicotine patch assessed through direct observation and patch counts. Other outcomes included the mediating and/or moderating effects of comorbid psychiatric and substance abuse disorders. Lessons learned from the community-based cessation randomized trial for improving recruitment and retention in a mobile and vulnerable population included: (1) the importance of engaging the perspectives of shelter leadership by forming and convening a Community Advisory Board; (2) locating the study at the shelters for more visibility and easier access for participants; (3) minimizing exclusion criteria to allow enrollment of participants with stable psychiatric comorbid conditions; (4) delaying the baseline visit from the eligibility visit by a week

  11. Web app based patient education in psoriasis - a randomized controlled trial.

    Science.gov (United States)

    Hawkins, Spencer D; Barilla, Steven; Feldman, Steven R

    2017-04-15

    Patients report wanting more information about psoriasis and clear expectations from the onset of therapy. Dermatologists do not think patients receive or internalize adequate information. There is a need for further explanation of treatment regimens to increase knowledge, compliance, and patient satisfaction. Recent advancements in web technology have the potential to improve these psoriasis outcomes. A web based application was created to educate psoriasis patients using video, graphics, and textual information. An investigator-blinded, randomized, controlled study evaluated the website's efficacy in 50 psoriasis patients at Wake Forest Baptist Health Dermatology. Patients were randomized into two groups: Group 1 received a link to the educational web app and a survey following their visit; Group 2 received a link to the survey with no educational web app. The survey assessed patient knowledge, self-reported adherence to medication, and adequacy of addressing concerns. Twenty-two patients completed the study. Patients in the web app group scored an average of 11/14 on the psoriasis knowledge quiz, whereas patients in the control group scored an average of 9/14, for an improvement of roughly 18% (p=0.008, n=22). Web app based education via DermPatientEd.Com is an efficient way to improve knowledge, but we did not demonstrate improvements in self-reported medication adherence or the ability to address concerns of psoriasis patients.

  12. Analysis of creative mathematic thinking ability in problem based learning model based on self-regulation learning

    Science.gov (United States)

    Munahefi, D. N.; Waluya, S. B.; Rochmad

    2018-03-01

    The purpose of this research was to identify the effectiveness of the Problem Based Learning (PBL) model based on Self-Regulation Learning (SRL) on the ability of mathematical creative thinking, and to analyse the mathematical creative thinking ability of high school students in solving mathematical problems. The population of this study was students of grade X SMA N 3 Klaten. The research method used in this research was sequential explanatory. The quantitative stage used a simple random sampling technique, in which two classes were selected randomly: the experimental class was taught with the PBL model based on SRL and the control class was taught with the expository model. Sample selection at the qualitative stage used a non-probability sampling technique in which 3 students each were selected from the high, medium, and low academic levels. The PBL model with the SRL approach was effective for students' mathematical creative thinking ability. With the SRL-based PBL model, students at the low academic level achieved the aspects of fluency and flexibility. Students at the medium academic level achieved the fluency and flexibility aspects well, but their originality was not yet well structured. Students at the high academic level could reach the aspect of originality.

  13. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
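
    Of the traditional methods listed, min-max is the simplest to state: keep every training vector that carries the minimum or maximum of at least one signal, so the retained memory matrix spans the observed operating range. A sketch, not tied to any particular SBM implementation:

        import numpy as np

        def min_max_select(X):
            """Return the subset of training vectors (rows of X) containing
            the min or max value of at least one variable (column)."""
            keep = set(np.argmin(X, axis=0)) | set(np.argmax(X, axis=0))
            return X[sorted(keep)]

        rng = np.random.default_rng(0)
        X = rng.random((1000, 4))       # 1000 observations of 4 plant signals
        prototypes = min_max_select(X)
        print(prototypes.shape)         # at most 8 vectors for 4 signals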

  14. Bias in random forest variable importance measures: Illustrations, sources and a solution

    Directory of Open Access Journals (Sweden)

    Hothorn Torsten

    2007-01-01

    Full Text Available Abstract Background Variable importance measures for random forests have been receiving increased attention as a means of variable selection in many classification tasks in bioinformatics and related scientific fields, for instance to select a subset of genetic markers relevant for the prediction of a certain disease. We show that random forest variable importance measures are a sensible means for variable selection in many applications, but are not reliable in situations where potential predictor variables vary in their scale of measurement or their number of categories. This is particularly important in genomics and computational biology, where predictors often include variables of different types, for example when predictors include both sequence data and continuous variables such as folding energy, or when amino acid sequence data show different numbers of categories. Results Simulation studies are presented illustrating that, when random forest variable importance measures are used with data of varying types, the results are misleading because suboptimal predictor variables may be artificially preferred in variable selection. The two mechanisms underlying this deficiency are biased variable selection in the individual classification trees used to build the random forest on one hand, and effects induced by bootstrap sampling with replacement on the other hand. Conclusion We propose to employ an alternative implementation of random forests, that provides unbiased variable selection in the individual classification trees. When this method is applied using subsampling without replacement, the resulting variable importance measures can be used reliably for variable selection even in situations where the potential predictor variables vary in their scale of measurement or their number of categories. The usage of both random forest algorithms and their variable importance measures in the R system for statistical computing is illustrated and
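
    The bias is easy to reproduce: give a forest one weakly informative binary predictor and one pure-noise predictor with many categories, and impurity-based importance will often favour the noise variable. The sketch below uses scikit-learn, which does not implement the conditional-inference forests advocated above, so permutation importance is shown as the more reliable readily available alternative:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance

        rng = np.random.default_rng(0)
        n = 2000
        informative = rng.integers(0, 2, n)     # binary, weakly informative
        noise = rng.integers(0, 100, n)         # 100 categories, pure noise
        y = (informative + (rng.random(n) < 0.3)).clip(0, 1)
        X = np.column_stack([informative, noise])

        rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        print("impurity importances:   ", rf.feature_importances_.round(3))

        perm = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
        print("permutation importances:", perm.importances_mean.round(3))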

  15. Study on Stationarity of Random Load Spectrum Based on the Special Road

    Science.gov (United States)

    Yan, Huawen; Zhang, Weigong; Wang, Dong

    2017-09-01

    Among methods for assessing the quality of special roads, one uses a wheel force sensor; the essence of this method is to collect the load spectrum of the vehicle to reflect the quality of the road. From the definition of a stochastic process, it is easy to see that the load spectrum is a stochastic process. However, the analysis methods and ranges of application of different random processes differ considerably, especially in engineering practice, which directly affects the design and development of the experiment. Therefore, determining the type of a random process has important practical significance. Based on an analysis of the digital characteristics of the road load spectrum, this paper determines that the road load spectrum in this experiment belongs to a stationary stochastic process, paving the way for the follow-up modeling and feature extraction of the special road.
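
    A common first check for weak stationarity in this spirit: compare segment-wise means and variances across the record, optionally followed by an augmented Dickey-Fuller test. The sketch below uses a synthetic signal standing in for wheel-force data:

        import numpy as np
        from statsmodels.tsa.stattools import adfuller

        rng = np.random.default_rng(0)
        load = rng.standard_normal(8000) * 5.0 + 120.0  # synthetic load record

        # Segment-wise moments: roughly constant mean and variance across
        # segments is consistent with weak stationarity.
        segments = load.reshape(8, -1)
        print("segment means:", segments.mean(axis=1).round(2))
        print("segment stds: ", segments.std(axis=1).round(2))

        # Augmented Dickey-Fuller: a small p-value rejects a unit root
        # (one common form of non-stationarity).
        stat, pvalue = adfuller(load)[:2]
        print(f"ADF statistic {stat:.2f}, p-value {pvalue:.4f}")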

  16. Synchronization of random bit generators based on coupled chaotic lasers and application to cryptography.

    Science.gov (United States)

    Kanter, Ido; Butkovski, Maria; Peleg, Yitzhak; Zigzag, Meital; Aviad, Yaara; Reidler, Igor; Rosenbluh, Michael; Kinzel, Wolfgang

    2010-08-16

    Random bit generators (RBGs) constitute an important tool in cryptography, stochastic simulations and secure communications. The latter in particular has some difficult requirements: high generation rate of unpredictable bit strings and secure key-exchange protocols over public channels. Deterministic algorithms generate pseudo-random number sequences at high rates; however, their unpredictability is limited by the very nature of their deterministic origin. Recently, physical RBGs based on chaotic semiconductor lasers were shown to exceed Gbit/s rates. Whether secure synchronization of two high rate physical RBGs is possible remains an open question. Here we propose a method whereby two fast RBGs based on mutually coupled chaotic lasers are synchronized. Using information theoretic analysis we demonstrate security against a powerful computational eavesdropper, capable of noiseless amplification, where all parameters are publicly known. The method is also extended to secure synchronization of a small network of three RBGs.

  17. Model selection with multiple regression on distance matrices leads to incorrect inferences.

    Directory of Open Access Journals (Sweden)

    Ryan P Franckowiak

    Full Text Available In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.

  18. Improved Frame Mode Selection for AMR-WB+ Based on Decision Tree

    Science.gov (United States)

    Kim, Jong Kyu; Kim, Nam Soo

    In this letter, we propose a coding mode selection method for the AMR-WB+ audio coder based on a decision tree. In order to reduce computation while maintaining good performance, a decision tree classifier is adopted, with the closed-loop mode selection results as the target classification labels. The size of the decision tree is controlled by pruning, so the proposed method does not increase the memory requirement significantly. Through an evaluation test on a database covering both speech and music materials, the proposed method is found to achieve a much better mode selection accuracy than the open-loop mode selection module in AMR-WB+.
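
    A sketch of the training setup with invented per-frame features: closed-loop decisions label the training frames, a tree is fitted, and cost-complexity pruning keeps the model small, mirroring the memory argument above.

        import numpy as np
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        n = 5000
        # Hypothetical per-frame features (e.g. energy, spectral flux, tonality).
        X = rng.random((n, 3))
        # Stand-in for closed-loop mode decisions: 0 = ACELP (speech-like),
        # 1 = TCX (music-like), driven here by a made-up tonality rule.
        y = (X[:, 2] + 0.1 * rng.standard_normal(n) > 0.5).astype(int)

        # ccp_alpha prunes the tree, trading a little accuracy for a small model.
        tree = DecisionTreeClassifier(ccp_alpha=0.002, random_state=0).fit(X, y)
        print("leaves:", tree.get_n_leaves(),
              "train accuracy:", round(tree.score(X, y), 3))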

  19. Channel and delay estimation for base-station–based cooperative communications in frequency-selective fading channels

    Directory of Open Access Journals (Sweden)

    Hongjun Xu

    2011-07-01

    Full Text Available A channel and delay estimation algorithm for both positive and negative delay, based on the distributed Alamouti scheme, has recently been discussed for base-station-based asynchronous cooperative systems in frequency-flat fading channels. This paper extends the algorithm, the maximum likelihood estimator, to work in frequency-selective fading channels. The minimum mean square error (MMSE) performance of channel estimation for both packet schemes and normal schemes is discussed in this paper. The symbol error rate (SER) performance of equalisation and detection for both time-reversal space-time block code (STBC) and single-carrier STBC is also discussed in this paper. The MMSE simulation results demonstrated the superior performance of the packet scheme over the normal scheme, with an improvement in performance of up to 6 dB when feedback was used in the frequency-selective channel at an MSE of 3 x 10⁻². The SER simulation results showed that, although both the normal and packet schemes achieved similar diversity orders, the packet scheme demonstrated a 1 dB coding gain over the normal scheme at a SER of 10⁻⁵. Finally, the SER simulations showed that the frequency-selective fading system outperformed the frequency-flat fading system.

  20. Multi-Label Learning via Random Label Selection for Protein Subcellular Multi-Locations Prediction.

    Science.gov (United States)

    Wang, Xiao; Li, Guo-Zheng

    2013-03-12

    Prediction of protein subcellular localization is an important but challenging problem, particularly when proteins may simultaneously exist at, or move between, two or more different subcellular location sites. Most existing protein subcellular localization methods deal only with single-location proteins. In the past few years, only a few methods have been proposed to tackle proteins with multiple locations. However, they adopt only a simple strategy, that is, transforming the multi-location proteins to multiple proteins with a single location, which does not take the correlations among different subcellular locations into account. In this paper, a novel method named RALS (multi-label learning via RAndom Label Selection) is proposed to learn from multi-location proteins in an effective and efficient way. Through a five-fold cross-validation test on a benchmark dataset, we demonstrate that our proposed method, which takes label correlations into consideration, clearly outperforms the baseline BR method, which does not, indicating that correlations among different subcellular locations really exist and contribute to the improvement of prediction performance. Experimental results on two benchmark datasets also show that our proposed methods achieve significantly higher performance than some other state-of-the-art methods in predicting the subcellular multi-locations of proteins. The prediction web server is available at http://levis.tongji.edu.cn:8080/bioinfo/MLPred-Euk/ for public usage.
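
    The abstract does not spell out the exact construction of RALS, so the sketch below shows a RAkEL-style analogue of the random-label-selection idea: random subsets of labels are drawn, the joint assignments on each subset become the classes of an ordinary multi-class task (capturing label correlations), and per-label predictions would be recovered by voting across subsets.

        import random

        def random_label_subsets(n_labels, k=3, n_subsets=5, seed=0):
            """Draw random k-label subsets; each gets its own multi-class task
            over the joint assignments of its labels."""
            rnd = random.Random(seed)
            return [tuple(sorted(rnd.sample(range(n_labels), k)))
                    for _ in range(n_subsets)]

        def transform(Y, subset):
            """Map each row's values on the subset to a single joint class."""
            return [tuple(row[j] for j in subset) for row in Y]

        Y = [[1, 0, 0, 1], [0, 1, 1, 0], [1, 0, 1, 1]]  # 3 proteins, 4 locations
        for s in random_label_subsets(n_labels=4, k=2, n_subsets=3):
            print(s, transform(Y, s))  # joint classes fed to a base classifier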