WorldWideScience

Sample records for random selection methods

  1. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology

    Science.gov (United States)

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...

  2. An efficient method of wavelength interval selection based on random frog for multivariate spectral calibration

    Science.gov (United States)

    Yun, Yong-Huan; Li, Hong-Dong; Wood, Leslie R. E.; Fan, Wei; Wang, Jia-Jun; Cao, Dong-Sheng; Xu, Qing-Song; Liang, Yi-Zeng

    2013-07-01

    Wavelength selection is a critical step for producing better prediction performance when applied to spectral data. Considering the fact that vibrational and rotational spectra have continuous spectral bands, we propose a novel method of wavelength interval selection based on random frog, called interval random frog (iRF). To obtain all the possible continuous intervals, the spectra are first divided into intervals by moving a window of fixed width over the whole spectrum. These overlapping intervals are ranked by applying random frog coupled with PLS, and the optimal ones are chosen. This method has been applied to two near-infrared spectral datasets, displaying higher efficiency in wavelength interval selection than other methods. The source code of iRF can be freely downloaded for academic research at the website: http://code.google.com/p/multivariate-calibration/downloads/list.
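
    The interval-enumeration step is easy to make concrete. The sketch below is my illustration, not the authors' code (the window width and top-10 cut-off are arbitrary, and a random placeholder score stands in for the random frog/PLS ranking):

        import numpy as np

        def moving_window_intervals(n_wavelengths, width):
            # Slide a fixed-width window one variable at a time across the
            # spectrum, yielding every contiguous candidate interval.
            return [(s, s + width) for s in range(n_wavelengths - width + 1)]

        intervals = moving_window_intervals(700, 20)  # e.g. 700 variables, width 20
        print(len(intervals))                         # 681 overlapping intervals

        # iRF would rank these intervals by random frog coupled with PLS;
        # a random placeholder score stands in for that step here.
        rng = np.random.default_rng(0)
        scores = rng.random(len(intervals))
        top10 = [intervals[i] for i in np.argsort(scores)[::-1][:10]]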

  3. Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method

    Science.gov (United States)

    Shamsoddini, A.; Aboodi, M. R.; Karami, J.

    2017-09-01

    Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods of predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  4. TEHRAN AIR POLLUTANTS PREDICTION BASED ON RANDOM FOREST FEATURE SELECTION METHOD

    Directory of Open Access Journals (Sweden)

    A. Shamsoddini

    2017-09-01

    Full Text Available Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods of predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.

  5. Assessing the accuracy and stability of variable selection methods for random forest modeling in ecology.

    Science.gov (United States)

    Fox, Eric W; Hill, Ryan A; Leibowitz, Scott G; Olsen, Anthony R; Thornbrugh, Darren J; Weber, Marc H

    2017-07-01

    Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological data sets, there is limited guidance on variable selection methods for RF modeling. Typically, either a preselected set of predictor variables is used or stepwise procedures are employed that iteratively remove variables according to their importance measures. This paper investigates the application of variable selection methods to RF models for predicting probable biological stream condition. Our motivating data set consists of the good/poor condition of n = 1365 stream survey sites from the 2008/2009 National Rivers and Streams Assessment, and a large set (p = 212) of landscape features from the StreamCat data set as potential predictors. We compare two types of RF models: a full variable set model with all 212 predictors and a reduced variable set model selected using a backward elimination approach. We assess model accuracy using RF's internal out-of-bag estimate, and a cross-validation procedure with validation folds external to the variable selection process. We also assess the stability of the spatial predictions generated by the RF models to changes in the number of predictors and argue that model selection needs to consider both accuracy and stability. The results suggest that RF modeling is robust to the inclusion of many variables of moderate to low importance. We found no substantial improvement in cross-validated accuracy as a result of variable reduction. Moreover, the backward elimination procedure tended to select too few variables and exhibited numerous issues such as upwardly biased out-of-bag accuracy estimates and instabilities in the spatial predictions. We use simulations to further support and generalize results from the analysis of real data. A main purpose of this work is to elucidate issues of model selection bias and instability to ecologists interested in

  6. Ethnopharmacological versus random plant selection methods for the evaluation of the antimycobacterial activity

    Directory of Open Access Journals (Sweden)

    Danilo R. Oliveira

    2011-05-01

    Full Text Available The municipality of Oriximiná, Brazil, has 33 quilombola communities in remote areas, endowed with wide experience in the use of medicinal plants. An ethnobotanical survey was carried out in five of these communities. A free-listing method directed at surveying species locally indicated against tuberculosis and lung problems was also applied. Data were analyzed by quantitative techniques: saliency index and major use agreement. Thirty-four informants reported 254 ethnospecies. Among these, 43 were surveyed for possible antimycobacterial activity. On the basis of this information, ten species obtained from the ethnodirected approach (ETHNO) and eighteen species obtained from the random approach (RANDOM) were assayed against Mycobacterium tuberculosis by the microdilution method, using resazurin as an indicator of cell viability. The best results for antimycobacterial activity were obtained for plants selected by the ethnopharmacological approach (50% ETHNO vs. 16.7% RANDOM). These results are even more significant considering that therapeutic success in quilombola practice is complex: some plants act as fortifying agents, depuratives, vomitories, purgatives and bitter remedies, especially for infectious diseases, and are of great importance to the communities in curing or recovering health as a whole.

  7. Active classifier selection for RGB-D object categorization using a Markov random field ensemble method

    Science.gov (United States)

    Durner, Maximilian; Márton, Zoltán; Hillenbrand, Ulrich; Ali, Haider; Kleinsteuber, Martin

    2017-03-01

    In this work, a new ensemble method for the task of category recognition in different environments is presented. The focus is on service robotic perception in an open environment, where the robot's task is to recognize previously unseen objects of predefined categories, based on training on a public dataset. We propose an ensemble learning approach to be able to flexibly combine complementary sources of information (different state-of-the-art descriptors computed on color and depth images), based on a Markov Random Field (MRF). By exploiting its specific characteristics, the MRF ensemble method can also be executed as a Dynamic Classifier Selection (DCS) system. In the experiments, the committee- and topology-dependent performance boost of our ensemble is shown. Despite reduced computational costs and using less information, our strategy performs on the same level as common ensemble approaches. Finally, the impact of large differences between datasets is analyzed.

  8. Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling

    Science.gov (United States)

    Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah

    2014-01-01

    Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…

  9. Radiographic methods used before removal of mandibular third molars among randomly selected general dental clinics.

    Science.gov (United States)

    Matzen, Louise H; Petersen, Lars B; Wenzel, Ann

    2016-01-01

    To assess radiographic methods and diagnostically sufficient images used before removal of mandibular third molars among randomly selected general dental clinics and, furthermore, to assess factors predisposing for an additional radiographic examination. Two observers visited 18 randomly selected clinics in Denmark and studied patient files, including radiographs, of patients who had their mandibular third molar(s) removed. The radiographic unit and type of receptor were registered. A diagnostically sufficient image was defined as one displaying the whole tooth and the mandibular canal (yes/no). Overprojection between the tooth and mandibular canal (yes/no) and patient-reported inferior alveolar nerve sensory disturbances (yes/no) were recorded. Regression analyses tested whether overprojection between the third molar and the mandibular canal and an insufficient intraoral image predisposed for additional radiographic examination(s). 1500 mandibular third molars had been removed; 1090 had an intraoral, 468 a panoramic and 67 a CBCT examination. 1000 teeth were removed after an intraoral examination alone, 433 after panoramic examination and 67 after CBCT examination. 90 teeth had an additional examination after the intraoral one. Overprojection between the tooth and mandibular canal was a significant factor (p < 0.001, odds ratio = 3.56) for an additional examination. 63.7% of the intraoral images were sufficient and 36.3% were insufficient, with no significant difference between images acquired with phosphor plates and solid-state sensors (p = 0.6). An insufficient image predisposed for an additional examination (p = 0.008, odds ratio = 1.8), but one was performed in only 11% of such cases. Most mandibular third molars were removed based on an intraoral examination alone, although 36.3% of these images were insufficient.

  10. Pseudo cluster randomization: a treatment allocation method to minimize contamination and selection bias.

    NARCIS (Netherlands)

    Borm, G.F.; Melis, R.J.F.; Teerenstra, S.; Peer, P.G.M.

    2005-01-01

    In some clinical trials, treatment allocation on a patient level is not feasible, and whole groups or clusters of patients are allocated to the same treatment. If, for example, a clinical trial is investigating the efficacy of various patient coaching methods and randomization is done on a patient

  11. Blocked Randomization with Randomly Selected Block Sizes

    Directory of Open Access Journals (Sweden)

    Jimmy Efird

    2010-12-01

    Full Text Available When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.

  12. Blocked randomization with randomly selected block sizes.

    Science.gov (United States)

    Efird, Jimmy

    2011-01-01

    When planning a randomized clinical trial, careful consideration must be given to how participants are selected for various arms of a study. Selection and accidental bias may occur when participants are not assigned to study groups with equal probability. A simple random allocation scheme is a process by which each participant has equal likelihood of being assigned to treatment versus referent groups. However, by chance an unequal number of individuals may be assigned to each arm of the study and thus decrease the power to detect statistically significant differences between groups. Block randomization is a commonly used technique in clinical trial design to reduce bias and achieve balance in the allocation of participants to treatment arms, especially when the sample size is small. This method increases the probability that each arm will contain an equal number of individuals by sequencing participant assignments by block. Yet still, the allocation process may be predictable, for example, when the investigator is not blind and the block size is fixed. This paper provides an overview of blocked randomization and illustrates how to avoid selection bias by using random block sizes.
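
    A minimal sketch of the scheme for two arms (my illustration, not code from the paper): each block's size is drawn at random from a set of even sizes, and each block contains equal numbers of both assignments in random order, so the sequence stays balanced while block boundaries remain unpredictable.

        import random

        def blocked_randomization(n_participants, block_sizes=(4, 6, 8), seed=1):
            # Two-arm permuted-block allocation with randomly selected
            # block sizes: within each block, half 'T' and half 'C' in
            # random order; the next assignment cannot be predicted even
            # if earlier ones become unblinded.
            rng = random.Random(seed)
            allocation = []
            while len(allocation) < n_participants:
                size = rng.choice(block_sizes)  # must be even for two arms
                block = ['T'] * (size // 2) + ['C'] * (size // 2)
                rng.shuffle(block)
                allocation.extend(block)
            return allocation[:n_participants]

        print(blocked_randomization(20))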

  13. A Permutation Importance-Based Feature Selection Method for Short-Term Electricity Load Forecasting Using Random Forest

    Directory of Open Access Journals (Sweden)

    Nantian Huang

    2016-09-01

    Full Text Available The prediction accuracy of short-term load forecasting (STLF) depends on the choice of prediction model and the result of feature selection. In this paper, a novel random forest (RF)-based feature selection method for STLF is proposed. First, 243 related features were extracted from historical load data and the time information of prediction points to form the original feature set. Subsequently, the original feature set was used to train an RF as the original model. After the training process, the prediction error of the original model on the test set was recorded and the permutation importance (PI) value of each feature was obtained. Then, an improved sequential backward search method was used to select the optimal forecasting feature subset based on the PI value of each feature. Finally, the optimal forecasting feature subset was used to train a new RF model as the final prediction model. Experiments showed that the prediction accuracy of the RF trained on the optimal forecasting feature subset was higher than that of the original model and of comparative models based on support vector regression and artificial neural networks.
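
    The PI-and-prune loop is straightforward to approximate. The sketch below (a rough analogue on synthetic data, not the authors' implementation; the fixed 10-feature cut-off replaces the paper's improved sequential backward search) uses scikit-learn's permutation importance:

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for the 243 extracted load/time features.
        X, y = make_regression(n_samples=500, n_features=30, n_informative=8, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        # Original model on the full feature set, then PI on held-out data.
        rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
        pi = permutation_importance(rf, X_te, y_te, n_repeats=10, random_state=0)

        # Simplified reduction: keep the 10 features with highest mean PI
        # and retrain the final forecasting model on them.
        keep = np.argsort(pi.importances_mean)[::-1][:10]
        rf_final = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr[:, keep], y_tr)
        print(rf_final.score(X_te[:, keep], y_te))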

  14. Randomized selection on the GPU

    Energy Technology Data Exchange (ETDEWEB)

    Monroe, Laura Marie [Los Alamos National Laboratory; Wendelberger, Joanne R [Los Alamos National Laboratory; Michalak, Sarah E [Los Alamos National Laboratory

    2011-01-13

    We implement here a fast and memory-sparing probabilistic top-N selection algorithm on the GPU. To our knowledge, this is the first direct selection algorithm in the literature for the GPU. The algorithm proceeds via a probabilistic guess-and-check process searching for the Nth element. It always gives a correct result and always terminates. The use of randomization reduces the amount of data that needs heavy processing, and so reduces the average time required for the algorithm. Probabilistic Las Vegas algorithms of this kind are a form of stochastic optimization and can be well suited to more general parallel processors with limited amounts of fast memory.
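
    The guess-and-check strategy is the same one behind randomized quickselect; here is a minimal CPU-side sketch of a Las Vegas selection of the Nth smallest element (my illustration, not the GPU implementation from the report):

        import random

        def randomized_select(data, n):
            # Return the nth smallest element (1-indexed): guess a random
            # pivot, check which side of the partition holds the answer,
            # and recurse only into that side. Always correct; expected
            # linear time.
            assert 1 <= n <= len(data)
            pivot = random.choice(data)
            lows = [x for x in data if x < pivot]
            highs = [x for x in data if x > pivot]
            n_pivots = len(data) - len(lows) - len(highs)  # elements equal to pivot
            if n <= len(lows):
                return randomized_select(lows, n)
            if n <= len(lows) + n_pivots:
                return pivot
            return randomized_select(highs, n - len(lows) - n_pivots)

        print(randomized_select([9, 1, 7, 3, 5], 2))  # -> 3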

  15. The random projection method

    CERN Document Server

    Vempala, Santosh S

    2005-01-01

    Random projection is a simple geometric technique for reducing the dimensionality of a set of points in Euclidean space while preserving pairwise distances approximately. The technique plays a key role in several breakthrough developments in the field of algorithms. In other cases, it provides elegant alternative proofs. The book begins with an elementary description of the technique and its basic properties. Then it develops the method in the context of applications, which are divided into three groups. The first group consists of combinatorial optimization problems such as maxcut, graph coloring, minimum multicut, graph bandwidth and VLSI layout. Presented in this context is the theory of Euclidean embeddings of graphs. The next group is machine learning problems, specifically, learning intersections of halfspaces and learning large margin hypotheses. The projection method is further refined for the latter application. The last set consists of problems inspired by information retrieval, namely, nearest neig...
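
    As a minimal illustration of the technique itself (not code from the book; the dimensions are arbitrary), a Gaussian random projection approximately preserving pairwise Euclidean distances:

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, k = 1000, 500, 50              # points, original and target dimension
        X = rng.normal(size=(n, d))

        # Project with i.i.d. N(0, 1) entries scaled by 1/sqrt(k), so squared
        # pairwise distances are preserved in expectation.
        R = rng.normal(size=(d, k)) / np.sqrt(k)
        Y = X @ R

        i, j = 0, 1
        print(np.linalg.norm(Y[i] - Y[j]) / np.linalg.norm(X[i] - X[j]))  # close to 1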

  16. Improving randomness characterization through Bayesian model selection.

    Science.gov (United States)

    Díaz Hernández Rojas, Rafael; Solís, Aldo; Angulo Martínez, Alí M; U'Ren, Alfred B; Hirsch, Jorge G; Marsili, Matteo; Pérez Castillo, Isaac

    2017-06-08

    Random number generation plays an essential role in technology, with important applications in areas ranging from cryptography to Monte Carlo methods and other probabilistic algorithms. All such applications require high-quality sources of random numbers, yet effective methods for assessing whether a source produces truly random sequences are still missing. Current methods either do not rely on a formal description of randomness (the NIST test suite), or are inapplicable in principle (the characterization derived from algorithmic information theory), since the latter requires testing all the possible computer programs that could produce the sequence to be analysed. Here we present a rigorous method that overcomes these problems based on Bayesian model selection. We derive analytic expressions for a model's likelihood, which is then used to compute its posterior distribution. Our method proves to be more rigorous than NIST's suite and the Borel normality criterion, and its implementation is straightforward. We applied our method to an experimental device based on the process of spontaneous parametric downconversion to confirm that it behaves as a genuine quantum random number generator. As our approach relies on Bayesian inference, our scheme transcends individual sequence analysis, leading to a characterization of the source itself.
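
    The paper derives likelihoods for its own model families; purely as a toy analogue of Bayesian model selection on a binary sequence (my sketch, not the authors' models), one can compare a fair-coin model against an unknown-bias model through their marginal likelihoods:

        from math import lgamma, log

        def log_marginal_biased(h, t):
            # log of the integral of theta^h (1-theta)^t over a uniform prior
            # on theta, i.e. the Beta function B(h+1, t+1).
            return lgamma(h + 1) + lgamma(t + 1) - lgamma(h + t + 2)

        def log_bayes_factor(bits):
            # log Bayes factor of 'fair coin' versus 'coin with unknown bias';
            # positive values favour the fair (random) model.
            h, t = bits.count(1), bits.count(0)
            return (h + t) * log(0.5) - log_marginal_biased(h, t)

        print(log_bayes_factor([0, 1] * 50))  # balanced sequence favours fairness
        print(log_bayes_factor([1] * 100))    # constant sequence strongly favours bias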

  17. Species selection and random drift in macroevolution.

    Science.gov (United States)

    Chevin, Luis-Miguel

    2016-03-01

    Species selection resulting from trait-dependent speciation and extinction is increasingly recognized as an important mechanism of phenotypic macroevolution. However, the recent bloom in statistical methods quantifying this process faces a scarcity of dynamical theory for their interpretation, notably regarding the relative contributions of deterministic versus stochastic evolutionary forces. I use simple diffusion approximations of birth-death processes to investigate how the expected and random components of macroevolutionary change depend on phenotype-dependent speciation and extinction rates, as can be estimated empirically. I show that the species selection coefficient for a binary trait, and selection differential for a quantitative trait, depend not only on differences in net diversification rates (speciation minus extinction), but also on differences in species turnover rates (speciation plus extinction), especially in small clades. The randomness in speciation and extinction events also produces a species-level equivalent to random genetic drift, which is stronger for higher turnover rates. I then show how microevolutionary processes including mutation, organismic selection, and random genetic drift cause state transitions at the species level, allowing comparison of evolutionary forces across levels. A key parameter that would be needed to apply this theory is the distribution and rate of origination of new optimum phenotypes along a phylogeny. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  18. Randomized response methods

    NARCIS (Netherlands)

    van der Heijden, P.G.M.; Cruyff, Maarten; Bockenholt, U.

    2014-01-01

    In survey research it is often problematic to ask people sensitive questions because they may refuse to answer or they may provide a socially desirable answer that does not reveal their true status on the sensitive question. To solve this problem Warner (1965) proposed randomized response (RR). Here

  19. Random selection of Borel sets

    Directory of Open Access Journals (Sweden)

    Bernd Günther

    2010-10-01

    Full Text Available A theory of random Borel sets is presented, based on dyadic resolutions of compact metric spaces. The conditional expectation of the intersection of two independent random Borel sets is investigated. An example based on an embedding of Sierpinski’s universal curve into the space of Borel sets is given.

  20. Forecasting Using Random Subspace Methods

    NARCIS (Netherlands)

    T. Boot (Tom); D. Nibbering (Didier)

    2016-01-01

    Random subspace methods are a novel approach to obtain accurate forecasts in high-dimensional regression settings. We provide a theoretical justification of the use of random subspace methods and show their usefulness when forecasting monthly macroeconomic variables. We focus on two

  1. Methods and analysis of realizing randomized grouping.

    Science.gov (United States)

    Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi

    2011-07-01

    Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
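
    The grouping step itself is short in any language (the article uses SAS; this Python analogue is mine): complete randomization shuffles all samples once and deals them into groups.

        import random

        def complete_randomization(subjects, n_groups, seed=2011):
            # Completely randomized grouping: shuffle the subjects once,
            # then deal them out to the groups in round-robin order.
            rng = random.Random(seed)
            shuffled = subjects[:]
            rng.shuffle(shuffled)
            return {g: shuffled[g::n_groups] for g in range(n_groups)}

        print(complete_randomization(list(range(1, 13)), 3))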

  2. Randomized Block Cubic Newton Method

    KAUST Repository

    Doikov, Nikita

    2018-02-12

    We study the problem of minimizing the sum of three convex functions: a differentiable, a twice-differentiable and a non-smooth term in a high-dimensional setting. To this effect we propose and analyze a randomized block cubic Newton (RBCN) method, which in each iteration builds a model of the objective function formed as the sum of the natural models of its three components: a linear model with a quadratic regularizer for the differentiable term, a quadratic model with a cubic regularizer for the twice-differentiable term, and a perfect (proximal) model for the non-smooth term. Our method in each iteration minimizes the model over a random subset of blocks of the search variable. RBCN is the first algorithm with these properties, generalizing several existing methods and matching the best known bounds in all special cases. We establish $\mathcal{O}(1/\epsilon)$, $\mathcal{O}(1/\sqrt{\epsilon})$ and $\mathcal{O}(\log(1/\epsilon))$ rates under different assumptions on the component functions. Lastly, we show numerically that our method outperforms the state of the art on a variety of machine learning problems, including cubically regularized least-squares, logistic regression with constraints, and Poisson regression.

  3. 32 CFR 1624.1 - Random selection procedures for induction.

    Science.gov (United States)

    2010-07-01

    32 CFR Part 1624 (National Defense), SELECTIVE SERVICE SYSTEM INDUCTIONS, § 1624.1 Random selection procedures for induction: (a) The Director of Selective Service shall from time to time establish a random selection sequence for induction by a drawing to be...

  4. Selection Method for COTS Systems

    DEFF Research Database (Denmark)

    Hedman, Jonas; Andersson, Bo

    2014-01-01

    requires new skills and methods supporting the process of evaluating and selecting information systems. This paper presents a method for selecting COTS systems. The method includes the following phases: problem framing, requirements and appraisal, and selection of systems. The idea and distinguishing feature behind the method is that improved understanding of organizational ‘ends’ or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between ‘ends’ (e.g. improved organizational effectiveness) and ‘means’ (e.g. implementing COTS systems). This way of approaching the selection of COTS systems, viewing them as a ‘means’ to reach organizational ‘ends’, is different from the mainstream views of information systems development, namely the view that sees information systems development as a problem-solving process, and the underlying ontological view in other

  5. Local randomization in neighbor selection improves PRM roadmap quality

    KAUST Repository

    McMahon, Troy

    2012-10-01

    Probabilistic Roadmap Methods (PRMs) are one of the most used classes of motion planning methods. These sampling-based methods generate robot configurations (nodes) and then connect them to form a graph (roadmap) containing representative feasible pathways. A key step in PRM roadmap construction involves identifying a set of candidate neighbors for each node. Traditionally, these candidates are chosen to be the k closest nodes based on a given distance metric. In this paper, we propose a new neighbor selection policy called LocalRand(k, K'), that first computes the K' closest nodes to a specified node and then selects k of those nodes at random. Intuitively, LocalRand attempts to benefit from random sampling while maintaining the higher levels of local planner success inherent to selecting more local neighbors. We provide a methodology for selecting the parameters k and K'. We perform an experimental comparison which shows that, for both rigid and articulated robots, LocalRand results in roadmaps that are better connected than those of the traditional k-closest policy or a purely random neighbor selection policy. The cost required to achieve these results is shown to be comparable to k-closest. © 2012 IEEE.
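
    A minimal sketch of the policy (my reading of the abstract, with a Euclidean metric and arbitrary parameter values assumed):

        import numpy as np

        def local_rand_neighbors(nodes, q, k, k_prime, rng):
            # LocalRand(k, K'): find the K' nodes closest to configuration q,
            # then return k of those candidates chosen uniformly at random.
            dists = np.linalg.norm(nodes - q, axis=1)
            nearest = np.argsort(dists)[:k_prime]
            return rng.choice(nearest, size=k, replace=False)

        rng = np.random.default_rng(0)
        roadmap_nodes = rng.random((200, 6))   # 200 random 6-DOF configurations
        query = rng.random(6)
        print(local_rand_neighbors(roadmap_nodes, query, k=5, k_prime=15, rng=rng))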

  6. Optimization methods for activities selection problems

    Science.gov (United States)

    Mahad, Nor Faradilah; Alias, Suriana; Yaakop, Siti Zulaika; Arshad, Norul Amanina Mohd; Mazni, Elis Sofia

    2017-08-01

    Co-curricular activities must be joined by every student in Malaysia, and these activities bring many benefits to the students. By joining these activities, the students can learn time management and develop many useful skills. This project focuses on the selection of co-curricular activities in a secondary school using optimization methods, namely the Analytic Hierarchy Process (AHP) and Zero-One Goal Programming (ZOGP). A secondary school in Negeri Sembilan, Malaysia was chosen as a case study. A set of questionnaires was distributed randomly to calculate the weight for each activity based on the three chosen criteria, which are soft skills, interesting activities and performance. The weights were calculated using AHP, and the results showed that the most important criterion is soft skills. Then, the ZOGP model was analyzed using LINGO software version 15.0. There are two priorities to be considered. The first priority, to minimize the budget for the activities, is achieved, since the total budget can be reduced by RM233.00. Therefore, the total budget to implement the selected activities is RM11,195.00. The second priority, to select the co-curricular activities, is also achieved. The results showed that 9 out of 15 activities were selected. Thus, it can be concluded that the AHP and ZOGP approach can be used as an optimization method for activity selection problems.

  7. A selective integrated tempering method.

    Science.gov (United States)

    Yang, Lijiang; Gao, Yi Qin

    2009-12-07

    In this paper, based on integrated tempering sampling, we introduce a selective integrated tempering sampling (SITS) method for efficient conformation sampling and thermodynamics calculations for a subsystem within a larger one, such as biomolecules solvated in aqueous solutions. By introducing a potential surface scaled with temperature, sampling over the configuration space of interest (e.g., the solvated biomolecule) is selectively enhanced while the rest of the system (e.g., the solvent) stays largely unperturbed. The application of this method to biomolecular systems allows highly efficient sampling over both the energy and configuration spaces of interest. Compared to the popular and powerful replica exchange molecular dynamics (REMD), the method presented in this paper is significantly more efficient in yielding relevant thermodynamic quantities (such as the potential of mean force for biomolecular conformational changes in aqueous solutions). More importantly, SITS, but not REMD, yielded results consistent with traditional umbrella sampling free energy calculations when an explicit solvent model was used, since SITS avoids sampling the irrelevant phase space (such as boiling water at high temperatures).

  8. The Random Material Point Method

    NARCIS (Netherlands)

    Wang, B.; Vardon, P.J.; Hicks, M.A.

    2017-01-01

    The material point method is a finite element variant which allows the material, represented by a point-wise discretization, to move through the background mesh. This means that large deformations, such as those observed post slope failure, can be computed. By coupling this material level

  9. Unbiased split variable selection for random survival forests using maximally selected rank statistics.

    Science.gov (United States)

    Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas

    2017-04-15

    The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, and these cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories, and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.

  10. A random spatial sampling method in a rural developing nation.

    Science.gov (United States)

    Kondo, Michelle C; Bream, Kent D W; Barg, Frances K; Branas, Charles C

    2014-04-10

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method using geographical information system (GIS) software and global positioning system (GPS) technology for application in a health survey in a rural region of Guatemala, as well as a qualitative study of the enumeration process. This method offers an alternative sampling technique that could reduce opportunities for bias in household selection compared to cluster methods. However, its use is subject to issues surrounding survey preparation, technological limitations and in-the-field household selection. Application of this method in remote areas will raise challenges surrounding the boundary delineation process, use and translation of satellite imagery between GIS and GPS, and household selection at each survey point in varying field conditions. This method favors household selection in denser urban areas and in new residential developments. Random spatial sampling methodology can be used to survey a random sample of population in a remote region of a developing nation. Although this method should be further validated and compared with more established methods to determine its utility in social survey applications, it shows promise for use in developing nations with resource-challenged environments where detailed geographic and human census data are less available.

  11. Preasymptotic convergence of randomized Kaczmarz method

    Science.gov (United States)

    Jiao, Yuling; Jin, Bangti; Lu, Xiliang

    2017-12-01

    The Kaczmarz method is one popular iterative method for solving inverse problems, especially in computed tomography. Recently, it was established that a randomized version of the method enjoys exponential convergence for well-posed problems, with a rate determined by a variant of the condition number. In this work, we analyze the preasymptotic convergence behavior of the randomized Kaczmarz method and show that the low-frequency error (with respect to the right singular vectors) decays faster during the first iterations than the high-frequency error. Under the assumption that the initial error is smooth (e.g. sourcewise representation), the result explains the fast empirical convergence behavior, thereby shedding new insights into the excellent performance of the randomized Kaczmarz method in practice. Further, we propose a simple strategy to stabilize the asymptotic convergence of the iteration by means of variance reduction. We provide extensive numerical experiments to confirm the analysis and to elucidate the behavior of the algorithms.
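
    For reference, the basic randomized Kaczmarz iteration with rows sampled proportionally to their squared norms fits in a few lines (a sketch of the standard algorithm, not the paper's experimental code):

        import numpy as np

        def randomized_kaczmarz(A, b, x0, n_iters, rng):
            # At each step pick row i with probability ||a_i||^2 / ||A||_F^2
            # and project the iterate onto the hyperplane a_i^T x = b_i.
            row_norms_sq = np.sum(A ** 2, axis=1)
            probs = row_norms_sq / row_norms_sq.sum()
            x = x0.astype(float).copy()
            for _ in range(n_iters):
                i = rng.choice(len(b), p=probs)
                x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]
            return x

        rng = np.random.default_rng(0)
        A = rng.normal(size=(300, 50))
        x_true = rng.normal(size=50)
        b = A @ x_true                      # consistent overdetermined system
        x = randomized_kaczmarz(A, b, np.zeros(50), 5000, rng)
        print(np.linalg.norm(x - x_true))   # small residual error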

  12. In-Place Randomized Slope Selection

    DEFF Research Database (Denmark)

    Blunck, Henrik; Vahrenhold, Jan

    2006-01-01

    Slope selection is a well-known algorithmic tool used in the context of computing robust estimators for fitting a line to a collection P of n points in the plane. We demonstrate that it is possible to perform slope selection in expected O(n log n) time using only constant extra space in addition to

  13. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to onfarm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest ...

  14. Study on MAX-MIN Ant System with Random Selection in Quadratic Assignment Problem

    Science.gov (United States)

    Iimura, Ichiro; Yoshida, Kenji; Ishibashi, Ken; Nakayama, Shigeru

    Ant Colony Optimization (ACO), which is a type of swarm intelligence inspired by ants' foraging behavior, has been studied extensively and its effectiveness has been shown by many researchers. Previous studies have reported that the MAX-MIN Ant System (MMAS) is one of the most effective ACO algorithms. MMAS maintains the balance between intensification and diversification of pheromone by limiting the quantity of pheromone to a range between minimum and maximum values. In this paper, we propose the MAX-MIN Ant System with Random Selection (MMASRS) for improving the search performance even further. MMASRS is a new ACO algorithm in which random selection, one of the edge-choosing methods used by agents (ants), is newly introduced into MMAS. In our experimental evaluation using ten quadratic assignment problems, we have shown that the proposed MMASRS with random selection is superior to the conventional MMAS without it from the viewpoint of search performance.
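
    The abstract does not spell out the rule, so the following sketch is only one plausible reading (mine): with some probability an ant ignores the pheromone and picks the next city uniformly at random, and otherwise uses the usual MMAS roulette wheel.

        import random

        def choose_next_city(candidates, tau, eta, alpha, beta, p_random, rng):
            # Random selection: with probability p_random, pick a candidate
            # uniformly at random (extra diversification).
            if rng.random() < p_random:
                return rng.choice(candidates)
            # Otherwise, the standard roulette wheel on tau^alpha * eta^beta.
            weights = [tau[c] ** alpha * eta[c] ** beta for c in candidates]
            r = rng.random() * sum(weights)
            acc = 0.0
            for c, w in zip(candidates, weights):
                acc += w
                if acc >= r:
                    return c
            return candidates[-1]

        rng = random.Random(0)
        tau = {c: 1.0 for c in range(5)}             # pheromone (uniform here)
        eta = {c: 1.0 / (c + 1) for c in range(5)}   # heuristic, e.g. 1/distance
        print(choose_next_city(list(range(5)), tau, eta, 1, 2, 0.1, rng))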

  15. Sequential selection of random vectors under a sum constraint

    OpenAIRE

    Stanke, Mario

    2004-01-01

    We observe a sequence X1,X2,...,Xn of independent and identically distributed coordinatewise nonnegative d-dimensional random vectors. When a vector is observed it can either be selected or rejected but once made this decision is final. In each coordinate the sum of the selected vectors must not exceed a given constant. The problem is to find a selection policy that maximizes the expected number of selected vectors. For a general absolutely continuous distribution of t...

  16. Blind Measurement Selection: A Random Matrix Theory Approach

    KAUST Repository

    Elkhalil, Khalil

    2016-12-14

    This paper considers the problem of selecting a set of $k$ measurements from $n$ available sensor observations. The selected measurements should minimize a certain error function assessing the error in estimating a certain $m$-dimensional parameter vector. The exhaustive search inspecting each of the $\binom{n}{k}$ possible choices would require a very high computational complexity and as such is not practical for large $n$ and $k$. Alternative methods with low complexity have recently been investigated, but their main drawbacks are that 1) they require perfect knowledge of the measurement matrix and 2) they need to be applied at the pace of change of the measurement matrix. To overcome these issues, we consider the asymptotic regime in which $k$, $n$ and $m$ grow large at the same pace. Tools from random matrix theory are then used to approximate in closed form the most important error measures that are commonly used. The asymptotic approximations are then leveraged to properly select $k$ measurements exhibiting low values for the asymptotic error measures. Two heuristic algorithms are proposed: the first one merely consists in applying the convex optimization artifice to the asymptotic error measure. The second algorithm is a low-complexity greedy algorithm that attempts to look for a sufficiently good solution for the original minimization problem. The greedy algorithm can be applied to both the exact and the asymptotic error measures and can thus be implemented in blind and channel-aware fashions. We present two potential applications where the proposed algorithms can be used, namely antenna selection for uplink transmissions in large-scale multi-user systems and sensor selection for wireless sensor networks. Numerical results are also presented and sustain the efficiency of the proposed blind methods in reaching the performance of channel-aware algorithms.

  17. Improving methods of resident selection.

    Science.gov (United States)

    Prager, Jeremy D; Myer, Charles M; Hayes, Kay M; Myer, Charles M; Pensak, Myles L

    2010-12-01

    Applying the concept of the ACGME general competencies, it is possible to define the essential job objectives and competencies of a junior otolaryngology resident. The objective of this study is to incorporate commercially available tools of business in the identification of competencies specific to the junior otolaryngology resident and develop behavioral-based interview questions and techniques designed to identify these qualities in candidates for residency. Institution of a pilot program involving a focus group within an otolaryngology department, a professional development consultant, commercial business software for occupational analysis and personnel selection, and an interview technique training seminar for faculty and residents. In coordination with a university-based professional development consultant, a formal job analysis was conducted to define the job objectives and competencies of a junior otolaryngology resident. These results were used to generate behavioral-based interview questions for use in the resident selection process. All interviewing faculty and residents were trained in behavioral-based interviewing. Occupational objectives for the junior resident position specific to a particular university department of otolaryngology were identified. Additionally, the essential skills, areas of knowledge, and competencies were identified. Behavioral-based questions specific to the competencies were created and incorporated into the current resident selection interview. Using tools of occupational analysis and personnel selection, a list of job objectives and competencies for the junior otolaryngology resident can be created. Using these results, behavioral-based interviews may be implemented to complement traditional interviews with the ultimate goal of improving candidate selection.

  18. Selectivity and sparseness in randomly connected balanced networks.

    Directory of Open Access Journals (Sweden)

    Cengiz Pehlevan

    Full Text Available Neurons in sensory cortex show stimulus selectivity and sparse population response, even in cases where no strong functionally specific structure in connectivity can be detected. This raises the question whether selectivity and sparseness can be generated and maintained in randomly connected networks. We consider a recurrent network of excitatory and inhibitory spiking neurons with random connectivity, driven by random projections from an input layer of stimulus selective neurons. In this architecture, the stimulus-to-stimulus and neuron-to-neuron modulation of total synaptic input is weak compared to the mean input. Surprisingly, we show that in the balanced state the network can still support high stimulus selectivity and sparse population response. In the balanced state, strong synapses amplify the variation in synaptic input and recurrent inhibition cancels the mean. Functional specificity in connectivity emerges due to the inhomogeneity caused by the generative statistical rule used to build the network. We further elucidate the mechanism behind and evaluate the effects of model parameters on population sparseness and stimulus selectivity. Network response to mixtures of stimuli is investigated. It is shown that a balanced state with unselective inhibition can be achieved with densely connected input to inhibitory population. Balanced networks exhibit the "paradoxical" effect: an increase in excitatory drive to inhibition leads to decreased inhibitory population firing rate. We compare and contrast selectivity and sparseness generated by the balanced network to randomly connected unbalanced networks. Finally, we discuss our results in light of experiments.

  19. Event selection with a Random Forest in IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Ruhe, Tim [TU, Dortmund (Germany); Collaboration: IceCube-Collaboration

    2011-07-01

    The Random Forest method is a multivariate algorithm that can be used for classification and regression respectively. The Random Forest implemented in the RapidMiner learning environment has been used for training and validation on data and Monte Carlo simulations of the IceCube neutrino telescope. Latest results are presented.

  20. A Selection Method for COTS Systems

    DEFF Research Database (Denmark)

    Hedman, Jonas

    new skills and methods supporting the process of evaluating and selecting information systems. This paper presents a method for selecting COTS systems. The method includes the following phases: problem framing, requirements and appraisal, and selection of systems. The idea and distinguishing feature behind the method is that improved understanding of organizational ‘ends’ or goals should govern the selection of a COTS system. This can also be expressed as a match or fit between ‘ends’ (e.g. improved organizational effectiveness) and ‘means’ (e.g. implementing COTS systems). This way of approaching the selection of COTS systems, viewing them as a ‘means’ to reach organizational ‘ends’, is different from the mainstream view of information systems development, which views information systems development as a problem-solving process, and the underlying ontological view in other COTS selection methods

  1. Slope failure analysis using the random material point method

    NARCIS (Netherlands)

    Wang, B.; Hicks, M.A.; Vardon, P.J.

    2016-01-01

    The random material point method (RMPM), which combines random field theory and the material point method (MPM), is proposed. It differs from the random finite-element method (RFEM), by assigning random field (cell) values to material points that are free to move relative to the computational grid

  2. SELECTED GEARS CUTTING METHODS DEVELOPMENT

    Directory of Open Access Journals (Sweden)

    Stanisław PŁONKA

    2015-06-01

    Full Text Available The paper presents a somewhat extended approach to the classification of cutting tools. After that, the authors present results of research concerning the form grinding of spur gears with helical teeth. The modern HOEFLER Rapid 900 gear-grinding machine is described, and results of the accuracy and surface roughness studies are given. Then an example is given, based on the references, of cutting spherical-involute bevel gears, illustrating the development from the cutting of very large straight bevel gears by the double-tracing method suited to old template-type bevel gear planers, to the modern approach using a CNC milling machine (3- or 4-axis controlled) and the principle of free-form surface cutting.

  3. Stochastic finite element method with simple random elements

    OpenAIRE

    Starkloff, Hans-Jörg

    2008-01-01

    We propose a variant of the stochastic finite element method, where the random elements occurring in the problem formulation are approximated by simple random elements, i.e. random elements with only a finite number of possible values.

  4. Fast, Randomized Join-Order Selection - Why Use Transformations?

    NARCIS (Netherlands)

    C.A. Galindo-Legaria; A.J. Pellenkoft (Jan); M.L. Kersten (Martin)

    1994-01-01

    We study the effectiveness of probabilistic selection of join-query evaluation plans, without reliance on tree transformation rules. Instead, each candidate plan is chosen uniformly at random from the space of valid evaluation orders. This leads to a transformation-free strategy where a

  5. The reliability of randomly selected final year pharmacy students in ...

    African Journals Online (AJOL)

    Employing ANOVA, factorial experimental analysis, and the theory of error, reliability studies were conducted on the assessment of the drug product chloroquine phosphate tablets. The G–Study employed equal numbers of the factors for uniform control, and involved three analysts (randomly selected final year Pharmacy ...

  6. Randomized Oversampling for Generalized Multiscale Finite Element Methods

    KAUST Repository

    Calo, Victor M.

    2016-03-23

    In this paper, we develop efficient multiscale methods for flows in heterogeneous media. We use the generalized multiscale finite element (GMsFEM) framework. GMsFEM approximates the solution space locally using a few multiscale basis functions. This approximation selects an appropriate snapshot space and a local spectral decomposition, e.g., the use of oversampled regions, in order to achieve an efficient model reduction. However, the successful construction of snapshot spaces may be costly if too many local problems need to be solved in order to obtain these spaces. We use a moderate quantity of local solutions (or snapshot vectors) with random boundary conditions on oversampled regions with zero forcing to deliver an efficient methodology. Motivated by the randomized algorithm presented in [P. G. Martinsson, V. Rokhlin, and M. Tygert, A Randomized Algorithm for the approximation of Matrices, YALEU/DCS/TR-1361, Yale University, 2006], we consider a snapshot space which consists of harmonic extensions of random boundary conditions defined in a domain larger than the target region. Furthermore, we perform an eigenvalue decomposition in this small space. We study the application of randomized sampling for GMsFEM in conjunction with adaptivity, where local multiscale spaces are adaptively enriched. Convergence analysis is provided. We present representative numerical results to validate the method proposed.

  7. Implementing multifactorial psychotherapy research in online virtual environments (IMPROVE-2): study protocol for a phase III trial of the MOST randomized component selection method for internet cognitive-behavioural therapy for depression

    Directory of Open Access Journals (Sweden)

    Edward Watkins

    2016-10-01

    Full Text Available Abstract Background Depression is a global health challenge. Although there are effective psychological and pharmaceutical interventions, our best treatments achieve remission rates of less than one-third and limited sustained recovery. Underpinning this efficacy gap is a limited understanding of how complex psychological interventions for depression work. Recent reviews have argued that the active ingredients of therapy need to be identified so that therapy can be made briefer and more potent, and to improve scalability. This in turn requires the use of rigorous study designs that test the presence or absence of individual therapeutic elements, rather than standard comparative randomised controlled trials. One such approach is the Multiphase Optimization Strategy, which uses efficient experimentation such as factorial designs to identify active factors in complex interventions. This approach has been successfully applied to behavioural health but not yet to mental health interventions. Methods/Design A Phase III randomised, single-blind, balanced fractional factorial trial, based in England and conducted on the internet, randomised at the level of the patient, will investigate the active ingredients of internet cognitive-behavioural therapy (CBT) for depression. Adults with depression (operationalized as PHQ-9 score ≥ 10), recruited directly from the internet and from a UK National Health Service Improving Access to Psychological Therapies service, will be randomised across seven experimental factors, each reflecting the presence versus absence of specific treatment components (activity scheduling, functional analysis, thought challenging, relaxation, concreteness training, absorption, self-compassion training), using a 32-condition balanced fractional factorial design (a 2^(7-2) design of resolution IV). The primary outcome is symptoms of depression (PHQ-9) at 12 weeks. Secondary outcomes include symptoms of anxiety and process measures related to hypothesized mechanisms

  8. Selecting a phoneme-to-grapheme mapping: Random or weighted selection?

    Directory of Open Access Journals (Sweden)

    Binna Lee

    2015-05-01

    Our findings demonstrate that random selection underestimates MOA's PG correspondences, whereas weighted selection predicts higher PG correspondences than he produces. To explain his intermediate spelling performance on PPEs, we will test additional approaches to weighting the relative probability of PG mappings, including using log frequencies, separating consonant and vowel status, and considering the number of grapheme options for each phoneme.

  9. SNP selection and classification of genome-wide SNP data using stratified sampling random forests.

    Science.gov (United States)

    Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K

    2012-09-01

    For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forest, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and to get rid of the vast number of non-informative SNPs. However, this is too time-consuming and not favorable in GWA for high-dimensional data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, while avoiding the very high computational cost of an exhaustive search for an optimal mtry and maintaining the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective and can generate better random forests with higher accuracy and lower error bounds than those produced by Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
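
    A schematic of the subspace step (my sketch under stated assumptions, not the authors' code; the scores below are random placeholders for univariate SNP informativeness):

        import numpy as np

        def stratified_subspace(scores, n_groups, per_group, rng):
            # Bin features into equal-width groups by informativeness score,
            # then draw the same number of features at random from each
            # group; each tree in the forest gets one such subspace.
            edges = np.linspace(scores.min(), scores.max(), n_groups + 1)
            group_ids = np.clip(np.digitize(scores, edges[1:-1]), 0, n_groups - 1)
            subspace = []
            for g in range(n_groups):
                members = np.flatnonzero(group_ids == g)
                take = min(per_group, len(members))
                subspace.extend(rng.choice(members, size=take, replace=False))
            return np.array(subspace)

        rng = np.random.default_rng(0)
        scores = rng.random(10000)   # placeholder informativeness for 10,000 SNPs
        print(stratified_subspace(scores, n_groups=5, per_group=10, rng=rng))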

  10. Selection for altruism through random drift in variable size populations.

    Science.gov (United States)

    Houchmandzadeh, Bahram; Vallade, Marcel

    2012-05-10

    Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which is in contradiction with their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have 'hidden' advantages if the 'common good' produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of "selfish" alleles. In other words, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists' advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained both by numerical and analytical calculations. Analytical results are obtained in the limit of large populations. The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models, where the carrying capacity depends on the number of altruists.
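
    The mechanism can be illustrated with a toy simulation (mine, not the paper's model; the Wright-Fisher update and all parameter values are simplifying assumptions): altruists pay a fitness cost s but raise the carrying capacity in proportion to their frequency, and the fixation probability of a single altruist is estimated and compared against the neutral benchmark 1/N0.

        import numpy as np

        def fixation_probability(n_runs, N0, dN, s, rng):
            # Toy Wright-Fisher dynamics: altruists pay cost s, but the
            # carrying capacity grows from N0 toward N0 + dN with their
            # frequency. Returns the fraction of runs in which a single
            # altruist fixes.
            fixed = 0
            for _ in range(n_runs):
                N, k = N0, 1                 # population size, altruist count
                while 0 < k < N:
                    p = (1 - s) * k / ((1 - s) * k + (N - k))
                    N = int(round(N0 + dN * k / N))
                    k = rng.binomial(N, p)
                fixed += (k == N)
            return fixed / n_runs

        rng = np.random.default_rng(0)
        print(fixation_probability(20000, N0=50, dN=30, s=0.01, rng=rng))
        print(1 / 50)   # neutral fixation probability for comparison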

  11. Selection for altruism through random drift in variable size populations

    Directory of Open Access Journals (Sweden)

    Houchmandzadeh Bahram

    2012-05-01

    Full Text Available Abstract Background Altruistic behavior is defined as helping others at a cost to oneself and a lowered fitness. The lower fitness implies that altruists should be selected against, which contradicts their widespread presence in nature. Present models of selection for altruism (kin or multilevel) show that altruistic behaviors can have ‘hidden’ advantages if the ‘common good’ produced by altruists is restricted to some related or unrelated groups. These models are mostly deterministic, or assume a frequency-dependent fitness. Results Evolutionary dynamics is a competition between deterministic selection pressure and stochastic events due to random sampling from one generation to the next. We show here that an altruistic allele extending the carrying capacity of the habitat can win by increasing the random drift of “selfish” alleles. In other terms, the fixation probability of altruistic genes can be higher than that of selfish ones, even though altruists have a smaller fitness. Moreover, when populations are geographically structured, the altruists’ advantage can be highly amplified and the fixation probability of selfish genes can tend toward zero. The above results are obtained by both numerical and analytical calculations. Analytical results are obtained in the limit of large populations. Conclusions The theory we present does not involve kin or multilevel selection, but is based on the existence of random drift in variable size populations. The model is a generalization of the original Fisher-Wright and Moran models in which the carrying capacity depends on the number of altruists.

  12. Method for selective CMP of polysilicon

    Science.gov (United States)

    Babu, Suryadevara V. (Inventor); Natarajan, Anita (Inventor); Hegde, Sharath (Inventor)

    2010-01-01

    A method of removing polysilicon in preference to silicon dioxide and/or silicon nitride by chemical mechanical polishing. The method removes polysilicon from a surface at a high removal rate while maintaining a high selectivity of polysilicon to silicon dioxide and/or of polysilicon to silicon nitride. The method is particularly suitable for use in the fabrication of MEMS devices.

  13. Performance of variable selection methods using stability-based selection.

    Science.gov (United States)

    Lu, Danny; Weljie, Aalim; de Leon, Alexander R; McConnell, Yarrow; Bathe, Oliver F; Kopciuk, Karen

    2017-04-04

    Variable selection is frequently carried out during the analysis of many types of high-dimensional data, including those in metabolomics. This study compared the predictive performance of four variable selection methods using stability-based selection, a new secondary selection method implemented in the R package BioMark. Two of these methods were also evaluated using the more well-known false discovery rate (FDR). Simulation studies varied factors relevant to biological data studies, with results based on the median values of 200 partial area under the receiver operating characteristic curve (pAUC) estimates. There was no single top-performing method across all factor settings, but the Student t test based on stability selection or with FDR adjustment, and the variable importance in projection (VIP) scores from partial least squares regression models obtained using a stability-based approach, tended to perform well in most settings. Similar results were found with a real spiked-in metabolomics dataset. Group sample size, group effect size, number of significant variables and correlation structure were the most important factors, whereas the percentage of significant variables was the least important. Researchers can improve prediction scores for their study data by choosing VIP scores based on stability variable selection over the other approaches when the number of variables is small to modest, and by increasing the number of samples even moderately. When the number of variables is high and there is block correlation amongst the significant variables (i.e., true biomarkers), the FDR-adjusted Student t test performed best. The R package BioMark is an easy-to-use open-source program for variable selection that had excellent performance characteristics for the purposes of this study.
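
    As a hedged illustration of stability-based selection (the study itself used the R package BioMark), the sketch below ranks variables by |t| on repeated half-samples and keeps those selected in a large fraction of subsamples; the subsample count, top-k size, and threshold are invented.

```python
# Stability selection sketch: count how often each variable lands in the
# top-k by |t| across random half-samples, then keep frequent winners.
import numpy as np
from scipy import stats

def stability_select(X, y, n_sub=100, top_k=10, threshold=0.5, rng=None):
    rng = np.random.default_rng(rng)
    n, p = X.shape
    counts = np.zeros(p)
    for _ in range(n_sub):
        idx = rng.choice(n, size=n // 2, replace=False)   # half-sample
        t, _ = stats.ttest_ind(X[idx][y[idx] == 0], X[idx][y[idx] == 1])
        counts[np.argsort(-np.abs(t))[:top_k]] += 1       # top-k by |t|
    freq = counts / n_sub
    return np.where(freq >= threshold)[0], freq

rng = np.random.default_rng(7)
X = rng.normal(size=(40, 50))
X[:20, :3] += 1.0                      # three "true biomarkers" (toy data)
y = np.repeat([0, 1], 20)
print(stability_select(X, y)[0])
```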

  14. MUSCLE MRI SEGMENTATION USING RANDOM WALKER METHOD

    Directory of Open Access Journals (Sweden)

    A. V. Shukelovich

    2013-01-01

    Full Text Available A technique of marker set construction for muscle MRI segmentation using the random walker approach is introduced. The possibility of reducing the amount of the clinician's manual labor and of optimizing the random walker algorithm is studied.
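
    For readers who want to experiment, scikit-image ships a random-walker segmenter; in the sketch below a random array stands in for an MRI slice, and a few arbitrary seed pixels stand in for the clinician's manually placed markers.

```python
# Illustrative use of scikit-image's random walker segmentation.
import numpy as np
from skimage.segmentation import random_walker

image = np.random.rand(128, 128)        # stand-in for one MRI slice
markers = np.zeros_like(image, dtype=int)
markers[10, 10] = 1                     # seed: background
markers[64, 64] = 2                     # seed: muscle
labels = random_walker(image, markers, beta=130, mode='bf')
```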

  15. Equipment Selection by using Fuzzy TOPSIS Method

    Science.gov (United States)

    Yavuz, Mahmut

    2016-10-01

    In this study, the Fuzzy TOPSIS method was applied to the selection of an open-pit truck, and the optimal solution of the problem was investigated. Data from Turkish Coal Enterprises were used in the application of the method. This paper explains the Fuzzy TOPSIS approach with a group decision-making application in an open-pit coal mine in Turkey. An algorithm for multi-person, multi-criteria decision making with a fuzzy set approach was applied to an equipment selection problem. It was found that Fuzzy TOPSIS with group decision making is a method that may help decision-makers solve different decision-making problems in mining.
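
    A sketch of the underlying ranking logic, using crisp TOPSIS for brevity (the paper's fuzzy variant layers triangular fuzzy numbers on the same steps); the truck data, weights, and criteria senses are invented for illustration.

```python
# Crisp TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[j] True if larger is better."""
    norm = matrix / np.linalg.norm(matrix, axis=0)        # vector-normalize
    v = norm * weights
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)                   # closeness in [0, 1]

trucks = np.array([[170., 5.2, 8.9],                      # capacity, cost, age
                   [185., 6.1, 4.0],
                   [150., 4.3, 6.5]])
score = topsis(trucks, weights=np.array([0.5, 0.3, 0.2]),
               benefit=np.array([True, False, False]))
print(score.argmax())                                     # best alternative
```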

  16. Two-year Randomized Clinical Trial Of Self-etching Adhesives And Selective Enamel Etching

    OpenAIRE

    Pena, CE; Rodrigues, JA; Ely, C; Giannini, M; Reis, AF

    2016-01-01

    Objective: The aim of this randomized, controlled prospective clinical trial was to evaluate the clinical effectiveness of restoring noncarious cervical lesions with two self-etching adhesive systems applied with or without selective enamel etching. Methods: A one-step self-etching adhesive (Xeno V+) and a two-step self-etching system (Clearfil SE Bond) were used. The effectiveness of phosphoric acid selective etching of enamel margins was also evaluated. Fifty-six cavities were restored with...

  17. Genetic algorithms as global random search methods

    Science.gov (United States)

    Peck, Charles C.; Dhawan, Atam P.

    1995-01-01

    Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
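
    A minimal genetic algorithm with proportional (roulette-wheel) selection and one-point recombination, the two operators characterized above, on a toy one-max problem; all settings are illustrative.

```python
# Minimal GA: proportional selection, one-point crossover, bit-flip mutation.
import numpy as np

rng = np.random.default_rng(1)
POP, BITS, GENS = 60, 20, 100
fitness = lambda pop: pop.sum(axis=1).astype(float)       # one-max problem

pop = rng.integers(0, 2, size=(POP, BITS))
for _ in range(GENS):
    f = fitness(pop)
    probs = f / f.sum()                                   # proportional selection
    parents = pop[rng.choice(POP, size=POP, p=probs)]
    children = parents.copy()
    cut = rng.integers(1, BITS, size=POP // 2)            # one-point crossover
    for i, c in enumerate(cut):
        children[2*i, c:] = parents[2*i + 1, c:]
        children[2*i + 1, c:] = parents[2*i, c:]
    flip = rng.random(children.shape) < 0.01              # mutation
    pop = np.where(flip, 1 - children, children)
print(fitness(pop).max())
```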

  18. Interference-aware random beam selection for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed M.

    2012-09-01

    Spectrum sharing systems have been introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this paper, we develop interference-aware random beam selection schemes that provide enhanced throughput for the secondary link under the condition that the interference observed at the primary link is within a predetermined acceptable value. For a secondary transmitter equipped with multiple antennas, our schemes select a random beam, among a set of power-optimized orthogonal random beams, that maximizes the capacity of the secondary link while satisfying the interference constraint at the primary receiver for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the signal-to-noise and interference ratio (SINR) statistics as well as the capacity of the secondary link. Finally, we present numerical results that study the effect of system parameters, including the number of beams and the maximum transmission power, on the capacity of the secondary link attained using the proposed schemes. © 2012 IEEE.
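
    A toy sketch of the selection rule under assumed Rayleigh fading: among random orthonormal beams, discard those whose interference at the primary receiver exceeds a threshold, then pick the survivor maximizing the secondary link's SNR. All parameters are invented and the power optimization step is omitted.

```python
# Interference-aware random beam selection on synthetic channels.
import numpy as np

rng = np.random.default_rng(2)
n_tx, n_beams, P, noise, I_max = 4, 4, 1.0, 0.1, 0.5

h_sec = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
h_pri = (rng.normal(size=n_tx) + 1j * rng.normal(size=n_tx)) / np.sqrt(2)
beams, _ = np.linalg.qr(rng.normal(size=(n_tx, n_beams))
                        + 1j * rng.normal(size=(n_tx, n_beams)))

interference = P * np.abs(h_pri @ beams) ** 2   # seen at primary receiver
snr = P * np.abs(h_sec @ beams) ** 2 / noise    # secondary link quality
ok = interference <= I_max                      # beams meeting the constraint
best = np.flatnonzero(ok)[snr[ok].argmax()] if ok.any() else None
print(best, np.log2(1 + snr[best]) if best is not None else 0.0)
```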

  19. A Mixed Feature Selection Method Considering Interaction

    OpenAIRE

    Zilin Zeng; Hongjun Zhang; Rui Zhang; Youliang Zhang

    2015-01-01

    Feature interaction has gained considerable attention recently. However, many feature selection methods considering interaction are only designed for categorical features. This paper proposes a mixed feature selection algorithm based on neighborhood rough sets that can be used to search for interacting features. In this paper, feature relevance, feature redundancy, and feature interaction are defined in the framework of neighborhood rough sets, the neighborhood interaction weight factor refle...

  20. Individual Differences Methods for Randomized Experiments

    Science.gov (United States)

    Tucker-Drob, Elliot M.

    2011-01-01

    Experiments allow researchers to randomly vary the key manipulation, the instruments of measurement, and the sequences of the measurements and manipulations across participants. To date, however, the advantages of randomized experiments to manipulate both the aspects of interest and the aspects that threaten internal validity have been primarily…

  1. Variable selection by lasso-type methods

    Directory of Open Access Journals (Sweden)

    Sohail Chand

    2011-09-01

    Full Text Available Variable selection is an important property of shrinkage methods. The adaptive lasso is an oracle procedure and can perform consistent variable selection. In this paper, we explain how the use of adaptive weights makes it possible for the adaptive lasso to satisfy the necessary and almost sufficient condition for consistent variable selection. We suggest a novel algorithm and give an important result: for the adaptive lasso, if predictors are normalised after the introduction of adaptive weights, the performance of the adaptive lasso becomes identical to that of the lasso.
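
    A common two-stage sketch of the adaptive lasso (not necessarily the paper's exact algorithm): weights from an initial ridge fit are folded into the design by rescaling columns, a plain lasso is run, and the coefficients are transformed back.

```python
# Adaptive lasso via column rescaling; hyperparameters are illustrative.
import numpy as np
from sklearn.linear_model import Ridge, LassoCV

def adaptive_lasso(X, y, gamma=1.0):
    beta0 = Ridge(alpha=1.0).fit(X, y).coef_             # initial estimate
    w = 1.0 / (np.abs(beta0) ** gamma + 1e-8)            # adaptive weights
    lasso = LassoCV(cv=5).fit(X / w, y)                  # reweighted design
    return lasso.coef_ / w                               # back-transform

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(size=200)
print(np.round(adaptive_lasso(X, y), 2))                 # sparse estimate
```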

  2. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (ideal candidate. Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.

  3. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13 and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse, and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) rank the models using their root mean square error (RMSE) obtained after UCODE-based model calibration; (2) calculate model probability using the GLUE method; (3) evaluate model probability using model selection criteria (AIC, AICc, BIC, and KIC); and (4) evaluate model weights using the fuzzy multi-criteria decision-making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow model. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
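
    As a hedged illustration of approach (3), the snippet below ranks calibrated models by AIC and BIC computed from their residual sums of squares; the RSS values and parameter counts are invented, and KIC (which additionally needs the parameter covariance) is omitted.

```python
# Rank models by information criteria derived from Gaussian residuals:
# log-likelihood up to a constant is n * ln(RSS / n) for n observations.
import numpy as np

def aic_bic(rss, n, k):
    ll = n * np.log(rss / n)
    return ll + 2 * k, ll + k * np.log(n)                 # (AIC, BIC)

models = {"m1": (12.4, 6), "m3": (8.1, 10), "m6": (7.9, 15)}  # RSS, k (made up)
for name, (rss, k) in models.items():
    aic, bic = aic_bic(rss, n=50, k=k)
    print(name, round(aic, 1), round(bic, 1))   # smaller is better
```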

  4. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  5. A review of methods supporting supplier selection

    NARCIS (Netherlands)

    de Boer, L.; Labro, Eva; Morlacchi, Pierangela

    2001-01-01

    In this paper we present a review of decision methods reported in the literature for supporting the supplier selection process. The review is based on an extensive search in the academic literature. We position the contributions in a framework that takes the diversity of procurement situations in terms

  6. Selecting the drainage method for agricultural land

    NARCIS (Netherlands)

    Bos, M.G.

    2001-01-01

    To facilitate crop growth, excess water should be drained from the rooting zone to allow root development of the crop, and from the soil surface to facilitate access to the field. Basically, there are three drainage methods from which the designer can select: surface drains, pumped tube wells

  7. The signature of positive selection at randomly chosen loci.

    Science.gov (United States)

    Przeworski, Molly

    2002-03-01

    In Drosophila and humans, there are accumulating examples of loci with a significant excess of high-frequency-derived alleles or high levels of linkage disequilibrium, relative to a neutral model of a random-mating population of constant size. These are features expected after a recent selective sweep. Their prevalence suggests that positive directional selection may be widespread in both species. However, as I show here, these features do not persist long after the sweep ends: The high-frequency alleles drift to fixation and no longer contribute to polymorphism, while linkage disequilibrium is broken down by recombination. As a result, loci chosen without independent evidence of recent selection are not expected to exhibit either of these features, even if they have been affected by numerous sweeps in their genealogical history. How then can we explain the patterns in the data? One possibility is population structure, with unequal sampling from different subpopulations. Alternatively, positive selection may not operate as is commonly modeled. In particular, the rate of fixation of advantageous mutations may have increased in the recent past.

  8. Convergence of a random walk method for the Burgers equation

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, S.

    1985-10-01

    In this paper we consider a random walk algorithm for the solution of Burgers' equation. The algorithm uses the method of fractional steps. The non-linear advection term of the equation is solved by advecting "fluid" particles in a velocity field induced by the particles. The diffusion term of the equation is approximated by adding an appropriate random perturbation to the positions of the particles. Though the algorithm is inefficient as a method for solving Burgers' equation, it does model a similar method, the random vortex method, which has been used extensively to solve the incompressible Navier-Stokes equations. The purpose of this paper is to demonstrate the strong convergence of our random walk method and so provide a model for the proof of convergence for more complex random walk algorithms; for instance, the random vortex method without boundaries.
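
    A sketch of the scheme under the usual particle construction: each particle carries a 1/N drop of a step profile, is advected at the average velocity across its own jump (fractional step 1), and then receives a Gaussian kick of variance 2*nu*dt for the diffusion term (fractional step 2). All parameters are illustrative.

```python
# Random walk particle method for the viscous Burgers equation with a
# step initial condition u(x,0) = 1 for x < 0, 0 for x > 0.
import numpy as np

rng = np.random.default_rng(4)
N, nu, dt, steps = 2000, 0.05, 1e-3, 500
x = np.zeros(N)                          # all jumps start at the shock x = 0

for _ in range(steps):
    order = np.argsort(x)
    u = (N - np.arange(N) - 0.5) / N     # mean velocity across the k-th jump
    x[order] += u * dt                   # advection (fractional step 1)
    x += rng.normal(0.0, np.sqrt(2 * nu * dt), size=N)  # diffusion (step 2)

# Reconstruct u on a grid: u(x0) = fraction of particles to the right of x0.
grid = np.linspace(-1, 2, 61)
u_grid = (x[None, :] > grid[:, None]).mean(axis=1)
```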

  9. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection.

    Science.gov (United States)

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-06-01

    Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the selected features are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals, using the features extracted and selected by the SRS and SFS stages. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
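
    A sketch of the three stages under assumed data shapes: random sampling of time-domain points per epoch, simple descriptive statistics as features, then sequential forward selection. A standard SVM stands in for the LS_SVM classifier, which scikit-learn does not provide, and the chosen statistics are illustrative.

```python
# SRS feature extraction followed by sequential feature selection.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def srs_features(signals, n_samples=256, rng=None):
    """signals: (n_epochs, n_points). Randomly sample each epoch, then take
    descriptive statistics of the sampled points as features."""
    rng = np.random.default_rng(rng)
    feats = []
    for s in signals:
        sub = rng.choice(s, size=n_samples, replace=False)
        feats.append([sub.mean(), sub.std(), sub.min(), sub.max(),
                      np.median(sub), np.percentile(sub, 25),
                      np.percentile(sub, 75)])
    return np.array(feats)

rng = np.random.default_rng(5)
X_raw = rng.normal(size=(100, 4096))          # toy "EEG epochs"
y = rng.integers(0, 2, size=100)              # toy labels
X = srs_features(X_raw, rng=rng)
svm = make_pipeline(StandardScaler(), SVC())
sfs = SequentialFeatureSelector(svm, n_features_to_select=3).fit(X, y)
print(sfs.get_support())                      # mask of selected features
```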

  10. Pediatric selective mutism therapy: a randomized controlled trial.

    Science.gov (United States)

    Esposito, Maria; Gimigliano, Francesca; Barillari, Maria R; Precenzano, Francesco; Ruberto, Maria; Sepe, Joseph; Barillari, Umberto; Gimigliano, Raffaele; Militerni, Roberto; Messina, Giovanni; Carotenuto, Marco

    2017-10-01

    Selective mutism (SM) is a rare disease in children, coded by DSM-5 as an anxiety disorder. Despite the disabling nature of the disease, there is still no specific treatment. The aims of this study were to verify the efficacy of a six-month standard psychomotor treatment and the positive changes in lifestyle in a population of children affected by SM. Randomized controlled trial registered in the European Clinical Trials Registry (EudraCT 2015-001161-36). University third-level centre (Child and Adolescent Neuropsychiatry Clinic). The study population was composed of 67 children in group A (psychomotricity treatment) (35 M, mean age 7.84±1.15) and 71 children in group B (behavioral and educational counseling) (37 M, mean age 7.75±1.36). Psychomotor treatment was administered by trained child therapists in residential settings three times per week. Each child was treated for the whole period by the same therapist, and all the therapists shared the same protocol. The standard psychomotor session length is 45 minutes. At T0 and after 6 months of treatment (T1), patients underwent a behavioral and SM severity assessment. To verify the effects of the psychomotor management, the Child Behavior Checklist questionnaire (CBCL) and the Selective Mutism Questionnaire (SMQ) were administered to the parents. After 6 months of psychomotor treatment, SM children showed a significant reduction in CBCL scores such as social relations, anxious/depressed, social problems and total problems, suggesting that psychomotricity is a safe and effective therapy for pediatric selective mutism.

  11. Supplier Selection Using Weighted Utility Additive Method

    Science.gov (United States)

    Karande, Prasad; Chakraborty, Shankar

    2015-10-01

    Supplier selection is a multi-criteria decision-making (MCDM) problem which mainly involves evaluating a number of available suppliers according to a set of common criteria in order to choose the best one to meet the organizational needs. For any manufacturing or service organization, selecting the right upstream suppliers is a key success factor that will significantly reduce purchasing cost, increase downstream customer satisfaction and improve competitive ability. Past researchers have attempted to solve the supplier selection problem employing different MCDM techniques which involve active participation of the decision makers in the decision-making process. This paper deals with the application of the weighted utility additive (WUTA) method for solving supplier selection problems. The WUTA method, an extension of the utility additive approach, is based on ordinal regression and consists of building a piece-wise linear additive decision model from a preference structure using linear programming (LP). It adopts the preference disaggregation principle and addresses decision-making activities through operational models which need implicit preferences in the form of a preorder of reference alternatives, or a subset of these alternatives present in the process. The preferential preorder provided by the decision maker is used as a restriction in an LP problem whose objective function is the minimization of the sum of the errors associated with the ranking of each alternative. Based on a given reference ranking of alternatives, one or more additive utility functions are derived. Using these utility functions, the weighted utilities for individual criterion values are combined into an overall weighted utility for a given alternative. It is observed that the WUTA method, having a sound mathematical background, can provide an accurate ranking of the candidate suppliers and choose the best one to fulfill the organizational requirements. Two real-time examples are illustrated to prove

  12. Selective spectroscopic methods for water analysis

    Energy Technology Data Exchange (ETDEWEB)

    Vaidya, Bikas [Iowa State Univ., Ames, IA (United States)

    1997-06-24

    This dissertation explores in large part the development of a few types of spectroscopic methods in the analysis of water. Methods for the determination of some of the most important properties of water like pH, metal ion content, and chemical oxygen demand are investigated in detail. This report contains a general introduction to the subject and the conclusions. Four chapters and an appendix have been processed separately. They are: chromogenic and fluorogenic crown ether compounds for the selective extraction and determination of Hg(II); selective determination of cadmium in water using a chromogenic crown ether in a mixed micellar solution; reduction of chloride interference in chemical oxygen demand determination without using mercury salts; structural orientation patterns for a series of anthraquinone sulfonates adsorbed at an aminophenol thiolate monolayer chemisorbed at gold; and the role of chemically modified surfaces in the construction of miniaturized analytical instrumentation.

  13. Solution Methods for Structures with Random Properties Subject to Random Excitation

    DEFF Research Database (Denmark)

    Köylüoglu, H. U.; Nielsen, Søren R. K.; Cakmak, A. S.

    This paper deals with the lower order statistical moments of the response of structures with random stiffness and random damping properties subject to random excitation. The arising stochastic differential equations (SDE) with random coefficients are solved by two methods, a second order perturbation approach and a Markovian method. The second order perturbation approach is grounded on the total probability theorem and can be compactly written. Moreover, the problem to be solved is independent of the dimension of the random variables involved. The Markovian approach suggests transforming the SDE with random coefficients and deterministic initial conditions to an equivalent nonlinear SDE with deterministic coefficients and random initial conditions. In both methods, the statistical moment equations are used. The hierarchy of statistical moments in the Markovian approach is closed...

  14. Optimizing Event Selection with the Random Grid Search

    Energy Technology Data Exchange (ETDEWEB)

    Bhat, Pushpalatha C. [Fermilab; Prosper, Harrison B. [Florida State U.; Sekmen, Sezen [Kyungpook Natl. U.; Stewart, Chip [Broad Inst., Cambridge

    2017-06-29

    The random grid search (RGS) is a simple, but efficient, stochastic algorithm to find optimal cuts that was developed in the context of the search for the top quark at Fermilab in the mid-1990s. The algorithm, and associated code, have been enhanced recently with the introduction of two new cut types, one of which has been successfully used in searches for supersymmetry at the Large Hadron Collider. The RGS optimization algorithm is described along with the recent developments, which are illustrated with two examples from particle physics. One explores the optimization of the selection of vector boson fusion events in the four-lepton decay mode of the Higgs boson and the other optimizes SUSY searches using boosted objects and the razor variables.
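
    The core RGS idea, that cut thresholds are drawn from the signal events themselves rather than from a fixed grid, fits in a few lines; the toy data and the simple significance proxy below are illustrative.

```python
# Random grid search for rectangular cuts on two discriminating variables.
import numpy as np

rng = np.random.default_rng(6)
sig = rng.normal([2.0, 2.0], 1.0, size=(5000, 2))   # toy signal events
bkg = rng.normal([0.0, 0.0], 1.0, size=(50000, 2))  # toy background events

best, best_z = None, -np.inf
for cut in sig[rng.choice(len(sig), size=1000, replace=False)]:
    s = (sig > cut).all(axis=1).sum()               # signal passing the cut
    b = (bkg > cut).all(axis=1).sum()               # background passing
    z = s / np.sqrt(s + b) if s + b > 0 else 0.0    # significance proxy
    if z > best_z:
        best, best_z = cut, z
print(best, best_z)
```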

  15. Empirical evaluation suggests Copas selection model preferable to trim-and-fill method for selection bias in meta-analysis.

    Science.gov (United States)

    Schwarzer, Guido; Carpenter, James; Rücker, Gerta

    2010-03-01

    Meta-analysis yields a biased result if published studies represent a biased selection of the evidence. Copas proposed a selection model to assess the sensitivity of meta-analysis conclusions to possible selection bias. An alternative proposal is the trim-and-fill method. This article reports an empirical comparison of the two methods. We took 157 meta-analyses with binary outcomes, analyzed each one using both methods, then performed an automated comparison of the results. We compared the treatment estimates, standard errors, associated P-values, and number of missing studies estimated by both methods. Both methods give similar point estimates, but standard errors and P-values are systematically larger for the trim-and-fill method. Furthermore, P-values from the trim-and-fill method are typically larger than those from the usual random effects model when no selection bias is detected. By contrast, P-values from the Copas selection model and the usual random effects model are similar in this setting. The trim-and-fill method reports more missing studies than the Copas selection model, unless selection bias is detected, in which case the position is reversed. The assumption that the most extreme studies are missing leads to excessively conservative inference in practice for the trim-and-fill method. The Copas selection model appears to be the preferable approach.

  16. What role for qualitative methods in randomized experiments?

    DEFF Research Database (Denmark)

    Prowse, Martin; Camfield, Laura

    2009-01-01

    The vibrant debate on randomized experiments within international development has been slow to accept a role for qualitative methods within research designs. Whilst there are examples of how 'field visits' or descriptive analyses of context can play a complementary, but secondary, role to quantitative methods, little attention has been paid to the possibility of randomized experiments that allow a primary role to qualitative methods. This paper assesses whether a range of qualitative methods compromise the internal and external validity criteria of randomized experiments. It suggests that life history interviews have advantages over other qualitative methods, and offers one alternative to the conventional survey tool.

  17. Classification of epileptic EEG signals based on simple random sampling and sequential feature selection

    OpenAIRE

    Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui

    2016-01-01

    Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer, sleep problems and so on. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential fea...

  18. Non-hermitian random matrix theory: Method of hermitian reduction

    Energy Technology Data Exchange (ETDEWEB)

    Feinberg, J. [California Univ., Santa Barbara, CA (United States). Inst. for Theoretical Physics; Zee, A. [California Univ., Santa Barbara, CA (United States). Inst. for Theoretical Physics]|[Institute for Advanced Study, Olden Lane, Princeton, NJ 08540 (United States)

    1997-11-03

    We consider random non-hermitian matrices in the large-N limit. The power of analytic function theory cannot be brought to bear directly to analyze non-hermitian random matrices, in contrast to hermitian random matrices. To overcome this difficulty, we show that associated to each ensemble of non-hermitian matrices there is an auxiliary ensemble of random hermitian matrices which can be analyzed by the usual methods. We then extract the Green function and the density of eigenvalues of the non-hermitian ensemble from those of the auxiliary ensemble. We apply this "method of hermitization" to several examples, and discuss a number of related issues. (orig.). 25 refs.

  19. [Evaluation of using statistical methods in selected national medical journals].

    Science.gov (United States)

    Sych, Z

    1996-01-01

    The paper evaluates the frequency with which statistical methods were applied in papers published in six selected national medical journals in the years 1988-1992. The following journals were chosen for analysis: Klinika Oczna, Medycyna Pracy, Pediatria Polska, Polski Tygodnik Lekarski, Roczniki Państwowego Zakładu Higieny, and Zdrowie Publiczne. A number of works, corresponding to the average in the remaining medical journals, was randomly selected from the respective volumes of Pol. Tyg. Lek. The analysis did not include works, whether national or international, in which no statistical analysis was implemented. This exemption was also extended to review papers, casuistic papers, reviews of books, handbooks, monographs, reports from scientific congresses, and papers on historical topics. The number of works was determined for each volume. Next, analysis was performed to establish how a suitable sample was obtained in the respective studies, differentiating two categories: random and target selection. Attention was also paid to the presence of a control sample in the individual works. Attention was further focused on the presence of sample characteristics, using three categories: complete, partial and lacking. In evaluating the analyzed works, an effort was made to present the results of the studies in tables and figures (Tab. 1, 3). Analysis was performed on the rate of use of statistical methods in the relevant volumes of the six selected national medical journals for the years 1988-1992, simultaneously determining the number of works in which no statistical methods were used. Concurrently, the frequency of application of the individual statistical methods in the scrutinized works was analyzed. Prominence was given to fundamental statistical methods in the field of descriptive statistics (measures of position, measures of dispersion) as well as

  20. The frequency of drugs in randomly selected drivers in Denmark

    DEFF Research Database (Denmark)

    Simonsen, Kirsten Wiese; Steentoft, Anni; Hels, Tove

    Introduction Driving under the influence of alcohol and drugs is a global problem. In Denmark, as in other countries, there is an increasing focus on impaired driving. Little is known about the occurrence of psychoactive drugs in the general traffic; therefore the European Commission initiated the DRUID project. This roadside study is the Danish part of the EU project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Methods Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season, and road type. The oral fluid samples were screened for 29 illegal and legal psychoactive substances and metabolites as well as ethanol. Results Fourteen (0.5%) drivers were positive for ethanol (alone or in combination with drugs) at concentrations above 0.53 g/l, which

  1. Factors of Selection of the Stock Allocation Method

    Directory of Open Access Journals (Sweden)

    Rohov Heorhii K.

    2014-03-01

    Full Text Available The article presents the results of the author's study of the factors behind strategic decisions on the choice of stock allocation method by public joint-stock companies in Ukraine. The author used the Random Forest apparatus for building classification trees, as well as informal methods. The article analyses the reasons that restrain public allocation of stock. It shows the significant influence on the choice of stock allocation method of factors such as capital concentration, the balance value of corporate rights, the sector of the economy, and significant participation of collective investment institutions or the state in the authorised capital. The hierarchical model of classification of factors of the issuing policy of joint-stock companies finds logical justification in specific features of the institutional environment; however, it does not fit into the framework of the classical concept of the market economy. The model could be used both for setting the goals of corporate financial strategies and in improving state regulation of the activity of securities issuers. A prospect for further studies in this direction is the identification of how the factors behind the choice of stock allocation method are transformed under conditions of a revival of the stock market.

  2. Diffusion method in random matrix theory

    Science.gov (United States)

    Grela, Jacek

    2016-01-01

    We introduce a calculational tool useful in computing ratios and products of characteristic polynomials averaged over Gaussian measures with an external source. The method is based on Dyson's Brownian motion and Grassmann/complex integration formulas for determinants. The resulting formulas are exact for finite matrix size N and form integral representations convenient for large-N asymptotics. Quantities obtained by the method are interpreted as averages over standard matrix models. We provide several explicit and novel calculations, with special emphasis on the β = 2 Girko-Ginibre ensembles.

  3. A random spatial sampling method in a rural developing nation

    Science.gov (United States)

    Michelle C. Kondo; Kent D.W. Bream; Frances K. Barg; Charles C. Branas

    2014-01-01

    Nonrandom sampling of populations in developing nations has limitations and can inaccurately estimate health phenomena, especially among hard-to-reach populations such as rural residents. However, random sampling of rural populations in developing nations can be challenged by incomplete enumeration of the base population. We describe a stratified random sampling method...

  4. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Modal generation of the design parameters of an elastic spacecraft by the random search method

    Science.gov (United States)

    Titov, B. A.

    A method for the modal generation of the dynamic properties of an elastic spacecraft is proposed which is based on algorithms of random search in the space of design parameters. A practical implementation of this approach is illustrated by an example. It is shown that the modal parameter generation procedure based on the random search solves the problem of parameter selection. However, as in any other method, the accuracy of the computation of matrix elements is largely determined by the initial set of permissible values and the number of random samples in determining the subgradient of the objective function.

  6. Efficient Training Methods for Conditional Random Fields

    Science.gov (United States)

    2008-02-01


  7. DETERMINANTS FACTORS OF VASECTOMY METHOD SELECTION

    Directory of Open Access Journals (Sweden)

    Esti Yunitasari

    2017-06-01

    Full Text Available Introduction: The level of male participation in family planning by choosing vasectomy in the Pekalongan health centers of the East Lampung region is still low, although the success rate of vasectomy as family planning is very high. This study aimed to explain the factors related to men's choice of vasectomy in the Pekalongan health center, East Lampung. Methods: This study used an analytical design with a cross-sectional approach. The sample comprised 117 men of reproductive age, gathered by purposive sampling. The independent variables were knowledge, attitudes, parity, age, availability of health resources and infrastructure, health education, attitude and behavior of health care workers, and family support. The dependent variable was the men's participation in vasectomy as family planning. Data were retrieved using questionnaires and statistically analyzed using the Chi-square test. Results: Factors affecting the selection of vasectomy as family planning in men of reproductive age were attitude (p=0.020), parity (p=0.022), age (p=0.021), the availability of health resources and health infrastructure (p=0.018), and family support (p=0.011). However, knowledge, health education, and the attitudes and behavior of health workers did not affect the selection of vasectomy as family planning. Discussion: Public health centres are expected to build family planning services, especially for vasectomies, such as the provision of vasectomy facilities which can reach the community and the establishment of cadres for male birth control.

  8. Methods for sample size determination in cluster randomized trials.

    Science.gov (United States)

    Rutterford, Clare; Copas, Andrew; Eldridge, Sandra

    2015-06-01

    The use of cluster randomized trials (CRTs) is increasing, along with the variety in their design and analysis. The simplest approach for their sample size calculation is to calculate the sample size assuming individual randomization and inflate this by a design effect to account for randomization by cluster. The assumptions of a simple design effect may not always be met; alternative or more complicated approaches are required. We summarise a wide range of sample size methods available for cluster randomized trials. For those familiar with sample size calculations for individually randomized trials but with less experience in the clustered case, this manuscript provides formulae for a wide range of scenarios with associated explanation and recommendations. For those with more experience, comprehensive summaries are provided that allow quick identification of methods for a given design, outcome and analysis method. We present first those methods applicable to the simplest two-arm, parallel group, completely randomized design followed by methods that incorporate deviations from this design such as: variability in cluster sizes; attrition; non-compliance; or the inclusion of baseline covariates or repeated measures. The paper concludes with methods for alternative designs. There is a large amount of methodology available for sample size calculations in CRTs. This paper gives the most comprehensive description of published methodology for sample size calculation and provides an important resource for those designing these trials. © The Author 2015. Published by Oxford University Press on behalf of the International Epidemiological Association.
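
    The simplest inflation mentioned above, as a worked calculation: the individually randomized sample size is multiplied by the design effect 1 + (m - 1) x ICC for average cluster size m; the numbers below are illustrative.

```python
# Design-effect sample size calculation for a two-arm parallel CRT.
import math

def cluster_sample_size(n_individual, m, icc):
    deff = 1 + (m - 1) * icc                 # design effect
    n = n_individual * deff
    return math.ceil(n), math.ceil(n / m)    # total subjects, total clusters

print(cluster_sample_size(n_individual=400, m=20, icc=0.05))
# -> (780, 39): 780 subjects in 39 clusters instead of 400 individuals
```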

  9. Acceptance sampling using judgmental and randomly selected samples

    Energy Technology Data Exchange (ETDEWEB)

    Sego, Landon H.; Shulman, Stanley A.; Anderson, Kevin K.; Wilson, John E.; Pulsipher, Brent A.; Sieber, W. Karl

    2010-09-01

    We present a Bayesian model for acceptance sampling where the population consists of two groups, each with different levels of risk of containing unacceptable items. Expert opinion, or judgment, may be required to distinguish between the high- and low-risk groups. Hence, high-risk items are likely to be identified (and sampled) using expert judgment, while the remaining low-risk items are sampled randomly. We focus on the situation where all observed samples must be acceptable. Consequently, the objective of the statistical inference is to quantify the probability that a large percentage of the unsampled items in the population are also acceptable. We demonstrate that traditional (frequentist) acceptance sampling and simpler Bayesian formulations of the problem are essentially special cases of the proposed model. We explore the properties of the model in detail, and discuss the conditions necessary to ensure that required sample sizes are a non-decreasing function of the population size. The method is applicable to a variety of acceptance sampling problems and, in particular, to environmental sampling, where the objective is to demonstrate the safety of reoccupying a remediated facility that has been contaminated with a lethal agent.
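
    A hedged sketch of the simpler Bayesian special case mentioned above: with a Beta(a, b) prior on the per-item acceptability probability and n randomly sampled items all observed to be acceptable, the posterior is Beta(a + n, b), from which the probability that at least a fraction p of items are acceptable follows directly. Prior and threshold values are illustrative.

```python
# Posterior probability that the acceptability rate exceeds p after
# observing n acceptable items and no failures, under a Beta(a, b) prior.
from scipy.stats import beta

def prob_acceptable(n_accepted, p=0.95, a=1.0, b=1.0):
    return beta.sf(p, a + n_accepted, b)   # P(rate > p | data)

for n in (10, 30, 59, 100):
    print(n, round(prob_acceptable(n), 3))
# With a uniform prior, n = 59 clean samples gives ~0.95 probability
# that at least 95% of the population is acceptable.
```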

  10. Effect of non-random mating on genomic and BLUP selection schemes

    Directory of Open Access Journals (Sweden)

    Nirea Kahsay G

    2012-04-01

    Full Text Available Abstract Background The risk of long-term unequal contribution of mating pairs to the gene pool is that deleterious recessive genes can be expressed. Such consequences could be alleviated by appropriately designing and optimizing breeding schemes, i.e. by improving selection and mating procedures. Methods We studied the effect of three mating designs (random, minimum coancestry, and minimum covariance of ancestral contributions) on the rate of inbreeding and genetic gain for schemes with different information sources (sib test or own performance records), different genetic evaluation methods (BLUP or genomic selection), and different family structures (factorial or pair-wise). Results Results showed that substantial differences in rates of inbreeding due to mating design were present under schemes with a pair-wise family structure, for which minimum coancestry turned out to be more effective at generating lower rates of inbreeding. Specifically, substantial reductions in rates of inbreeding were observed in schemes using sib test records and BLUP evaluation. However, with a factorial family structure, differences in rates of inbreeding due to mating design were minor. Moreover, non-random mating had only a small effect in breeding schemes that used genomic evaluation, regardless of the information source. Conclusions It was concluded that minimum coancestry remains an efficient mating design when BLUP is used for genetic evaluation or when the size of the population is small, whereas the effect of non-random mating is smaller in schemes using genomic evaluation.

  11. SELECTION OF METHOD FOR RESTORATION OF PARTS

    Directory of Open Access Journals (Sweden)

    V. P. Ivanov

    2016-01-01

    Full Text Available The paper contains definitions for a process and a methodology for restoration of parts on the basis of the analysis of the known methods and their selection. Geometric parameters and operational properties that should be provided for restoration of parts have been determined in the paper. A process for selection of the required method for restoration parts has been improved and it makes it possible to synthesize an optimal process of the restoration according to a criterion of industrial resource consumption with due account of quality, productivity and security limits. A justification on measures that meet the required limits has been presented in the paper. The paper shows a direction of technical solutions that ensure complete use of residual life of repair fund parts. These solutions can be achieved through close application of all repair sizes of work-pieces with revision of their values, uniform removal of the allowance while cutting the work-pieces at optimum locating, application of coating processes only in technically justified cases, application of straightening with thermal fixing of its results or all-around compression of deformable elements. The paper proposes to limit a number of overhauls for the units together with restoration of basic and fundamental parts by two repairs for the whole period of their lifetime. Number of shaft journal building-up should be limited by one building-up operation throughout the whole life cycle of the part with the purpose to preserve its length within the prescribed limits. It has been recommended to expand an application area of volumetric plastic deformation of material in the form of thermoplastic distribution or reduction of repair work-pieces representing class of rotation bodies with holes that ensures an allowance to machine external and internal surfaces for nominal dimensions without coating. A structure of the coating material with fine inclusions of carbides or nitrides of metals and

  12. Statistical inference of selection and divergence from a time-dependent Poisson random field model.

    Directory of Open Access Journals (Sweden)

    Amei Amei

    Full Text Available We apply a recently developed time-dependent Poisson random field model to aligned DNA sequences from two related biological species to estimate selection coefficients and divergence time. We use Markov chain Monte Carlo methods to estimate species divergence time and selection coefficients for each locus. The model assumes that the selective effects of non-synonymous mutations are normally distributed across genetic loci but constant within loci, and that synonymous mutations are selectively neutral. In contrast with previous models, we do not assume that the individual species are at population equilibrium after divergence. Using a data set of 91 genes in two Drosophila species, D. melanogaster and D. simulans, we estimate the species divergence time t_div = 2.16 N_e (or 1.68 million years, assuming a haploid effective population size N_e = 6.45 x 10^5) and a mean selection coefficient per generation μ_γ = 1.98/N_e. Although the average selection coefficient is positive, the magnitude of the selection is quite small. Results from numerical simulations are also presented as an accuracy check for the time-dependent model.

  13. Performance Comparison of Feature Selection Methods

    Directory of Open Access Journals (Sweden)

    Phyu Thu Zar

    2016-01-01

    Full Text Available Feature subset selection is an essential pre-processing task in data mining. The feature selection process refers to choosing a subset of attributes from the set of original attributes. This technique attempts to identify and remove as much irrelevant and redundant information as possible. In this paper, a new feature subset selection algorithm based on a conditional mutual information approach is proposed to select the effective feature subset. The effectiveness of the proposed algorithm is evaluated by comparing it with other well-known existing feature selection algorithms using standard datasets from UC Irvine and WEKA (Waikato Environment for Knowledge Analysis). The performance of the proposed algorithm is evaluated by multiple criteria that take into account not only the classification accuracy but also the number of selected features.
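
    As a simplified stand-in for the proposed criterion (conditional mutual information is not available off the shelf in scikit-learn), the sketch below ranks features by plain mutual information; it illustrates the filter idea, not the paper's algorithm.

```python
# Rank features of a standard dataset by mutual information with the class.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.feature_selection import mutual_info_classif

X, y = load_iris(return_X_y=True)
mi = mutual_info_classif(X, y, random_state=0)
ranked = np.argsort(-mi)                   # most informative features first
print(ranked, np.round(mi[ranked], 3))
```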

  14. Proposal of the Methodology for Analysing the Structural Relationship in the System of Random Process Using the Data Mining Methods

    National Research Council Canada - National Science Library

    German Michaľčonok; Michaela Horalová Kalinová; Martin Németh

    2014-01-01

    .... In this paper, we approach the area of random processes, present the process of structural analysis, and select a suitable range of data mining methods applicable to the area of structural analysis...

  15. Selection bias and subject refusal in a cluster-randomized controlled trial

    Directory of Open Access Journals (Sweden)

    Rochelle Yang

    2017-07-01

    Full Text Available Abstract Background Selection bias and non-participation bias are major methodological concerns which impact external validity. Cluster-randomized controlled trials are especially prone to selection bias as it is impractical to blind clusters to their allocation into intervention or control. This study assessed the impact of selection bias in a large cluster-randomized controlled trial. Methods The Improved Cardiovascular Risk Reduction to Enhance Rural Primary Care (ICARE) study examined the impact of a remote pharmacist-led intervention in twelve medical offices. To assess eligibility, a standardized form containing patient demographics and medical information was completed for each screened patient. Eligible patients were approached by the study coordinator for recruitment. Both the study coordinator and the patient were aware of the site's allocation prior to consent. Patients who consented or declined to participate were compared across control and intervention arms for differing characteristics. Statistical significance was determined using a two-tailed, equal-variance t-test and a chi-square test with adjusted Bonferroni p-values. Results were adjusted for random cluster variation. Results There were 2749 completed screening forms returned to research staff, with 461 subjects who had either consented or declined participation. Patients with poorly controlled diabetes were found to be significantly more likely to decline participation in intervention sites compared to those in control sites. A higher mean diastolic blood pressure was seen in patients with uncontrolled hypertension who declined in the control sites compared to those who declined in the intervention sites. However, these findings were no longer significant after adjustment for random variation among the sites. After this adjustment, females were found to be significantly more likely to consent than males (odds ratio = 1.41; 95% confidence interval = 1.03, 1

  16. Multi-Agent Methods for the Configuration of Random Nanocomputers

    Science.gov (United States)

    Lawson, John W.

    2004-01-01

    As computational devices continue to shrink, the cost of manufacturing such devices is expected to grow exponentially. One alternative to the costly, detailed design and assembly of conventional computers is to place the nano-electronic components randomly on a chip. The price for such a trivial assembly process is that the resulting chip would not be programmable by conventional means. In this work, we show that such random nanocomputers can be adaptively programmed using multi-agent methods. This is accomplished through the optimization of an associated high-dimensional error function. By representing each of the independent variables as a reinforcement learning agent, we are able to achieve convergence much faster than with other methods, including simulated annealing. Standard combinational logic circuits such as adders and multipliers are implemented in a straightforward manner. In addition, we show that the intrinsic flexibility of these adaptive methods allows the random computers to be reconfigured easily, making them reusable. Recovery from faults is also demonstrated.

  17. Rapid selection of accessible and cleavable sites in RNA by Escherichia coli RNase P and random external guide sequences.

    Science.gov (United States)

    Lundblad, Eirik W; Xiao, Gaoping; Ko, Jae-Hyeong; Altman, Sidney

    2008-02-19

    A method of inhibiting the expression of particular genes by using external guide sequences (EGSs) has been improved in its rapidity and specificity. Random EGSs that have 14-nt random sequences are used in the selection procedure for an EGS that attacks the mRNA for a gene in a particular location. A mixture of the random EGSs, the particular target RNA, and RNase P is used in the diagnostic procedure, which, after completion, is analyzed in a gel with suitable control lanes. Within a few hours, the procedure is complete. The action of EGSs designed by an older method is compared with EGSs designed by the random EGS method on mRNAs from two bacterial pathogens.

  18. Rapid selection of accessible and cleavable sites in RNA by Escherichia coli RNase P and random external guide sequences

    OpenAIRE

    Lundblad, Eirik W.; Xiao, Gaoping; Ko, Jae-hyeong; Altman, Sidney

    2008-01-01

    A method of inhibiting the expression of particular genes by using external guide sequences (EGSs) has been improved in its rapidity and specificity. Random EGSs that have 14-nt random sequences are used in the selection procedure for an EGS that attacks the mRNA for a gene in a particular location. A mixture of the random EGSs, the particular target RNA, and RNase P is used in the diagnostic procedure, which, after completion, is analyzed in a gel with suitable control lanes. Within a few ho...

  19. Emulsion PCR: a high efficient way of PCR amplification of random DNA libraries in aptamer selection.

    Directory of Open Access Journals (Sweden)

    Keke Shao

    Full Text Available Aptamers are short RNA or DNA oligonucleotides which can bind with different targets. Typically, they are selected from a large number of random DNA sequence libraries. The main strategy to obtain aptamers is systematic evolution of ligands by exponential enrichment (SELEX). Low efficiency is one of the limitations of conventional PCR amplification of a random DNA sequence library in aptamer selection, because of relatively low product formation and high by-product formation efficiencies. Here, we developed emulsion PCR for aptamer selection. With this method, the by-product formation decreased tremendously to an undetectable level, while the product formation increased significantly. Our results indicated that by-products in conventional PCR amplification were from primer-product and product-product hybridization. In emulsion PCR, we can completely avoid the product-product hybridization and avoid most of the primer-product hybridization if the conditions are optimized. In addition, it also showed that the molecule ratio of template to compartment was crucial to by-product formation efficiency in emulsion PCR amplification. Furthermore, the concentration of the Taq DNA polymerase in the emulsion PCR mixture had a significant impact on product formation efficiency. So, the results of our study indicated that emulsion PCR could improve the efficiency of SELEX.

  20. AMES: Towards an Agile Method for ERP Selection

    OpenAIRE

    Juell-Skielse, Gustaf; Nilsson, Anders G.; Nordqvist, Andreas; Westergren, Mattias

    2012-01-01

    Conventional on-premise installations of ERP are now rapidly being replaced by ERP as service. Although ERP becomes more accessible and no longer requires local infrastructure, current selection methods do not take full advantage of the provided agility. In this paper we present AMES (Agile Method for ERP Selection), a novel method for ERP selection which better utilizes the strengths of service oriented ERP. AMES is designed to shorten lead time for selection, support identification of essen...

  1. Variable Selection in the Presence of Missing Data: Imputation-based Methods.

    Science.gov (United States)

    Zhao, Yize; Long, Qi

    2017-01-01

    Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
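
    As a minimal sketch of the first strategy (select per imputed dataset, then combine by vote), assuming scikit-learn's IterativeImputer as the imputation engine and a lasso as the selection method, both illustrative substitutions rather than the specific methods surveyed in the paper:

      import numpy as np
      from sklearn.experimental import enable_iterative_imputer  # noqa: F401
      from sklearn.impute import IterativeImputer
      from sklearn.linear_model import LassoCV

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 10))
      y = 2 * X[:, 0] - 3 * X[:, 3] + rng.normal(size=200)
      X[rng.random(X.shape) < 0.1] = np.nan     # ~10% values missing (MCAR)

      M = 5                                     # number of imputations
      votes = np.zeros(X.shape[1])
      for m in range(M):
          imputed = IterativeImputer(sample_posterior=True,
                                     random_state=m).fit_transform(X)
          coef = LassoCV(cv=5, random_state=m).fit(imputed, y).coef_
          votes += coef != 0                    # selected in this imputation?

      print("majority-vote selection:", np.where(votes >= M / 2)[0])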

  2. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa.

    Science.gov (United States)

    Kapwata, Thandi; Gebreslasie, Michael T

    2016-11-16

    Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the variable importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.
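
    As a rough illustration of this kind of variable-importance ranking (not the authors' exact workflow, and using scikit-learn's permutation importance in place of the %IncMSE measure reported by R's randomForest), the following sketch ranks hypothetical environmental predictors on simulated data:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.inspection import permutation_importance

      rng = np.random.default_rng(1)
      names = ["temp", "lag_temp", "rain", "lag_rain",
               "humidity", "altitude", "ndvi"]
      X = rng.normal(size=(300, len(names)))
      # Invented response: altitude and NDVI drive the simulated case counts.
      y = -1.5 * X[:, 5] + 0.8 * X[:, 6] + rng.normal(scale=0.5, size=300)

      rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
      imp = permutation_importance(rf, X, y, n_repeats=20, random_state=0)
      for i in np.argsort(imp.importances_mean)[::-1]:
          print(f"{names[i]:>9s}  {imp.importances_mean[i]:.3f}")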

  3. Random forest variable selection in spatial malaria transmission modelling in Mpumalanga Province, South Africa

    Directory of Open Access Journals (Sweden)

    Thandi Kapwata

    2016-11-01

    Full Text Available Malaria is an environmentally driven disease. In order to quantify the spatial variability of malaria transmission, it is imperative to understand the interactions between environmental variables and malaria epidemiology at a micro-geographic level using a novel statistical approach. The random forest (RF) statistical learning method, a relatively new variable-importance ranking method, measures the variable importance of potentially influential parameters through the percent increase of the mean squared error. As this value increases, so does the relative importance of the associated variable. The principal aim of this study was to create predictive malaria maps generated using the selected variables based on the RF algorithm in the Ehlanzeni District of Mpumalanga Province, South Africa. From the seven environmental variables used [temperature, lag temperature, rainfall, lag rainfall, humidity, altitude, and the normalized difference vegetation index (NDVI)], altitude was identified as the most influential predictor variable due to its high selection frequency. It was selected as the top predictor for 4 out of 12 months of the year, followed by NDVI, temperature and lag rainfall, which were each selected twice. The combination of climatic variables that produced the highest prediction accuracy was altitude, NDVI, and temperature. This suggests that these three variables have high predictive capabilities in relation to malaria transmission. Furthermore, it is anticipated that the predictive maps generated from predictions made by the RF algorithm could be used to monitor the progression of malaria and assist in intervention and prevention efforts with respect to malaria.

  4. A New Method of Random Environmental Walking for Assessing Behavioral Preferences for Different Lighting Applications

    Science.gov (United States)

    Patching, Geoffrey R.; Rahm, Johan; Jansson, Märit; Johansson, Maria

    2017-01-01

    Accurate assessment of people’s preferences for different outdoor lighting applications is increasingly considered important in the development of new urban environments. Here a new method of random environmental walking is proposed to complement current methods of assessing urban lighting applications, such as self-report questionnaires. The procedure involves participants repeatedly walking between different lighting applications by random selection of a lighting application and preferred choice or by random selection of a lighting application alone. In this manner, participants are exposed to all lighting applications of interest more than once and participants’ preferences for the different lighting applications are reflected in the number of times they walk to each lighting application. On the basis of an initial simulation study, to explore the feasibility of this approach, a comprehensive field test was undertaken. The field test included random environmental walking and collection of participants’ subjective ratings of perceived pleasantness (PP), perceived quality, perceived strength, and perceived flicker of four lighting applications. The results indicate that random environmental walking can reveal participants’ preferences for different lighting applications that, in the present study, conformed to participants’ ratings of PP and perceived quality of the lighting applications. As a complement to subjectively stated environmental preferences, random environmental walking has the potential to expose behavioral preferences for different lighting applications. PMID:28337163
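
    A hypothetical simulation of the walking procedure's core logic, with walk counts as the behavioral preference measure; the appeal values and the choice rule below are invented for illustration and are not the authors' protocol:

      import random
      from collections import Counter

      random.seed(1)
      # Hypothetical "true" appeal of four lighting applications.
      appeal = {"A": 0.9, "B": 0.6, "C": 0.4, "D": 0.2}
      apps = list(appeal)

      def next_stop(current):
          proposed = random.choice(apps)          # random selection...
          if current is None or proposed == current:
              return proposed
          # ...then a preference-weighted choice between the proposed
          # application and the current one (Luce choice rule, invented
          # here purely for the simulation).
          p = appeal[proposed] / (appeal[proposed] + appeal[current])
          return proposed if random.random() < p else current

      visits = Counter()
      current = None
      for _ in range(200):                        # 200 walks per participant
          current = next_stop(current)
          visits[current] += 1

      print(visits)   # walk counts should mirror the ranking A > B > C > D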

  5. Application of the Random Vortex Method to Natural Convection ...

    African Journals Online (AJOL)

    Natural convection flows in channels have been studied using numerical tools such as finite difference and finite element techniques. These techniques are demanding in terms of computing skill and memory. The Random Vortex Element method, which has been used successfully in fluid flow, was adopted in this work in view of its ...

  6. Outranking methods in support of supplier selection

    NARCIS (Netherlands)

    de Boer, L.; van der Wegen, Leonardus L.M.; Telgen, Jan

    1998-01-01

    Initial purchasing decisions such as make-or-buy decisions and supplier selection are decisions of strategic importance to companies. The nature of these decisions usually is complex and unstructured. Management Science techniques might be helpful tools for this kind of decision making problems. So

  7. Color selective photodetector and methods of making

    Science.gov (United States)

    Walker, Brian J.; Dorn, August; Bulovic, Vladimir; Bawendi, Moungi G.

    2013-03-19

    A photoelectric device, such as a photodetector, can include a semiconductor nanowire electrostatically associated with a J-aggregate. The J-aggregate can facilitate absorption of a desired wavelength of light, and the semiconductor nanowire can facilitate charge transport. The color of light detected by the device can be chosen by selecting a J-aggregate with a corresponding peak absorption wavelength.

  8. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti

    Directory of Open Access Journals (Sweden)

    Wampler Peter J

    2013-01-01

    Full Text Available Abstract Background A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. Methods The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. Results A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. Conclusions The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only
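
    The GIS digitizing steps are manual, but the randomization step is easy to sketch; a minimal Python stand-in for the Excel-based random subset (the coordinates below are fabricated placeholders, not the study data):

      import csv
      import random

      random.seed(42)
      # Stand-in coordinates for homes digitized from Google Earth imagery.
      homes = [(i,
                18.90 + random.uniform(0, 0.05),
                -72.55 + random.uniform(0, 0.05)) for i in range(537)]

      survey_sites = random.sample(homes, 96)   # the randomized subset

      # A simple waypoint table that could be loaded onto handheld GPS units.
      with open("waypoints.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["id", "lat", "lon"])
          writer.writerows(survey_sites)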

  9. In vivo selection of randomly mutated retroviral genomes

    NARCIS (Netherlands)

    Berkhout, B.; Klaver, B.

    1993-01-01

    Darwinian evolution, that is the outgrowth of the fittest variants in a population, usually applies to living organisms over long periods of time. Recently, in vitro selection/amplification techniques have been developed that allow for the rapid evolution of functionally active nucleic acids from a

  10. Differential privacy-based evaporative cooling feature selection and classification with relief-F and random forests.

    Science.gov (United States)

    Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A

    2017-09-15

    Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy-preserving classification that also prevents overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of Evaporative Cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder. Code

  11. Using ArcMap, Google Earth, and Global Positioning Systems to select and locate random households in rural Haiti.

    Science.gov (United States)

    Wampler, Peter J; Rediske, Richard R; Molla, Azizur R

    2013-01-18

    A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground-based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households. This

  12. Efficient Selection of Data Mining Method

    Directory of Open Access Journals (Sweden)

    Mirela Danubianu

    2011-10-01

    Full Text Available Data mining tools can access large amounts of data and find patterns that can solve various problems, often with surprising solutions. We have analyzed data mining methods, techniques and algorithms with their characteristics, advantages and weaknesses. Taking into account the tasks to be resolved in order to discover the different types of knowledge, the kind of databases to work on and the type of data, as well as the area in which the data mining system is to be implemented, we have tried to find a way to efficiently choose the proper methods in a given situation. The ExpertDM system aims to find the best data mining methods for solving a task and to specify the transformations that need to be made to bring the data into a proper form for applying these methods.

  13. Feature selection for outcome prediction in oesophageal cancer using genetic algorithm and random forest classifier.

    Science.gov (United States)

    Paul, Desbordes; Su, Ruan; Romain, Modzelewski; Sébastien, Vauclin; Pierre, Vera; Isabelle, Gardin

    2017-09-01

    The outcome prediction of patients can greatly help to personalize cancer treatment. A large number of quantitative features (clinical exams, imaging, …) are potentially useful to assess the patient outcome. The challenge is to choose the most predictive subset of features. In this paper, we propose a new feature selection strategy called GARF (genetic algorithm based on random forest), applied to features extracted from positron emission tomography (PET) images and clinical data. The most relevant features, predictive of the therapeutic response or prognostic of patient survival 3 years after the end of treatment, were selected using GARF on a cohort of 65 patients with locally advanced oesophageal cancer eligible for chemo-radiation therapy. The most relevant predictive results were obtained with a subset of 9 features, leading to a random forest misclassification rate of 18±4% and an area under the receiver operating characteristic (ROC) curve (AUC) of 0.823±0.032. The most relevant prognostic results were obtained with 8 features, leading to an error rate of 20±7% and an AUC of 0.750±0.108. Both predictive and prognostic results show better performances using GARF than using the 4 other studied methods. Copyright © 2016 Elsevier Ltd. All rights reserved.
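
    The paper does not publish GARF's exact encoding or operators, so the following Python sketch only illustrates the general pattern the name describes (a genetic algorithm over binary feature masks, scored by random forest cross-validation) on synthetic data:

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X, y = make_classification(n_samples=120, n_features=30,
                                 n_informative=6, random_state=0)

      def fitness(mask):
          # Cross-validated random forest accuracy on the masked features.
          if not mask.any():
              return 0.0
          rf = RandomForestClassifier(n_estimators=50, random_state=0)
          return cross_val_score(rf, X[:, mask], y, cv=3).mean()

      pop = rng.random((12, X.shape[1])) < 0.3        # random initial masks
      for gen in range(10):
          scores = np.array([fitness(m) for m in pop])
          pop = pop[np.argsort(scores)[::-1]]         # sort by fitness
          elite = pop[:6]                             # keep the better half
          children = []
          for _ in range(6):
              a = elite[rng.integers(6)]
              b = elite[rng.integers(6)]
              cut = rng.integers(1, X.shape[1])       # one-point crossover
              child = np.concatenate([a[:cut], b[cut:]])
              children.append(child ^ (rng.random(X.shape[1]) < 0.02))
          pop = np.vstack([elite, children])

      best = max(pop, key=fitness)
      print("selected:", np.where(best)[0], "cv acc: %.3f" % fitness(best))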

  14. Extremely Randomized Machine Learning Methods for Compound Activity Prediction.

    Science.gov (United States)

    Czarnecki, Wojciech M; Podlewska, Sabina; Bojarski, Andrzej J

    2015-11-09

    Speed, a relatively low requirement for computational resources and high effectiveness of the evaluation of the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth of the amount of data also in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In the study, we tested two approaches belonging to the group of so-called 'extremely randomized methods' (Extreme Entropy Machine and Extremely Randomized Trees) for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their 'non-extreme' competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were found not only to improve the efficiency of the classification of bioactive compounds, but also to be less computationally complex, requiring fewer steps to perform an optimization procedure.
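
    A quick way to reproduce the flavor of this comparison with off-the-shelf tools, using scikit-learn's ExtraTreesClassifier versus RandomForestClassifier on synthetic data (the Extreme Entropy Machine has no standard scikit-learn implementation and is omitted here):

      import time
      from sklearn.datasets import make_classification
      from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier
      from sklearn.model_selection import cross_val_score

      X, y = make_classification(n_samples=2000, n_features=50,
                                 n_informative=10, random_state=0)

      for Model in (RandomForestClassifier, ExtraTreesClassifier):
          clf = Model(n_estimators=200, random_state=0)
          t0 = time.perf_counter()
          acc = cross_val_score(clf, X, y, cv=5).mean()
          dt = time.perf_counter() - t0
          # Extra-Trees draws split thresholds at random instead of
          # optimizing them, which typically makes training cheaper.
          print(f"{Model.__name__}: accuracy={acc:.3f}, time={dt:.1f}s")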

  15. Pseudo cluster randomization dealt with selection bias and contamination in clinical trials

    NARCIS (Netherlands)

    Teerenstra, S.; Melis, R.J.F.; Peer, P.G.M.; Borm, G.F.

    2006-01-01

    BACKGROUND AND OBJECTIVES: When contamination is present, randomization on a patient level leads to dilution of the treatment effect. The usual solution is to randomize on a cluster level, but at the cost of efficiency and more importantly, this may introduce selection bias. Furthermore, it may slow

  16. A Multistage Method for Multiobjective Route Selection

    Science.gov (United States)

    Wen, Feng; Gen, Mitsuo

    The multiobjective route selection problem (m-RSP) is a key research topic in the car navigation system (CNS) for ITS (Intelligent Transportation System). In this paper, we propose an interactive multistage weight-based Dijkstra genetic algorithm (mwD-GA) to solve it. The purpose of the proposed approach is to create enough Pareto-optimal routes with good distribution for the car driver depending on his/her preference. At the same time, the routes can be recalculated according to the driver's preferences by the multistage framework proposed. In the solution approach proposed, the accurate route searching ability of the Dijkstra algorithm and the exploration ability of the Genetic algorithm (GA) are effectively combined together for solving the m-RSP problems. Solutions provided by the proposed approach are compared with the current research to show the effectiveness and practicability of the solution approach proposed.

  17. Method for producing size selected particles

    Energy Technology Data Exchange (ETDEWEB)

    Krumdick, Gregory K.; Shin, Young Ho; Takeya, Kaname

    2016-09-20

    The invention provides a system for preparing specific sized particles, the system comprising a continuous stir tank reactor adapted to receive reactants; a centrifugal dispenser positioned downstream from the reactor and in fluid communication with the reactor; a particle separator positioned downstream of the dispenser; and a solution stream return conduit positioned between the separator and the reactor. Also provided is a method for preparing specific sized particles, the method comprising introducing reagent into a continuous stir reaction tank and allowing the reagents to react to produce product liquor containing particles; contacting the liquor particles with a centrifugal force for a time sufficient to generate particles of a predetermined size and morphology; and returning unused reagents and particles of a non-predetermined size to the tank.

  18. SELECTION METHOD FOR AUTOMOTIVE PARTS RECONDITIONING

    Directory of Open Access Journals (Sweden)

    Dan Florin NITOI

    2015-05-01

    Full Text Available The paper presents technological methods for metal deposition, together with cost calculation and classification, for the main processes that help in automotive technologies to repair parts or to improve their properties. The paper was constructed from many technological experiments that start from practitioners and return to them. The main aim is to help young engineers and practicing engineers choose the proper reconditioning process, with the best available information, when repairing parts in the automotive industry.

  19. Extremely Randomized Machine Learning Methods for Compound Activity Prediction

    Directory of Open Access Journals (Sweden)

    Wojciech M. Czarnecki

    2015-11-01

    Full Text Available Speed, a relatively low requirement for computational resources and high effectiveness of the evaluation of the bioactivity of compounds have caused a rapid growth of interest in the application of machine learning methods to virtual screening tasks. However, due to the growth of the amount of data also in cheminformatics and related fields, the aim of research has shifted not only towards the development of algorithms of high predictive power but also towards the simplification of previously existing methods to obtain results more quickly. In the study, we tested two approaches belonging to the group of so-called ‘extremely randomized methods’ (Extreme Entropy Machine and Extremely Randomized Trees) for their ability to properly identify compounds that have activity towards particular protein targets. These methods were compared with their ‘non-extreme’ competitors, i.e., Support Vector Machine and Random Forest. The extreme approaches were found not only to improve the efficiency of the classification of bioactive compounds, but also to be less computationally complex, requiring fewer steps to perform an optimization procedure.

  20. Parameter Selection Methods in Inverse Problem Formulation

    Science.gov (United States)

    2010-11-03


  1. Selection of industrial robots using the Polygons area method

    Directory of Open Access Journals (Sweden)

    Mortaza Honarmande Azimi

    2014-08-01

    Full Text Available Selection of robots from the several proposed alternatives is a very important and tedious task. Decision makers are not limited to one method and several methods have been proposed for solving this problem. This study presents the Polygons Area Method (PAM) as a multi-attribute decision-making method for the robot selection problem. In this method, the maximum polygon area obtained from the attributes of an alternative robot on the radar chart is introduced as a decision-making criterion. The results of this method are compared with other typical multiple attribute decision-making methods (SAW, WPM, TOPSIS, and VIKOR) by giving two examples. To find similarity in ranking given by different methods, Spearman’s rank correlation coefficients are obtained for different pairs of MADM methods. It was observed that the introduced method is in good agreement with other well-known MADM methods in the robot selection problem.

  2. Applying a weighted random forests method to extract karst sinkholes from LiDAR data

    Science.gov (United States)

    Zhu, Junfeng; Pierskalla, William P.

    2016-02-01

    Detailed mapping of sinkholes provides critical information for mitigating sinkhole hazards and understanding groundwater and surface water interactions in karst terrains. LiDAR (Light Detection and Ranging) measures the earth's surface at high resolution and high density and has shown great potential to drastically improve locating and delineating sinkholes. However, processing LiDAR data to extract sinkholes requires separating sinkholes from other depressions, which can be laborious because of the sheer number of depressions commonly generated from LiDAR data. In this study, we applied random forests, a machine learning method, to automatically separate sinkholes from other depressions in a karst region in central Kentucky. The sinkhole-extraction random forest was grown on a training dataset built from an area where LiDAR-derived depressions were manually classified through a visual inspection and field verification process. Based on the geometry of depressions, as well as natural and human factors related to sinkholes, 11 parameters were selected as predictive variables to form the dataset. Because the training dataset was imbalanced, with the majority of depressions being non-sinkholes, a weighted random forests method was used to improve the accuracy of predicting sinkholes. The weighted random forest achieved an average accuracy of 89.95% for the training dataset, demonstrating that the random forest can be an effective sinkhole classifier. Testing of the random forest in another area, however, resulted in moderate success with an average accuracy rate of 73.96%. This study suggests that an automatic sinkhole extraction procedure like the random forest classifier can significantly reduce time and labor costs and make it more tractable to map sinkholes using LiDAR data for large areas. However, the random forests method cannot totally replace manual procedures, such as visual inspection and field verification.
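
    A minimal sketch of the class-weighting idea on an artificial imbalanced depression dataset, using scikit-learn's class_weight option as a stand-in for the weighted random forest described above:

      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import classification_report
      from sklearn.model_selection import train_test_split

      # Imbalanced toy stand-in: ~5% of LiDAR depressions are sinkholes.
      X, y = make_classification(n_samples=5000, n_features=11,
                                 weights=[0.95], random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, stratify=y, random_state=0)

      # class_weight="balanced" up-weights the rare class, the same idea
      # as the weighted random forest used in the study.
      rf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                                  random_state=0).fit(Xtr, ytr)
      print(classification_report(yte, rf.predict(Xte),
                                  target_names=["other", "sinkhole"]))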

  3. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interface (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on decision tree to BCI Competition II data set Ia, and the experiment showed encouraging results.
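
    A hedged sketch of the same pipeline shape (PCA extraction, decision-tree-driven selection, SVM classification) built from scikit-learn parts; the synthetic data stands in for per-trial EEG feature vectors, and the selection rule (importance above the mean) is an assumption, not the paper's exact search:

      from sklearn.datasets import make_classification
      from sklearn.decomposition import PCA
      from sklearn.feature_selection import SelectFromModel
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      # Stand-in for per-trial EEG feature vectors.
      X, y = make_classification(n_samples=300, n_features=64,
                                 n_informative=8, random_state=0)

      pipe = make_pipeline(
          PCA(n_components=20),                         # feature extraction
          SelectFromModel(                              # decision-tree-driven
              DecisionTreeClassifier(random_state=0)),  # feature selection
          SVC(kernel="linear"),                         # final classifier
      )
      print("cv accuracy: %.3f" % cross_val_score(pipe, X, y, cv=5).mean())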

  4. Random-breakage mapping method applied to human DNA sequences

    Science.gov (United States)

    Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)

    1996-01-01

    The random-breakage mapping method [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was applied to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated cells, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation-induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distance of the hybridization site to each end of the restriction fragment. By analyzing the positions of those discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, has previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded a similar smear pattern, this uncertainty is removed by the random-breakage mapping method.

  5. The Random Ray Method for neutral particle transport

    Energy Technology Data Exchange (ETDEWEB)

    Tramm, John R., E-mail: jtramm@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States); Smith, Kord S., E-mail: kord@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Forget, Benoit, E-mail: bforget@mit.edu [Massachusetts Institute of Technology, Department of Nuclear Science Engineering, 77 Massachusetts Avenue, 24-107, Cambridge, MA 02139 (United States); Siegel, Andrew R., E-mail: siegela@mcs.anl.gov [Argonne National Laboratory, Mathematics and Computer Science Department 9700 S Cass Ave, Argonne, IL 60439 (United States)

    2017-08-01

    A new approach to solving partial differential equations (PDEs) based on the method of characteristics (MOC) is presented. The Random Ray Method (TRRM) uses a stochastic rather than deterministic discretization of characteristic tracks to integrate the phase space of a problem. TRRM is potentially applicable in a number of transport simulation fields where long characteristic methods are used, such as neutron transport and gamma ray transport in reactor physics as well as radiative transfer in astrophysics. In this study, TRRM is developed and then tested on a series of exemplar reactor physics benchmark problems. The results show extreme improvements in memory efficiency compared to deterministic MOC methods, while also reducing algorithmic complexity, allowing for a sparser computational grid to be used while maintaining accuracy.

  6. An Integrated Cutting Tool Selection & Operation Sequencing Method

    NARCIS (Netherlands)

    Rho, H.M.; Geelink, R.; Geelink, R.; van t Erve, A.H.; van 't Erve, A.H.; Kals, H.J.J.

    1992-01-01

    Within the PART CAPP system, the selection of an optimum operation sequence is related to the modules which perform the machining method and cutting tool selection. This study analyzes the technical and economic aspects of operation sequencing and presents a method which is capable of generating

  7. Methods for producing thin film charge selective transport layers

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Scott Ryan; Olson, Dana C.; van Hest, Marinus Franciscus Antonius Maria

    2018-01-02

    Methods for producing thin film charge selective transport layers are provided. In one embodiment, a method for forming a thin film charge selective transport layer comprises: providing a precursor solution comprising a metal containing reactive precursor material dissolved into a complexing solvent; depositing the precursor solution onto a surface of a substrate to form a film; and forming a charge selective transport layer on the substrate by annealing the film.

  8. Reporting methods of blinding in randomized trials assessing nonpharmacological treatments.

    Directory of Open Access Journals (Sweden)

    Isabelle Boutron

    2007-02-01

    Full Text Available BACKGROUND: Blinding is a cornerstone of treatment evaluation. Blinding is more difficult to obtain in trials assessing nonpharmacological treatment and frequently relies on "creative" (nonstandard) methods. The purpose of this study was to systematically describe the strategies used to obtain blinding in a sample of randomized controlled trials of nonpharmacological treatment. METHODS AND FINDINGS: We systematically searched in Medline and the Cochrane Methodology Register for randomized controlled trials (RCTs) assessing nonpharmacological treatment with blinding, published during 2004 in high-impact-factor journals. Data were extracted using a standardized extraction form. We identified 145 articles, with the method of blinding described in 123 of the reports. Methods of blinding of participants and/or health care providers and/or other caregivers concerned mainly use of sham procedures such as simulation of surgical procedures, similar attention-control interventions, or a placebo with a different mode of administration for rehabilitation or psychotherapy. Trials assessing devices reported various placebo interventions such as use of sham prosthesis, identical apparatus (e.g., an identical but inactivated machine, or an activated machine with a barrier to block the treatment), or simulation of using a device. Blinding participants to the study hypothesis was also an important method of blinding. The methods reported for blinding outcome assessors relied mainly on centralized assessment of paraclinical examinations, clinical examinations (i.e., use of video, audiotape, or photography), or adjudication of clinical events. CONCLUSIONS: This study classifies blinding methods and provides a detailed description of methods that could overcome some barriers of blinding in clinical trials assessing nonpharmacological treatment, and provides information for readers assessing the quality of results of such trials.

  9. RANDOM FORESTS-BASED FEATURE SELECTION FOR LAND-USE CLASSIFICATION USING LIDAR DATA AND ORTHOIMAGERY

    Directory of Open Access Journals (Sweden)

    H. Guan

    2012-07-01

    Full Text Available The development of lidar systems, especially those incorporating high-resolution camera components, has shown great potential for urban classification. However, automatically selecting the best features for land-use classification is challenging. Random Forests, a newly developed machine learning algorithm, is receiving considerable attention in the field of image classification and pattern recognition; in particular, it can provide a measure of variable importance. Thus, in this study the performance of Random Forests-based feature selection for urban areas was explored. First, we extract features from lidar data, including height-based and intensity-based GLCM measures; other spectral features, such as the Red, Green, and Blue bands and GLCM-based measures, can be obtained from imagery. Finally, Random Forests is used to automatically select the optimal and uncorrelated features for land-use classification. 0.5-meter resolution lidar data and aerial imagery are used to assess the feature selection performance of Random Forests in the study area located in Mannheim, Germany. The results clearly demonstrate that the use of Random Forests-based feature selection can improve classification performance through the selected features.

  10. Inventory of LCIA selection methods for assessing toxic releases. Methods and typology report part B

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    This report describes an inventory of Life Cycle Impact Assessment (LCIA) selection methods for assessing toxic releases. It consists of an inventory of current selection methods and other Chemical Ranking and Scoring (CRS) methods assessed to be relevant for the development of (a) new selection... method(s) in Work package 8 (WP8) of the OMNIITOX project. The selection methods and the other CRS methods are described in detail; a set of evaluation criteria is developed, and the methods are evaluated against these criteria. This report (Deliverable 11B (D11B)) gives the results from task 7.1d, 7.1e... by a characterisation method for the impact categories covering ecotoxicity and human toxicity. A selection method is therefore not a characterisation method like the “simple base method” and the “base method” that are going to be developed within WP8, but the purpose of a selection method is to focus the effort within...

  11. Analysis of training sample selection strategies for regression-based quantitative landslide susceptibility mapping methods

    Science.gov (United States)

    Erener, Arzu; Sivas, A. Abdullah; Selcuk-Kestel, A. Sevtap; Düzgün, H. Sebnem

    2017-07-01

    All of the quantitative landslide susceptibility mapping (QLSM) methods require two basic data types, namely, landslide inventory and factors that influence landslide occurrence (landslide influencing factors, LIF). Depending on the type of landslides, the nature of triggers and the LIF, the accuracy of the QLSM methods differs. Moreover, how to balance the number of 0s (non-occurrence) and 1s (occurrence) in the training set obtained from the landslide inventory, and how to select which of the 1s and 0s to include in QLSM models, play a critical role in the accuracy of the QLSM. Although the performance of various QLSM methods is largely investigated in the literature, the challenge of training set construction is not adequately investigated for the QLSM methods. In order to tackle this challenge, in this study three different training set selection strategies along with the original data set are used for testing the performance of three different regression methods, namely Logistic Regression (LR), Bayesian Logistic Regression (BLR) and Fuzzy Logistic Regression (FLR). The first sampling strategy is proportional random sampling (PRS), which takes into account a weighted selection of landslide occurrences in the sample set. The second method, namely non-selective nearby sampling (NNS), includes randomly selected sites and their surrounding neighboring points at certain preselected distances to include the impact of clustering. Selective nearby sampling (SNS) is the third method, which concentrates on the group of 1s and their surrounding neighborhood. A randomly selected group of landslide sites and their neighborhood are considered in the analyses similar to the NNS parameters. It is found that the LR-PRS, FLR-PRS and BLR-Whole Data set-ups, in that order, yield the best fits among the other alternatives. The results indicate that in QLSM based on regression models, avoidance of spatial correlation in the data set is critical for the model's performance.
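
    A hedged toy sketch of the two sampling ideas on invented coordinates (the exact weighting and distance rules in the paper are not reproduced here):

      import numpy as np

      rng = np.random.default_rng(0)
      ones = rng.uniform(0, 100, size=(50, 2))    # landslide sites (label 1)
      zeros = rng.uniform(0, 100, size=(5000, 2)) # candidate cells (label 0)

      # PRS-style set: draw absences at random so the training set keeps
      # a chosen 0:1 ratio.
      ratio = 3
      idx = rng.choice(len(zeros), size=ratio * len(ones), replace=False)
      prs = np.vstack([ones, zeros[idx]])

      # SNS-style set: occurrences plus neighbours at a preselected
      # distance d, to carry spatial clustering into the training data.
      d = 2.0
      offsets = np.array([[d, 0], [-d, 0], [0, d], [0, -d]])
      sns = np.vstack([ones] + [ones + o for o in offsets])

      print("PRS set:", prs.shape, " SNS set:", sns.shape)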

  12. Local search methods based on variable focusing for random K -satisfiability

    Science.gov (United States)

    Lemoy, Rémi; Alava, Mikko; Aurell, Erik

    2015-01-01

    We introduce variable-focused local search algorithms for satisfiability problems. Usual approaches focus uniformly on unsatisfied clauses. The methods described here work by focusing on random variables in unsatisfied clauses. Variants are considered where variables are selected uniformly and randomly or by introducing a bias towards picking variables participating in several unsatisfied clauses. These are studied in the case of the random 3-SAT problem, together with an alternative energy definition, the number of variables in unsatisfied constraints. The variable-based focused Metropolis search (V-FMS) is found to be quite close in performance to the standard clause-based FMS at optimal noise. At infinite noise, instead, the threshold for the linearity of solution times with instance size is improved by preferentially picking variables in several UNSAT clauses. Consequences for algorithmic design are discussed.
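
    A compact (and deliberately unoptimized) Python sketch of variable focusing with the multiplicity bias, run on a random 3-SAT instance below the satisfiability threshold; the noise parameter and Metropolis acceptance step of the full V-FMS are omitted:

      import random

      def random_3sat(n, m, rng):
          # Clause = three distinct variables, each with a required value.
          return [[(v, rng.choice((True, False)))
                   for v in rng.sample(range(n), 3)] for _ in range(m)]

      def clause_sat(clause, a):
          return any(a[v] == want for v, want in clause)

      def variable_focused_walk(n, clauses, rng, max_flips=50000):
          a = [rng.random() < 0.5 for _ in range(n)]
          for _ in range(max_flips):
              unsat = [c for c in clauses if not clause_sat(c, a)]
              if not unsat:
                  return a                   # all clauses satisfied
              # Variable focusing with bias: sample from the multiset of
              # variables in UNSAT clauses, so a variable appearing in
              # several unsatisfied clauses is flipped more often.
              pool = [v for c in unsat for v, _ in c]
              a[rng.choice(pool)] ^= True
          return None

      rng = random.Random(0)
      n = 50
      clauses = random_3sat(n, int(3.5 * n), rng)  # below the ~4.27 threshold
      print("solved" if variable_focused_walk(n, clauses, rng) else "gave up")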

  13. Comparing groups randomization and bootstrap methods using R

    CERN Document Server

    Zieffler, Andrew S; Long, Jeffrey D

    2011-01-01

    A hands-on guide to using R to carry out key statistical practices in educational and behavioral sciences research. Computing has become an essential part of the day-to-day practice of statistical work, broadening the types of questions that can now be addressed by research scientists applying newly derived data analytic techniques. Comparing Groups: Randomization and Bootstrap Methods Using R emphasizes the direct link between scientific research questions and data analysis. Rather than relying on mathematical calculations, this book focuses on conceptual explanations and

  14. Random projection and SVD methods in hyperspectral imaging

    Science.gov (United States)

    Zhang, Jiani

    Hyperspectral imaging provides researchers with abundant information with which to study the characteristics of objects in a scene. Processing the massive hyperspectral imagery datasets in a way that efficiently provides useful information becomes an important issue. In this thesis, we consider methods which reduce the dimension of hyperspectral data while retaining as much useful information as possible. Traditional deterministic methods for low-rank approximation are not always adaptable to process huge datasets in an effective way, and therefore probabilistic methods are useful in dimension reduction of hyperspectral images. In this thesis, we begin by generally introducing the background and motivations of this work. Next, we summarize the preliminary knowledge and the applications of SVD and PCA. After these descriptions, we present a probabilistic method, randomized Singular Value Decomposition (rSVD), for the purposes of dimension reduction, compression, reconstruction, and classification of hyperspectral data. We discuss some variations of this method. These variations offer the opportunity to obtain a more accurate reconstruction of the matrix whose singular values decay gradually, to process matrices without target rank, and to obtain the rSVD with only one single pass over the original data. Moreover, we compare the method with Compressive-Projection Principal Component Analysis (CPPCA). From the numerical results, we can see that rSVD has better performance in compression and reconstruction than truncated SVD and CPPCA. We also apply rSVD to classification methods for the hyperspectral data provided by the National Geospatial-Intelligence Agency (NGA).
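
    A minimal sketch of the basic randomized SVD (the two-stage projection scheme of Halko-style methods, without the single-pass and rank-free variations the thesis discusses), on a toy low-rank matrix standing in for a hyperspectral cube:

      import numpy as np

      def rsvd(A, k, oversample=10, seed=0):
          rng = np.random.default_rng(seed)
          # Stage 1: a Gaussian random projection captures the dominant
          # column space of A in far fewer dimensions.
          Omega = rng.normal(size=(A.shape[1], k + oversample))
          Q, _ = np.linalg.qr(A @ Omega)
          # Stage 2: exact SVD of the small projected matrix B = Q^T A.
          U_hat, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
          return (Q @ U_hat)[:, :k], s[:k], Vt[:k]

      # Toy "hyperspectral" matrix: 10000 pixels x 200 bands, rank 15.
      rng = np.random.default_rng(1)
      A = rng.normal(size=(10000, 15)) @ rng.normal(size=(15, 200))
      U, s, Vt = rsvd(A, 15)
      err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
      print("relative reconstruction error:", err)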

  15. A Method for Search Engine Selection using Thesaurus for Selective Meta-Search Engine

    Science.gov (United States)

    Goto, Shoji; Ozono, Tadachika; Shintani, Toramatsu

    In this paper, we propose a new method for selecting search engines on the WWW for a selective meta-search engine. In a selective meta-search engine, a method is needed that would enable selecting appropriate search engines for users' queries. Most existing methods use statistical data such as document frequency. These methods may select inappropriate search engines if a query contains polysemous words. In this paper, we describe a search engine selection method based on a thesaurus. In our method, a thesaurus is constructed from documents in a search engine and is used as a source description of the search engine. The form of a particular thesaurus depends on the documents used for its construction. Our method enables search engine selection by considering relationships between terms and overcomes the problems caused by polysemous words. Further, our method does not have a centralized broker maintaining data, such as document frequency, for all search engines. As a result, it is easy to add a new search engine, and meta-search engines become more scalable with our method compared to other existing methods.

  16. A Fast Adaptive Receive Antenna Selection Method in MIMO System

    Directory of Open Access Journals (Sweden)

    Chaowei Wang

    2013-01-01

    Full Text Available Antenna selection has been regarded as an effective method to acquire the diversity benefits of multiple antennas while potentially reducing hardware costs. This paper focuses on receive antenna selection. According to the proportion between the number of total receive antennas and the number of selected antennas, and the influence of each antenna on system capacity, we propose a fast adaptive antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems. Mathematical analysis and numerical results show that our algorithm significantly reduces the computational complexity and memory requirement and achieves considerable system capacity gain compared with the optimal selection technique at the same time.
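
    The paper's adaptive algorithm is not reproduced here; as a reference point, below is a sketch of the standard greedy capacity-based receive antenna selection that such fast methods are usually benchmarked against (equal transmit power allocation assumed):

      import numpy as np

      def capacity(H, snr=10.0):
          # MIMO capacity with equal power split over transmit antennas.
          nr, nt = H.shape
          M = np.eye(nr) + (snr / nt) * H @ H.conj().T
          return np.log2(np.linalg.det(M).real)

      def greedy_receive_select(H, n_sel, snr=10.0):
          # Add, one at a time, the receive antenna (row of H) that gives
          # the largest capacity increase.
          chosen, remaining = [], list(range(H.shape[0]))
          while len(chosen) < n_sel:
              best = max(remaining,
                         key=lambda r: capacity(H[chosen + [r], :], snr))
              chosen.append(best)
              remaining.remove(best)
          return chosen

      rng = np.random.default_rng(0)
      H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
      sel = greedy_receive_select(H, 4)
      print("antennas:", sel, "capacity:", capacity(H[sel, :]))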

  17. Selective oropharyngeal decontamination versus selective digestive decontamination in critically ill patients: a meta-analysis of randomized controlled trials

    Directory of Open Access Journals (Sweden)

    Zhao D

    2015-07-01

    Full Text Available Background: Selective digestive decontamination (SDD) and selective oropharyngeal decontamination (SOD) are associated with reduced mortality and infection rates among patients in intensive care units (ICUs); however, whether SOD has a superior effect to SDD remains uncertain. Hence, we conducted a meta-analysis of randomized controlled trials (RCTs) to compare SOD with SDD in terms of clinical outcomes and antimicrobial resistance rates in patients who were critically ill. Methods: RCTs published in PubMed, Embase, and Web of Science were systematically reviewed to compare the effects of SOD and SDD in patients who were critically ill. Outcomes included day-28 mortality, length of ICU stay, length of hospital stay, duration of mechanical ventilation, ICU-acquired bacteremia, and prevalence of antibiotic-resistant Gram-negative bacteria. Results were expressed as risk ratios (RR) with 95% confidence intervals (CIs), and weighted mean differences (WMDs) with 95% CIs. Pooled estimates were performed using a fixed-effects model or random-effects model, depending on the heterogeneity among studies. Results: A total of four RCTs involving 23,822 patients met the inclusion criteria and were included in this meta-analysis. Among patients whose admitting specialty was surgery, cardiothoracic surgery (57.3%) and neurosurgery (29.7%) were the two main types of surgery being performed. Pooled results showed that SOD had similar effects as SDD in day-28 mortality (RR =1
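
    For readers unfamiliar with the pooling step mentioned above, here is a minimal sketch of inverse-variance fixed-effect and DerSimonian-Laird random-effects pooling; the per-trial numbers below are invented placeholders, not the trial data:

      import numpy as np

      # Placeholder per-trial log risk ratios and variances (not trial data).
      yi = np.array([0.05, -0.02, 0.10, 0.01])
      vi = np.array([0.004, 0.010, 0.020, 0.002])

      w = 1 / vi                                  # inverse-variance weights
      fixed = np.sum(w * yi) / np.sum(w)          # fixed-effect estimate

      # DerSimonian-Laird between-trial variance tau^2.
      Q = np.sum(w * (yi - fixed) ** 2)
      tau2 = max(0.0, (Q - (len(yi) - 1)) /
                 (np.sum(w) - np.sum(w ** 2) / np.sum(w)))

      wr = 1 / (vi + tau2)                        # random-effects weights
      est = np.sum(wr * yi) / np.sum(wr)
      se = np.sqrt(1 / np.sum(wr))
      print("pooled RR: %.3f (95%% CI %.3f to %.3f)"
            % (np.exp(est), np.exp(est - 1.96 * se), np.exp(est + 1.96 * se)))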

  18. A comparative analysis of recruitment methods used in a randomized trial of diabetes education interventions.

    Science.gov (United States)

    Beaton, Sarah J; Sperl-Hillen, JoAnn M; Worley, Ann Von; Fernandes, Omar D; Baumer, Dorothy; Hanson, Ann M; Parker, Emily D; Busch, Maureen E; Davis, Herbert T; Spain, C Victor

    2010-11-01

    Recruitment methods heavily impact budget and outcomes in clinical trials. We conducted a post-hoc examination of the efficiency and cost of three different recruitment methods used in Journey for Control of Diabetes: the IDEA Study, a randomized controlled trial evaluating outcomes of group and individual diabetes education in New Mexico and Minnesota. Electronic databases were used to identify health plan members with diabetes and then one of the following three methods was used to recruit study participants: 1. Minnesota Method 1--Mail only (first half of recruitment period). Mailed invitations with return-response forms. 2. Minnesota Method 2--Mail and selective phone calls (second half of recruitment period). Mailed invitations with return-response forms and subsequent phone calls to nonresponders. 3. New Mexico Method 3--Mail and non-selective phone calls (full recruitment period): Mailed invitations with subsequent phone calls to all. The combined methods succeeded in meeting the recruitment goal of 623 subjects. There were 147 subjects recruited using Minnesota's Method 1, 190 using Minnesota's Method 2, and 286 using New Mexico's Method 3. Efficiency rates (percentage of invited patients who enrolled) were 4.2% for Method 1, 8.4% for Method 2, and 7.9% for Method 3. Calculated costs per enrolled subject were $71.58 (Method 1), $85.47 (Method 2), and $92.09 (Method 3). A mail-only method to assess study interest was relatively inexpensive but not efficient enough to sustain recruitment targets. Phone call follow-up after mailed invitations added to recruitment efficiency. Use of return-response forms with selective phone follow-up to non-responders was cost effective. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
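
    The large-system approximation itself is involved, but the water-filling subroutine that the iterative optimization algorithms build on is compact; a sketch of classic water-filling over the eigenmodes of a single known channel (a simplification of the covariance optimization discussed above, not the large random matrix method itself):

      import numpy as np

      def waterfilling(gains, total_power):
          # Maximize sum log2(1 + g_i p_i) s.t. sum p_i = total_power, p_i >= 0.
          idx = np.argsort(gains)[::-1]
          g = gains[idx]
          p = np.zeros_like(g)
          for k in range(len(g), 0, -1):
              mu = (total_power + np.sum(1.0 / g[:k])) / k   # water level
              cand = mu - 1.0 / g[:k]
              if cand.min() >= 0:            # all k strongest modes active
                  p[:k] = cand
                  break
          out = np.zeros_like(p)
          out[idx] = p                       # back to original mode order
          return out

      rng = np.random.default_rng(0)
      H = (rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))) / np.sqrt(2)
      gains = np.linalg.eigvalsh(H.conj().T @ H)   # eigenmode gains
      p = waterfilling(gains, total_power=4.0)
      print("powers:", np.round(p, 3),
            "capacity:", np.sum(np.log2(1 + gains * p)))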

  20. Generation of Aptamers from A Primer-Free Randomized ssDNA Library Using Magnetic-Assisted Rapid Aptamer Selection

    Science.gov (United States)

    Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih

    2017-04-01

    Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra-sequences may cause non-specific bindings, which potentially interfere with good binding for random sequences. The Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-strand DNA aptamers. No repeat selection cycle is required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive proteins (CRP) from a randomized ssDNA library containing no fixed sequences at 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced and binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, which indicated that a primer-free aptamer has high specificity toward targets. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.

  1. The Effectiveness of Feature Selection Method in Solar Power Prediction

    OpenAIRE

    Md Rahat Hossain; Amanullah Maung Than Oo; A. B. M. Shawkat Ali

    2013-01-01

    This paper empirically shows that the effect of applying selected feature subsets on machine learning techniques significantly improves the accuracy for solar power prediction. Experiments are performed using five well-known wrapper feature selection methods to obtain the solar power prediction accuracy of machine learning techniques with selected feature subsets. For all the experiments, the machine learning techniques, namely, least median square (LMS), multilayer perceptron (MLP), and supp...

  2. Alternative microbial methods: An overview and selection criteria.

    NARCIS (Netherlands)

    Jasson, V.; Jacxsens, L.; Luning, P.A.; Rajkovic, A.; Uyttendaele, M.

    2010-01-01

    This study provides an overview and criteria for the selection of a method, other than the reference method, for microbial analysis of foods. In a first part an overview of the general characteristics of rapid methods available, both for enumeration and detection, is given with reference to relevant

  3. Game Methodology for Design Methods and Tools Selection

    Science.gov (United States)

    Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois

    2014-01-01

    Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…

  4. Thermodynamic method for generating random stress distributions on an earthquake fault

    Science.gov (United States)

    Barall, Michael; Harris, Ruth A.

    2012-01-01

    This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
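
    A minimal numerical sketch of the report's core idea is to draw a random stress field whose power spectral density follows a prescribed decay; everything below (the power-law exponent, grid size, and normalization) is a placeholder for the formula the report derives from earthquake scaling relations:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 4096                                 # samples along the fault
        k = np.fft.rfftfreq(n)                   # spatial wavenumbers
        psd = np.zeros_like(k)
        psd[1:] = k[1:] ** -2.0                  # assumed power-law spectral decay
        phases = rng.uniform(0.0, 2.0 * np.pi, k.size)   # random micro-state
        spectrum = np.sqrt(psd) * np.exp(1j * phases)
        stress = np.fft.irfft(spectrum, n=n)     # random stress perturbation
        stress /= np.abs(stress).max()           # rescale to a target stress-drop level

    Selecting a sub-window of a much larger realization that happens to carry a positive perturbation of the desired size, as the report describes, then controls the rupture extent.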

  5. Delay line length selection in generating fast random numbers with a chaotic laser.

    Science.gov (United States)

    Zhang, Jianzhong; Wang, Yuncai; Xue, Lugang; Hou, Jiayin; Zhang, Beibei; Wang, Anbang; Zhang, Mingjiang

    2012-04-10

    The chaotic light signals generated by an external-cavity semiconductor laser have been experimentally demonstrated as a source for extracting fast random numbers. However, the photon round-trip time in the external cavity can cause periodicity in the random sequences. To overcome this, an exclusive-or operation on corresponding random bits in samples of the chaotic signal and its time-delayed signal is required. In this scheme, the proper selection of the delay length is a key issue. Through a large number of experiments and a theoretical analysis of the interplay between the runs test and the threshold value of the autocorrelation function, we find that when the delay at which the autocorrelation trace falls below a correlation coefficient of 0.007 is taken as the delay between the chaotic signal and its time-delayed copy, streams of random numbers can be generated with verified randomness.
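
    The selection rule lends itself to a short sketch: estimate the autocorrelation of the sampled signal, take the first lag whose correlation magnitude drops below 0.007, and XOR the bit stream with its delayed copy. The Gaussian surrogate and one-bit quantization below are stand-ins for the real multi-bit samples of the chaotic laser intensity:

        import numpy as np

        rng = np.random.default_rng(1)
        sig = rng.standard_normal(200_000)     # stand-in for sampled chaotic intensity
        x = (sig - sig.mean()) / sig.std()
        acf = np.array([np.dot(x[:-l], x[l:]) / (x.size - l) for l in range(1, 500)])
        delay = 1 + int(np.argmax(np.abs(acf) < 0.007))   # first lag below the threshold
        bits = (sig > np.median(sig)).astype(np.uint8)    # crude 1-bit quantization
        random_bits = bits[:-delay] ^ bits[delay:]        # XOR suppresses residual periodicity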

  6. Sequential methods for random-effects meta-analysis

    Science.gov (United States)

    Higgins, Julian P T; Whitehead, Anne; Simmonds, Mark

    2011-01-01

    Although meta-analyses are typically viewed as retrospective activities, they are increasingly being applied prospectively to provide up-to-date evidence on specific research questions. When meta-analyses are updated account should be taken of the possibility of false-positive findings due to repeated significance tests. We discuss the use of sequential methods for meta-analyses that incorporate random effects to allow for heterogeneity across studies. We propose a method that uses an approximate semi-Bayes procedure to update evidence on the among-study variance, starting with an informative prior distribution that might be based on findings from previous meta-analyses. We compare our methods with other approaches, including the traditional method of cumulative meta-analysis, in a simulation study and observe that it has Type I and Type II error rates close to the nominal level. We illustrate the method using an example in the treatment of bleeding peptic ulcers. Copyright © 2010 John Wiley & Sons, Ltd. PMID:21472757
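
    For contrast, the traditional cumulative meta-analysis that the authors compare against can be sketched by re-pooling with DerSimonian-Laird random-effects estimation each time a study is added; the effect sizes and within-study variances below are invented:

        import numpy as np

        y = np.array([0.30, 0.15, 0.42, 0.25, 0.10])   # hypothetical study effects
        v = np.array([0.04, 0.03, 0.05, 0.02, 0.03])   # hypothetical within-study variances
        for k in range(2, y.size + 1):                 # update after each new study
            yk, vk = y[:k], v[:k]
            w = 1.0 / vk                               # fixed-effect weights
            q = np.sum(w * (yk - np.sum(w * yk) / w.sum()) ** 2)   # Cochran's Q
            tau2 = max(0.0, (q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
            ws = 1.0 / (vk + tau2)                     # random-effects weights
            pooled, se = np.sum(ws * yk) / ws.sum(), np.sqrt(1.0 / ws.sum())
            print(k, round(pooled, 3), round(se, 3))

    Testing each interim pooled estimate at the nominal level is exactly what inflates the false-positive rate that the proposed sequential procedure is designed to control.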

  7. EMPLOYEE SELECTION METHODS IN ROMANIA: POPULARITY AND APPLICANT REACTIONS

    OpenAIRE

    Septimiu-Rareş SZABO

    2014-01-01

    This study assessed the prevalence of, and applicants’ reactions to, 21 different employee selection methods. Past studies on this topic have focused mainly on countries in Western Europe. This research investigated a sample of 142 Romanian respondents using a postal survey. The most popular selection methods used by Romanian organizations were found to be CVs, ability tests and interviews. In contrast, applicants most favor work-samples, followed by ability tests, interviews, CVs and personal...

  8. Methods for selective functionalization and separation of carbon nanotubes

    Science.gov (United States)

    Strano, Michael S. (Inventor); Usrey, Monica (Inventor); Barone, Paul (Inventor); Dyke, Christopher A. (Inventor); Tour, James M. (Inventor); Kittrell, W. Carter (Inventor); Hauge, Robert H (Inventor); Smalley, Richard E. (Inventor); Marek, legal representative, Irene Marie (Inventor)

    2011-01-01

    The present invention is directed toward methods of selectively functionalizing carbon nanotubes of a specific type or range of types, based on their electronic properties, using diazonium chemistry. The present invention is also directed toward methods of separating carbon nanotubes into populations of specific types or range(s) of types via selective functionalization and electrophoresis, and also to the novel compositions generated by such separations.

  9. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an “unprejudiced” conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
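
    The core AHP computation, deriving priority weights from a pairwise comparison matrix via its principal eigenvector and checking judgment consistency, can be sketched as follows; the judgment matrix is invented for illustration:

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],     # A[i, j]: importance of criterion i over j
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        vals, vecs = np.linalg.eig(A)
        i = int(np.argmax(vals.real))
        w = np.abs(vecs[:, i].real)
        w /= w.sum()                        # priority weights of the criteria
        n = A.shape[0]
        ci = (vals.real[i] - n) / (n - 1)   # consistency index
        cr = ci / 0.58                      # 0.58 is Saaty's random index for n = 3
        print(w, cr)                        # CR < 0.1 is conventionally acceptable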

  10. Method for Selection of Solvents for Promotion of Organic Reactions

    DEFF Research Database (Denmark)

    Gani, Rafiqul; Jiménez-González, Concepción; Constable, David J.C.

    2005-01-01

    A method to select appropriate green solvents for the promotion of a class of organic reactions has been developed. The method combines knowledge from industrial practice and physical insights with computer-aided property estimation tools for selection/design of solvents. In particular, it employs...... and the solvent-environmental properties guide the decision making process. The current method is applicable only to organic reactions occurring in the liquid phase. Another gas or solid phase, which may or may not be at equilibrium with the reacting liquid phase, may also be present. The objective of this method...

  11. Predicting Metabolic Syndrome Using the Random Forest Method

    Directory of Open Access Journals (Sweden)

    Apilak Worachartcheewan

    2015-01-01

    Aims. This study proposes a computational method for determining the prevalence of metabolic syndrome (MS) and for predicting its occurrence using the National Cholesterol Education Program Adult Treatment Panel III (NCEP ATP III) criteria. The Random Forest (RF) method is also applied to identify significant health parameters. Materials and Methods. We used data from 5,646 adults aged 18–78 years residing in Bangkok who had received an annual health check-up in 2008. MS was identified using the NCEP ATP III criteria. The RF method was applied to predict the occurrence of MS and to identify important health parameters surrounding this disorder. Results. The overall prevalence of MS was 23.70% (34.32% for males and 17.74% for females). RF accuracy for predicting MS in an adult Thai population was 98.11%. Further, based on RF, triglyceride levels were the most important health parameter associated with MS. Conclusion. RF was shown to predict MS in an adult Thai population with an accuracy >98%, and triglyceride levels were identified as the most informative variable associated with MS. Therefore, using RF to predict MS may be potentially beneficial in identifying MS status for preventing the development of diabetes mellitus and cardiovascular diseases.
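
    A minimal scikit-learn sketch of the same workflow, fitting a random forest, cross-validating its accuracy, and reading off variable importances, with synthetic data standing in for the health-check records:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        X, y = make_classification(n_samples=5000, n_features=12, n_informative=5,
                                   random_state=0)        # stand-in health parameters
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        print(cross_val_score(rf, X, y, cv=5).mean())      # predictive accuracy
        rf.fit(X, y)
        order = np.argsort(rf.feature_importances_)[::-1]  # most informative first
        print(order[:3], rf.feature_importances_[order[:3]])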

  12. UTA Method for the Consulting Firm Selection Problem

    Directory of Open Access Journals (Sweden)

    A. Tuş Işık

    2016-04-01

    Market conditions change due to the introduction of new products, unforeseen demand fluctuations, rapid changes in product life cycles, and shifting profit margins. Therefore, companies try to survive in a competitive environment by making their operational decisions strategically and systematically. Selection problems play an important role among these decisions. In the literature there are many robust MCDM (Multi-Criteria Decision Making) methods that consider conflicting selection criteria and alternatives. In this paper the consulting firm selection problem is solved with the UTA (UTility Additive) method, which is one of the MCDM methods. This method considers the preferences of the decision makers over alternatives and uses a linear programming model to obtain a utility function having a minimum deviation from those preferences. In order to illustrate the efficiency and effectiveness of the method, a real case study is presented.

  13. Using MACBETH method for supplier selection in manufacturing environment

    Directory of Open Access Journals (Sweden)

    Prasad Karande

    2013-04-01

    Supplier selection is always found to be a complex decision-making problem in a manufacturing environment. The presence of several independent and conflicting evaluation criteria, either qualitative or quantitative, makes the supplier selection problem a candidate to be solved by multi-criteria decision-making (MCDM) methods. Even though several MCDM methods have already been proposed for solving supplier selection problems, the need for an efficient method that can deal with qualitative judgments related to supplier selection still persists. In this paper, the applicability and usefulness of measuring attractiveness by a categorical-based evaluation technique (MACBETH) is demonstrated as a decision support tool for solving two real-time supplier selection problems having qualitative performance measures. The ability of the MACBETH method to quantify qualitative performance measures helps to provide a numerical judgment scale for ranking the alternative suppliers and selecting the best one. The results obtained from the MACBETH method exactly corroborate those derived by past researchers employing different mathematical approaches.

  14. A data mining approach to selecting herbs with similar efficacy: Targeted selection methods based on medical subject headings (MeSH).

    Science.gov (United States)

    Yea, Sang-Jun; Seong, BoSeok; Jang, Yunji; Kim, Chul

    2016-04-22

    Natural products have long been the most important source of ingredients in the discovery of new drugs. Moreover, since the Nagoya Protocol, finding alternative herbs with similar efficacy in traditional medicine has become a very important issue. Although random selection is a common method of finding ethno-medicinal herbs of similar efficacy, it has proved to be less effective; therefore, this paper proposes a novel targeted selection method using data mining approaches in the MEDLINE database in order to identify and select herbs with a similar degree of efficacy. From among sixteen categories of medical subject headings (MeSH) descriptors, three categories containing terms related to herbal compounds, efficacy, toxicity, and the metabolic process were selected. In order to select herbs of similar efficacy in a targeted way, we adopted a similarity measurement method based on MeSH. In order to evaluate the proposed algorithm, we built three different validation datasets which contain lists of original herbs and corresponding medicinal herbs of similar efficacy. The average area under curve (AUC) of the proposed algorithm was found to be about 500% larger than that of the random selection method. We found that the proposed algorithm puts more hits at the front of the top-10 list than the random selection method, and precisely discerns the efficacy of the herbs. It was also found that the AUC of the experiments either remained the same or increased slightly in all three validation datasets as the search range was increased. This study reveals and proves that the proposed algorithm is significantly more accurate and efficient in finding alternative herbs of similar efficacy than the random selection method. As such, it is hoped that this approach will be used in diverse applications in the ethno-pharmacology field. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
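
    In the spirit of the targeted approach (the paper's similarity measure is MeSH-based but not necessarily this one), candidate herbs can be ranked by the overlap of their MeSH descriptor profiles, for example with a Jaccard score over fabricated descriptor sets:

        def jaccard(a, b):
            return len(a & b) / len(a | b)

        # Hypothetical MeSH descriptor sets (efficacy, toxicity, metabolic terms).
        profiles = {
            "herb_A": {"anti-inflammatory", "hepatotoxicity", "CYP3A4"},
            "herb_B": {"anti-inflammatory", "CYP3A4", "analgesic"},
            "herb_C": {"antihypertensive", "diuresis"},
        }
        query = profiles["herb_A"]
        ranked = sorted(((jaccard(query, p), h) for h, p in profiles.items()
                         if h != "herb_A"), reverse=True)
        print(ranked)    # most similar efficacy first, instead of random selection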

  15. Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.

    Science.gov (United States)

    Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano

    2017-11-08

    Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. How neurons in this area respond to stimuli-and in particular, to combinations of stimuli ("mixed
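
    The circuit mechanism can be caricatured in a few lines: random feedforward weights yield mixed responses, and a Hebbian update with weight normalization adds structure. Layer sizes, the learning rate, and the rectified-linear response are illustrative choices, not the paper's exact model:

        import numpy as np

        rng = np.random.default_rng(0)
        n_in, n_out, eta = 40, 200, 0.05
        W = rng.standard_normal((n_out, n_in)) / np.sqrt(n_in)  # random feedforward weights
        conditions = rng.standard_normal((8, n_in))             # task-condition inputs
        for _ in range(500):
            x = conditions[rng.integers(8)]
            y = np.maximum(0.0, W @ x)                      # rectified population response
            W += eta * np.outer(y, x)                       # Hebbian: co-active pairs strengthen
            W /= np.linalg.norm(W, axis=1, keepdims=True)   # prevents runaway growth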

  16. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data

    KAUST Repository

    Abusamra, Heba

    2013-05-01

    Microarray technology has enriched the study of gene expression in such a way that scientists are now able to measure the expression levels of thousands of genes in a single experiment. Microarray gene expression data have gained great importance in recent years due to their role in disease diagnoses and prognoses, which helps in choosing the appropriate treatment plan for patients. This technology has ushered in a new era of molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its native nature of “high dimensional, low sample size”. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to help correctly classify different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This thesis aims at a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three different classification methods, including support vector machines, k-nearest neighbor and random forest, and eight different feature selection methods, including information gain, twoing rule, sum minority, max minority, gini index, sum of variances, t-statistics, and one-dimension support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used for this study. Different experiments have been applied to compare the performance of the classification methods with and without performing feature selection. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in
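
    The experimental grid, filter-style feature selection in front of several classifiers under five-fold cross-validation, might look as follows in scikit-learn; synthetic data replace the glioma microarrays, and mutual information stands in for the eight ranking criteria studied:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=100, n_features=2000, n_informative=20,
                                   random_state=0)   # "high dimensional, low sample size"
        for clf in (SVC(), KNeighborsClassifier(), RandomForestClassifier(random_state=0)):
            pipe = make_pipeline(SelectKBest(mutual_info_classif, k=50), clf)
            acc = cross_val_score(pipe, X, y, cv=5).mean()  # selection refit in each fold
            print(type(clf).__name__, round(acc, 3))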

  17. Selecting Optimal Parameters of Random Linear Network Coding for Wireless Sensor Networks

    DEFF Research Database (Denmark)

    Heide, Janus; Zhang, Qi; Fitzek, Frank

    2013-01-01

    This work studies how to select optimal code parameters of Random Linear Network Coding (RLNC) in Wireless Sensor Networks (WSNs). With Rateless Deluge [1] the authors proposed to apply Network Coding (NC) for Over-the-Air Programming (OAP) in WSNs, and demonstrated that with NC a significant...

  18. Review and selection of methods for structural reliability analysis

    NARCIS (Netherlands)

    Van Eekelen, A.J.

    1997-01-01

    To select a method for analyzing structural reliability problems, including optimization under reliability constraints, a literature survey was performed. In this review the most frequently used and most generally applicable methods are described. An extensive list of references is included. The

  19. The Effectiveness of Feature Selection Method in Solar Power Prediction

    Directory of Open Access Journals (Sweden)

    Md Rahat Hossain

    2013-01-01

    This paper empirically shows that applying selected feature subsets to machine learning techniques significantly improves the accuracy of solar power prediction. Experiments are performed using five well-known wrapper feature selection methods to obtain the solar power prediction accuracy of machine learning techniques with selected feature subsets. For all the experiments, the machine learning techniques, namely, least median square (LMS), multilayer perceptron (MLP), and support vector machine (SVM), are used. Afterwards, these results are compared with the solar power prediction accuracy of the same machine learning techniques (i.e., LMS, MLP, and SVM) but without applying feature selection methods (WAFS). Experiments are carried out using reliable, real-life historical meteorological data. The comparison clearly shows that LMS, MLP, and SVM provide better prediction accuracy (i.e., reduced MAE and MASE) with selected feature subsets than without them. The experimental results support a concrete verdict: devoting more attention and effort to feature subset selection, as investigated in this paper, can significantly improve the accuracy of solar power prediction.
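
    A wrapper selector in the spirit of these experiments searches feature subsets by the cross-validated error of the predictor itself. A sketch with scikit-learn, where SVR and synthetic data stand in for the paper's models and meteorological records:

        from sklearn.datasets import make_regression
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        X, y = make_regression(n_samples=300, n_features=15, n_informative=5,
                               noise=10.0, random_state=0)
        est = SVR()
        sfs = SequentialFeatureSelector(est, n_features_to_select=5, direction="forward",
                                        scoring="neg_mean_absolute_error", cv=5)
        mask = sfs.fit(X, y).get_support()
        mae_all = -cross_val_score(est, X, y, cv=5,
                                   scoring="neg_mean_absolute_error").mean()
        mae_sel = -cross_val_score(est, X[:, mask], y, cv=5,
                                   scoring="neg_mean_absolute_error").mean()
        print(mae_all, mae_sel)   # the selected subset should give the lower MAE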

  20. Collocation methods for uncertainty quantification in PDE models with random data

    KAUST Repository

    Nobile, Fabio

    2014-01-06

    In this talk we consider Partial Differential Equations (PDEs) whose input data are modeled as random fields to account for their intrinsic variability or our lack of knowledge. After parametrizing the input random fields by finitely many independent random variables, we exploit the high regularity of the solution of the PDE as a function of the input random variables and consider sparse polynomial approximations in probability (Polynomial Chaos expansion) by collocation methods. We first address interpolatory approximations, where the PDE is solved on a sparse grid of Gauss points in the probability space and the solutions thus obtained are interpolated by multivariate polynomials. We present recent results on optimized sparse grids in which the selection of points is based on a knapsack approach and relies on sharp estimates of the decay of the coefficients of the polynomial chaos expansion of the solution. Secondly, we consider regression approaches where the PDE is evaluated on randomly chosen points in the probability space and a polynomial approximation is constructed by the least squares method. We present recent theoretical results on the stability and optimality of the approximation under suitable conditions between the number of sampling points and the dimension of the polynomial space. In particular, we show that for uniform random variables, the number of sampling points has to scale quadratically with the dimension of the polynomial space to maintain the stability and optimality of the approximation. Numerical results show that this condition is sharp in the monovariate case but seems to be over-constraining in higher dimensions. The regression technique therefore seems to be attractive in higher dimensions.
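
    The regression variant is easy to illustrate in one dimension: fit a Legendre (polynomial chaos) expansion to random evaluations of a model and vary the sampling rule. The target function is a toy stand-in for a PDE quantity of interest:

        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda y: 1.0 / (1.0 + 0.5 * y**2)   # toy QoI of an input y ~ U(-1, 1)
        deg = 10                                  # polynomial space of dimension deg + 1
        yt = np.linspace(-1.0, 1.0, 2001)
        for m in (deg + 1, 2 * (deg + 1), (deg + 1) ** 2):  # linear vs quadratic rules
            ys = rng.uniform(-1.0, 1.0, m)
            coef = np.polynomial.legendre.legfit(ys, f(ys), deg)   # least squares
            err = np.abs(np.polynomial.legendre.legval(yt, coef) - f(yt)).max()
            print(m, err)   # more samples per basis function stabilizes the fit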

  1. Investigation of the paired-gear method in selectivity studies

    DEFF Research Database (Denmark)

    Sistiaga, Manu; Herrmann, Bent; Larsen, R.B.

    2009-01-01

    We estimated selectivity parameters simultaneously with the paired-gear and covered-codend methods for two fish species and four different selection systems, for a total of eight study cases. The deviation (Δ) in L50 and SR between these sampling methods observed in a former simulation study...... was repeated throughout the eight cases in this investigation. When using the paired-gear method, the distribution of the estimated L50 and SR is wider; the distribution of the estimated split parameter has a higher variability than the true split; the estimated mean L50 and SR can be biased; the estimated...... recommend that the methodology used to obtain selectivity estimates using the paired-gear method be reviewed....

  2. The adverse effect of selective cyclooxygenase-2 inhibitor on random skin flap survival in rats.

    Directory of Open Access Journals (Sweden)

    Haiyong Ren

    BACKGROUND: Cyclooxygenase-2 (COX-2) inhibitors provide desired analgesic effects after injury or surgery, but evidence suggests they also attenuate wound healing. This study investigates the effect of a COX-2 inhibitor on random skin flap survival. METHODS: The McFarlane flap model was established in 40 rats evaluated in two groups; the groups received the same volume of Parecoxib or saline by injection for 7 days. The necrotic area of the flap was measured, and specimens of the flap were stained with haematoxylin-eosin (HE) for histologic analysis. Immunohistochemical staining was performed to analyse the levels of VEGF and COX-2. RESULTS: 7 days after the operation, the flap necrotic area ratio in the study group (66.65 ± 2.81%) was significantly larger than that of the control group (48.81 ± 2.33%) (P < 0.01). Histological analysis demonstrated angiogenesis, with the mean vessel density per mm(2) being lower in the study group (15.4 ± 4.4) than in the control group (27.2 ± 4.1) (P < 0.05). The expression of COX-2 and VEGF protein in intermediate area II was evaluated in the two groups by immunohistochemistry: the expression of COX-2 in the study group was 1022.45 ± 153.1, versus 2638.05 ± 132.2 in the control group (P < 0.01); the expression of VEGF in the study and control groups was 2779.45 ± 472.0 vs. 4938.05 ± 123.6 (P < 0.01). In the COX-2 inhibitor group, the expression of COX-2 and VEGF protein was remarkably down-regulated compared with the control group. CONCLUSION: The selective COX-2 inhibitor had an adverse effect on random skin flap survival. Suppression of neovascularization induced by the low level of VEGF is the presumed biological mechanism.

  3. A simple random amplified polymorphic DNA genotyping method for field isolates of Dermatophilus congolensis.

    Science.gov (United States)

    Larrasa, J; Garcia, A; Ambrose, N C; Alonso, J M; Parra, A; de Mendoza, M Hermoso; Salazar, J; Rey, J; de Mendoza, J Hermoso

    2002-04-01

    Dermatophilus congolensis is the pathogenic actinomycete that causes dermatophilosis in cattle, lumpy wool in sheep and rain scald in horses. Phenotypic variation between isolates has previously been described, but its genetic basis, extent and importance have not been investigated. Standard DNA extraction methods are not always successful for D. congolensis due to its complex life cycle, one stage of which is encapsulated. Here we describe the development of rapid and reliable DNA extraction and random amplified polymorphic DNA (RAPD) methods that can be used for genotyping D. congolensis field isolates. Our results suggest that genotypic variation between isolates correlates with host species. Several DNA extraction methods and RAPD protocols were compared. An extraction method based on incubation of the bacterium in lysozyme, sodium dodecyl sulphate (SDS) and proteinase K treatments and phenolic extraction yielded high-quality DNA, which was used to optimize RAPD-polymerase chain reaction (PCR) protocols for two random primers. An alternative rapid, non-phenolic extraction method based on proteinase K treatment and thermal shock was selected for routine RAPD typing of isolates. DNA extracted from reference strains from cattle, sheep and horse using either method gave reproducible banding patterns with different DNA batches and different thermal cyclers. The rapid DNA extraction method and RAPD-PCR were applied to 38 D. congolensis field isolates. The band patterns of the field and type isolates correlated with host species but not with geographical location.

  4. Selectivity in analytical chemistry: two interpretations for univariate methods.

    Science.gov (United States)

    Dorkó, Zsanett; Verbić, Tatjana; Horvai, George

    2015-01-01

    Selectivity is extremely important in analytical chemistry but its definition is elusive despite continued efforts by professional organizations and individual scientists. This paper shows that the existing selectivity concepts for univariate analytical methods broadly fall in two classes: selectivity concepts based on measurement error and concepts based on response surfaces (the response surface being the 3D plot of the univariate signal as a function of analyte and interferent concentration, respectively). The strengths and weaknesses of the different definitions are analyzed and contradictions between them unveiled. The error based selectivity is very general and very safe but its application to a range of samples (as opposed to a single sample) requires the knowledge of some constraint about the possible sample compositions. The selectivity concepts based on the response surface are easily applied to linear response surfaces but may lead to difficulties and counterintuitive results when applied to nonlinear response surfaces. A particular advantage of this class of selectivity is that with linear response surfaces it can provide a concentration independent measure of selectivity. In contrast, the error based selectivity concept allows only yes/no type decision about selectivity. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. An analysis of methods for the selection of atlases for use in medical image segmentation

    Science.gov (United States)

    Prescott, Jeffrey W.; Best, Thomas M.; Haq, Furqan; Jackson, Rebecca; Gurcan, Metin

    2010-03-01

    The use of atlases has been shown to be a robust method for segmentation of medical images. In this paper we explore different methods of selection of atlases for the segmentation of the quadriceps muscles in magnetic resonance (MR) images, although the results are pertinent for a wide range of applications. The experiments were performed using 103 images from the Osteoarthritis Initiative (OAI). The images were randomly split into a training set consisting of 50 images and a testing set of 53 images. Three different atlas selection methods were systematically compared. First, a set of readers was assigned the task of selecting atlases from a training population of images, which were selected to be representative subgroups of the total population. Second, the same readers were instructed to select atlases from a subset of the training data which was stratified based on population modes. Finally, every image in the training set was employed as an atlas, with no input from the readers, and the atlas which had the best initial registration, judged by an appropriate registration metric, was used in the final segmentation procedure. The segmentation results were quantified using the Zijdenbos similarity index (ZSI). The results show that over all readers the agreement of the segmentation algorithm decreased from 0.76 to 0.74 when using population modes to assist in atlas selection. The use of every image in the training set as an atlas outperformed both manual atlas selection methods, achieving a ZSI of 0.82.

  6. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method.

    Science.gov (United States)

    Chaharsooghi, S K; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution to the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis.

  7. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    Science.gov (United States)

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in research. In recent years sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain management and examines the problem of identifying a new model for supplier selection based on an extended TBL approach in the supply chain by presenting a fuzzy multicriteria method. Linguistic values of experts' subjective preferences are expressed with fuzzy numbers, and Neofuzzy TOPSIS is proposed for finding the best solution to the supplier selection problem. Numerical results show that the proposed model is efficient for integrating sustainability into the supplier selection problem. The importance of using complementary aspects of sustainability and the Neofuzzy TOPSIS concept in the sustainable supplier selection process is shown with a sensitivity analysis. PMID:27379267
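
    Crisp TOPSIS, which the Neofuzzy variant extends by replacing crisp scores with fuzzy numbers, reduces to a few vectorized steps; the decision matrix, weights, and criterion directions below are invented:

        import numpy as np

        X = np.array([[7.0, 9.0, 30.0],    # suppliers x criteria (hypothetical scores)
                      [8.0, 7.0, 25.0],
                      [9.0, 6.0, 35.0]])
        w = np.array([0.4, 0.3, 0.3])                 # criterion weights
        benefit = np.array([True, True, False])       # False marks a cost criterion

        V = X / np.linalg.norm(X, axis=0) * w         # normalize, then weight
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        print(np.argsort(-closeness))                 # best supplier first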

  8. A model selection method for nonlinear system identification based FMRI effective connectivity analysis.

    Science.gov (United States)

    Li, Xingfeng; Coyle, Damien; Maguire, Liam; McGinnity, Thomas M; Benali, Habib

    2011-07-01

    In this paper a model selection algorithm for a nonlinear system identification method is proposed to study functional magnetic resonance imaging (fMRI) effective connectivity. Unlike most other methods, this method does not need a pre-defined structure/model for effective connectivity analysis. Instead, it relies on selecting significant nonlinear or linear covariates for the differential equations that describe the mapping relationship between brain output (fMRI response) and input (experiment design). These covariates, as well as their coefficients, are estimated based on a least angle regression (LARS) method. In the implementation of the LARS method, the corrected Akaike information criterion (AICc) algorithm and the leave-one-out (LOO) cross-validation method were employed and compared for model selection. Simulation comparisons between the dynamic causal model (DCM), the nonlinear identification method, and the model selection method for modelling single-input-single-output (SISO) and multiple-input multiple-output (MIMO) systems were conducted. Results show that the LARS model selection method is faster than DCM and achieves a compact and economical nonlinear model simultaneously. To verify the efficacy of the proposed approach, an analysis of the dorsal and ventral visual pathway networks was carried out based on three real datasets. The results show that LARS can be used for model selection in an fMRI effective connectivity study with phase-encoded, standard block, and random block designs. It is also shown that the LOO cross-validation method for nonlinear model selection yields a smaller residual sum of squares than the AICc algorithm for this study.
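
    The selection machinery, a LARS coefficient path pruned by corrected AIC, can be sketched with scikit-learn's path routine; synthetic covariates stand in for the fMRI design covariates, and the AICc expression is one common form:

        import numpy as np
        from sklearn.datasets import make_regression
        from sklearn.linear_model import lars_path

        X, y = make_regression(n_samples=120, n_features=30, n_informative=4,
                               noise=5.0, random_state=0)
        _, _, coefs = lars_path(X, y, method="lar")   # one coefficient column per step
        n = y.size
        aicc = []
        for step in range(coefs.shape[1]):
            beta = coefs[:, step]
            rss = np.sum((y - X @ beta) ** 2)
            k = np.count_nonzero(beta) + 1            # parameters incl. noise variance
            aicc.append(n * np.log(rss / n) + 2 * k + 2 * k * (k + 1) / (n - k - 1))
        best = int(np.argmin(aicc))                   # most parsimonious adequate model
        print(best, np.flatnonzero(coefs[:, best]))   # step and selected covariates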

  9. Supplier selection for furniture industry with fuzzy TOPSIS method

    OpenAIRE

    Esra Kurt Tekez; Nuray Bark

    2016-01-01

    Supplier selection is of great importance for the profitability and growth of companies in an increasingly competitive environment. The supplier selection decision is a very significant topic for group evaluation by decision makers in businesses. However, such decisions are often ambiguous and complex, as they involve many qualitative and quantitative factors as well as multiple decision-makers. Hence, fuzzy multi-criteria decision-making methods have been developed to solve these problems. The...

  10. RecRWR: A Recursive Random Walk Method for Improved Identification of Diseases

    Directory of Open Access Journals (Sweden)

    Joel Perdiz Arrais

    2015-01-01

    High-throughput methods such as next-generation sequencing or DNA microarrays lack precision, as they return hundreds of genes for a single disease profile. Several computational methods applied to physical protein interaction networks have been successfully used in the identification of the best disease candidates for each expression profile. An open problem for these methods is the ability to combine and take advantage of the wealth of biomedical data publicly available. We propose an enhanced method to improve selection of the best disease targets for a multilayer biomedical network that integrates PPI data annotated with stable knowledge from OMIM diseases and GO biological processes. We present a comprehensive validation that demonstrates the advantage of the proposed approach, Recursive Random Walk with Restarts (RecRWR). The obtained results outline the superiority of RecRWR in identifying disease candidates, especially with high levels of biological noise and benefiting from all data available.
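
    The underlying primitive, a random walk with restarts over the interaction network, iterates a damped propagation to convergence. The toy adjacency matrix is illustrative, and the paper's recursion over network layers is omitted:

        import numpy as np

        def rwr(W, seed, restart=0.3, tol=1e-10):
            # W: column-normalized adjacency; seed: restart distribution over nodes.
            p = seed.copy()
            while True:
                p_next = (1.0 - restart) * (W @ p) + restart * seed
                if np.abs(p_next - p).sum() < tol:
                    return p_next
                p = p_next

        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)   # toy PPI graph
        W = A / A.sum(axis=0, keepdims=True)        # column-normalize
        seed = np.array([1.0, 0.0, 0.0, 0.0])       # known disease gene(s)
        print(rwr(W, seed))                         # proximity scores for ranking candidates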

  11. CHull: a generic convex-hull-based model selection method.

    Science.gov (United States)

    Wilderjans, Tom F; Ceulemans, Eva; Meers, Kristof

    2013-03-01

    When analyzing data, researchers are often confronted with a model selection problem (e.g., determining the number of components/factors in principal components analysis [PCA]/factor analysis or identifying the most important predictors in a regression analysis). To tackle such a problem, researchers may apply some objective procedure, like parallel analysis in PCA/factor analysis or stepwise selection methods in regression analysis. A drawback of these procedures is that they can only be applied to the model selection problem at hand. An interesting alternative is the CHull model selection procedure, which was originally developed for multiway analysis (e.g., multimode partitioning). However, the key idea behind the CHull procedure--identifying a model that optimally balances model goodness of fit/misfit and model complexity--is quite generic. Therefore, the procedure may also be used when applying many other analysis techniques. The aim of this article is twofold. First, we demonstrate the wide applicability of the CHull method by showing how it can be used to solve various model selection problems in the context of PCA, reduced K-means, best-subset regression, and partial least squares regression. Moreover, a comparison of CHull with standard model selection methods for these problems is performed. Second, we present the CHULL software, which may be downloaded from http://ppw.kuleuven.be/okp/software/CHULL/, to assist the user in applying the CHull procedure.
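
    A bare-bones rendition of the CHull idea, assuming higher fit is better: keep only models on the upper convex boundary of the (complexity, fit) plot and pick the elbow with the largest scree-test ratio. The boundary cases handled by the CHULL software are skipped, and the numbers are invented:

        import numpy as np

        complexity = np.array([1.0, 2, 3, 4, 5, 6])            # e.g., number of components
        fit = np.array([0.55, 0.74, 0.82, 0.84, 0.85, 0.855])  # hypothetical goodness of fit

        hull = [0, 1]                       # upper convex hull: slopes must decrease
        for j in range(2, fit.size):
            hull.append(j)
            while len(hull) >= 3:
                a, b, c = hull[-3:]
                s1 = (fit[b] - fit[a]) / (complexity[b] - complexity[a])
                s2 = (fit[c] - fit[b]) / (complexity[c] - complexity[b])
                if s1 <= s2:
                    hull.pop(-2)            # middle point lies under the hull; drop it
                else:
                    break

        st = [((fit[b] - fit[a]) / (complexity[b] - complexity[a])) /
              ((fit[c] - fit[b]) / (complexity[c] - complexity[b]))
              for a, b, c in zip(hull, hull[1:], hull[2:])]
        print(hull[1 + int(np.argmax(st))])   # model balancing fit against complexity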

  12. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir’s water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model summarily has three foci. First, this study uses five imputation methods to directly delete the missing value. Second, we identified the key variable via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir’s water level. This was done to compare with the listing method under the forecasting error. These experimental results indicate that the Random Forest forecasting model when applied to variable selection with full variables has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help determine five forecast methods used here to improve the forecasting capability.

  13. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.

    Science.gov (United States)

    Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan

    2017-01-01

    Reservoirs are important for households and impact the national economy. This paper proposed a time-series forecasting model based on estimating a missing value followed by variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on ordering of the data as a research dataset. The proposed time-series forecasting model summarily has three foci. First, this study uses five imputation methods to directly delete the missing value. Second, we identified the key variable via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level. This was done to compare with the listing method under the forecasting error. These experimental results indicate that the Random Forest forecasting model when applied to variable selection with full variables has better forecasting performance than the listing model. In addition, this experiment shows that the proposed variable selection can help determine five forecast methods used here to improve the forecasting capability.

  14. Selecting the Best Project Using the Fuzzy ELECTRE Method

    Directory of Open Access Journals (Sweden)

    Babak Daneshvar Rouyendegh

    2012-01-01

    Selecting projects is often a difficult task. It is complicated because there is usually more than one dimension for measuring the impact of each project, especially when there is more than one decision maker. This paper aims to present the fuzzy ELECTRE approach for prioritizing the most effective projects to improve decision making. To begin with, ELECTRE is one of the most extensively used methods for solving multicriteria decision making (MCDM) problems. The ELECTRE evaluation method is widely recognized for high-performance policy analysis involving both qualitative and quantitative criteria. In this paper, we consider a real application of project selection using the opinion of experts to be applied into a model by one of the group decision makers, called the fuzzy ELECTRE method. A numerical example for project selection is given to clarify the main result developed in this paper.

  15. Combining AHP and DEA Methods for Selecting a Project Manager

    Directory of Open Access Journals (Sweden)

    Baruch Keren

    2014-07-01

    A project manager has a major influence on the success or failure of a project. A good project manager can match the strategy and objectives of the organization with the goals of the project. Therefore, the selection of the appropriate project manager is a key factor for the success of the project. A potential project manager is judged by his or her proven performance and personal qualifications. This paper proposes a method to calculate the weighted scores and the full ranking of candidates for managing a project, and to select the best of those candidates. The proposed method combines specific methodologies, Data Envelopment Analysis (DEA) and the Analytic Hierarchy Process (AHP), and uses DEA ranking methods to enhance selection.
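
    The DEA half of the combination can be sketched as the input-oriented CCR model, one small linear program per candidate; the input/output data are fabricated, and the paper's coupling with AHP-derived scores is not shown:

        import numpy as np
        from scipy.optimize import linprog

        inputs = np.array([[2.0, 3.0, 4.0],      # m inputs x n candidates
                           [5.0, 4.0, 6.0]])
        outputs = np.array([[8.0, 9.0, 7.0]])    # s outputs x n candidates
        m, n = inputs.shape
        s = outputs.shape[0]

        for k in range(n):                       # efficiency of candidate k
            c = np.r_[1.0, np.zeros(n)]          # minimize theta; vars = [theta, lambda]
            A_in = np.hstack([-inputs[:, [k]], inputs])       # X @ lam <= theta * x_k
            A_out = np.hstack([np.zeros((s, 1)), -outputs])   # Y @ lam >= y_k
            res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -outputs[:, k]],
                          bounds=[(0, None)] * (n + 1), method="highs")
            print(k, round(res.x[0], 3))         # theta = 1 marks an efficient candidate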

  16. Comparison of some selected methods for accident investigation.

    Science.gov (United States)

    Sklet, Snorre

    2004-07-26

    Even though the focus on risk management is increasing in our society, major accidents resulting in several fatalities seem to be unavoidable in some industries. Since the consequences of such major accidents are unacceptable, a thorough investigation of the accidents should be performed in order to learn from what has happened and prevent future accidents. During the last decades, a number of methods for accident investigation have been developed. Each of these methods has different areas of application and different qualities and deficiencies. A combination of several methods ought to be used in a comprehensive investigation of a complex accident. This paper gives a brief description of a selection of some important, recognised, and commonly used methods for investigation of accidents. Further, the selected methods are compared according to important characteristics.

  17. Personal name in Igbo Culture: A dataset on randomly selected personal names and their statistical analysis.

    Science.gov (United States)

    Okagbue, Hilary I; Opanuga, Abiodun A; Adamu, Muminu O; Ugwoke, Paulinus O; Obasi, Emmanuela C M; Eze, Grace A

    2017-12-01

    This data article contains a statistical analysis of Igbo personal names and a sample of randomly selected such names. It is presented as the following: 1). A simple random sample of Igbo personal names and the gender associated with each name. 2). The distribution of the vowels, consonants and letters of the alphabet in the personal names. 3). The distribution of name length. 4). The distribution of initial and terminal letters of Igbo personal names. The significance of the data is discussed.

  18. Control selection methods in recent case-control studies conducted as part of infectious disease outbreaks.

    Science.gov (United States)

    Waldram, Alison; McKerr, Caoimhe; Gobin, Maya; Adak, Goutam; Stuart, James M; Cleary, Paul

    2015-06-01

    Successful investigation of national outbreaks of communicable disease relies on rapid identification of the source. Case-control methodologies are commonly used to achieve this. We assessed control selection methods used in recently published case-control studies for methodological and resource issues to determine if a standard approach could be identified. Neighbourhood controls were the most frequently used method in 53 studies of a range of different sizes, infections and settings. The most commonly used method of data collection was face to face interview. Control selection issues were identified in four areas: method of identification of controls, appropriateness of controls, ease of recruitment of controls, and resource requirements. Potential biases arising from the method of control selection were identified in half of the studies assessed. There is a need to develop new ways of selecting controls in a rapid, random and representative manner to improve the accuracy and timeliness of epidemiological investigations and maximise the effectiveness of public health interventions. Innovative methods such as prior recruitment of controls could improve timeliness and representativeness of control selection.

  19. A Comparative Study of Feature Selection and Classification Methods for Gene Expression Data of Glioma

    KAUST Repository

    Abusamra, Heba

    2013-11-01

    Microarray gene expression data have gained great importance in recent years due to their role in disease diagnoses and prognoses, which helps in choosing the appropriate treatment plan for patients. This technology has ushered in a new era of molecular classification. Interpreting gene expression data remains a difficult problem and an active research area due to its native nature of “high dimensional low sample size”. Such problems pose great challenges to existing classification methods. Thus, effective feature selection techniques are often needed in this case to help correctly classify different tumor types and consequently lead to a better understanding of genetic signatures as well as improved treatment strategies. This paper aims at a comparative study of state-of-the-art feature selection methods, classification methods, and combinations of them, based on gene expression data. We compared the efficiency of three different classification methods including: support vector machines, k-nearest neighbor and random forest, and eight different feature selection methods, including: information gain, twoing rule, sum minority, max minority, gini index, sum of variances, t-statistics, and one-dimension support vector machine. Five-fold cross validation was used to evaluate the classification performance. Two publicly available gene expression data sets of glioma were used in the experiments. Results revealed the important role of feature selection in classifying gene expression data. By performing feature selection, the classification accuracy can be significantly boosted by using a small number of genes. The relationship of features selected in different feature selection methods is investigated and the most frequent features selected in each fold among all methods for both datasets are evaluated.

  20. Randomized controlled trial of internal and external targeted temperature management methods in post-cardiac arrest patients.

    Science.gov (United States)

    Look, Xinqi; Li, Huihua; Ng, Mingwei; Lim, Eric Tien Siang; Pothiawala, Sohil; Tan, Kenneth Boon Kiat; Sewa, Duu Wen; Shahidah, Nur; Pek, Pin Pin; Ong, Marcus Eng Hock

    2017-07-05

    Targeted temperature management post-cardiac arrest is currently implemented using various methods, broadly categorized as internal and external. This study aimed to evaluate survival-to-hospital discharge and neurological outcomes (Glasgow-Pittsburgh Score) of post-cardiac arrest patients undergoing internal cooling versus external cooling. A randomized controlled trial of post-resuscitation cardiac arrest patients was conducted from October 2008 to September 2014. Patients were randomized to either internal or external cooling methods. Historical controls matched by age and gender were selected. Analysis using SPSS version 21.0 presented descriptive statistics and frequencies, while univariate logistic regression was done using R 3.1.3. 23 patients were randomized to internal cooling and 22 patients to external cooling, and 42 matched controls were selected. No significant difference was seen between internal and external cooling in terms of survival, neurological outcomes and complications. However, in the internal cooling arm, there was a lower risk of developing overcooling (p=0.01) and rebound hyperthermia (p=0.02). Compared to normothermia, internal cooling had higher survival (OR=3.36, 95% CI=(1.130, 10.412)) and a lower risk of developing cardiac arrhythmias (OR=0.18, 95% CI=(0.04, 0.63)). Subgroup analysis showed those with a cardiac cause of arrest (OR=4.29, 95% CI=(1.26, 15.80)) and sustained ROSC (OR=5.50, 95% CI=(1.64, 20.39)) had better survival with internal cooling compared to normothermia. Cooling curves showed tighter temperature control for internal compared to external cooling. Internal cooling can potentially provide better survival-to-hospital discharge outcomes and reduce cardiac arrhythmia complications in carefully selected patients as compared to normothermia. Copyright © 2017. Published by Elsevier Inc.

  1. A comparison of methods for representing sparsely sampled random quantities.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose; Swiler, Laura Painton; Urbina, Angel; Mullins, Joshua

    2013-09-01

    This report discusses the treatment of uncertainties stemming from relatively few samples of random quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse data samples it is not practical to have a goal of accurately estimating the underlying probability density function (PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a specified percentile range of the actual PDF, say the range between the 0.025 and 0.975 percentiles, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative; that is, it should minimally over-estimate the desired percentile range of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem interesting and difficult. In this report, five uncertainty representation techniques are characterized for their performance on twenty-one test problems (over thousands of trials for each problem) according to these two opposing objectives and other performance measures. Two of the methods, statistical Tolerance Intervals and a kernel density approach specifically developed for handling sparse data, exhibit significantly better overall performance than the others.
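
    One of the characterized techniques, the two-sided normal tolerance interval, can be sketched with Howe's classical approximation to the k-factor; the sparse sample is synthetic:

        import numpy as np
        from scipy import stats

        def tolerance_interval(x, coverage=0.95, confidence=0.95):
            # Two-sided normal tolerance interval, Howe's approximation.
            n = x.size
            z = stats.norm.ppf((1.0 + coverage) / 2.0)
            chi2 = stats.chi2.ppf(1.0 - confidence, n - 1)
            k = z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)
            return x.mean() - k * x.std(ddof=1), x.mean() + k * x.std(ddof=1)

        sample = np.random.default_rng(0).normal(10.0, 2.0, size=8)   # only 8 observations
        print(tolerance_interval(sample))   # conservatively bounds the central 95%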

  2. Efficient randomized methods for stability analysis of fluids systems

    Science.gov (United States)

    Dawson, Scott; Rowley, Clarence

    2016-11-01

    We show that probabilistic algorithms that have recently been developed for the approximation of large matrices can be utilized to numerically evaluate the properties of linear operators in fluids systems. In particular, we present an algorithm that is well suited for optimal transient growth (i.e., nonmodal stability) analysis. For non-normal systems, such analysis can be important for analyzing local regions of convective instability, and in identifying high-amplitude transients that can trigger nonlinear instabilities. Our proposed algorithms are easy to wrap around pre-existing timesteppers for linearized forward and adjoint equations, are highly parallelizable, and come with known error bounds. Furthermore, they allow for efficient computation of optimal growth modes for numerous time horizons simultaneously. We compare the proposed algorithm to both direct matrix-forming and Krylov subspace approaches on a number of test problems. We will additionally discuss the potential for randomized methods to assist more broadly in the speed-up of algorithms for analyzing both fluids data and operators. Supported by AFOSR Grant FA9550-14-1-0289.
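
    The flavor of these algorithms is captured by the basic randomized SVD of Halko et al., which touches the operator only through matrix products and so wraps naturally around a linearized timestepper; a dense test matrix stands in for the propagator here:

        import numpy as np

        def randomized_svd(A, k, oversample=10, seed=0):
            # Approximate the k leading singular triplets of A.
            rng = np.random.default_rng(seed)
            Y = A @ rng.standard_normal((A.shape[1], k + oversample))  # sample the range
            Q, _ = np.linalg.qr(Y)                      # orthonormal basis for the range
            Ub, sv, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)   # small problem
            return Q @ Ub[:, :k], sv[:k], Vt[:k]

        A = np.random.default_rng(1).standard_normal((1000, 1000))
        U, sv, Vt = randomized_svd(A, k=5)
        print(sv)   # with A a propagator, sv[0]**2 gives the optimal transient growth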

  3. Statistical methods for mechanical characterization of randomly reinforced media

    Science.gov (United States)

    Tashkinov, Mikhail

    2017-12-01

    Advanced materials with heterogeneous microstructure attract extensive interest from researchers and engineers due to their combination of unique properties and the ability to create materials that are most suitable for each specific application. One of the challenging tasks is the development of models of mechanical behavior for such materials, since the precision of the obtained numerical results depends highly on how well the features of the heterogeneous microstructure are taken into account. In most cases, numerical modeling of composite structures is based on multiscale approaches that require special techniques for establishing connections between parameters at different scales. This work offers a review of the instruments of statistics and probability theory that are used for mechanical characterization of heterogeneous media with random positions of reinforcements. Such statistical descriptors are involved in the assessment of correlations between the microstructural components and are part of mechanical theories that require formalization of the information about microstructural morphology. In particular, the paper addresses the application of statistical instruments to geometry description and media reconstruction, as well as their utilization in homogenization methods and in local stochastic stress and strain field analysis.

  4. Selection of Construction Methods: A Knowledge-Based Approach

    Directory of Open Access Journals (Sweden)

    Ximena Ferrada

    2013-01-01

    The appropriate selection of the construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that enables the intelligent use of corporate experience and information and helps to improve the selection of construction methods for a project. A knowledge-based system to support this decision-making process is then proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way the method selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. In conclusion, the CMKS was perceived as a valuable tool for construction method selection, helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects.

  5. Personnel selection using group fuzzy AHP and SAW methods

    Directory of Open Access Journals (Sweden)

    Ali Reza Afshari

    2017-01-01

    Full Text Available Personnel evaluation and selection is a very important activity for enterprises. Different jobs need different abilities, and the criteria that can measure those abilities differ accordingly. A suitable and flexible method is therefore needed to evaluate the performance of each candidate against the requirements of different jobs with respect to each criterion. The Analytic Hierarchy Process (AHP) is one of the multi-criteria decision-making methods derived from paired comparisons. Simple Additive Weighting (SAW) is the most frequently used multi-attribute decision technique; it is based on the weighted average. The fuzzy approach successfully models the ambiguity and imprecision associated with the pairwise comparison process and reduces personal bias. This study analyzes the Analytic Hierarchy Process in order to make the recruitment process more reasonable, based on a fuzzy multiple-criteria decision-making model, to achieve the goal of personnel selection. Finally, an example is implemented to demonstrate the practicability of the proposed method.
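
    To make the SAW step concrete, the following sketch ranks three hypothetical candidates by a weighted average of normalized criterion scores; all ratings and weights are invented for illustration:

        import numpy as np

        # Rows: candidates; columns: criteria (e.g., experience, skill, teamwork).
        ratings = np.array([[7.0, 9.0, 6.0],
                            [8.0, 6.0, 8.0],
                            [6.0, 8.0, 9.0]])
        weights = np.array([0.5, 0.3, 0.2])        # criterion weights, summing to 1

        # SAW: normalize each (benefit) criterion by its column maximum,
        # then score every candidate by the weighted average.
        normalized = ratings / ratings.max(axis=0)
        scores = normalized @ weights
        print(scores, scores.argmax())             # scores and best candidate index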

  6. Toward optimal feature selection using ranking methods and classification algorithms

    Directory of Open Access Journals (Sweden)

    Novaković Jasmina

    2011-01-01

    Full Text Available We present a comparison between several feature ranking methods used on two real datasets. We consider six ranking methods that can be divided into two broad categories: statistical and entropy-based. Four supervised learning algorithms are adopted to build models, namely IB1, Naive Bayes, the C4.5 decision tree and the RBF network. We show that the selection of ranking methods can be important for classification accuracy. In our experiments, ranking methods with different supervised learning algorithms give quite different results for balanced accuracy. Our cases confirm that, in order to be sure that a subset of features giving the highest accuracy has been selected, the use of many different indices is recommended.

  7. Application of random forests methods to diabetic retinopathy classification analyses.

    Directory of Open Access Journals (Sweden)

    Ramon Casanova

    Full Text Available BACKGROUND: Diabetic retinopathy (DR) is one of the leading causes of blindness in the United States and worldwide. DR is a silent disease that may go unnoticed until it is too late for effective treatment. Therefore, early detection could improve the chances of therapeutic interventions that would alleviate its effects. METHODOLOGY: Graded fundus photography and systemic data from 3443 ACCORD-Eye Study participants were used to estimate Random Forest (RF) and logistic regression classifiers. We studied the impact of sample size on classifier performance and the possibility of using RF-generated class conditional probabilities as metrics describing DR risk. RF measures of variable importance were used to detect factors that affect classification performance. PRINCIPAL FINDINGS: Both types of data were informative when discriminating participants with or without DR. RF-based models produced much higher classification accuracy than those based on logistic regression. Combining both types of data did not increase accuracy but did increase statistical discrimination of healthy participants who subsequently did or did not have DR events during four years of follow-up. RF variable importance criteria revealed that microaneurysm counts in both eyes played the most important role in discrimination among the graded fundus variables, while the number of medicines and diabetes duration were the most relevant among the systemic variables. CONCLUSIONS AND SIGNIFICANCE: We have introduced RF methods to DR classification analyses based on fundus photography data. In addition, we propose an approach to DR risk assessment based on metrics derived from graded fundus photography and systemic data. Our results suggest that RF methods could be a valuable tool for diagnosing DR and evaluating its progression.
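
    The RF workflow described above maps onto a few lines of scikit-learn; in the sketch below, synthetic data stand in for the ACCORD-Eye variables, and the class-conditional probability serves as the proposed risk metric. This is an illustration, not the authors' code:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.standard_normal((500, 20))               # stand-in clinical features
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(500) > 0).astype(int)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)

        risk = rf.predict_proba(X_te)[:, 1]                # class-conditional probability as a risk score
        ranking = rf.feature_importances_.argsort()[::-1]  # variable importance ranking
        print(risk[:5], ranking[:3])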

  8. Selection of Voice Therapy Methods. Results of an Online Survey.

    Science.gov (United States)

    Burg, Iris; Meier, Birte; Nolte, Katharina; Oppermann, Tina; Rogg, Verena; Beushausen, Ulla

    2015-11-01

    Providing an evidence basis for voice therapy in the German-speaking countries faces the challenge that, for historical reasons, a variety of direct voice therapy methods is available. The aim of this study was to clarify which therapy methods are chosen and the principles underlying this selection. An online survey was implemented to identify to what extent the variety of methods described in theory is also applied in practice. A total of 434 voice therapists in Germany, Austria, and Switzerland were asked, among other things, which methods they prefer. A significant majority of therapists do not apply one specific method but rather work with a unique combination of direct voice therapy methods for individual clients. These results show that the variety of methods described in the literature is also applied in voice therapy practice. The combination of methods becomes apparent in the choice of exercises. The type of voice disorder plays no decisive role in the method selection process, whereas certain patient variables do have an influence on this process. In particular, the patients' movement restrictions, their state of mind or mood on a given day, and aspects of learning theory are taken into account. The results suggest that a patient-oriented selection of appropriate exercises is of primary importance to voice therapists and that they rarely focus on specific direct voice therapy methods. It becomes clear that an evaluation of single methods does not correspond to practical experience; an overall evaluation of voice therapy therefore appears to be more useful.

  9. A proposed method for world weightlifting championships team selection.

    Science.gov (United States)

    Chiu, Loren Z F

    2009-08-01

    The caliber of competitors at the World Weightlifting Championships (WWC) has increased greatly over the past 20 years. As the WWC are the primary qualifiers for Olympic slots (1996 to present), it is imperative for a nation to select team members who will finish with a high placing and score team points. Previous selection methods were based on a simple percentage system. Analysis of the results from the 2006 and 2007 WWC indicates a curvilinear trend in each weight class, suggesting a simple percentage system will not maximize the number of team points earned. To maximize team points, weightlifters should be selected based on their potential to finish in the top 25. A 5-tier ranking system is proposed that should ensure the athletes with the greatest potential to score team points are selected.

  10. A Selection Method for Pipe Network Boosting Plans

    Science.gov (United States)

    Qiu, Weiwei; Li, Mengyao; Weng, Haoyang

    2017-12-01

    Based on fuzzy mathematics theory, a multi-objective fuzzy comprehensive evaluation method for the selection of pipe network boosting plans is proposed, built on computing a relative membership matrix and a weight vector for the evaluation indexes. The example results show that the multi-objective fuzzy comprehensive evaluation method, which combines the indexes and the fuzzy relationships between them, reflects actual conditions well and can provide a reference for deciding among pipe network boosting plans.
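
    A minimal sketch of the fuzzy composition at the core of such an evaluation, with an invented membership matrix and weight vector:

        import numpy as np

        # Membership matrix R: rows are evaluation indexes (e.g., cost, pressure
        # gain, reliability); columns are membership grades in {poor, fair, good}.
        R = np.array([[0.1, 0.3, 0.6],
                      [0.2, 0.5, 0.3],
                      [0.0, 0.4, 0.6]])
        w = np.array([0.4, 0.35, 0.25])   # index weight vector, summing to 1

        # Weighted fuzzy composition gives the plan's membership in each grade;
        # the maximum-membership principle then assigns the overall grade.
        b = w @ R
        print(b, b.argmax())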

  11. An iterative method for selecting degenerate multiplex PCR primers.

    Science.gov (United States)

    Souvenir, Richard; Buhler, Jeremy; Stormo, Gary; Zhang, Weixiong

    2007-01-01

    Single-nucleotide polymorphism (SNP) genotyping is an important molecular genetics process, which can produce results that will be useful in the medical field. Because of inherent complexities in DNA manipulation and analysis, many different methods have been proposed for a standard assay. One of the proposed techniques for performing SNP genotyping requires amplifying regions of DNA surrounding a large number of SNP loci. To automate a portion of this particular method, it is necessary to select a set of primers for the experiment. Selecting these primers can be formulated as the Multiple Degenerate Primer Design (MDPD) problem. The Multiple, Iterative Primer Selector (MIPS) is an iterative beam-search algorithm for MDPD. Theoretical and experimental analyses show that this algorithm performs well compared with the limits of degenerate primer design. Furthermore, MIPS outperforms an existing algorithm that was designed for a related degenerate primer selection problem.

  12. Bayesian Variable Selection Methods for Matched Case-Control Studies.

    Science.gov (United States)

    Asafu-Adjei, Josephine; Tadesse, Mahlet G; Coull, Brent; Balasubramanian, Raji; Lev, Michael; Schwamm, Lee; Betensky, Rebecca

    2017-01-31

    Matched case-control designs are currently used in many biomedical applications. To ensure high efficiency and statistical power in identifying features that best discriminate cases from controls, it is important to account for the use of matched designs. However, in the setting of high dimensional data, few variable selection methods account for matching. Bayesian approaches to variable selection have several advantages, including the fact that such approaches visit a wider range of model subsets. In this paper, we propose a variable selection method to account for case-control matching in a Bayesian context and apply it using simulation studies, a matched brain imaging study conducted at Massachusetts General Hospital, and a matched cardiovascular biomarker study conducted by the High Risk Plaque Initiative.

  13. A method of extracting the number of trial participants from abstracts describing randomized controlled trials.

    Science.gov (United States)

    Hansen, Marie J; Rasmussen, Nana Ø; Chung, Grace

    2008-01-01

    We have developed a method for extracting the number of trial participants from abstracts describing randomized controlled trials (RCTs); the number of trial participants may be an indication of the reliability of the trial. The method depends on statistical natural language processing. The number of interest was determined by a binary supervised classification based on a support vector machine algorithm. The method was trialled on 223 abstracts in which the number of trial participants was identified manually to act as a gold standard. Automatic extraction resulted in 2 false-positive and 19 false-negative classifications. The algorithm was capable of extracting the number of trial participants with an accuracy of 97% and an F-measure of 0.84. The algorithm may improve the selection of relevant articles in regard to question-answering, and hence may assist in decision-making.

  14. Assessment of Digital Access Control Methods Used by Selected ...

    African Journals Online (AJOL)

    Assessment of Digital Access Control Methods Used by Selected Academic Libraries in South-West Nigeria. ... information professionals with the knowledge that would enable them to establish an effective strategy to protect e-resources from such abuses as plagiarism, piracy and infringement of intellectual property rights.

  15. Standard methods for rearing and selection of Apis mellifera queens

    DEFF Research Database (Denmark)

    Büchler, Ralph; Andonov, Sreten; Bienefeld, Kaspar

    2013-01-01

    Here we cover a wide range of methods currently in use and recommended in modern queen rearing, selection and breeding. The recommendations are meant to equally serve as standards for both scientific and practical beekeeping purposes. The basic conditions and different management techniques for q...

  16. Simulated Performance Evaluation of a Selective Tracker Through Random Scenario Generation

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar

    2006-01-01

    The paper presents a simulation study on the performance of a target tracker using a selective track splitting filter algorithm through a random scenario implemented on a digital signal processor. In a typical track splitting filter, all the observations that fall inside a likelihood ellipse are used for update; in our proposed selective track splitting filter, however, a smaller number of observations is used for track update. Much of the previous performance work [1] has been done on specific (deterministic) scenarios. One of the reasons for considering the specific scenarios, which were...

  17. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  18. A new DEA-GAHP method for supplier selection problem

    Directory of Open Access Journals (Sweden)

    Behrooz Ahadian

    2012-10-01

    Full Text Available Supplier selection is one of the most important decisions made in supply chain management, and the supplier evaluation problem has been at the center of supply chain researchers' attention in recent years. Managers regard some of these studies and methods as inappropriate because simple weighted-scoring methods are generally based on the subjective opinions and judgments of the decision-making units involved in the supplier evaluation process, yielding imprecise and even unreliable results. This paper proposes a methodology that integrates data envelopment analysis (DEA) and the group analytical hierarchy process (GAHP) for evaluating and selecting the most efficient supplier. We develop a methodology consisting of six steps, each of which is introduced in turn; finally, the applicability of the proposed method is demonstrated by assessing 12 suppliers in a numerical example.

  19. Some selected quantitative methods of thermal image analysis in Matlab.

    Science.gov (United States)

    Koprowski, Robert

    2016-05-01

    The paper presents a new algorithm based on selected automatic quantitative methods for analysing thermal images and shows the practical implementation of these image analysis methods in Matlab. It enables fully automated and reproducible measurements of selected parameters in thermal images. The paper also shows two examples of the use of the proposed image analysis methods, for the skin of a human foot and of a face. The full source code of the developed application is provided as an attachment. [Figure: the main window of the program during dynamic analysis of the foot thermal image.]

  20. Pyrochemical and Dry Processing Methods Program. A selected bibliography

    Energy Technology Data Exchange (ETDEWEB)

    McDuffie, H.F.; Smith, D.H.; Owen, P.T.

    1979-03-01

    This selected bibliography with abstracts was compiled to provide information support to the Pyrochemical and Dry Processing Methods (PDPM) Program sponsored by DOE and administered by the Argonne National Laboratory. Objectives of the PDPM Program are to evaluate nonaqueous methods of reprocessing spent fuel as a route to the development of proliferation-resistant and diversion-resistant methods for widespread use in the nuclear industry. Emphasis was placed on the literature indexed in the ERDA/DOE Energy Data Base (EDB). The bibliography includes indexes to authors, subject descriptors, EDB subject categories, and titles.

  1. Selective nerve root blocks vs. caudal epidural injection for single level prolapsed lumbar intervertebral disc - A prospective randomized study.

    Science.gov (United States)

    Singh, Sudhir; Kumar, Sanjiv; Chahal, Gaurav; Verma, Reetu

    2017-01-01

    Chronic lumbar radiculopathy has a lifetime prevalence of 5.3% in men and 3.7% in women. It usually resolves spontaneously, but up to 30% of cases will have pronounced symptoms even after one year. A prospective randomized single-blind study was conducted to compare the efficacy of caudal epidural steroid injection and selective nerve root block (SNRB) in the management of pain and disability in cases of lumbar disc herniation. Eighty patients with confirmed single-level lumbar disc herniation were divided equally into two groups, (a) a caudal epidural group and (b) a selective nerve root block group, by a computer-generated random allocation method. The caudal group received three injections of steroid mixed with local anesthetic, while the SNRB group received a single injection of steroid mixed with local anesthetic. Patients were assessed for pain relief and reduction in disability. In the SNRB group, pain was reduced by more than 50% for up to 6 months, while in the caudal group a reduction of more than 50% was maintained up to 1 year. The reduction in ODI in the SNRB group was 52.8% at 3 months, 48.6% at 6 months, and 46.7% at 1 year, while in the caudal group the improvement was 59.6%, 64.6%, 65.1%, and 65.4% at the corresponding follow-up periods. Caudal epidural block is an easy and safe method with better pain relief and improvement in functional disability than selective nerve root block. The selective nerve root block injection is technically more demanding and has to be given by a skilled anesthetist.

  2. Hyperspectral image classification based on NMF Features Selection Method

    Science.gov (United States)

    Abe, Bolanle T.; Jordaan, J. A.

    2013-12-01

    Hyperspectral instruments are capable of collecting hundreds of images, corresponding to wavelength channels, for the same area on the earth's surface. Due to the huge number of features (bands) in hyperspectral imagery, land cover classification procedures are computationally expensive and pose a problem known as the curse of dimensionality. In addition, high correlation among contiguous bands increases the redundancy within the bands. Hence, dimension reduction of hyperspectral data is crucial for obtaining good classification accuracy. This paper presents a new feature selection technique: a Non-negative Matrix Factorization (NMF) algorithm is proposed to obtain reduced, relevant features in the input domain of each class label, with the aim of reducing classification error and the dimensionality of the classification problem. The Indian Pines dataset of Northwest Indiana is used to evaluate the performance of the proposed method through feature selection and classification experiments. The Waikato Environment for Knowledge Analysis (WEKA) data mining framework is selected as the tool to implement the classification using Support Vector Machines and a Neural Network. The selected feature subsets are subjected to land cover classification to investigate the performance of the classifiers and how the feature set size affects classification accuracy. The results obtained show that the performances of the classifiers are significant. The study makes a positive contribution to the problems of hyperspectral imagery by exploring NMF, SVMs and NNs to improve classification accuracy. The performances of the classifiers are valuable for decision makers weighing tradeoffs between method accuracy and method complexity.
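
    A small sketch of NMF-based dimension reduction with scikit-learn, using random non-negative data in place of a hyperspectral cube; the component matrix hints at which bands carry weight, which is the spirit of the proposed selection. Shapes and parameters are illustrative only:

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        X = np.abs(rng.standard_normal((100, 200)))     # pixels x spectral bands, non-negative

        nmf = NMF(n_components=10, init='nndsvda', max_iter=500, random_state=0)
        W = nmf.fit_transform(X)      # reduced features per pixel, fed to a classifier
        H = nmf.components_           # basis spectra; large entries flag influential bands
        print(W.shape, H.shape)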

  3. Faculty input in book selection: a comparison of alternative methods.

    Science.gov (United States)

    Bell, J A; Bredderman, P J; Stangohr, M K; O'Brien, K F

    1987-07-01

    In an era of tight funding, academic medical center libraries need to determine their users' needs in order to provide cost-effective resource collections. Although faculty input is valuable, it is impractical to impose such ongoing responsibility on faculty members. This study tested an alternative method by comparing faculty preferences in discipline-specific subjects with faculty choices on corresponding discipline-specific, new-book approval slips from a vendor. Collection development librarian selections, based on formal selection criteria, were evaluated against both measures of faculty preferences. It was found that faculty members' subject ratings did not accurately predict their book choices. Implications of this and the other findings are discussed.

  4. Randomized BioBrick assembly: a novel DNA assembly method for randomizing and optimizing genetic circuits and metabolic pathways.

    Science.gov (United States)

    Sleight, Sean C; Sauro, Herbert M

    2013-09-20

    The optimization of genetic circuits and metabolic pathways often involves constructing various iterations of the same construct or using directed evolution to achieve the desired function. Alternatively, a method that randomizes individual parts in the same assembly reaction could be used for optimization by allowing large numbers of individual clones expressing randomized circuits or pathways to be screened for optimal function. Here we describe a new assembly method to randomize genetic circuits and metabolic pathways from modular DNA fragments derived from PCR-amplified BioBricks. As a proof-of-principle for this method, we successfully assembled CMY (Cyan-Magenta-Yellow) three-gene circuits using Gibson Assembly that express CFP, RFP, and YFP with independently randomized promoters, ribosome binding sites, transcriptional terminators, and all parts randomized simultaneously. Sequencing results from 24 CMY circuits with various parts randomized show that 20/24 circuits are distinct and expression varies over a 200-fold range above background levels. We then adapted this method to randomize the same parts with enzyme coding sequences from the lycopene biosynthesis pathway instead of fluorescent proteins, designed to independently express each enzyme in the pathway from a different promoter. Lycopene production is improved using this randomization method by about 30% relative to the highest polycistronic-expressing pathway. These results demonstrate the potential of generating nearly 20,000 unique circuit or pathway combinations when three parts are permuted at each position in a three-gene circuit or pathway, and the methodology can likely be adapted to other circuits and pathways to maximize products of interest.

  5. A systematic and practical method for selecting systems engineering tools

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2017-01-01

    The complexity of many types of systems has grown considerably over the last decades. Using appropriate systems engineering tools therefore becomes increasingly important. Starting the tool selection process can be intimidating because organizations often have only a vague idea about what they need. The tremendous number of available tools makes it difficult to get an overview and identify the best choice. Selecting the wrong tools due to inappropriate analysis can have a severe impact on the success of the company. This paper presents a systematic method for selecting systems engineering tools based on thorough analyses of the actual needs and the available tools. Grouping needs into categories allows us to obtain a comprehensive set of requirements for the tools. The entire model-based systems engineering discipline was categorized for a modeling tool case to enable development of a tool specification

  6. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    Science.gov (United States)

    Alguliyev, Rasim M.; Aliguliyev, Ramiz M.; Mahmudova, Rasmiyya S.

    2015-01-01

    Personnel evaluation is an important process in human resource management. The multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we have selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method: firstly, from a computational complexity point of view, the presented model is effective; secondly, it has a higher acceptable advantage than the fuzzy VIKOR method. PMID:26516634

  8. Multicriteria Personnel Selection by the Modified Fuzzy VIKOR Method

    Directory of Open Access Journals (Sweden)

    Rasim M. Alguliyev

    2015-01-01

    Full Text Available Personnel evaluation is an important process in human resource management. The multicriteria nature and the presence of both qualitative and quantitative factors make it considerably more complex. In this study, a fuzzy hybrid multicriteria decision-making (MCDM) model is proposed for personnel evaluation. This model solves the personnel evaluation problem in a fuzzy environment where both criteria and weights can be fuzzy sets. Triangular fuzzy numbers are used to evaluate the suitability of personnel and the approximate reasoning of linguistic values. For evaluation, we have selected five information culture criteria. The weights of the criteria were calculated using the worst-case method. After that, a modified fuzzy VIKOR is proposed to rank the alternatives. The outcome of this research is the ranking and selection of the best alternative with the help of the fuzzy VIKOR and modified fuzzy VIKOR techniques. A comparative analysis of the results of the fuzzy VIKOR and modified fuzzy VIKOR methods is presented. Experiments showed that the proposed modified fuzzy VIKOR method has some advantages over the fuzzy VIKOR method: firstly, from a computational complexity point of view, the presented model is effective; secondly, it has a higher acceptable advantage than the fuzzy VIKOR method.

  9. Impact of amoxicillin therapy on resistance selection in patients with community-acquired lower respiratory tract infections : A randomized, placebo-controlled study

    NARCIS (Netherlands)

    Malhotra-Kumar, Surbhi; Van Heirstraeten, Liesbet; Coenen, Samuel; Lammens, Christine; Adriaenssens, Niels; Kowalczyk, Anna; Godycki-Cwirko, Maciek; Bielicka, Zuzana; Hupkova, Helena; Lannering, Christina; Mölstad, Sigvard; Fernandez-Vandellos, Patricia; Torres, Antoni; Parizel, Maxim; Ieven, Margareta; Butler, Chris C.; Verheij, Theo; Little, Paul; Goossens, Hermanon; Frimodt-Møller, Niels; Bruno, Pascale; Hering, Iris; Lemiengre, Marieke; Loens, Katherine; Malmvall, Bo Eric; Muras, Magdalena; Romano, Nuria Sanchez; Prat, Matteu Serra; Svab, Igor; Swain, Jackie; Tarsia, Paolo; Leus, Frank; Veen, Robert; Worby, Tricia

    2016-01-01

    Objectives: To determine the effect of amoxicillin treatment on resistance selection in patients with community-acquired lower respiratory tract infections in a randomized, placebo-controlled trial. Methods: Patients were prescribed amoxicillin 1 g, three times daily (n = 52) or placebo (n = 50) for

  10. The Long-Term Effectiveness of a Selective, Personality-Targeted Prevention Program in Reducing Alcohol Use and Related Harms: A Cluster Randomized Controlled Trial

    Science.gov (United States)

    Newton, Nicola C.; Conrod, Patricia J.; Slade, Tim; Carragher, Natacha; Champion, Katrina E.; Barrett, Emma L.; Kelly, Erin V.; Nair, Natasha K.; Stapinski, Lexine; Teesson, Maree

    2016-01-01

    Background: This study investigated the long-term effectiveness of Preventure, a selective personality-targeted prevention program, in reducing the uptake of alcohol, harmful use of alcohol, and alcohol-related harms over a 3-year period. Methods: A cluster randomized controlled trial was conducted to assess the effectiveness of Preventure.…

  11. Multi-Index Monte Carlo and stochastic collocation methods for random PDEs

    KAUST Repository

    Nobile, Fabio

    2016-01-09

    In this talk we consider the problem of computing statistics of the solution of a partial differential equation with random data, where the random coefficient is parametrized by means of a finite or countable sequence of terms in a suitable expansion. We describe and analyze a Multi-Index Monte Carlo (MIMC) method and a Multi-Index Stochastic Collocation (MISC) method. The former is both a stochastic version of the combination technique introduced by Zenger, Griebel and collaborators and an extension of the Multilevel Monte Carlo (MLMC) method first described by Heinrich and Giles. Instead of using first-order differences as in MLMC, MIMC uses mixed differences to reduce the variance of the hierarchical differences dramatically. This in turn yields new and improved complexity results, which are natural generalizations of Giles's MLMC analysis, and which increase the domain of problem parameters for which we achieve the optimal convergence, O(TOL^-2). In the same vein, MISC is a deterministic combination technique based on mixed differences of spatial approximations and quadratures over the space of random data. Provided enough mixed regularity, MISC can achieve better complexity than MIMC. Moreover, we show that in the optimal case the convergence rate of MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We propose optimization procedures to select the most effective mixed differences to include in MIMC and MISC. Such optimization is a crucial step that allows us to make MIMC and MISC computationally effective. We finally show the effectiveness of MIMC and MISC with some computational tests, including tests with an infinite countable number of random parameters.
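
    A toy single-index MLMC estimator illustrating the telescoping idea that MIMC generalizes to mixed differences; the level-dependent "solver" below is invented for illustration:

        import numpy as np

        rng = np.random.default_rng(0)

        def level_diff(level, n):
            """Coupled samples of P_l - P_{l-1} for a toy quantity of interest."""
            z = rng.standard_normal(n)          # shared randomness couples the two levels
            fine = 1.0 + 2.0 ** (-level) * z
            if level == 0:
                return fine
            coarse = 1.0 + 2.0 ** (-(level - 1)) * z
            return fine - coarse

        # Telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}]: the variance
        # of the differences shrinks with level, so ever fewer samples are needed.
        n_per_level = [4000, 1000, 250, 60]
        estimate = sum(level_diff(l, n).mean() for l, n in enumerate(n_per_level))
        print(estimate)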

  12. Analysis of Criteria Influencing Contractor Selection Using TOPSIS Method

    Science.gov (United States)

    Alptekin, Orkun; Alptekin, Nesrin

    2017-10-01

    Selection of the most suitable contractor is an important process in public construction projects. This process is a major decision which may influence the progress and success of a construction project. Improper selection of contractors may lead to problems such as poor quality of work and delays in project duration. Especially in construction projects for public buildings, the proper choice of contractor benefits the public institution. Public procurement processes have different characteristics reflecting dissimilarities in the political, social and economic features of every country. In Turkey, the Turkish Public Procurement Law PPL 4734 is the main law regulating the procurement of public buildings. According to PPL 4734, public construction administrators have to contract with the lowest bidder who meets the minimum requirements according to the criteria set in the prequalification process. The restrictive provisions of PPL 4734 do not allow public administrators to select the most qualified contractor, and administrators have realized that selection based on the lowest bid alone is inadequate and may lead to project failure in terms of time delays and poor quality standards. In order to evaluate the overall efficiency of a project, it is necessary to identify selection criteria. This study therefore focuses on the importance of criteria other than the lowest bid in the contractor selection process under PPL 4734. A survey was conducted among the staff of the Department of Construction Works of Eskisehir Osmangazi University. According to the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution) analysis results, termination of construction work in previous tenders is the most important of the 12 criteria determined. The lowest bid criterion ranked fifth.
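
    For reference, a compact TOPSIS computation on an invented decision matrix of four bidders scored against three benefit criteria:

        import numpy as np

        # Decision matrix: four bidders (rows) on three benefit criteria (columns).
        X = np.array([[7.0, 9.0, 6.0],
                      [8.0, 7.0, 8.0],
                      [9.0, 6.0, 7.0],
                      [6.0, 8.0, 9.0]])
        w = np.array([0.5, 0.3, 0.2])

        R = X / np.linalg.norm(X, axis=0)          # vector normalization per criterion
        V = R * w                                  # weighted normalized matrix
        ideal, anti = V.max(axis=0), V.min(axis=0) # ideal and anti-ideal solutions
        d_plus = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        closeness = d_minus / (d_plus + d_minus)   # relative closeness to the ideal
        print(closeness.argsort()[::-1])           # bidders ranked best to worst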

  13. Probability variance CHI feature selection method for unbalanced data

    Science.gov (United States)

    Zhang, Xiaowen; Chen, Bingfeng

    2017-08-01

    Feature selection on unbalanced text data is a difficult problem that remains to be solved. In view of this, the paper analyzes the distribution of feature terms and documents within and between classes in unbalanced datasets. Building on word-frequency probability and document probability measures, it proposes a CHI feature selection method based on probabilistic variance, which improves the traditional chi-square statistical model by introducing an intra-class word frequency probability factor, an inter-class document probability concentration factor, and an intra-class uniformity factor. Experiments prove the effectiveness and feasibility of the method.
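
    For orientation, the baseline chi-square scoring that the proposed method modifies can be run with scikit-learn on a toy term-count matrix; the paper's probability-variance factors are not reproduced here:

        import numpy as np
        from sklearn.feature_selection import SelectKBest, chi2

        rng = np.random.default_rng(0)
        X = rng.integers(0, 5, size=(200, 50))   # term counts for 50 candidate features
        y = rng.integers(0, 2, size=200)         # class labels (unbalanced in the real setting)

        scores, p_values = chi2(X, y)            # classic CHI score per feature
        selector = SelectKBest(chi2, k=10).fit(X, y)
        print(scores.argsort()[::-1][:10])       # the ten best-scoring terms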

  14. ERP SYSTEM SELECTION BY AHP METHOD: CASE STUDY FROM TURKEY

    OpenAIRE

    Rouyendegh, Babak Daneshvar; Erkan, Turan Erman

    2011-01-01

    An Enterprise Resource Planning (ERP) system is a critical investment that can significantly affect the future competitiveness of a corporation. There are many national and international ERP vendors in Turkey. This study presents a comprehensive framework for selecting a suitable ERP system by using the Analytic Hierarchy Process (AHP). The AHP method directs how to determine the priority of a set of alternatives and the relative importance of attributes in a multiple criteria decision-making probl...
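
    The core AHP step, deriving a priority vector from a pairwise comparison matrix via the principal eigenvector, can be sketched as follows; the comparison values are invented for illustration:

        import numpy as np

        # Saaty-scale pairwise comparisons among three criteria
        # (e.g., cost, functionality, vendor support).
        A = np.array([[1.0,   3.0,   5.0],
                      [1/3.0, 1.0,   2.0],
                      [1/5.0, 1/2.0, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = eigvals.real.argmax()
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                              # normalized priority vector

        n = A.shape[0]
        CI = (eigvals.real.max() - n) / (n - 1)   # consistency index
        print(w, CI / 0.58)                       # CR; acceptable below 0.1 (RI = 0.58 for n = 3)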

  15. UTA Method for the Consulting Firm Selection Problem

    OpenAIRE

    A. Tuş Işık; E. Aytaç Adalı

    2016-01-01

    Market conditions change due to the introduction of new products, unforeseen demand fluctuations, the rapid change of product life cycles, and shifting profit margins. Therefore, companies try to survive in a competitive environment by making their operational decisions strategically and systematically. Selection problems play an important role among these decisions. In the literature there are many robust MCDM (Multi Criteria Decision Making) methods that consider the conflicting sel...

  16. Essay on Methods in Futures Studies and a Selective Bibliography

    DEFF Research Database (Denmark)

    Poulsen, Claus

    2005-01-01

    Futures studies is often conflated with science fiction or pop-futurism. Consequently there is a need to demarcate what futures studies is and what it is not. For the same reason the essay stresses the need for quality control focused on futures research and its methods: publications in futu... programme are (only) partly reduced by applying Causal Layered Analysis as an internal quality control. The following selective bibliography is focussed on these methodological issues...

  17. Rapid and selective method for quantitation of metronidazole in pharmaceuticals.

    Science.gov (United States)

    Sanyal, A K

    1988-01-01

    A selective and highly sensitive assay for N-1-substituted nitroimidazoles has been modified and adapted for rapid estimation of metronidazole in pharmaceuticals. The color reaction is based on diazotization of sulfanilamide with the nitrite ions liberated by alkaline hydrolysis of metronidazole and subsequent coupling of the diazonium salt with N-1-(naphthyl)-ethylenediamine dihydrochloride. This method is applicable for the assay of benzoyl metronidazole in oral suspension. Officially recommended excipients and preservatives do not interfere.

  18. Prediction Of Students’ Learning Study Periode By Using Random Forest Method (Case Study: Stikom Bali

    Directory of Open Access Journals (Sweden)

    I Made Budi Adnyana

    2016-10-01

    Full Text Available Graduation on time is one of the assessment elements of college accreditation. Furthermore, graduation on time is an important issue because it indicates the effectiveness of a college. The academic division of STIKOM Bali faces many difficulties in predicting student graduation times because of a lack of information and analysis. Predictions of graduation time can help the academic division devise appropriate strategies to shorten study times. Data mining can be applied to this prediction problem using the random forest classification method. A random forest is a collection of trees, where each tree depends on the values of a vector selected randomly and independently. Sample data were obtained from the academic division of STIKOM Bali. This research uses sample data of students who graduated in the last two years, such as IPK, SKS, the number of inactive semesters, and study time. The classification output consists of two classes, "graduate on time" and "graduate over time". From the experimental results, an accuracy of 83.54% was obtained.

  19. Methods of Investigation of Sexual Crimes (Selected Issues)

    OpenAIRE

    Pixová, Monika

    2016-01-01

    The diploma thesis deals with the methods of investigation of sexual crimes. Because the topic is too broad, I have decided to focus only on selected issues in the methods of investigation of child sexual abuse. The investigation of child sexual abuse is very specific in comparison with other sexual crimes. During the investigation the child victim must be handled very sensitively in order to avoid secondary victimization. The purpose of this thesis is to describe the specifics of inves...

  20. Analysis of tuning methods in semiconductor frequency-selective surfaces

    Science.gov (United States)

    Shemelya, Corey; Palm, Dominic; Fip, Tassilo; Rahm, Marco

    2017-02-01

    Advanced technology, such as sensing and communication equipment, has recently begun to combine optically sensitive nano-scale structures with customizable semiconductor material systems. Included within this broad field of study is the aptly named frequency-selective surface, which is unique in that it can be artificially designed to produce a specific electromagnetic or optical response. Given the inherent utility of a frequency-selective surface, there has been increased interest in dynamic frequency-selective surfaces, which can be altered through optical or electrical tuning. This area has seen exciting breakthroughs as tuning methods have evolved; however, these methods are typically energy intensive (optical tuning) or have met with limited success (electrical tuning). As such, this work investigates multiple structures and processes that implement semiconductor electrical biasing and/or optical tuning. The surfaces studied range from transmission meta-structures to metamaterial surface waves and the associated coupling schemes. This work shows the utility of each design while highlighting potential methods for optimizing dynamic meta-surfaces. As an added constraint, the structures were also designed to operate in unison with a state-of-the-art Ti:Sapphire Spitfire Ace and Spitfire Ace PA dual system (12 W) with pulse-front-matching THz generation and an EOS detection system. Additionally, the Ti:Sapphire laser system provides the means for optical tunability, while electrical tuning can be obtained through external power supplies.

  1. Shaker Random Testing with Low Kurtosis: Review of the Methods and Application for Sigma Limiting

    Directory of Open Access Journals (Sweden)

    Alexander Steinwolf

    2010-01-01

    Full Text Available Non-Gaussian random shaker testing with kurtosis control is known as a way of increasing the excitation crest factor in order to realistically simulate ground vehicle vibrations and other situations in which the time history includes extreme peaks higher than those appearing in Gaussian random signals. The opposite action, however, is useful in other applications, particularly in modal testing. If the PSD is the only test specification, more power can be extracted from the same shaker if the crest factor is decreased and extra space is created between the peaks of reduced height and the system abort limit. To achieve this, a technique of sigma clipping is commonly used, but it generates harmonic distortions that reduce the dynamic range of the shaker system. It is shown in the paper that non-Gaussian phase selection in the IFFT generation can reduce kurtosis to 1.7 and bring the crest factor of drive signals from 4.5 down to 2. The phase selection method does this without the loss of the controller's dynamic range that inevitably occurs after sigma clipping or polynomial transformation of time histories.
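
    The following toy fragment illustrates the IFFT generation of a PSD-matched drive signal and the sigma-clipping baseline discussed above; the paper's non-Gaussian phase-selection method itself is not reproduced here:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 2 ** 14
        half = np.zeros(n // 2 + 1, dtype=complex)          # one-sided spectrum
        phases = rng.uniform(0.0, 2 * np.pi, n // 2 - 1)    # random phases -> Gaussian signal
        half[1:n // 2] = np.exp(1j * phases)                # flat target PSD
        x = np.fft.irfft(half, n)
        x /= x.std()

        def crest(sig):
            return np.abs(sig).max() / sig.std()

        def kurtosis(sig):
            return ((sig - sig.mean()) ** 4).mean() / sig.var() ** 2

        x_clipped = np.clip(x, -2.0, 2.0)                   # 2-sigma clipping
        print(crest(x), kurtosis(x))                        # roughly 4-4.5 and 3 (Gaussian)
        print(crest(x_clipped), kurtosis(x_clipped))        # crest near 2, kurtosis below 3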

  2. Analysis of tree stand horizontal structure using random point field methods

    Directory of Open Access Journals (Sweden)

    O. P. Sekretenko

    2015-06-01

    Full Text Available This paper uses a model approach to analyze the horizontal structure of forest stands. The main types of models of random point fields and the statistical procedures that can be used to analyze the spatial patterns of trees in uneven- and even-aged stands are described. We show how modern methods of spatial statistics can be used to address one of the objectives of forestry: to clarify the laws of natural thinning of a forest stand and the corresponding changes in its spatial structure over time. Studying natural forest thinning, we describe the consecutive stages of modeling: selection of an appropriate parametric model, parameter estimation and generation of point patterns in accordance with the selected model, selection of statistical functions to describe the horizontal structure of forest stands, and testing of statistical hypotheses. We show the possibilities of a specialized software package, spatstat, which is designed to meet the challenges of spatial statistics and provides software support for modern methods of analysis of spatial data. We show that a model of stand thinning that does not consider inter-tree interaction can project the size distribution of the trees properly, but the spatial pattern of the modeled stand is not quite consistent with observed data. Using data from three even-aged pine forest stands of 25, 55, and 90 years old, we demonstrate that spatial point process models are useful for combining measurements from forest stands of different ages to study natural stand thinning.
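
    As a minimal illustration of the null model such analyses test against, the fragment below simulates a homogeneous Poisson point pattern (complete spatial randomness) and a crude nearest-neighbour summary; the full toolbox the paper describes lives in the R package spatstat, and all numbers here are invented:

        import numpy as np

        rng = np.random.default_rng(0)
        intensity, side = 0.05, 100.0                 # trees per unit area, square plot side
        n = rng.poisson(intensity * side * side)
        pts = rng.uniform(0.0, side, size=(n, 2))     # complete spatial randomness (CSR)

        # Mean nearest-neighbour distance; under CSR it should approach
        # 1 / (2 * sqrt(intensity)), here about 2.24.
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        print(d.min(axis=1).mean())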

  3. The selective dynamical downscaling method for extreme-wind atlases

    DEFF Research Database (Denmark)

    Larsén, Xiaoli Guo; Badger, Jake; Hahmann, Andrea N.

    2012-01-01

    A selective dynamical downscaling method is developed to obtain extreme-wind atlases for large areas. The method is general, efficient and flexible. It consists of three steps: (i) identifying storm episodes for a particular area, (ii) downscaling of the storms using mesoscale modelling and (iii) post-processing. The post-processing generalizes the winds from the mesoscale modelling to standard conditions, i.e. 10-m height over a homogeneous surface with roughness length of 5 cm. The generalized winds are then used to calculate the 50-year wind using the annual maximum method for each mesoscale grid point. The generalization of the mesoscale winds through the post-processing provides a framework for data validation and for applying further the mesoscale extreme winds at specific places using microscale modelling. The results are compared with measurements from two areas with different...

  4. Genome-wide association data classification and SNPs selection using two-stage quality-based Random Forests.

    Science.gov (United States)

    Nguyen, Thanh-Tung; Huang, Joshua; Wu, Qingyao; Nguyen, Thuy; Li, Mark

    2015-01-01

    Single-nucleotide polymorphism (SNP) selection and identification are the most important tasks in genome-wide association data analysis. The problem is difficult because genome-wide association data are very high dimensional and a large portion of the SNPs in the data are irrelevant to the disease. Advanced machine learning methods have been used successfully in genome-wide association studies (GWAS) to identify genetic variants that have relatively large effects in some common, complex diseases. Among them, the most successful is Random Forests (RF). Despite performing well in terms of prediction accuracy on some data sets of moderate size, RF still struggles in GWAS with selecting informative SNPs and building accurate prediction models. In this paper, we propose a new two-stage quality-based sampling method for random forests, named ts-RF, for SNP subspace selection in GWAS. The method first applies p-value assessment to find a cut-off point that separates the SNPs into informative and irrelevant groups. The informative group is further divided into two subgroups: highly informative and weakly informative SNPs. When sampling the SNP subspace for building the trees of the forest, only SNPs from these two subgroups are taken into account, so the feature subspaces used to split a node of a tree always contain highly informative SNPs. This approach enables one to generate more accurate trees with lower prediction error while helping to avoid overfitting; it allows the detection of interactions of multiple SNPs with the diseases, and it reduces the dimensionality and the amount of genome-wide association data needed for learning the RF model. Extensive experiments on two genome-wide SNP data sets (Parkinson case-control data comprising 408,803 SNPs and Alzheimer case-control data comprising 380,157 SNPs) and 10 gene data sets have demonstrated that the proposed model significantly reduced prediction errors and outperformed
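
    A schematic of the two-stage subspace sampling idea, with invented p-values and thresholds; this is a sketch of the concept, not the actual ts-RF implementation:

        import numpy as np

        rng = np.random.default_rng(0)
        p_values = rng.uniform(size=10_000)        # stand-in per-SNP association p-values

        # Stage 1: a p-value cut-off separates informative from irrelevant SNPs,
        # and the informative group is split into strong and weak subgroups.
        informative = np.where(p_values < 0.05)[0]
        strong = informative[p_values[informative] < 0.001]
        weak = np.setdiff1d(informative, strong)

        # Stage 2: each tree's candidate-feature subspace mixes both subgroups,
        # so highly informative SNPs are always available at node splits.
        def draw_subspace(size=100, frac_strong=0.5):
            n_strong = min(int(size * frac_strong), len(strong))
            return np.concatenate([
                rng.choice(strong, n_strong, replace=False),
                rng.choice(weak, size - n_strong, replace=False),
            ])

        print(draw_subspace()[:10])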

  5. Alternating direction methods for latent variable gaussian graphical model selection.

    Science.gov (United States)

    Ma, Shiqian; Xue, Lingzhou; Zou, Hui

    2013-08-01

    Chandrasekaran, Parrilo, and Willsky (2012) proposed a convex optimization problem for graphical model selection in the presence of unobserved variables. This convex optimization problem aims to estimate an inverse covariance matrix that can be decomposed into a sparse matrix minus a low-rank matrix from sample data. Solving this convex optimization problem is very challenging, especially for large problems. In this letter, we propose two alternating direction methods for solving this problem. The first method is to apply the classic alternating direction method of multipliers to solve the problem as a consensus problem. The second method is a proximal gradient-based alternating-direction method of multipliers. Our methods take advantage of the special structure of the problem and thus can solve large problems very efficiently. A global convergence result is established for the proposed methods. Numerical results on both synthetic data and gene expression data show that our methods usually solve problems with 1 million variables in 1 to 2 minutes and are usually 5 to 35 times faster than a state-of-the-art Newton-CG proximal point algorithm.
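
    The two proximal maps at the heart of such alternating-direction schemes can be written in a few lines; this is a generic sketch of the sparse and low-rank updates, not the authors' solver:

        import numpy as np

        def soft_threshold(M, tau):
            """Prox of tau*||.||_1: shrink every entry toward zero (sparse update)."""
            return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

        def svd_threshold(M, tau):
            """Prox of tau*||.||_*: shrink the singular values (low-rank update)."""
            U, s, Vt = np.linalg.svd(M, full_matrices=False)
            return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

        M = np.random.default_rng(0).standard_normal((6, 6))
        print(np.count_nonzero(soft_threshold(M, 0.5)))       # fewer nonzero entries
        print(np.linalg.matrix_rank(svd_threshold(M, 2.0)))   # reduced rank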

  6. Objects Classification for Mobile Robots Using Hierarchic Selective Search Method

    Directory of Open Access Journals (Sweden)

    Xu Cheng

    2016-01-01

    Full Text Available Aiming to determine the category of an image captured by a mobile robot for intelligent applications, classification with the bag-of-words model has proved effective on near-duplicate/planar images. But does it still work well on images from mobile robots with complex backgrounds? In this paper, based on an improved merging criterion, a method named hierarchical selective search is proposed; it hierarchically extracts complementary features to form a combined, environment-adaptable similarity measurement for segmentation, yielding a small, high-quality set of regions. These regions, rather than the whole image, are then used for classification. As a result, classification accuracy is much improved, and the bag-of-words model still works well for classification on mobile robots. Experiments show that hierarchical selective search performs better than selective search on two task datasets for mobile robots, and that samples drawn from regions outperform the original whole images in classification. The advantage of the smaller number of higher-quality object regions produced by hierarchical selective search is more prominent in special tasks for mobile robots with scarce data.

  7. Appropriate model selection methods for nonstationary generalized extreme value models

    Science.gov (United States)

    Kim, Hanbeen; Kim, Sooyoung; Shin, Hongjoon; Heo, Jun-Haeng

    2017-04-01

    Much evidence that hydrologic data series are nonstationary in nature has been found to date. This has resulted in many studies in the area of nonstationary frequency analysis. Nonstationary probability distribution models involve parameters that vary over time; therefore, it is not a straightforward process to apply conventional goodness-of-fit tests to the selection of an appropriate nonstationary probability distribution model. Tests generally recommended for such a selection include the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), and the likelihood ratio test (LRT). In this study, a Monte Carlo simulation was performed to compare the performance of these four tests for nonstationary as well as stationary generalized extreme value (GEV) distributions. Proper model selection ratios and sample sizes were taken into account to evaluate the performance of all four tests. The BIC demonstrated the best performance with regard to stationary GEV models. For nonstationary GEV models, the AIC proved better than the other three methods when relatively small sample sizes were considered. With larger sample sizes, the AIC, BIC, and LRT presented the best performances for GEV models with nonstationary location and/or scale parameters, respectively. The simulation results were then evaluated by applying all four tests to annual maximum rainfall data from selected sites, as observed by the Korea Meteorological Administration.
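
    A minimal example of computing AIC and BIC for a stationary GEV fit with scipy; the nonstationary variants discussed in the paper add time-varying parameters to the likelihood, and the data here are synthetic:

        import numpy as np
        from scipy.stats import genextreme

        data = genextreme.rvs(c=-0.1, loc=50.0, scale=10.0, size=60, random_state=0)

        params = genextreme.fit(data)                  # (shape, loc, scale) by maximum likelihood
        loglik = genextreme.logpdf(data, *params).sum()
        k, n = len(params), len(data)

        aic = 2 * k - 2 * loglik
        bic = k * np.log(n) - 2 * loglik
        print(aic, bic)   # compare across candidate models; lower is better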

  8. Selected Tools and Methods from Quality Management Field

    Directory of Open Access Journals (Sweden)

    Kateřina BRODECKÁ

    2009-06-01

    Full Text Available The following paper describes selected tools and methods from the quality management field and their practical application to defined examples. The solved examples were elaborated in the form of electronic support. This thoroughly elaborated electronic support gives students the opportunity to practice specific issues in depth, helps them prepare for exams, and consequently leads to improved education. Students of the combined study form will especially appreciate this support. The paper specifies the project objectives, the subjects covered by the support, the target groups, and the structure and manner of elaboration of the electronic exercise book. The emphasis is not only on the manual solution of selected examples, which may help students understand the principles and relationships, but also on solving selected examples and interpreting the results using software support. The statistics software Statgraphics Plus v 5.0 is used, because it is free to use for all students of the faculty. An exemplary problem from the subject Basic Statistical Methods of Quality Management is also part of this paper.

  9. Variable selection in near-infrared spectroscopy: benchmarking of feature selection methods on biodiesel data.

    Science.gov (United States)

    Balabin, Roman M; Smirnov, Sergey V

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic

  10. Variable selection in near-infrared spectroscopy: Benchmarking of feature selection methods on biodiesel data

    Energy Technology Data Exchange (ETDEWEB)

    Balabin, Roman M., E-mail: balabin@org.chem.ethz.ch [Department of Chemistry and Applied Biosciences, ETH Zurich, 8093 Zurich (Switzerland); Smirnov, Sergey V. [Unimilk Joint Stock Co., 143421 Moscow Region (Russian Federation)

    2011-04-29

    During the past several years, near-infrared (near-IR/NIR) spectroscopy has increasingly been adopted as an analytical tool in various fields, from the petroleum to the biomedical sector. The NIR spectrum (above 4000 cm(-1)) of a sample is typically measured by modern instruments at a few hundred wavelengths. Recently, considerable effort has been directed towards developing procedures to identify variables (wavelengths) that contribute useful information. Variable selection (VS) or feature selection, also called frequency selection or wavelength selection, is a critical step in data analysis for vibrational spectroscopy (infrared, Raman, or NIRS). In this paper, we compare the performance of 16 different feature selection methods for the prediction of properties of biodiesel fuel, including density, viscosity, methanol content, and water concentration. The feature selection algorithms tested include stepwise multiple linear regression (MLR-step), interval partial least squares regression (iPLS), backward iPLS (BiPLS), forward iPLS (FiPLS), moving window partial least squares regression (MWPLS), (modified) changeable size moving window partial least squares (CSMWPLS/MCSMWPLSR), searching combination moving window partial least squares (SCMWPLS), successive projections algorithm (SPA), uninformative variable elimination (UVE, including UVE-SPA), simulated annealing (SA), back-propagation artificial neural networks (BP-ANN), Kohonen artificial neural network (K-ANN), and genetic algorithms (GAs, including GA-iPLS). Two linear techniques for calibration model building, namely multiple linear regression (MLR) and partial least squares regression/projection to latent structures (PLS/PLSR), are used for the evaluation of biofuel properties. A comparison with a non-linear calibration model, artificial neural networks (ANN-MLP), is also provided. Discussion of gasoline, ethanol-gasoline (bioethanol), and diesel fuel data is presented. The results of other spectroscopic

  11. Novel Zn2+-chelating peptides selected from a fimbria-displayed random peptide library

    DEFF Research Database (Denmark)

    Kjærgaard, Kristian; Schembri, Mark; Klemm, Per

    2001-01-01

    H adhesin. FimH is a component of the fimbrial organelle that can accommodate and display a diverse range of peptide sequences on the E. coli cell surface. In this study we have constructed a random peptide library in FimH. The library, consisting of approximately 40 million individual clones, was screened...... for peptide sequences that conferred on recombinant cells the ability to bind Zn2+. By serial selection, sequences that exhibited various degrees of binding affinity and specificity toward Zn2+ were enriched. None of the isolated sequences showed similarity to known Zn2+-binding proteins, indicating......

  12. A functional renormalization method for wave propagation in random media

    Science.gov (United States)

    Lamagna, Federico; Calzetta, Esteban

    2017-08-01

    We develop the exact renormalization group approach as a way to evaluate the effective speed of propagation of a scalar wave in a medium with random inhomogeneities. We use the Martin-Siggia-Rose formalism to translate the problem into a nonequilibrium field theory one, and then consider a sequence of models with a progressively lower infrared cutoff; in the limit where the cutoff is removed, we recover the problem of interest. As a test of the formalism, we compute the effective dielectric constant of a homogeneous medium interspersed with randomly located, interpenetrating bubbles. A simple approximation to the renormalization group equations turns out to be equivalent to a self-consistent two-loop evaluation of the effective dielectric constant.

  13. Relationship of Source Selection Methods to Contract Outcomes: an Analysis of Air Force Source Selection

    Science.gov (United States)

    2015-12-01

    increase as a result. However, acquisition teams rely on price competition among offerors to ensure fair and reasonable offers (FAR 15.402(a)(2...require modifications after award due to the contractor’s increased capability. Finally, price competition is still typically present. In terms of...on the contract management process, with special emphasis on the source selection methods of tradeoff and lowest price technically acceptable (LPTA

  14. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    Full Text Available The article highlights the important task of project management in the reprofiling of buildings. When managing construction projects, it is expedient to focus on selecting effective engineering solutions that reduce both duration and cost. This article presents a methodology for selecting efficient organizational and technical solutions for building reprofiling projects. The method is based on compiling project variants in Microsoft Project and on experimental statistical analysis using the COMPEX program. Introducing this technique into building reprofiling allows efficient project models to be chosen, depending on the given constraints. The technique can also be used for various other construction projects.

  15. Comparative study of standing wave reduction methods using random modulation for transcranial ultrasonication.

    Science.gov (United States)

    Furuhata, Hiroshi; Saito, Osamu

    2013-08-01

    Various transcranial sonotherapeutic technologies have risks related to standing waves in the skull. In this study, we present a comparative study on standing waves using four different activation methods: sinusoidal (SIN), frequency modulation by noise (FMN), periodic selection of random frequency (PSRF), and random switching of both inverse carriers (RSBIC). The standing wave was produced and monitored by the schlieren method using a flat plane and a human skull. The minimum ratio RSW, defined as the ratio of the mean difference between the local maximum and local minimum amplitude values to the average amplitude, was 36% for SIN, 24% for FMN, 13% for PSRF, and 4% for RSBIC for the flat reflective plate, and 25% for SIN, 11% for FMN, 13% for PSRF, and 5% for RSBIC for the inner surface of the human skull. This study is expected to have a role in the development of safer therapeutic equipment. Copyright © 2013 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
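
    A minimal numpy sketch of the RSW metric as defined above, assuming local extrema are detected by comparing each sample with its immediate neighbours (the study's exact extrema-detection procedure is not specified here):

        import numpy as np

        def standing_wave_ratio(amplitude):
            """Toy RSW: mean local-max minus mean local-min, over mean amplitude."""
            a = np.asarray(amplitude, dtype=float)
            interior = a[1:-1]
            # Strict local maxima/minima relative to immediate neighbours.
            maxima = interior[(interior > a[:-2]) & (interior > a[2:])]
            minima = interior[(interior < a[:-2]) & (interior < a[2:])]
            if len(maxima) == 0 or len(minima) == 0:
                return 0.0
            ripple = maxima.mean() - minima.mean()
            return ripple / a.mean()

        # Example: a pronounced standing-wave ripple vs. a nearly flat field.
        x = np.linspace(0, 10 * np.pi, 2000)
        print(standing_wave_ratio(1.0 + 0.4 * np.cos(x)))   # strong ripple
        print(standing_wave_ratio(1.0 + 0.02 * np.cos(x)))  # weak ripple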

  16. Improvement of the image quality of random phase-free holography using an iterative method

    CERN Document Server

    Shimobaba, Tomoyoshi; Endo, Yutaka; Hirayama, Ryuji; Hiyama, Daisuke; Hasegawa, Satoki; Nagahama, Yuki; Sano, Marie; Oikawa, Minoru; Sugie, Takashige; Ito, Tomoyoshi

    2015-01-01

    Our proposed method of random phase-free holography using virtual convergence light can obtain large reconstructed images exceeding the size of the hologram, without the assistance of random phase. The reconstructed images have low-speckle noise in the amplitude and phase-only holograms (kinoforms); however, in low-resolution holograms, we obtain a degraded image quality compared to the original image. We propose an iterative random phase-free method with virtual convergence light to address this problem.

  17. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness

    Science.gov (United States)

    Li, Jin; Tran, Maggie; Siwabessy, Justy

    2016-01-01

    Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia’s marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to ‘small p and large n’ problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and
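
    For readers unfamiliar with VI-style selection, the sketch below shows a minimal recursive variant: fit an RF, score it by cross-validation, and drop the least important predictor. The synthetic data stand in for the multibeam predictors, and the loop is an illustrative assumption rather than the authors' AVI/KIAVI procedures.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 10))                  # stand-in predictors
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in hardness class

        features = list(range(X.shape[1]))
        while len(features) > 2:
            rf = RandomForestClassifier(n_estimators=200, random_state=0)
            rf.fit(X[:, features], y)
            acc = cross_val_score(rf, X[:, features], y, cv=5).mean()
            print(f"{len(features)} predictors: CV accuracy = {acc:.3f}")
            # Remove the predictor with the lowest impurity-based importance.
            features.pop(int(np.argmin(rf.feature_importances_)))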

  18. Selecting Optimal Random Forest Predictive Models: A Case Study on Predicting the Spatial Distribution of Seabed Hardness.

    Directory of Open Access Journals (Sweden)

    Jin Li

    Full Text Available Spatially continuous predictions of seabed hardness are important baseline environmental information for sustainable management of Australia's marine jurisdiction. Seabed hardness is often inferred from multibeam backscatter data with unknown accuracy and can be inferred from underwater video footage at limited locations. In this study, we classified the seabed into four classes based on two new seabed hardness classification schemes (i.e., hard90 and hard70). We developed optimal predictive models to predict seabed hardness using random forest (RF) based on the point data of hardness classes and spatially continuous multibeam data. Five feature selection (FS) methods, namely variable importance (VI), averaged variable importance (AVI), knowledge-informed AVI (KIAVI), Boruta and regularized RF (RRF), were tested based on predictive accuracy. Effects of highly correlated, important and unimportant predictors on the accuracy of RF predictive models were examined. Finally, spatial predictions generated using the most accurate models were visually examined and analysed. This study confirmed that: 1) hard90 and hard70 are effective seabed hardness classification schemes; 2) seabed hardness of four classes can be predicted with a high degree of accuracy; 3) the typical approach used to pre-select predictive variables by excluding highly correlated variables needs to be re-examined; 4) the identification of the important and unimportant predictors provides useful guidelines for further improving predictive models; 5) FS methods select the most accurate predictive model(s) instead of the most parsimonious ones, and AVI and Boruta are recommended for future studies; and 6) RF is an effective modelling method with high predictive accuracy for multi-level categorical data and can be applied to 'small p and large n' problems in environmental sciences. Additionally, automated computational programs for AVI need to be developed to increase its computational efficiency and

  19. PReFerSim: fast simulation of demography and selection under the Poisson Random Field model.

    Science.gov (United States)

    Ortega-Del Vecchyo, Diego; Marsden, Clare D; Lohmueller, Kirk E

    2016-11-15

    The Poisson Random Field (PRF) model has become an important tool in population genetics to study weakly deleterious genetic variation under complicated demographic scenarios. Currently, there are no freely available software applications that allow simulation of genetic variation data under this model. Here we present PReFerSim, an ANSI C program that performs forward simulations under the PRF model. PReFerSim models changes in population size, arbitrary amounts of inbreeding, dominance and distributions of selective effects. Users can track summaries of genetic variation over time and output trajectories of selected alleles. PReFerSim is freely available at: https://github.com/LohmuellerLab/PReFerSim. Contact: klohmueller@ucla.edu. Supplementary information: Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  20. A Solution Method for Linear and Geometrically Nonlinear MDOF Systems with Random Properties subject to Random Excitation

    DEFF Research Database (Denmark)

    Micaletti, R. C.; Cakmak, A. S.; Nielsen, Søren R. K.

    structural properties. The resulting state-space formulation is a system of ordinary stochastic differential equations with random coefficients and deterministic initial conditions, which are subsequently transformed into ordinary stochastic differential equations with deterministic coefficients and random...... initial conditions. This transformation facilitates the derivation of differential equations which govern the evolution of the unconditional statistical moments of response. Primary consideration is given to linear systems and systems with odd polynomial nonlinearities, for in these cases...... there is a significant reduction in the number of equations to be solved. The method is illustrated for a five-story shear-frame structure with nonlinear interstory restoring forces and random damping and stiffness properties. The results of the proposed method are compared to those estimated by extensive Monte Carlo...

  1. An assessment of spacecraft target mode selection methods

    Science.gov (United States)

    Mercer, J. F.; Aglietti, G. S.; Remedia, M.; Kiley, A.

    2017-11-01

    Coupled Loads Analyses (CLAs), using finite element models (FEMs) of the spacecraft and launch vehicle to simulate critical flight events, are performed in order to determine the dynamic loadings that will be experienced by spacecraft during launch. A validation process is carried out on the spacecraft FEM beforehand to ensure that the dynamics of the analytical model sufficiently represent the behavior of the physical hardware. One aspect of concern is the containment of the FEM correlation and update effort to focus on the vibration modes which are most likely to be excited under test and CLA conditions. This study therefore provides new insight into the prioritization of spacecraft FEM modes for correlation to base-shake vibration test data. The work involved example application to large, unique, scientific spacecraft, with modern FEMs comprising over a million degrees of freedom. This comprehensive investigation explores: the modes inherently important to the spacecraft structures, irrespective of excitation; the particular 'critical modes' which produce peak responses to CLA level excitation; an assessment of several traditional target mode selection methods in terms of ability to predict these 'critical modes'; and an indication of the level of correlation these FEM modes achieve compared to corresponding test data. Findings indicate that, although the traditional methods of target mode selection have merit and are able to identify many of the modes of significance to the spacecraft, there are 'critical modes' which may be missed by conventional application of these methods. The use of different thresholds to select potential target modes from these parameters would enable identification of many of these missed modes. Ultimately, some consideration of the expected excitations is required to predict all modes likely to contribute to the response of the spacecraft in operation.

  2. A Novel Method for Increasing the Entropy of a Sequence of Independent, Discrete Random Variables

    Directory of Open Access Journals (Sweden)

    Mieczyslaw Jessa

    2015-10-01

    Full Text Available In this paper, we propose a novel method for increasing the entropy of a sequence of independent, discrete random variables with arbitrary distributions. The method uses an auxiliary table and a novel theorem that concerns the entropy of a sequence in which the elements are a bitwise exclusive-or sum of independent discrete random variables.
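
    A small numerical illustration of the underlying idea (for independent variables on a group, the entropy of a bitwise XOR is at least that of either operand); the skewed distributions below are arbitrary choices for the demonstration, and the paper's auxiliary-table construction is not reproduced:

        import numpy as np

        def entropy_bits(samples, n_symbols):
            counts = np.bincount(samples, minlength=n_symbols)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log2(p)).sum())

        rng = np.random.default_rng(1)
        n, n_symbols = 200_000, 8  # 3-bit symbols
        # Two independent, deliberately skewed sources.
        x = rng.choice(n_symbols, size=n, p=[0.5, 0.2, 0.1, 0.05, 0.05, 0.04, 0.03, 0.03])
        y = rng.choice(n_symbols, size=n, p=[0.4, 0.3, 0.1, 0.1, 0.04, 0.03, 0.02, 0.01])

        print(f"H(X)   = {entropy_bits(x, n_symbols):.3f} bits")
        print(f"H(Y)   = {entropy_bits(y, n_symbols):.3f} bits")
        print(f"H(X^Y) = {entropy_bits(x ^ y, n_symbols):.3f} bits")  # closer to 3 bits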

  3. Inefficiency of randomization methods that balance on stratum margins and improvements with permuted blocks and a sequential method.

    Science.gov (United States)

    Kaiser, Lee D

    2012-07-20

    Stratified permuted blocks randomization is commonly applied in clinical trials, but other randomization methods that attempt to balance treatment counts marginally for the stratification variables are able to accommodate more stratification variables. When the analysis stratifies on the cells formed by crossing the stratification variables, these other randomization methods yield treatment effect estimates with larger variance than does stratified permuted blocks. When it is truly necessary to balance the randomization on many stratification variables, it is shown how this inefficiency can be improved by using a sequential randomization method where the first level balances on the crossing of the strata used in the analysis and further stratification variables fall lower in the sequential hierarchy. Copyright © 2012 John Wiley & Sons, Ltd.
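
    For reference, a minimal sketch of the stratified permuted-blocks scheme discussed above, with one independent allocation sequence per stratum; the block size of 4 and the two-arm 1:1 ratio are illustrative assumptions:

        import random

        def permuted_block_randomization(n_subjects, block_size=4, seed=0):
            """Return a 1:1 two-arm allocation list for one stratum."""
            rng = random.Random(seed)
            allocation = []
            while len(allocation) < n_subjects:
                block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
                rng.shuffle(block)          # each block is a random permutation
                allocation.extend(block)
            return allocation[:n_subjects]

        # One independent sequence per stratum (cell of the crossed factors).
        strata = ["site1/male", "site1/female", "site2/male", "site2/female"]
        for i, stratum in enumerate(strata):
            print(stratum, permuted_block_randomization(10, seed=i))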

  4. A method for selecting cis-acting regulatory sequences that respond to small molecule effectors

    Directory of Open Access Journals (Sweden)

    Allas Ülar

    2010-08-01

    Full Text Available Abstract Background Several cis-acting regulatory sequences functioning at the level of mRNA or nascent peptide and specifically influencing transcription or translation have been described. These regulatory elements often respond to specific chemicals. Results We have developed a method that allows us to select cis-acting regulatory sequences that respond to diverse chemicals. The method is based on the β-lactamase gene containing a random sequence inserted into the beginning of the ORF. Several rounds of selection are used to isolate sequences that suppress β-lactamase expression in response to the compound under study. We have isolated sequences that respond to erythromycin, troleandomycin, chloramphenicol, meta-toluate and homoserine lactone. By introducing synonymous and non-synonymous mutations we have shown that at least in the case of erythromycin the sequences act at the peptide level. We have also tested the cross-activities of the constructs and found that in most cases the sequences respond most strongly to the compound on which they were isolated. Conclusions Several selected peptides showed ligand-specific changes in amino acid frequencies, but no consensus motif could be identified. This is consistent with previous observations on natural cis-acting peptides, showing that it is often impossible to demonstrate a consensus. Applying the currently developed method on a larger scale, by selecting and comparing an extended set of sequences, might allow the sequence rules underlying the activity of cis-acting regulatory peptides to be identified.

  5. Resampling method for applying density-dependent habitat selection theory to wildlife surveys.

    Science.gov (United States)

    Tardy, Olivia; Massé, Ariane; Pelletier, Fanie; Fortin, Daniel

    2015-01-01

    Isodar theory can be used to evaluate fitness consequences of density-dependent habitat selection by animals. A typical habitat isodar is a regression curve plotting competitor densities in two adjacent habitats when individual fitness is equal. Despite the increasing use of habitat isodars, their application remains largely limited to areas composed of pairs of adjacent habitats that are defined a priori. We developed a resampling method that uses data from wildlife surveys to build isodars in heterogeneous landscapes without having to predefine habitat types. The method consists of randomly placing blocks over the survey area and dividing those blocks into two adjacent sub-blocks of the same size. Animal abundance is then estimated within the two sub-blocks. This process is done 100 times. Different functional forms of isodars can be investigated by relating animal abundance and differences in habitat features between sub-blocks. We applied this method to abundance data of raccoons and striped skunks, two of the main hosts of rabies virus in North America. Habitat selection by raccoons and striped skunks depended on both conspecific abundance and the difference in landscape composition and structure between sub-blocks. When conspecific abundance was low, raccoons and striped skunks favored areas with relatively high proportions of forests and anthropogenic features, respectively. Under high conspecific abundance, however, both species preferred areas with rather large corn-forest edge densities and corn field proportions. Based on random sampling techniques, we provide a robust method that is applicable to a broad range of species, including medium- to large-sized mammals with high mobility. The method is sufficiently flexible to incorporate multiple environmental covariates that can reflect key requirements of the focal species. We thus illustrate how isodar theory can be used with wildlife surveys to assess density-dependent habitat selection over large
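
    A toy sketch of the resampling procedure described above: blocks are dropped at random over a survey grid, each is split into two adjacent sub-blocks, and counts plus habitat differences are recorded per half. The grid, covariate and block dimensions are stand-ins; only the 100 repetitions follow the abstract:

        import numpy as np

        rng = np.random.default_rng(42)
        counts = rng.poisson(2.0, size=(100, 100))   # stand-in survey counts
        forest = rng.random((100, 100))              # stand-in habitat covariate
        block_h, block_w = 10, 20                    # each block splits into two 10x10 halves

        pairs = []
        for _ in range(100):
            r = rng.integers(0, counts.shape[0] - block_h)
            c = rng.integers(0, counts.shape[1] - block_w)
            left = (slice(r, r + block_h), slice(c, c + block_w // 2))
            right = (slice(r, r + block_h), slice(c + block_w // 2, c + block_w))
            pairs.append((counts[left].sum(), counts[right].sum(),
                          forest[left].mean() - forest[right].mean()))

        n1, n2, dforest = np.array(pairs).T
        # An isodar-style relation can now be fit, e.g. n1 against n2 and dforest.
        print(np.corrcoef(n1, n2)[0, 1])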

  6. Selecting the appropriate pacing mode for patients with sick sinus syndrome: evidence from randomized clinical trials.

    Science.gov (United States)

    Albertsen, A E; Nielsen, J C

    2003-12-01

    Several observational studies have indicated that selection of pacing mode may be important for the clinical outcome in patients with symptomatic bradycardia, affecting the development of atrial fibrillation (AF), thromboembolism, congestive heart failure, mortality and quality of life. In this paper we present and discuss the most recent data from six randomized trials on mode selection in patients with sick sinus syndrome (SSS). In pacing mode selection, VVI(R) pacing is the least attractive solution, increasing the incidence of AF and, as compared with AAI(R) pacing, also the incidence of heart failure, thromboembolism and death. VVI(R) pacing should not be used as the primary pacing mode in patients with SSS who do not have chronic AF. AAIR pacing is superior to DDDR pacing, reducing AF and preserving left ventricular function. Single-site right ventricular pacing, in VVI(R) or DDD(R) mode, causes an abnormal ventricular activation and contraction (called ventricular desynchronization), which results in a reduced left ventricular function. Despite the risk of AV block, we consider AAIR pacing to be the optimal pacing mode for isolated SSS today, and an algorithm to select patients for AAIR pacing is suggested. Trials on new pacemaker algorithms minimizing right ventricular pacing, as well as trials testing alternative pacing sites and multisite pacing to reduce ventricular desynchronization, can be expected within the next years.

  7. Duration and speed of speech events: A selection of methods

    Directory of Open Access Journals (Sweden)

    Gibbon Dafydd

    2015-07-01

    Full Text Available The study of speech timing, i.e. the duration and speed or tempo of speech events, has increased in importance over the past twenty years, in particular in connection with increased demands for accuracy, intelligibility and naturalness in speech technology, with applications in language teaching and testing, and with the study of speech timing patterns in language typology. However, the methods used in such studies are very diverse, and so far there is no accessible overview of these methods. Since the field is too broad for us to provide an exhaustive account, we have made two choices: first, to provide a framework of paradigmatic (classificatory), syntagmatic (compositional) and functional (discourse-oriented) dimensions for duration analysis; and second, to provide worked examples of a selection of methods associated primarily with these three dimensions. Some of the methods which are covered are established state-of-the-art approaches (e.g. the paradigmatic Classification and Regression Tree (CART) analysis), others are discussed in a critical light (e.g. so-called ‘rhythm metrics’). A set of syntagmatic approaches applies to the tokenisation and tree parsing of duration hierarchies, based on speech annotations, and a functional approach describes duration distributions with sociolinguistic variables. Several of the methods are supported by a new web-based software tool for analysing annotated speech data, the Time Group Analyser.

  8. Geography and genography: prediction of continental origin using randomly selected single nucleotide polymorphisms

    Directory of Open Access Journals (Sweden)

    Ramoni Marco F

    2007-03-01

    Full Text Available Abstract Background Recent studies have shown that when individuals are grouped on the basis of genetic similarity, group membership corresponds closely to continental origin. There has been considerable debate about the implications of these findings in the context of larger debates about race and the extent of genetic variation between groups. Some have argued that clustering according to continental origin demonstrates the existence of significant genetic differences between groups and that these differences may have important implications for differences in health and disease. Others argue that clustering according to continental origin requires the use of large amounts of genetic data or specifically chosen markers and is indicative only of very subtle genetic differences that are unlikely to have biomedical significance. Results We used small numbers of randomly selected single nucleotide polymorphisms (SNPs) from the International HapMap Project to train naïve Bayes classifiers for prediction of ancestral continent of origin. Predictive accuracy was tested on two independent data sets. Genetically similar groups should be difficult to distinguish, especially if only a small number of genetic markers are used. The genetic differences between continentally defined groups are sufficiently large that one can accurately predict ancestral continent of origin using only a minute, randomly selected fraction of the genetic variation present in the human genome. Genotype data from only 50 random SNPs was sufficient to predict ancestral continent of origin in our primary test data set with an average accuracy of 95%. Genetic variations informative about ancestry were common and widely distributed throughout the genome. Conclusion Accurate characterization of ancestry is possible using small numbers of randomly selected SNPs. The results presented here show how investigators conducting genetic association studies can use small numbers of arbitrarily
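
    A hedged sketch of the experiment's logic: train a naive Bayes classifier on a small random subset of SNPs and measure held-out accuracy. Synthetic genotypes (0/1/2 minor-allele counts with group-specific allele frequencies) stand in for the HapMap data, which is not bundled here:

        import numpy as np
        from sklearn.naive_bayes import CategoricalNB
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(7)
        n_per_group, n_snps, n_groups = 300, 500, 3
        # Each group gets its own allele frequency per SNP, giving subtle structure.
        freqs = rng.beta(2, 2, size=(n_groups, n_snps))
        X = np.vstack([rng.binomial(2, freqs[g], size=(n_per_group, n_snps))
                       for g in range(n_groups)])
        y = np.repeat(np.arange(n_groups), n_per_group)

        snps = rng.choice(n_snps, size=50, replace=False)   # 50 random SNPs
        X_tr, X_te, y_tr, y_te = train_test_split(X[:, snps], y, random_state=0)
        clf = CategoricalNB(min_categories=3).fit(X_tr, y_tr)
        print(f"accuracy with 50 random SNPs: {clf.score(X_te, y_te):.2f}")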

  9. Joint random beam and spectrum selection for spectrum sharing systems with partial channel state information

    KAUST Repository

    Abdallah, Mohamed M.

    2013-11-01

    In this work, we develop a joint interference-aware random beam and spectrum selection scheme that provides enhanced performance for the secondary network under the condition that the interference observed at the primary receiver is below a predetermined acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a set of primary links, each composed of a single-antenna transmitter and a single-antenna receiver. The proposed scheme jointly selects a beam, among a set of power-optimized random beams, as well as the primary spectrum that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint. In particular, we consider the case where the interference level is described by a q-bit description of its magnitude, whereby we propose a technique to find the optimal quantizer thresholds in a mean square error (MSE) sense. © 2013 IEEE.

  10. Interference-aware random beam selection schemes for spectrum sharing systems

    KAUST Repository

    Abdallah, Mohamed

    2012-10-19

    Spectrum sharing systems have been recently introduced to alleviate the problem of spectrum scarcity by allowing secondary unlicensed networks to share the spectrum with primary licensed networks under acceptable interference levels to the primary users. In this work, we develop interference-aware random beam selection schemes that provide enhanced performance for the secondary network under the condition that the interference observed by the receivers of the primary network is below a predetermined/acceptable value. We consider a secondary link composed of a transmitter equipped with multiple antennas and a single-antenna receiver sharing the same spectrum with a primary link composed of a single-antenna transmitter and a single-antenna receiver. The proposed schemes select a beam, among a set of power-optimized random beams, that maximizes the signal-to-interference-plus-noise ratio (SINR) of the secondary link while satisfying the primary interference constraint for different levels of feedback information describing the interference level at the primary receiver. For the proposed schemes, we develop a statistical analysis for the SINR statistics as well as the capacity and bit error rate (BER) of the secondary link.
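
    A toy numpy sketch of the selection rule described in these two records: among unit-power random beams, discard those whose leakage at the primary receiver exceeds a cap, then pick the beam with the best secondary-link signal-to-noise ratio; the Rayleigh channels, interference cap and noise power are assumptions:

        import numpy as np

        rng = np.random.default_rng(3)
        n_antennas, n_beams = 4, 16
        h_sec = (rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas)) / np.sqrt(2)
        h_pri = (rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas)) / np.sqrt(2)

        beams = rng.normal(size=(n_beams, n_antennas)) + 1j * rng.normal(size=(n_beams, n_antennas))
        beams /= np.linalg.norm(beams, axis=1, keepdims=True)  # unit-power random beams

        noise_power, interference_cap = 0.1, 0.5
        signal = np.abs(beams @ h_sec.conj()) ** 2        # power toward secondary Rx
        interference = np.abs(beams @ h_pri.conj()) ** 2  # power leaked to primary Rx

        feasible = interference <= interference_cap       # primary constraint
        snr = signal / noise_power
        best = int(np.argmax(np.where(feasible, snr, -np.inf)))
        print(f"selected beam {best}: SNR={snr[best]:.2f}, leak={interference[best]:.2f}")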

  11. Selection of sterilization methods for planetary return missions

    Science.gov (United States)

    Trofimov, V. I.; Victorov, A.; Ivanov, M.

    1996-01-01

    Two tasks must be accomplished to provide planetary protection for Mars return missions: (1) sterilization of the scientific module to be landed on Mars and (2) reliable sterilization of all material returned to Earth, while ensuring the scientific integrity of martian samples. This paper examines similarity and differences between these two tasks, and includes a discussion of technological implementation conditions and the nature of terrestrial and hypothesized martian microflora. The feasibility of a number of chemical and physical (ultraviolet and ionizing radiation and heating) methods of sterilization for use on the ground and onboard are discussed and compared. A combination of different methods will probably be selected as the most appropriate for ensuring planetary protection on the return mission.

  12. Development of an optimal velocity selection method with velocity obstacle

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min Geuk; Oh, Jun Ho [KAIST, Daejeon (Korea, Republic of)

    2015-08-15

    The velocity obstacle (VO) method is one of the most well-known methods for local path planning, allowing consideration of dynamic obstacles and unexpected obstacles. Typical VO methods separate a velocity map into a collision area and a collision-free area. A robot can avoid collisions by selecting its velocity from within the collision-free area. However, if there are numerous obstacles near a robot, the robot will have very few velocity candidates. In this paper, a method for choosing optimal velocity components using the concepts of pass-time and vertical clearance is proposed for the efficient movement of a robot. The pass-time is the time required for a robot to pass by an obstacle. By generating a latticized available-velocity map for the robot, each velocity component can be evaluated using a cost function that considers the pass-time and other aspects. Based on the output of the cost function, even a velocity component that will cause a collision in the future can be chosen as the final velocity if the pass-time is sufficiently long.
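
    A simplified sketch of scoring a latticized velocity map with pass-time and clearance terms, in the spirit of the abstract; the single constant-velocity obstacle, the goal-preference term and the weights are illustrative assumptions:

        import numpy as np

        obstacle_pos = np.array([4.0, 0.5])    # obstacle position relative to robot
        obstacle_vel = np.array([-0.2, 0.0])
        radius_sum = 0.8                       # robot radius + obstacle radius
        v_pref = np.array([1.2, 0.0])          # preferred velocity toward the goal

        def cost(v, w_goal=1.0, w_time=0.3, w_clear=0.5):
            rel_v = v - obstacle_vel
            speed2 = rel_v @ rel_v
            if speed2 < 1e-9:
                return np.inf                  # robot never passes the obstacle
            t_star = max(0.0, -(obstacle_pos @ rel_v) / speed2)
            clearance = np.linalg.norm(obstacle_pos + t_star * rel_v)
            if clearance < radius_sum:
                return np.inf                  # velocity lies inside the VO: collision
            pass_time = 2.0 * t_star           # crude time needed to get past
            return (w_goal * np.linalg.norm(v - v_pref)
                    + w_time * pass_time + w_clear / clearance)

        candidates = [np.array([vx, vy])       # latticized available-velocity map
                      for vx in np.linspace(0.0, 1.5, 16)
                      for vy in np.linspace(-1.5, 1.5, 31)]
        best = min(candidates, key=cost)
        print("selected velocity:", best)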

  13. SELECTING A HOSPITAL MANAGEMENT SYSTEM BY A MULTICRITERIA METHOD

    Directory of Open Access Journals (Sweden)

    Vitorino, Sidney L.

    2016-12-01

    Full Text Available The objective of this report is to assess how the multi-criteria method Analytic Hierarchy Process (AHP) can help a hospital complex choose a suitable management system, known as Enterprise Resource Planning (ERP). The choice was very complex due to the novelty of the selection process and the conflicts generated between areas that did not share a single view of organizational needs, putting considerable pressure on the department responsible for implementing systems. To assist in this process, an expert consultant in decision-making and AHP was hired who, in the role of facilitator, helped define the criteria for system selection so that the choice could occur within a consensual process. We used a single case study, based on two in-depth interviews with the consultant and the project manager, and on documents generated by the consultancy and by the tool that supported the method. The results of this analysis showed that the method could effectively support the system acquisition process; however, despite knowledge of the employees' problems and senior management support, it was not used in subsequent decisions of the organization. We conclude that this method contributed to consensus in the procurement process and to the commitment and engagement of those involved.

  14. Development of modelling method selection tool for health services management: from problem structuring methods to modelling and simulation methods.

    Science.gov (United States)

    Jun, Gyuchan T; Morris, Zoe; Eldabi, Tillal; Harper, Paul; Naseer, Aisha; Patel, Brijesh; Clarkson, John P

    2011-05-19

    There is an increasing recognition that modelling and simulation can assist in the process of designing health care policies, strategies and operations. However, the current use is limited and answers to questions such as what methods to use and when remain somewhat underdeveloped. The aim of this study is to provide a mechanism for decision makers in health services planning and management to compare a broad range of modelling and simulation methods so that they can better select and use them or better commission relevant modelling and simulation work. This paper proposes a modelling and simulation method comparison and selection tool developed from a comprehensive literature review, the research team's extensive expertise and inputs from potential users. Twenty-eight different methods were identified, characterised by their relevance to different application areas, project life cycle stages, types of output and levels of insight, and four input resources required (time, money, knowledge and data). The characterisation is presented in matrix forms to allow quick comparison and selection. This paper also highlights significant knowledge gaps in the existing literature when assessing the applicability of particular approaches to health services management, where modelling and simulation skills are scarce let alone money and time. A modelling and simulation method comparison and selection tool is developed to assist with the selection of methods appropriate to supporting specific decision making processes. In particular it addresses the issue of which method is most appropriate to which specific health services management problem, what the user might expect to be obtained from the method, and what is required to use the method. In summary, we believe the tool adds value to the scarce existing literature on methods comparison and selection.

  15. A random plasma glucose method for screening for gestational diabetes.

    Directory of Open Access Journals (Sweden)

    Maheshwari J

    1989-01-01

    Full Text Available Low renal threshold for glucose during pregnancy renders glycosuria less specific for the diagnosis of gestational diabetes. Screening for gestational diabetes was done by utilising random plasma glucose (RPG). RPG was done at the first antenatal visit. In 12,623 patients who registered for antenatal care at the N.W.M. Hospital, 1371 patients had an RPG of more than 100 mg%. An oral glucose tolerance test was advised in these patients. The pick-up rate of gestational diabetes correlated with the RPG level. Thirty-six cases of gestational diabetes were picked up. The pick-up rate is significantly higher than that which would have been achieved utilising conventional screening criteria.

  16. Automatic segmentation of brain images: selection of region extraction methods

    Science.gov (United States)

    Gong, Leiguang; Kulikowski, Casimir A.; Mezrich, Reuben S.

    1991-07-01

    In automatically analyzing brain structures from a MR image, the choice of low level region extraction methods depends on the characteristics of both the target object and the surrounding anatomical structures in the image. The authors have experimented with local thresholding, global thresholding, and other techniques, using various types of MR images for extracting the major brain landmarks and different types of lesions. This paper describes specifically a local-binary thresholding method and a new global-multiple thresholding technique developed for MR image segmentation and analysis. The initial testing results on their segmentation performance are presented, followed by a comparative analysis of the two methods and their ability to extract different types of normal and abnormal brain structures: the brain matter itself, tumors, regions of edema surrounding lesions, multiple sclerosis lesions, and the ventricles of the brain. The analysis and experimental results show that the global multiple thresholding techniques are more than adequate for extracting regions that correspond to the major brain structures, while local binary thresholding is helpful for more accurate delineation of small lesions such as those produced by MS, and for the precise refinement of lesion boundaries. The detection of other landmarks, such as the interhemispheric fissure, may require other techniques, such as line-fitting. These experiments have led to the formulation of a set of generic computer-based rules for selecting the appropriate segmentation packages for particular types of problems, based on which further development of an innovative knowledge-based, goal-directed biomedical image analysis framework is being made. The system will carry out the selection automatically for a given specific analysis task.

  17. Method selection for mercury removal from hard coal

    Directory of Open Access Journals (Sweden)

    Dziok Tadeusz

    2017-01-01

    Full Text Available Mercury is commonly found in coal, and the coal utilization processes constitute one of the main sources of mercury emission to the environment. This issue is particularly important for Poland, because the Polish energy production sector is based on brown and hard coal. The forecasts show that this trend in energy production will continue in the coming years. At the time of the emission limits introduction, methods of reducing the mercury emission will have to be implemented in Poland. Mercury emission can be reduced as a result of using coal with a relatively low mercury content. In the case of the absence of such coals, the methods of mercury removal from coal can be implemented. The currently used and developing methods include the coal cleaning process (both coal washing and dry deshaling) as well as the thermal pretreatment of coal (mild pyrolysis). The effectiveness of these methods varies for different coals, which is caused by the diversity of coal origin, the various characteristics of coal and, especially, the various modes of mercury occurrence in coal. It should be mentioned that the coal cleaning process allows for the removal of mercury occurring in mineral matter, mainly in pyrite. The thermal pretreatment of coal allows for the removal of mercury occurring in organic matter as well as in the inorganic constituents characterized by a low temperature of mercury release. In this paper, the guidelines for the selection of a mercury removal method from hard coal were presented. The guidelines were developed taking into consideration: the effectiveness of mercury removal from coal in the process of coal cleaning and thermal pretreatment, the synergy effect resulting from the combination of these processes, the direction of coal utilization as well as the influence of these processes on coal properties.

  18. Breast cancer tumor classification using LASSO method selection approach

    Energy Technology Data Exchange (ETDEWEB)

    Celaya P, J. M.; Ortiz M, J. A.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Garza V, I.; Martinez F, M.; Ortiz R, J. M., E-mail: morvymm@yahoo.com.mx [Universidad Autonoma de Zacatecas, Av. Ramon Lopez Velarde 801, Col. Centro, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Breast cancer is one of the leading causes of death worldwide among women. Early tumor detection is key in reducing breast cancer deaths, and screening mammography is the most widely available method for early detection. Mammography is the most common and effective breast cancer screening test. However, the rate of positive findings is very low, making the radiologic interpretation monotonous and biased toward errors. In an attempt to alleviate radiological workload, this work presents a computer-aided diagnosis (CADx) method aimed to automatically classify tumor lesions into malign or benign as a means to a second opinion. The CADx method extracts image features and classifies the screening mammogram abnormality into one of two categories: subject at risk of having a malignant tumor (malign), and healthy subject (benign). In this study, 143 abnormal segmentations (57 malign and 86 benign) from the Breast Cancer Digital Repository (BCDR) public database were used to train and evaluate the CADx system. Percentile-rank (p-rank) was used to standardize the data. Using the LASSO feature selection methodology, the model achieved a leave-one-out cross-validation area under the receiver operating characteristic curve (AUC) of 0.950. The proposed method has the potential to rank abnormal lesions with a high probability of malignant findings, aiding in the detection of potential malign cases as a second opinion to the radiologist. (Author)
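
    A hedged sketch of the pipeline outlined above: percentile-rank (p-rank) standardization, L1-penalized (LASSO-style) logistic regression, and a leave-one-out AUC estimate; synthetic features stand in for the BCDR image features, which are not distributed with this text:

        import numpy as np
        from scipy.stats import rankdata
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import LeaveOneOut
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(143, 30))
        y = (X[:, :3].sum(axis=1) + rng.normal(scale=1.5, size=143) > 0).astype(int)

        X = rankdata(X, axis=0) / X.shape[0]   # percentile-rank (p-rank) scaling

        scores = np.zeros(len(y))
        for train, test in LeaveOneOut().split(X):
            # L1 penalty drives uninformative feature weights to exactly zero.
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
            clf.fit(X[train], y[train])
            scores[test] = clf.predict_proba(X[test])[:, 1]
        print(f"LOO AUC: {roc_auc_score(y, scores):.3f}")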

  19. A Lightweight Structure Redesign Method Based on Selective Laser Melting

    Directory of Open Access Journals (Sweden)

    Li Tang

    2016-11-01

    Full Text Available The purpose of this paper is to present a new design method of lightweight parts fabricated by selective laser melting (SLM) based on the “Skin-Frame” and to explore the influence of fabrication defects on SLM parts with different sizes. Some standard lattice parts were designed according to the Chinese GB/T 1452-2005 standard and manufactured by SLM. Then these samples were tested in an MTS Insight 30 compression testing machine to study the trends of the yield process with different structure sizes. A set of standard cylinder samples was also designed according to the Chinese GB/T 228-2010 standard. These samples, which were made of iron-nickel alloy (IN718), were also processed by SLM, and then tested in the universal material testing machine INSTRON 1346 to obtain their tensile strength. Furthermore, a lightweight redesign method was researched. Then some common parts such as a stopper and a connecting plate were redesigned using this method. These redesigned parts were fabricated and some application tests have already been performed. The compression testing results show that when the minimum structure size is larger than 1.5 mm, the mechanical characteristics will hardly be affected by process defects. The cylinder parts were fractured by the universal material testing machine at about 1069.6 MPa. These redesigned parts worked well in application tests, with both the weight and fabrication time of these parts reduced by more than 20%.

  20. Exploring Bayesian model selection methods for effective field theory expansions

    Science.gov (United States)

    Schaffner, Taylor; Yamauchi, Yukari; Furnstahl, Richard

    2017-09-01

    A fundamental understanding of the microscopic properties and interactions of nuclei has long evaded physicists due to the complex nature of quantum chromodynamics (QCD). One approach to modeling nuclear interactions is known as chiral effective field theory (EFT). Today, the method's greatest limitation lies in the approximation of interaction potentials and their corresponding uncertainties. Computing EFT expansion coefficients, known as Low-Energy Constants (LECs), from experimental data reduces to a problem of statistics and fitting. In the conventional approach, the fitting is done using frequentist methods that fail to evaluate the quality of the model itself (e.g., how many orders to use) in addition to its fit to the data. By utilizing Bayesian statistical methods for model selection, the model's quality can be taken into account, providing a more controlled and robust EFT expansion. My research involves probing different Bayesian model checking techniques to determine the most effective means for use with estimating the values of LECs. In particular, we are using model problems to explore the Bayesian calculation of an EFT expansion's evidence and an approximation to this value known as the WAIC (Widely Applicable Information Criterion). This work was supported in part by the National Science Foundation under Grant No. PHY-1306250.
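
    For concreteness, a small sketch of the WAIC mentioned above, computed from a matrix of pointwise log-likelihoods (posterior draws x data points) using the common convention WAIC = -2(lppd - p_waic); the toy normal model and its posterior draws are assumptions for illustration:

        import numpy as np
        from scipy.special import logsumexp
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        data = rng.normal(0.3, 1.0, size=50)
        # Posterior draws of the mean under a flat prior (known unit variance).
        mu_draws = rng.normal(data.mean(), 1.0 / np.sqrt(len(data)), size=4000)

        # log p(y_i | mu_s) for every posterior draw s and data point i.
        log_lik = norm.logpdf(data[None, :], loc=mu_draws[:, None], scale=1.0)

        lppd = np.sum(logsumexp(log_lik, axis=0) - np.log(log_lik.shape[0]))
        p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))  # effective parameter count
        waic = -2.0 * (lppd - p_waic)
        print(f"lppd={lppd:.1f}, p_waic={p_waic:.2f}, WAIC={waic:.1f}")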

  1. Does the Use of a Decision Aid Improve Decision Making in Prosthetic Heart Valve Selection? A Multicenter Randomized Trial

    NARCIS (Netherlands)

    Korteland, Nelleke M.; Ahmed, Yunus; Koolbergen, David R.; Brouwer, Marjan; de Heer, Frederiek; Kluin, Jolanda; Bruggemans, Eline F.; Klautz, Robert J. M.; Stiggelbout, Anne M.; Bucx, Jeroen J. J.; Roos-Hesselink, Jolien W.; Polak, Peter; Markou, Thanasie; van den Broek, Inge; Ligthart, Rene; Bogers, Ad J. J. C.; Takkenberg, Johanna J. M.

    2017-01-01

    A Dutch online patient decision aid to support prosthetic heart valve selection was recently developed. A multicenter randomized controlled trial was conducted to assess whether use of the patient decision aid results in optimization of shared decision making in prosthetic heart valve selection. In

  2. Material Design, Selection, and Manufacturing Methods for System Sustainment

    Energy Technology Data Exchange (ETDEWEB)

    David Sowder, Jim Lula, Curtis Marshall

    2010-02-18

    This paper describes a material selection and validation process proven to be successful for manufacturing high-reliability, long-life products. The National Secure Manufacturing Center business unit of the Kansas City Plant (herein called KCP) designs and manufactures complex electrical and mechanical components used in extreme environments. The material manufacturing heritage is founded in the systems design-to-manufacturing practices that support the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA). Material engineers at KCP work with the systems designers to recommend materials, develop test methods, perform analytical analysis of test data, define cradle-to-grave needs, and present the final selection and fielding. The KCP material engineers typically maintain cost control by utilizing commercial products when possible, but have the resources to develop and produce unique formulations as necessary. This approach is currently being used to mature technologies to manufacture materials with improved characteristics using nano-composite filler materials that will enhance system design and production. For some products the engineers plan and carry out science-based life-cycle material surveillance processes. Recent examples of the approach include refurbished manufacturing of the high voltage power supplies for cockpit displays in operational aircraft; dry film lubricant application to improve bearing life for guided munitions gyroscope gimbals; ceramic substrate design for electrical circuit manufacturing; and tailored polymeric materials for various systems. The following examples show evidence of KCP concurrent design-to-manufacturing techniques used to achieve system solutions that satisfy or exceed demanding requirements.

  3. Selection of disposal contractor by multi criteria decision making methods

    Directory of Open Access Journals (Sweden)

    Cenker Korkmazer

    2016-08-01

    Full Text Available Hazardous waste is a substance that threatens people and the environment in cases of improper storage, disposal or transport, owing to its concentration and its physical and chemical properties. Companies producing hazardous waste as a result of their activities mostly do not have disposal facilities of their own. In addition, they do not pay enough attention to determining the right contractor as a disposal facility. On the other hand, there are various qualitative and quantitative criteria affecting the selection of the contractor, and these criteria conflict with each other. The aim of the performed study is to assist one of these companies producing hazardous waste in selecting the best contractor to eliminate the hazardous waste in an economical and harmless way. In the study, contractor weights (in percent) are calculated using the Analytic Network Process (ANP), one of the multi-criteria decision making (MCDM) methods widely used in the literature, which considers both qualitative and quantitative criteria. In the next step, with the help of a mathematical model, the type of hazardous waste to be assigned to each contractor is identified. This integrated approach can be used as a guide for similar firms.

  4. Selective outcome reporting and sponsorship in randomized controlled trials in IVF and ICSI.

    Science.gov (United States)

    Braakhekke, M; Scholten, I; Mol, F; Limpens, J; Mol, B W; van der Veen, F

    2017-10-01

    Are randomized controlled trials (RCTs) on IVF and ICSI subject to selective outcome reporting and is this related to sponsorship? There are inconsistencies, independent from sponsorship, in the reporting of primary outcome measures in the majority of IVF and ICSI trials, indicating selective outcome reporting. RCTs are subject to bias at various levels. Of these biases, selective outcome reporting is particularly relevant to IVF and ICSI trials since there is a wide variety of outcome measures to choose from. An established cause of reporting bias is sponsorship. It is, at present, unknown whether RCTs in IVF/ICSI are subject to selective outcome reporting and whether this is related with sponsorship. We systematically searched RCTs on IVF and ICSI published between January 2009 and March 2016 in MEDLINE, EMBASE, the Cochrane Central Register of Controlled Trials and the publisher subset of PubMed. We analysed 415 RCTs. Per included RCT, we extracted data on impact factor of the journal, sample size, power calculation, and trial registry and thereafter data on primary outcome measure, the direction of trial results and sponsorship. Of the 415 identified RCTs, 235 were excluded from our primary analysis, because the sponsorship was not reported. Of the 180 RCTs included in our analysis, 7 trials did not report on any primary outcome measure and 107 of the remaining 173 trials (62%) reported on surrogate primary outcome measures. Of the 114 registered trials, 21 trials (18%) provided primary outcomes in their manuscript that were different from those in the trial registry. This indicates selective outcome reporting. We found no association between selective outcome reporting and sponsorship. We ran additional analyses to include the trials that had not reported sponsorship and found no outcomes that differed from our primary analysis. Since the majority of the trials did not report on sponsorship, there is a risk of sampling bias. IVF and ICSI trials are subject to

  5. Implications of structural genomics target selection strategies: Pfam5000, whole genome, and random approaches

    Energy Technology Data Exchange (ETDEWEB)

    Chandonia, John-Marc; Brenner, Steven E.

    2004-07-14

    The structural genomics project is an international effort to determine the three-dimensional shapes of all important biological macromolecules, with a primary focus on proteins. Target proteins should be selected according to a strategy which is medically and biologically relevant, of good value, and tractable. As an option to consider, we present the Pfam5000 strategy, which involves selecting the 5000 most important families from the Pfam database as sources for targets. We compare the Pfam5000 strategy to several other proposed strategies that would require similar numbers of targets. These include complete solution of several small to moderately sized bacterial proteomes, partial coverage of the human proteome, and random selection of approximately 5000 targets from sequenced genomes. We measure the impact that successful implementation of these strategies would have upon structural interpretation of the proteins in Swiss-Prot, TrEMBL, and 131 complete proteomes (including 10 of eukaryotes) from the Proteome Analysis database at EBI. Solving the structures of proteins from the 5000 largest Pfam families would allow accurate fold assignment for approximately 68 percent of all prokaryotic proteins (covering 59 percent of residues) and 61 percent of eukaryotic proteins (40 percent of residues). More fine-grained coverage, which would allow accurate modeling of these proteins, would require an order of magnitude more targets. The Pfam5000 strategy may be modified in several ways, for example to focus on larger families, bacterial sequences, or eukaryotic sequences; as long as secondary consideration is given to large families within Pfam, coverage results vary only slightly. In contrast, focusing structural genomics on a single tractable genome would have only a limited impact in structural knowledge of other proteomes: a significant fraction (about 30-40 percent of the proteins, and 40-60 percent of the residues) of each proteome is classified in small

  6. Clinical outcome of intracytoplasmic injection of spermatozoa morphologically selected under high magnification: a prospective randomized study.

    Science.gov (United States)

    Balaban, Basak; Yakin, Kayhan; Alatas, Cengiz; Oktem, Ozgur; Isiklar, Aycan; Urman, Bulent

    2011-05-01

    Recent evidence shows that the selection of spermatozoa based on the analysis of morphology under high magnification (×6000) may have a positive impact on embryo development in cases with severe male factor infertility and/or previous implantation failures. The objective of this prospective randomized study was to compare the clinical outcome of 87 intracytoplasmic morphologically selected sperm injection (IMSI) cycles with 81 conventional intracytoplasmic sperm injection (ICSI) cycles in an unselected infertile population. IMSI did not provide a significant improvement in the clinical outcome compared with ICSI although there were trends for higher implantation (28.9% versus 19.5%), clinical pregnancy (54.0% versus 44.4%) and live birth rates (43.7% versus 38.3%) in the IMSI group. However, severe male factor patients benefited from the IMSI procedure as shown by significantly higher implantation rates compared with their counterparts in the ICSI group (29.6% versus 15.2%, P=0.01). These results suggest that IMSI may improve IVF success rates in a selected group of patients with male factor infertility. New technological developments enable the real time examination of motile spermatozoa with an inverted light microscope equipped with high-power differential interference contrast optics, enhanced by digital imaging. High magnification (over ×6000) provides the identification of spermatozoa with a normal nucleus and nuclear content. Intracytoplasmic injection of spermatozoa selected according to fine nuclear morphology under high magnification may improve the clinical outcome in cases with severe male factor infertility. Copyright © 2010 Reproductive Healthcare Ltd. Published by Elsevier Ltd. All rights reserved.

  7. A Comparison of Dietary Habits between Recreational Runners and a Randomly Selected Adult Population in Slovenia.

    Science.gov (United States)

    Škof, Branko; Rotovnik Kozjek, Nada

    2015-09-01

    The aim of the study was to compare the dietary habits of recreational runners with those of a random sample of the general population. We also wanted to determine the influence of gender, age and sports performance of recreational runners on their basic diet and compliance with recommendations in sports nutrition. The study population consisted of 1,212 adult Slovenian recreational runners and 774 randomly selected residents of Slovenia between the ages of 18 and 65 years. The data on the dietary habits of our subjects was gathered by means of two questionnaires. The following parameters were evaluated: the type of diet, a food pattern, and the frequency of consumption of individual food groups, the use of dietary supplements, fluid intake, and alcohol consumption. Recreational runners had better compliance with recommendations for healthy nutrition than the general population. This pattern increased with the runner's age and performance level. Compared to male runners, female runners ate more regularly and had a more frequent consumption of food groups associated with a healthy diet (fruit, vegetables, whole grain foods, and low-fat dairy products). The consumption of simple sugars and the use of nutritional supplements by well-trained runners were inadequate compared with the values recommended for physically active individuals. Recreational runners are an exemplary population group that actively seeks to adopt a healthier lifestyle.

  8. Inversion-based data-driven time-space domain random noise attenuation method

    Science.gov (United States)

    Zhao, Yu-Min; Li, Guo-Fa; Wang, Wei; Zhou, Zhen-Xiao; Tang, Bo-Wen; Zhang, Wen-Bo

    2017-12-01

    Conventional time-space domain and frequency-space domain prediction filtering methods assume that seismic data consists of two parts, signal and random noise. That is, the so-called additive noise model. However, when estimating random noise, it is assumed that random noise can be predicted from the seismic data by convolving with a prediction error filter. That is, the source-noise model. Model inconsistencies, before and after denoising, compromise the noise attenuation and signal-preservation performances of prediction filtering methods. Therefore, this study presents an inversion-based time-space domain random noise attenuation method to overcome the model inconsistencies. In this method, a prediction error filter (PEF) is first estimated from the seismic data; the filter characterizes the predictability of the seismic data and adaptively describes its spatial structure. After calculating the PEF, it can be applied as a regularization constraint in the inversion process for the seismic signal from noisy data. Unlike conventional random noise attenuation methods, the proposed method solves a seismic data inversion problem using a regularization constraint; this overcomes the model inconsistency of the prediction filtering method. The proposed method was tested on both synthetic and real seismic data, and results from the prediction filtering method and the proposed method are compared. The testing demonstrated that the proposed method suppresses noise effectively and provides better signal-preservation performance.
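
    A 1D toy sketch of the inversion idea: estimate a prediction error filter (PEF) from the noisy trace by least squares, then recover the signal by solving min ||m - d||^2 + lam ||P m||^2, where P applies the PEF. The filter length, lam and the synthetic trace are assumptions, and real implementations work on 2D/3D time-space windows:

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.linspace(0, 1, 400)
        signal = np.sin(2 * np.pi * 12 * t) * np.exp(-3 * t)
        d = signal + 0.3 * rng.normal(size=t.size)          # noisy trace

        # Least-squares PEF: predict d[n] from its previous p samples.
        p = 10
        rows = np.array([d[i - p:i] for i in range(p, d.size)])
        a = np.linalg.lstsq(rows, d[p:], rcond=None)[0]     # prediction coefficients
        pef = np.concatenate(([1.0], -a[::-1]))             # 1 minus prediction part

        # Build P as a banded convolution matrix, then solve the normal equations.
        n = d.size
        P = np.zeros((n - p, n))
        for i in range(n - p):
            P[i, i:i + p + 1] = pef[::-1]
        lam = 4.0
        m = np.linalg.solve(np.eye(n) + lam * P.T @ P, d)   # regularized estimate
        print(f"noise std before: {np.std(d - signal):.3f}, after: {np.std(m - signal):.3f}")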

  9. Application of Delphi method in site selection of desalination plants

    Directory of Open Access Journals (Sweden)

    M. Sepehr

    2017-12-01

    Full Text Available Given the reduced freshwater supplies across the world, seawater desalination is one of the appropriate methods available for producing freshwater. Selecting an optimal location is crucial in the installation of these plants owing to the environmental problems they cause. The present study was conducted to identify optimal locations for installing desalination plants in the coastal areas of southern Iran (Hormozgan Province) with the application of the Delphi method. To implement this technique and identify, screen and prioritize effective criteria and sub-criteria, ten experts were surveyed through questionnaires, and eight criteria and 18 sub-criteria were identified. All these sub-criteria were evaluated and classified in ArcGIS into five classes as input layers. The maps were then integrated based on the modulation importance coefficient and the identified priorities using a linear Delphi model, and the final map was reclassified into five categories. Environmentally sensitive areas and seawater quality were, respectively, the criterion and sub-criterion that received the highest importance. After combining the layers and obtaining the final map, 63 locations were identified for installing desalination plants in the coastal areas on the Persian Gulf and Oman Sea in Hormozgan Province. Of these, 27 locations were highly important and had optimal environmental conditions for establishing desalination plants. Of the 27 locations, six were located in the coastal area of the Oman Sea, one in the coastal area of the Strait of Hormuz and 20 others in the coastal area of the Persian Gulf.

  10. A numerical study of rays in random media. [Monte Carlo method simulation

    Science.gov (United States)

    Youakim, M. Y.; Liu, C. H.; Yeh, K. C.

    1973-01-01

    Statistics of electromagnetic rays in a random medium are studied numerically by the Monte Carlo method. Two dimensional random surfaces with prescribed correlation functions are used to simulate the random media. Rays are then traced in these sample media. Statistics of the ray properties such as the ray positions and directions are computed. Histograms showing the distributions of the ray positions and directions at different points along the ray path as well as at given points in space are given. The numerical experiment is repeated for different cases corresponding to weakly and strongly random media with isotropic and anisotropic irregularities. Results are compared with those derived from theoretical investigations whenever possible.
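
    As a rough illustration of the experiment's ingredients, the sketch below generates random media with a prescribed Gaussian correlation by FFT-filtering white noise and traces paraxial rays whose directions are deflected by the transverse index gradient, then reports the spread of ray directions. Grid size, correlation length and fluctuation strength are assumptions, and the 1-D screens stand in for the paper's two-dimensional random surfaces.

```python
# Toy Monte Carlo ray experiment: correlated random phase screens plus
# paraxial ray deflection. All parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
nx, L, corr_len, sigma_n = 1024, 100.0, 2.0, 1e-3
x = np.linspace(0, L, nx)
k = 2 * np.pi * np.fft.fftfreq(nx, d=L / nx)

def random_screen():
    """1-D random field with a Gaussian correlation function."""
    spec = np.exp(-(k * corr_len) ** 2 / 4)    # sqrt of a Gaussian spectrum
    f = np.fft.ifft(spec * np.fft.fft(rng.standard_normal(nx))).real
    return sigma_n * f / f.std()

n_rays, n_screens, dz = 2000, 50, 1.0
pos = rng.uniform(0, L, n_rays)    # transverse ray positions
ang = np.zeros(n_rays)             # paraxial ray angles

for _ in range(n_screens):
    grad = np.gradient(random_screen(), x)
    ang += np.interp(pos, x, grad) * dz   # deflection by the index gradient
    pos = (pos + ang * dz) % L            # advance the ray; periodic medium

print("angle std after 50 screens:", ang.std())
```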

  11. [Acupuncture combined with auricle cutting method for blood stasis-type psoriasis: a randomized controlled trial].

    Science.gov (United States)

    Li, Ting; Liu, Zhi-Yan; Yang, Huan; Ma, Zhong; Qu, Hong-Yan; Li, Yu; Huang, Hai-Bin; Liu, Juan; Li, Jie; Wu, Ji-Xin

    2014-05-01

    To verify the clinical efficacy of acupuncture combined with the auricle cutting method for treatment of blood stasis-type psoriasis. Fifty-six cases of blood stasis-type psoriasis were randomly divided into a combined therapy group, an auricle cutting group, an acupuncture group and a control group, 14 cases in each one. Based on regular treatment with TCM decoction in all four groups, the combined therapy group was treated with acupuncture and the auricle cutting method, the auricle cutting group was treated with sham acupuncture and auricle cutting, the acupuncture group was treated with acupuncture and sham auricle cutting, and the control group was treated with sham acupuncture and sham auricle cutting. The acupuncture was applied at Dazhui (GV 14), Feishu (BL 13), Ganshu (BL 18) and Geshu (BL 17), etc., and manipulated with routine technique; in the sham acupuncture, the needle was inserted into the dermis layer so that the needles could be swung without dropping out. In the auricle cutting, erbeixin (P1) of the unilateral auricle was selected and cut by Chan needle to perform bloodletting; in the sham auricle cutting, the neighborhood approximately 0.5 cm next to erbeixin (P1) of the auricle was selected as the cutting area. The treatment was given once a day, seven days as one treatment session, for two sessions in total. The psoriasis area and severity index (PASI) before and after treatment was observed and the efficacy of each group was compared. The effective rate was 57.1% (8/14) in the combined therapy group, which was superior to 14.3% (2/14) in the auricle cutting group, 7.1% (1/14) in the acupuncture group and 0.0% (0/14) in the control group (all P<0.05). Blood stasis-type psoriasis can be effectively treated by acupuncture, the auricle cutting method and TCM decoction, among which the interaction effect of auricle cutting and acupuncture combined with TCM decoction is the most significant.

  12. Designing randomized-controlled trials to improve head-louse treatment: systematic review using a vignette-based method.

    Science.gov (United States)

    Do-Pham, Giao; Le Cleach, Laurence; Giraudeau, Bruno; Maruani, Annabel; Chosidow, Olivier; Ravaud, Philippe

    2014-03-01

    Head-louse infestation remains a public health problem. Despite published randomized-controlled trials, no consensus-based clinical practice guidelines for its management have emerged, because of the heterogeneity of trial methodologies. Our study was undertaken to find an optimal trial framework: minimizing the risk of bias while taking feasibility into account. To do so, we used the vignette-based method. A systematic review first identified trials on head-louse infestation; 49 were selected and their methodological constraints assessed. Methodological features were extracted and combined by arborescence to generate a broad spectrum of potential designs, called vignettes, yielding 357 vignettes. A panel of 48 experts then rated one-on-one comparisons of those vignettes to obtain a ranking of the designs. Methodological items retained for vignette generation were income level of the population, types of treatments compared, randomization unit, blinding, treatment-administration site, diagnosis method and criteria, and primary outcome measure. The expert panel selected vignettes with cluster randomization, centralized treatment administration, and blinding of the outcome assessor. The vignette method identified optimal designs to standardize future head-louse treatment trials, thereby obtaining valid conclusions and comparable data from future trials, and appears to be a reliable way to generate evidence-based guidelines.

  13. Sequence-Based Prediction of RNA-Binding Proteins Using Random Forest with Minimum Redundancy Maximum Relevance Feature Selection

    Directory of Open Access Journals (Sweden)

    Xin Ma

    2015-01-01

    Full Text Available The prediction of RNA-binding proteins is one of the most challenging problems in computational biology. Although some studies have investigated this problem, the accuracy of prediction is still not sufficient. In this study, a highly accurate method was developed to predict RNA-binding proteins from amino acid sequences using random forests with the minimum redundancy maximum relevance (mRMR) method, followed by incremental feature selection (IFS). We incorporated conjoint triad features and three novel features: binding propensity (BP), nonbinding propensity (NBP), and evolutionary information combined with physicochemical properties (EIPP). The results showed that these novel features play important roles in improving the performance of the predictor. Using the mRMR-IFS method, our predictor achieved the best performance (86.62% accuracy and 0.737 Matthews correlation coefficient). The high prediction accuracy and successful prediction performance suggest that our method can be a useful approach to identify RNA-binding proteins from sequence information.
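
    The two-stage selection logic can be sketched as follows: rank features by an mRMR-style relevance-minus-redundancy score, then run IFS by growing the feature set along that ranking and keeping the subset with the best cross-validated random forest accuracy. The sketch uses synthetic data and simplified scoring (mutual information for relevance, absolute correlation for redundancy); it is not the authors' exact protocol.

```python
# Hedged sketch of mRMR ranking + incremental feature selection (IFS) with
# a random forest. Dataset and all settings are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)
relevance = mutual_info_classif(X, y, random_state=0)

def mrmr_rank(X, relevance, n_select):
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_select:
        best, best_score = None, -np.inf
        for j in remaining:
            # redundancy: mean absolute correlation with already-chosen features
            red = (np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
                   if selected else 0.0)
            if relevance[j] - red > best_score:
                best, best_score = j, relevance[j] - red
        selected.append(best)
        remaining.remove(best)
    return selected

ranking = mrmr_rank(X, relevance, 20)
# IFS: grow the feature set along the mRMR ranking, keep the best CV score
best_k, best_acc = 1, 0.0
for k in range(1, len(ranking) + 1):
    acc = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                          X[:, ranking[:k]], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc
print(f"best subset size {best_k}, CV accuracy {best_acc:.3f}")
```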

  14. Comparison of multimedia system and conventional method in patients’ selecting prosthetic treatment

    Directory of Open Access Journals (Sweden)

    Baghai R

    2010-12-01

    Full Text Available "nBackground and Aims: Selecting an appropriate treatment plan is one of the most critical aspects of dental treatments. The purpose of this study was to compare multimedia system and conventional method in patients' selecting prosthetic treatment and the time consumed."nMaterials and Methods: Ninety patients were randomly divided into three groups. Patients in group A, once were instructed using the conventional method of dental office and once multimedia system and time was measured in seconds from the beginning of the instruction till the patient had came to decision. The patients were asked about the satisfaction of the method used for them. In group B, patients were only instructed using the conventional method, whereas they were only exposed to soft ware in group C. The data were analyzed with Paired-T-test"n(in group A and T-test and Mann-Whitney test (in groups B and C."nResult: There was a significant difference between multimedia system and conventional method in group A and also between groups B and C (P<0.001. In group A and between groups B and C, patient's satisfaction about multimedia system was better. However, in comparison between groups B and C, multimedia system did not have a significant effect in treatment selection score (P=0.08."nConclusion: Using multimedia system is recommended due to its high ability in giving answers to a large number of patient's questions as well as in terms of marketing.

  15. Modeling and Analysis of Supplier Selection Method Using ...

    African Journals Online (AJOL)

    This research paper deals with the development of supplier selection methodology for an organization. In today's dynamic environment supplier selection decision presents organizations with a complex scenario and the age old tradition of selecting suppliers based on solitary criteria, mainly price and the practice of ...

  16. Optimization methods for selecting founder individuals for captive breeding or reintroduction of endangered species.

    Science.gov (United States)

    Miller, Webb; Wright, Stephen J; Zhang, Yu; Schuster, Stephan C; Hayes, Vanessa M

    2010-01-01

    Methods from genetics and genomics can be employed to help save endangered species. One potential use is to provide a rational strategy for selecting a population of founders for a captive breeding program. The hope is to capture most of the available genetic diversity that remains in the wild population, to provide a safe haven where representatives of the species can be bred, and eventually to release the progeny back into the wild. However, the founders are often selected based on a random-sampling strategy whose validity is based on unrealistic assumptions. Here we outline an approach that starts by using cutting-edge genome sequencing and genotyping technologies to objectively assess the available genetic diversity. We show how combinatorial optimization methods can be applied to these data to guide the selection of the founder population. In particular, we develop a mixed-integer linear programming technique that identifies a set of animals whose genetic profile is as close as possible to specified abundances of alleles (i.e., genetic variants), subject to constraints on the number of founders and their genders and ages.
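
    A minimal version of such a mixed-integer program can be written down directly: choose a fixed number of founders so that the pooled allele frequencies match target abundances as closely as possible in L1 distance. The sketch below, using the PuLP modeling library with made-up genotype data, is an illustrative reduction of the idea, not the authors' formulation.

```python
# Hedged MILP sketch of founder selection: minimize the L1 distance between
# founder allele frequencies and target frequencies, for a fixed founder
# count. Genotypes, locus count and sizes are made-up illustrations.
import numpy as np
import pulp

rng = np.random.default_rng(0)
n_animals, n_loci, n_founders = 30, 12, 8
G = rng.integers(0, 3, size=(n_animals, n_loci))   # diploid allele counts 0/1/2
target = G.mean(axis=0) / 2                        # wild-population frequencies

prob = pulp.LpProblem("founder_selection", pulp.LpMinimize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_animals)]
dev = [pulp.LpVariable(f"d{l}", lowBound=0) for l in range(n_loci)]

prob += pulp.lpSum(dev)                            # total frequency deviation
prob += pulp.lpSum(x) == n_founders
for l in range(n_loci):
    freq = pulp.lpSum(int(G[i, l]) * x[i] for i in range(n_animals)) / (2 * n_founders)
    prob += dev[l] >= freq - target[l]             # |freq - target| linearized
    prob += dev[l] >= target[l] - freq

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [i for i in range(n_animals) if x[i].value() > 0.5]
print("chosen founders:", chosen)
```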

  17. A DYNAMIC FEATURE SELECTION METHOD FOR DOCUMENT RANKING WITH RELEVANCE FEEDBACK APPROACH

    Directory of Open Access Journals (Sweden)

    K. Latha

    2010-07-01

    Full Text Available Ranking search results is essential for information retrieval and Web search. Search engines need not only to return highly relevant results, but also to be fast to satisfy users. As a result, not all available features can be used for ranking, and in fact only a small percentage of these features can be used. Thus, it is crucial to have a feature selection mechanism that can find a subset of features that both meets latency requirements and achieves high relevance. In this paper we describe a 0/1 knapsack procedure for automatically selecting features to use within a generalization model for document ranking. We propose an approach to relevance feedback using the Expectation Maximization method and evaluate the algorithm on the TREC collection for describing classes of feedback textual information retrieval features. Experimental results, evaluated on the standard TREC-9 part of the OHSUMED collection, show that our feature selection algorithm produces models that are either significantly more effective than, or equally effective as, models such as the Markov Random Field model, the Correlation Coefficient and the Count Difference method.
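
    The knapsack view is: each feature has a relevance value and a latency cost, and one wants the most relevant subset that fits the latency budget. A classic dynamic-programming solution is sketched below with made-up values, costs and budget.

```python
# Minimal 0/1 knapsack for feature selection: maximize total relevance under
# a latency budget. Values, costs and budget are illustrative assumptions.
def knapsack_features(values, costs, budget):
    n = len(values)
    # best[c] = (best total value, chosen feature set) with total cost <= c
    best = [(0.0, [])] * (budget + 1)
    for i in range(n):
        new = best[:]
        for c in range(costs[i], budget + 1):
            cand = best[c - costs[i]][0] + values[i]
            if cand > new[c][0]:
                new[c] = (cand, best[c - costs[i]][1] + [i])
        best = new
    return best[budget]

values = [3.1, 2.0, 4.5, 1.2, 2.7]   # e.g. per-feature relevance scores
costs  = [4, 2, 6, 1, 3]             # e.g. per-feature latency units
total, chosen = knapsack_features(values, costs, budget=8)
print(f"relevance {total}, features {chosen}")
```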

  18. Control group selection in critical care randomized controlled trials evaluating interventional strategies: An ethical assessment.

    Science.gov (United States)

    Silverman, Henry J; Miller, Franklin G

    2004-03-01

    Ethical concern has been raised with critical care randomized controlled trials in which the standard of care reflects a broad range of clinical practices. Commentators have argued that trials without an unrestricted control group, in which standard practices are implemented at the discretion of the attending physician, lack the ability to redefine the standard of care and might expose subjects to excessive harms due to an inability to stop early. To develop a framework for analyzing control group selection for critical care trials. Ethical analysis. A key ethical variable in trial design is the extent to which the control group adequately reflects standard care practices. Such a control group might incorporate either the "unrestricted" practices of physicians or a protocol that specifies and restricts the parameters of standard practices. Control group selection should be determined with respect to the following ethical objectives of trial design: 1) clinical value, 2) scientific validity, 3) efficiency and feasibility, and 4) protection of human subjects. Because these objectives may conflict, control group selection will involve trade-offs and compromises. Trials using a protocolized rather than an unrestricted standard care control group will likely have enhanced validity. However, if the protocolized control group lacks representativeness of standard care practices, then trials that use such groups will offer less clinical value and could provide less assurance of protecting subjects compared with trials that use unrestricted control groups. For trials evaluating contrasting strategies that do not adequately represent standard practices, use of a third group that is more representative of standard practices will enhance clinical value and increase the ability to stop early if needed to protect subjects. These advantages might come at the expense of efficiency and feasibility. Weighing and balancing the competing ethical objectives of trial design should be central to control group selection.

  19. A METHOD FOR SELECTING SOFTWARE FOR DYNAMIC EVENT ANALYSIS I: PROBLEM SELECTION

    Energy Technology Data Exchange (ETDEWEB)

    J. M. Lacy; S. R. Novascone; W. D. Richins; T. K. Larson

    2007-08-01

    New nuclear power reactor designs will require resistance to a variety of possible malevolent attacks, as well as traditional dynamic accident scenarios. The design/analysis team may be faced with a broad range of phenomena including air and ground blasts, high-velocity penetrators or shaped charges, and vehicle or aircraft impacts. With a host of software tools available to address these high-energy events, the analysis team must evaluate and select the software most appropriate for their particular set of problems. The accuracy of the selected software should then be validated with respect to the phenomena governing the interaction of the threat and structure. In this paper, we present a method for systematically comparing current high-energy physics codes for specific applications in new reactor design. Several codes are available for the study of blast, impact, and other shock phenomena. Historically, these packages were developed to study specific phenomena such as explosives performance, penetrator/target interaction, or accidental impacts. As developers generalize the capabilities of their software, legacy biases and assumptions can remain that could affect the applicability of the code to other processes and phenomena. R&D institutions generally adopt one or two software packages and use them almost exclusively, performing benchmarks on a single-problem basis. At the Idaho National Laboratory (INL), new comparative information was desired to permit researchers to select the best code for a particular application by matching its characteristics to the physics, materials, and rate scale (or scales) representing the problem at hand. A study was undertaken to investigate the comparative characteristics of a group of shock and high-strain rate physics codes including ABAQUS, LS-DYNA, CTH, ALEGRA, ALE-3D, and RADIOSS. A series of benchmark problems were identified to exercise the features and capabilities of the subject software. To be useful, benchmark problems

  20. [An optimal selection method of samples of calibration set and validation set for spectral multivariate analysis].

    Science.gov (United States)

    Liu, Wei; Zhao, Zhong; Yuan, Hong-Fu; Song, Chun-Feng; Li, Xiao-Yu

    2014-04-01

    The side effects in spectral multivariate modeling caused by the uneven distribution of sample numbers across the regions of the calibration set and validation set were analyzed, and the "average" phenomenon, whereby samples with small property values are predicted with larger values and those with large property values are predicted with smaller values in spectral multivariate calibration, is shown in this paper. Considering the distribution features of the spectral space and property space simultaneously, a new method of optimal sample selection named Rank-KS is proposed. Rank-KS aims at improving the uniformity of the calibration set and validation set. Y-space was divided into regions uniformly; samples of the calibration set and validation set were extracted by the Kennard-Stone (KS) and Random-Select (RS) algorithms, respectively, in every region, so the calibration set was distributed evenly and was strongly representative. The proposed method was applied to the prediction of dimethylcarbonate (DMC) content in gasoline with infrared spectra and of dimethylsulfoxide in its aqueous solution with near-infrared spectra. The "average" phenomenon shown in the prediction of the multiple linear regression (MLR) model of dimethylsulfoxide was weakened effectively by Rank-KS. For comparison, the MLR models and PLS1 models of DMC and dimethylsulfoxide were constructed using the RS, KS, Rank-Select, sample set partitioning based on joint X- and Y-blocks (SPXY) and proposed Rank-KS algorithms to select the calibration set, respectively. Application results verified that the best prediction was achieved by using Rank-KS. Especially for sample sets distributed with more samples in the middle and fewer on the boundaries, or none locally, the prediction of models constructed from a calibration set selected using Rank-KS can be improved markedly.
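
    The selection logic can be sketched as follows: divide the y-range into uniform regions, and inside each region pick calibration samples with the Kennard-Stone algorithm and leave the rest for validation. The region count, split ratio and data below are illustrative assumptions, not the paper's settings.

```python
# Hedged sketch of the Rank-KS idea: uniform y-regions, Kennard-Stone for
# calibration samples inside each region, remainder for validation.
import numpy as np

def kennard_stone(X, k):
    """Classic KS: start from the two most distant points, then repeatedly
    add the sample whose minimal distance to the chosen set is largest."""
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    chosen = list(np.unravel_index(D.argmax(), D.shape))
    while len(chosen) < k:
        rest = [i for i in range(len(X)) if i not in chosen]
        chosen.append(max(rest, key=lambda i: min(D[i, j] for j in chosen)))
    return chosen

def rank_ks(X, y, n_regions=5, cal_frac=0.7, seed=0):
    rng = np.random.default_rng(seed)
    edges = np.linspace(y.min(), y.max(), n_regions + 1)
    cal, val = [], []
    for r in range(n_regions):
        upper = y <= edges[r + 1] if r == n_regions - 1 else y < edges[r + 1]
        idx = np.where((y >= edges[r]) & upper)[0]
        if len(idx) < 3:
            cal.extend(idx)            # too few samples: keep all for calibration
            continue
        n_cal = max(2, int(cal_frac * len(idx)))
        picked = [idx[i] for i in kennard_stone(X[idx], n_cal)]
        cal.extend(picked)
        val.extend(rng.permutation([i for i in idx if i not in picked]))
    return sorted(cal), sorted(val)

X = np.random.default_rng(1).normal(size=(60, 8))
y = X[:, 0] * 2 + np.random.default_rng(2).normal(scale=0.3, size=60)
cal, val = rank_ks(X, y)
print(len(cal), "calibration,", len(val), "validation samples")
```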

  1. GAIN RATIO BASED FEATURE SELECTION METHOD FOR PRIVACY PRESERVATION

    Directory of Open Access Journals (Sweden)

    R. Praveena Priyadarsini

    2011-04-01

    Full Text Available Privacy preservation is a step in data mining that tries to safeguard sensitive information from unsanctioned disclosure, hence protecting individual data records and their privacy. There are various privacy preservation techniques, such as k-anonymity, l-diversity, t-closeness and data perturbation. In this paper the k-anonymity privacy protection technique is applied to high-dimensional datasets such as Adult and Census. Since both datasets are high-dimensional, a feature subset selection method, Gain Ratio, is applied: the attributes of the datasets are ranked and low-ranking attributes are filtered out to form new reduced data subsets. The k-anonymization privacy preservation technique is then applied on the reduced datasets. The accuracy of the privacy-preserved reduced datasets and the original datasets is compared on two data mining tasks, classification and clustering, using the naïve Bayesian and k-means algorithms respectively. Experimental results show that classification and clustering accuracy are comparatively the same for the reduced k-anonymized datasets and the original datasets.
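
    For reference, the gain ratio of an attribute A against a class Y is the information gain H(Y) - H(Y|A) normalized by the intrinsic (split) information H(A), which damps the bias of raw information gain toward many-valued attributes. A small self-contained sketch with toy data:

```python
# GainRatio(A) = (H(Y) - H(Y|A)) / H(A). Toy data are assumptions.
import numpy as np
from collections import Counter

def entropy(labels):
    counts = np.array(list(Counter(labels).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

def gain_ratio(attr, target):
    n = len(target)
    h_y_given_a = sum(
        (sum(1 for a in attr if a == v) / n) *
        entropy([t for a, t in zip(attr, target) if a == v])
        for v in set(attr)
    )
    split_info = entropy(attr)                 # intrinsic value H(A)
    gain = entropy(target) - h_y_given_a
    return gain / split_info if split_info > 0 else 0.0

# A many-valued identifier-like attribute gets a lower gain ratio than a
# genuinely informative one, even though both have maximal information gain.
income = ["low", "low", "high", "high", "high", "low"]
zipc   = ["a", "b", "c", "d", "e", "f"]
label  = [0, 0, 1, 1, 1, 0]
print("gain ratio income:", round(gain_ratio(income, label), 3))
print("gain ratio zip:   ", round(gain_ratio(zipc, label), 3))
```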

  2. Utilization of Selected Data Mining Methods for Communication Network Analysis

    Directory of Open Access Journals (Sweden)

    V. Ondryhal

    2011-06-01

    Full Text Available The aim of the project was to analyze the behavior of military communication networks based on work with real data collected continuously since 2005. With regard to the nature and amount of the data, data mining methods were selected for the purpose of analyses and experiments. The quality of real data is often insufficient for an immediate analysis. The article presents the data cleaning operations which were carried out with the aim of improving the input data sample to obtain reliable models. Gradually, by means of properly chosen software, network models were developed to verify generally valid patterns of network behavior as a bulk service. Furthermore, unlike commercially available communication network simulators, the models designed allowed us to capture nonstandard models of network behavior under an increased load, verify the correct sizing of the network for the increased load, and thus test its reliability. Finally, based on previous experience, the models enabled us to predict emergency situations with reasonable accuracy.

  3. Conflicts of Interest, Selective Inertia, and Research Malpractice in Randomized Clinical Trials: An Unholy Trinity.

    Science.gov (United States)

    Berger, Vance W

    2015-08-01

    Recently a great deal of attention has been paid to conflicts of interest in medical research, and the Institute of Medicine has called for more research into this important area. One research question that has not received sufficient attention concerns the mechanisms of action by which conflicts of interest can result in biased and/or flawed research. What discretion do conflicted researchers have to sway the results one way or the other? We address this issue from the perspective of selective inertia, or an unnatural selection of research methods based on which are most likely to establish the preferred conclusions, rather than on which are most valid. In many cases it is abundantly clear that a method that is not being used in practice is superior to the one that is being used in practice, at least from the perspective of validity, and that it is only inertia, as opposed to any serious suggestion that the incumbent method is superior (or even comparable), that keeps the inferior procedure in use, to the exclusion of the superior one. By focusing on these flawed research methods we can go beyond statements of potential harm from real conflicts of interest, and can more directly assess actual (not potential) harm.

  4. A Comparison of Three Methods of Analyzing Dichotomous Data in a Randomized Block Design.

    Science.gov (United States)

    Mandeville, Garrett K.

    Results of a comparative study of F and Q tests, in a randomized block design with one replication per cell, are presented. In addition to these two procedures, a multivariate test was also considered. The model and test statistics, data generation and parameter selection, results, summary and conclusions are presented. Ten tables contain the…

  5. Fast egg collection method greatly improves randomness of egg sampling in Drosophila melanogaster

    DEFF Research Database (Denmark)

    Schou, Mads Fristrup

    2013-01-01

    When obtaining samples for population genetic studies, it is essential that the sampling is random. For Drosophila, one of the crucial steps in sampling experimental flies is the collection of eggs. Here an egg collection method is presented, which randomizes the eggs in a water column... To obtain a representative collection of genotypes, the method presented here is strongly recommended when collecting eggs from Drosophila.

  6. Randomized trial of switching from prescribed non-selective non-steroidal anti-inflammatory drugs to prescribed celecoxib

    DEFF Research Database (Denmark)

    Macdonald, Thomas M; Hawkey, Chris J; Ford, Ian

    2017-01-01

    BACKGROUND: Selective cyclooxygenase-2 inhibitors and conventional non-selective non-steroidal anti-inflammatory drugs (nsNSAIDs) have been associated with adverse cardiovascular (CV) effects. We compared the CV safety of switching to celecoxib vs. continuing nsNSAID therapy in a European setting. METHOD: Patients aged 60 years and over with osteoarthritis or rheumatoid arthritis, free from established CV disease and taking chronic prescribed nsNSAIDs, were randomized to switch to celecoxib or to continue their previous nsNSAID. The primary endpoint was hospitalization for non-fatal myocardial infarction... Fewer subjects than expected developed an on-treatment (OT) primary CV event, and the rate was similar for celecoxib, 0.95 per 100 patient-years, and nsNSAIDs, 0.86 per 100 patient-years (HR = 1.12, 95% confidence interval, 0.81-1.55; P = 0.50). Comparable intention-to-treat (ITT) rates were 1.14 per 100 patient-years...

  7. Selection of the signal synchronization method in software GPS receivers

    Directory of Open Access Journals (Sweden)

    Vlada S. Sokolović

    2011-04-01

    Full Text Available Introduction. This paper presents a critical analysis of the signal processing flow carried out in a software GPS receiver and a critical comparison of different architectures for signal processing within the GPS receiver. A model of software receivers is shown. Based on this model, a receiver was realized in the MATLAB software package, in which simulations of signal processing were carried out. The aim of this paper is to demonstrate the advantages and disadvantages of different methods of signal synchronization in the receiver, and to propose a solution acceptable for possible implementation. The signal processing flow was observed from the input circuit to the extraction of the bits of the navigation message. The entire signal processing was performed on the L1 signal and the data collected by the input circuit SE4110. The radio signal from the satellite was received by the input circuit, filtered and translated into digital form. The input circuit ends the hardware part of the receiver. The digital signal from the input circuit is brought into a PC Pentium 4 (AMD 3000+), where the receiver is realized in MATLAB. Model of the software GPS receiver. The first level of processing is signal acquisition, realized using cyclic convolution. The acquisition process measures the signal parameters from the satellites, and these parameters are passed to the next level of processing, which tracks the synchronization of signals and extracts the navigation message bits. On the basis of the detected navigation message, the receiver calculates the position of a satellite and then, based on the position of the satellite, its own position. Tracking of GPS signal synchronization. In order to select the most acceptable method of signal synchronization in the receiver, different methods of signal synchronization are compared: the early-late DLL (Delay Lock Loop), the TDL (Tau Dither Loop)...
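
    The early-late DLL idea being compared can be caricatured in a few lines: correlate the received PRN-like code with early and late replicas of the local code and use the power difference to steer the code-phase estimate. The code sequence, noise level and loop gain below are illustrative assumptions; a real tracking loop works with fractional chips and carrier aiding.

```python
# Toy early-late DLL discriminator on a PRN-like +/-1 code sequence.
import numpy as np

rng = np.random.default_rng(0)
code = rng.choice([-1.0, 1.0], size=1023)          # PRN-like chip sequence

def correlate(signal, shift):
    return np.dot(signal, np.roll(code, shift)) / len(code)

true_delay, est_delay, gain = 7, 0, 0.8
signal = np.roll(code, true_delay) + 0.5 * rng.standard_normal(code.size)

for _ in range(30):
    early = correlate(signal, est_delay + 1)       # early replica
    late = correlate(signal, est_delay - 1)        # late replica
    disc = early**2 - late**2                      # early-late power discriminator
    est_delay += int(round(gain * np.sign(disc)))  # steer the code phase

# The estimate converges to the true delay and then dithers around it.
print("true delay:", true_delay, "estimated:", est_delay)
```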

  8. Method for selection of optimal road safety composite index with examples from DEA and TOPSIS method.

    Science.gov (United States)

    Rosić, Miroslav; Pešić, Dalibor; Kukić, Dragoslav; Antić, Boris; Božović, Milan

    2017-01-01

    The concept of a composite road safety index is a popular and relatively new concept among road safety experts around the world. As there is a constant need for comparison among different units (countries, municipalities, roads, etc.), there is a need to choose an adequate method which will make the comparison fair to all compared units. Comparisons using one specific indicator (a parameter which describes safety or unsafety) can end up with totally different rankings of the compared units, which makes it quite complicated for a decision maker to determine the "real best performers". The need for a composite road safety index is becoming dominant, since road safety presents a complex system where more and more indicators are constantly being developed to describe it. Among the wide variety of models and developed composite indexes, a decision maker can face an even bigger dilemma than choosing one adequate risk measure. As DEA and TOPSIS are well-known mathematical models that have recently been increasingly used for risk evaluation in road safety, we used efficiencies (composite indexes) obtained by different models based on DEA and TOPSIS to present the PROMETHEE-RS model for selection of the optimal method for a composite index. The method for selection of the optimal composite index is based on three parameters (average correlation, average rank variation and average cluster variation) inserted into the PROMETHEE MCDM method in order to choose the optimal one. The model is tested by comparing 27 police departments in Serbia. Copyright © 2016 Elsevier Ltd. All rights reserved.
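
    The final aggregation step can be illustrated with a compact PROMETHEE II computation: pairwise preferences under the "usual" criterion, weighted across the criteria, and a net-flow ranking. The scores and weights below are made-up placeholders, not the paper's data.

```python
# Minimal PROMETHEE II ranking sketch with the "usual" preference function.
import numpy as np

scores = np.array([                 # rows: candidate composite-index methods
    [0.82, 0.10, 0.20],             # cols: avg correlation (max), avg rank
    [0.75, 0.05, 0.15],             #       variation (min), avg cluster
    [0.90, 0.20, 0.30],             #       variation (min) -- assumed data
])
weights = np.array([0.5, 0.3, 0.2])
maximize = np.array([True, False, False])

n = len(scores)
pi = np.zeros((n, n))               # aggregated preference of a over b
for a in range(n):
    for b in range(n):
        d = np.where(maximize, scores[a] - scores[b], scores[b] - scores[a])
        pi[a, b] = np.sum(weights * (d > 0))

phi = pi.sum(axis=1) / (n - 1) - pi.sum(axis=0) / (n - 1)   # net flows
print("net flows:", phi.round(3), "-> best method:", int(phi.argmax()))
```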

  9. [Methods for selecting foster families for psychiatric family care].

    Science.gov (United States)

    Schmidt-Michel, P O; Konrad, M; Krüger, M

    1989-11-01

    Psychiatric foster family care, with no more than two patients living in the foster family, can be seen as a therapeutic setting where long-term chronic patients can improve their social functioning. Recent studies found family characteristics to be decisive for potential therapeutic effects. So the question arises of how to select adequate foster family applicants. In an empirical study with 105 applicant families, we tried to uncover the selection procedures and mechanisms of the foster care team that finally lead to the adequate/non-adequate distinction. The results of the study show that the differences between the two applicant groups (selected vs. non-selected) are not identical with the intended selection criteria of the team members. Some major differences were found in areas that were totally independent of the team criteria: the families selected as adequate had a more intensive exchange with the outside world, had raised more children and were therefore assumed to be socially more competent than the non-selected applicant group. Selecting foster families thus emerges as a complicated decision-making process that goes beyond checking off a set of criteria.

  10. Random genetic drift, natural selection, and noise in human cranial evolution.

    Science.gov (United States)

    Roseman, Charles C

    2016-08-01

    This study assesses the extent to which relationships among groups complicate comparative studies of adaptation in recent human cranial variation and the extent to which departures from neutral additive models of evolution hinder the reconstruction of population relationships among groups using cranial morphology. Using a maximum likelihood evolutionary model fitting approach and a mixed population genomic and cranial data set, I evaluate the relative fits of several widely used models of human cranial evolution. Moreover, I compare the goodness of fit of models of cranial evolution constrained by genomic variation to test hypotheses about population specific departures from neutrality. Models from population genomics are much better fits to cranial variation than are traditional models from comparative human biology. There is not enough evolutionary information in the cranium to reconstruct much of recent human evolution but the influence of population history on cranial variation is strong enough to cause comparative studies of adaptation serious difficulties. Deviations from a model of random genetic drift along a tree-like population history show the importance of environmental effects, gene flow, and/or natural selection on human cranial variation. Moreover, there is a strong signal of the effect of natural selection or an environmental factor on a group of humans from Siberia. The evolution of the human cranium is complex and no one evolutionary process has prevailed at the expense of all others. A holistic unification of phenome, genome, and environmental context gives us a strong point of purchase on these problems, which is unavailable to any one traditional approach alone. Am J Phys Anthropol 160:582-592, 2016. © 2016 Wiley Periodicals, Inc.

  11. The effect of the number of selected points in modeling of polymerization reactions by the Monte Carlo method: studying the initiation reaction

    CERN Document Server

    Sadi, M; Dabir, B

    2003-01-01

    The Monte Carlo method is one of the most powerful techniques to model different processes, such as polymerization reactions. By this method, without any need to solve moment equations, very detailed information on the structure and properties of polymers is obtained. The number of algorithm repetitions (the number of selected volumes of the reactor for modeling, which represents the number of initial molecules) is very important in this method. In the Monte Carlo method, calculations are based on random number generation and reaction probability determination, so the number of algorithm repetitions is very important. In this paper, the initiation reaction was considered alone and the influence of the number of initiator molecules on the results was studied. It can be concluded that the Monte Carlo method will not give accurate results if the number of molecules is not big enough, because in that case the selected volume would not be representative of the whole system.
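
    The point about the number of initial molecules is easy to reproduce with a toy simulation of the initiation step I → 2R*: simulate first-order initiator decomposition for increasing initial counts and compare against the analytic exponential decay. The rate constant, time step and counts are assumptions for illustration.

```python
# Toy Monte Carlo of initiator decomposition: the deviation from the exact
# exp(-kd t) decay shrinks as the number of simulated molecules grows.
import numpy as np

def simulate_initiation(n0, kd=0.05, dt=0.1, n_steps=200, seed=0):
    rng = np.random.default_rng(seed)
    n, history = n0, [n0]
    p = 1 - np.exp(-kd * dt)        # decomposition probability per time step
    for _ in range(n_steps):
        n -= rng.binomial(n, p)     # each remaining initiator may decompose
        history.append(n)
    return np.array(history) / n0

t = np.arange(201) * 0.1
analytic = np.exp(-0.05 * t)
for n0 in (100, 10_000, 1_000_000):
    err = np.abs(simulate_initiation(n0) - analytic).max()
    print(f"N0 = {n0:>9}: max deviation from exp(-kd t) = {err:.4f}")
```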

  12. Cluster randomized trials in comparative effectiveness research: randomizing hospitals to test methods for prevention of healthcare-associated infections.

    Science.gov (United States)

    Platt, Richard; Takvorian, Samuel U; Septimus, Edward; Hickok, Jason; Moody, Julia; Perlin, Jonathan; Jernigan, John A; Kleinman, Ken; Huang, Susan S

    2010-06-01

    The need for evidence about the effectiveness of therapeutics and other medical practices has triggered new interest in methods for comparative effectiveness research. Describe an approach to comparative effectiveness research involving cluster randomized trials in networks of hospitals, health plans, or medical practices with centralized administrative and informatics capabilities. We discuss the example of an ongoing cluster randomized trial to prevent methicillin-resistant Staphylococcus aureus (MRSA) infection in intensive care units (ICUs). The trial randomizes 45 hospitals to: (a) screening cultures of ICU admissions, followed by Contact Precautions if MRSA-positive, (b) screening cultures of ICU admissions followed by decolonization if MRSA-positive, or (c) universal decolonization of ICU admissions without screening. All admissions to adult ICUs. The primary outcome is MRSA-positive clinical cultures occurring ≥2 days following ICU admission. Secondary outcomes include blood and urine infection caused by MRSA (and, separately, all pathogens), as well as the development of resistance to decolonizing agents. Recruitment of hospitals is complete. Data collection will end in Summer 2011. This trial takes advantage of existing personnel, procedures, infrastructure, and information systems in a large integrated hospital network to conduct a low-cost evaluation of prevention strategies under usual practice conditions. This approach is applicable to many comparative effectiveness topics in both inpatient and ambulatory settings.

  13. A Stochastic Collocation Method for Elliptic Partial Differential Equations with Random Input Data

    KAUST Repository

    Babuška, Ivo

    2010-01-01

    This work proposes and analyzes a stochastic collocation method for solving elliptic partial differential equations with random coefficients and forcing terms. These input data are assumed to depend on a finite number of random variables. The method consists of a Galerkin approximation in space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space, and naturally leads to the solution of uncoupled deterministic problems as in the Monte Carlo approach. It treats easily a wide range of situations, such as input data that depend nonlinearly on the random variables, diffusivity coefficients with unbounded second moments, and random variables that are correlated or even unbounded. We provide a rigorous convergence analysis and demonstrate exponential convergence of the “probability error” with respect to the number of Gauss points in each direction of the probability space, under some regularity assumptions on the random input data. Numerical examples show the effectiveness of the method. Finally, we include a section with developments posterior to the original publication of this work. There we review sparse grid stochastic collocation methods, which are effective collocation strategies for problems that depend on a moderately large number of random variables.
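
    A one-random-variable caricature makes the mechanics concrete: evaluate the (here analytically known) solution of a toy elliptic problem at Gauss points of the input density and form the weighted sum to get the mean. The toy problem and node count are assumptions; the paper's setting involves several random variables and tensor-product grids.

```python
# Stochastic collocation in one random variable: solve
# -(a(y) u'(x))' = 1 on (0,1), u(0)=u(1)=0, with a(y) = exp(y), y ~ N(0,1).
# The exact solution u(x,y) = x(1-x)/(2 a(y)) stands in for a PDE solve.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def u(x, y):
    return x * (1 - x) / (2 * np.exp(y))     # "deterministic solve" at node y

nodes, weights = hermegauss(8)               # Gauss nodes for weight exp(-y^2/2)
weights = weights / np.sqrt(2 * np.pi)       # normalize to the N(0,1) density

x = 0.5
mean_colloc = sum(w * u(x, yn) for yn, w in zip(nodes, weights))

# Monte Carlo check of E[u(0.5, y)]
rng = np.random.default_rng(0)
mean_mc = u(x, rng.standard_normal(200_000)).mean()
print(f"collocation: {mean_colloc:.6f}   Monte Carlo: {mean_mc:.6f}")
```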

  14. Understanding Sample Surveys: Selective Learning about Social Science Research Methods

    Science.gov (United States)

    Currin-Percival, Mary; Johnson, Martin

    2010-01-01

    We investigate differences in what students learn about survey methodology in a class on public opinion presented in two critically different ways: with the inclusion or exclusion of an original research project using a random-digit-dial telephone survey. Using a quasi-experimental design and data obtained from pretests and posttests in two public…

  15. CURE-SMOTE algorithm and hybrid algorithm for feature selection and parameter optimization based on random forests.

    Science.gov (United States)

    Ma, Li; Fan, Suohai

    2017-03-14

    The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that the combination with Clustering Using Representatives (CURE) enhances the original synthetic minority oversampling technique (SMOTE) algorithm effectively compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, hybrid RF (random forests) algorithms have been proposed for feature selection and parameter optimization, which use the minimum out-of-bag (OOB) data error as their objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, the hybrid genetic-random forests algorithm, the hybrid particle swarm-random forests algorithm and the hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores demonstrate that they surpass the performance of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
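
    For orientation, the interpolation step that CURE-SMOTE builds on can be sketched in a few lines: each synthetic minority sample lies on the segment between a minority point and one of its k nearest minority neighbors. The CURE clustering that removes noise and outliers beforehand is omitted here; data and parameters are assumptions.

```python
# Bare-bones SMOTE interpolation for the minority class.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, seed=0):
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, neigh = nn.kneighbors(X_min)            # column 0 is the point itself
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = neigh[i][rng.integers(1, k + 1)]   # a random true neighbor
        lam = rng.random()
        synth.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synth)

rng = np.random.default_rng(1)
X_min = rng.normal(size=(20, 4))               # minority class, 20 samples
print("synthetic minority samples:", smote(X_min, n_new=60).shape)
```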

  16. Noise-induced hearing loss in randomly selected New York dairy farmers.

    Science.gov (United States)

    May, J J; Marvel, M; Regan, M; Marvel, L H; Pratt, D S

    1990-01-01

    To understand better the effects of noise levels associated with dairy farming, we randomly selected 49 full-time dairy farmers from an established cohort. Medical and occupational histories were taken and standard audiometric testing was done. Forty-six males (94%) and three females (6%) with a mean age of 43.5 (±13) years and an average of 29.4 (±14) years in farming were tested. Pure Tone Average thresholds (PTA4) at 0.5, 1.0, 2.0, and 3.0 kHz plus High Frequency Average thresholds (HFA3) at 3.0, 4.0, and 6.0 kHz were calculated. Subjects with a loss of ≥20 dB in either ear were considered abnormal. Eighteen subjects (37%) had abnormal PTA4s and 32 (65%) abnormal HFA3s. The left ear was more severely affected in both groups (P ≤ .05, t-test). Significant associations were found between hearing loss and years worked (odds ratio 4.1, r = .53) and age (odds ratio 4.1, r = .59). No association could be found between hearing loss and measles; mumps; previous ear infections; or use of power tools, guns, motorcycles, snowmobiles, or stereo headphones. Our data suggest that among farmers, substantial hearing loss occurs especially in the high-frequency ranges. Presbycusis is an important confounding variable.

  17. Modeling Slotted Aloha as a Stochastic Game with Random Discrete Power Selection Algorithms

    Directory of Open Access Journals (Sweden)

    Rachid El-Azouzi

    2009-01-01

    Full Text Available We consider the uplink case of a cellular system where bufferless mobiles transmit over a common channel to a base station, using the slotted aloha medium access protocol. We study the performance of this system under several power differentiation schemes. Indeed, we consider a random set of selectable transmission powers and further study the impact of priorities given either to newly arrived packets or to backlogged ones. Later, we address a general capture model where a mobile transmits a packet successfully if its instantaneous SINR (signal-to-interference-plus-noise ratio) is larger than some fixed threshold. Under this capture model, we analyze both the cooperative team problem, in which a common goal is jointly optimized, and the noncooperative game problem, where mobiles seek to optimize their own objectives. Furthermore, we derive the throughput and the expected delay and use them as the objectives to optimize, and provide a stability analysis as an alternative study. Exhaustive performance evaluations were carried out; we show that schemes with power differentiation improve the individual as well as global performances significantly, and could in some cases eliminate the bi-stable nature of slotted aloha.
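
    The capture model is straightforward to simulate: in each slot, transmitting mobiles draw a power from the discrete set, and a packet is captured if its SINR exceeds the threshold. The sketch below estimates throughput under assumed powers, transmit probability, threshold and noise; it ignores the paper's backlog dynamics and game-theoretic aspects.

```python
# Toy slotted aloha with random discrete power selection and SINR capture.
import numpy as np

rng = np.random.default_rng(0)
n_mobiles, n_slots = 10, 100_000
p_tx, powers, theta, noise = 0.15, np.array([1.0, 4.0, 16.0]), 2.0, 0.1

success = 0
for _ in range(n_slots):
    tx = rng.random(n_mobiles) < p_tx
    if not tx.any():
        continue
    p = rng.choice(powers, size=tx.sum())   # random power level per sender
    sinr = p / (p.sum() - p + noise)        # interference from the other senders
    success += int((sinr > theta).any())    # at most one packet can be captured

print("throughput (packets/slot):", success / n_slots)
```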

  18. An equilibrium for frustrated quantum spin systems in the stochastic state selection method

    Energy Technology Data Exchange (ETDEWEB)

    Munehisa, Tomo; Munehisa, Yasuko [Faculty of Engineering, University of Yamanashi, Kofu 400-8511 (Japan)

    2007-05-16

    We develop a new method to calculate eigenvalues in frustrated quantum spin models. It is based on the stochastic state selection (SSS) method, which is an unconventional Monte Carlo technique that we have investigated in recent years. We observe that a kind of equilibrium is realized under some conditions when we repeatedly operate a Hamiltonian and a random choice operator, which is defined by stochastic variables in the SSS method, to a trial state. In this equilibrium, which we call the SSS equilibrium, we can evaluate the lowest eigenvalue of the Hamiltonian using the statistical average of the normalization factor of the generated state. The SSS equilibrium itself has already been observed in unfrustrated models. Our study in this paper shows that we can also see the equilibrium in frustrated models, with some restriction on values of a parameter introduced in the SSS method. As a concrete example, we employ the spin-1/2 frustrated J{sub 1}-J{sub 2} Heisenberg model on the square lattice. We present numerical results on the 20-, 32-, and 36-site systems, which demonstrate that statistical averages of the normalization factors reproduce the known exact eigenvalue to good precision. Finally, we apply the method to the 40-site system. Then we obtain the value of the lowest energy eigenvalue with an error of less than 0.2%.
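
    For reference, the model is conventionally written (notation assumed here, not quoted from the paper) as

\[
H \;=\; J_1 \sum_{\langle i,j \rangle} \mathbf{S}_i \cdot \mathbf{S}_j
  \;+\; J_2 \sum_{\langle\langle i,j \rangle\rangle} \mathbf{S}_i \cdot \mathbf{S}_j ,
\]

    where ⟨i,j⟩ runs over nearest-neighbor and ⟨⟨i,j⟩⟩ over next-nearest-neighbor pairs of the square lattice; the competition between the two antiferromagnetic couplings is what frustrates the system.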

  19. Biased random key genetic algorithm with insertion and gender selection for capacitated vehicle routing problem with time windows

    Science.gov (United States)

    Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri

    2017-06-01

    The Vehicle Routing Problem (VRP) often occurs when manufacturers need to distribute their products to customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is also known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve a CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the established algorithms was then compared to a heuristic procedure for solving soft drink distribution. Some findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) results in a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) could further improve the performance of the heuristic method, although BRKGA-GS tends to yield worse results than the standard BRKGA.
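
    The decoding step that turns random keys into routes can be sketched simply: sort customers by their keys to get a giant tour, then cut it whenever vehicle capacity would be exceeded. Demands, capacity and chromosome handling below are illustrative assumptions; the biased elite-parent crossover and the insertion/gender variants are not shown.

```python
# Minimal BRKGA-style decoder for a capacitated VRP chromosome.
import numpy as np

def decode(keys, demands, capacity):
    order = np.argsort(keys)          # customer visiting order from the keys
    routes, route, load = [], [], 0.0
    for c in order:
        if load + demands[c] > capacity:
            routes.append(route)      # capacity exceeded: start a new vehicle
            route, load = [], 0.0
        route.append(int(c))
        load += demands[c]
    if route:
        routes.append(route)
    return routes

rng = np.random.default_rng(0)
demands = rng.uniform(1, 10, size=12)
chromosome = rng.random(12)           # one random key in [0,1) per customer
print(decode(chromosome, demands, capacity=25.0))
```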

  20. Experimental research of methodical errors of analog-to-digital transformation of random processes

    OpenAIRE

    Єременко, В. С.; Вітрук, Ю. В.

    2005-01-01

    Results of an experimental study of a statistical modelling method for the methodical error of analog-to-digital conversion of random Gaussian processes are given. The obtained dependences make it possible to match the characteristics of the analog-to-digital converter to the parameters of the measured process.

  1. Experimental research of methodical errors of analog-to-digital transformation of random processes

    Directory of Open Access Journals (Sweden)

    В.С. Єременко

    2005-01-01

    Full Text Available Results of an experimental study of a statistical modelling method for the methodical error of analog-to-digital conversion of random Gaussian processes are given. The obtained dependences make it possible to match the characteristics of the analog-to-digital converter to the parameters of the measured process.

  2. Evaluation of a Class of Simple and Effective Uncertainty Methods for Sparse Samples of Random Variables and Functions

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bonney, Matthew [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schroeder, Benjamin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-11-01

    When very few samples of a random quantity are available from a source distribution of unknown shape, it is usually not possible to accurately infer the exact distribution from which the data samples come. Under-estimation of important quantities such as response variance and failure probabilities can result. For many engineering purposes, including design and risk analysis, we attempt to avoid under-estimation with a strategy to conservatively estimate (bound) these types of quantities -- without being overly conservative -- when only a few samples of a random quantity are available from model predictions or replicate experiments. This report examines a class of related sparse-data uncertainty representation and inference approaches that are relatively simple, inexpensive, and effective. Tradeoffs between the methods' conservatism, reliability, and risk versus number of data samples (cost) are quantified with multi-attribute metrics used to assess method performance for conservative estimation of two representative quantities: the central 95% of response; and the 10^-4 probability of exceeding a response threshold in a tail of the distribution. Each method's performance is characterized with 10,000 random trials on a large number of diverse and challenging distributions. The best method and number of samples to use in a given circumstance depend on the uncertainty quantity to be estimated, the PDF character, and the desired reliability of bounding the true value. On the basis of this large database and study, a strategy is proposed for selecting the method and number of samples for attaining reasonable credibility levels in bounding these types of quantities when sparse samples of random variables or functions are available from experiments or simulations.

  3. EEG-based mild depressive detection using feature selection methods and classifiers.

    Science.gov (United States)

    Li, Xiaowei; Hu, Bin; Sun, Shuting; Cai, Hanshu

    2016-11-01

    Depression has become a major health burden worldwide, and effective detection of such a disorder is a great challenge which requires the latest technological tools, such as electroencephalography (EEG). This EEG-based research seeks to find the prominent frequency band and brain regions that are most related to mild depression, as well as an optimal combination of classification algorithms and feature selection methods which can be used in future mild depression detection. An experiment based on a facial expression viewing task (Emo_block and Neu_block) was conducted, and EEG data of 37 university students were collected using a 128-channel HydroCel Geodesic Sensor Net (HCGSN). For discriminating mild depressive patients and normal controls, BayesNet (BN), Support Vector Machine (SVM), Logistic Regression (LR), k-nearest neighbor (KNN) and RandomForest (RF) classifiers were used, and BestFirst (BF), GreedyStepwise (GSW), GeneticSearch (GS), LinearForwardSelection (LFS) and RankSearch (RS) based on Correlation-based Feature Selection (CFS) were applied for linear and non-linear EEG feature selection. An Independent Samples t-test with Bonferroni correction was used to find the significantly discriminant electrodes and features. Data mining results indicate that optimal performance is achieved using a combination of the feature selection method GSW based on CFS and the classifier KNN for the beta frequency band. Accuracies achieved 92.00% and 98.00%, and AUC achieved 0.957 and 0.997, for Emo_block and Neu_block beta band data respectively. The t-test results validate the effectiveness of the features selected by the search method GSW. A simplified EEG system with only FP1, FP2, F3, O2 and T3 electrodes was also explored with linear features, which yielded accuracies of 91.70% and 96.00%, and AUC of 0.952 and 0.972, for Emo_block and Neu_block respectively. The classification results obtained by GSW + KNN are encouraging and better than previously published results. In the spatial distribution of features, we find
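
    The winning combination can be approximated with a short pipeline: a CFS merit function, a greedy stepwise search over features, and a KNN classifier evaluated by cross-validation. The synthetic features and all settings below are assumptions standing in for the EEG features.

```python
# Hedged sketch of CFS merit + greedy stepwise search + KNN evaluation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def merit(X, y, subset):
    """CFS merit: k*r_cf / sqrt(k + k(k-1)*r_ff)."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, f], y)[0, 1]) for f in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_stepwise(X, y):
    subset, best = [], -np.inf
    while len(subset) < X.shape[1]:
        cands = [(merit(X, y, subset + [f]), f)
                 for f in range(X.shape[1]) if f not in subset]
        score, f = max(cands)
        if score <= best:            # no candidate improves the merit: stop
            break
        subset, best = subset + [f], score
    return subset

X, y = make_classification(n_samples=200, n_features=30, n_informative=6, random_state=0)
sel = greedy_stepwise(X, y)
acc = cross_val_score(KNeighborsClassifier(5), X[:, sel], y, cv=5).mean()
print(f"{len(sel)} features selected, CV accuracy {acc:.3f}")
```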

  4. An HPLC method for the determination of selected amino acids in human embryo culture medium.

    Science.gov (United States)

    Drábková, Petra; Andrlová, Lenka; Kanďár, Roman

    2017-02-01

    A method for the determination of selected amino acids in culture medium using HPLC with fluorescence detection is described. Twenty hours after intracytoplasmic sperm injection, one randomly selected zygote was transferred to the culture medium. After incubation (72 h after fertilization), the culture medium in which the embryo was incubated and blank medium were immediately stored at -80°C. Filtered medium samples were derivatized with ortho-phthalaldehyde (naphthalene-2,3-dicarboxaldehyde), forming highly fluorescent amino acid derivatives. Reverse-phase columns (LichroCART, Purospher STAR RP18e or Ascentis Express C18) were used for the separation. The derivatives were analyzed by gradient elution with a mobile phase containing ethanol and sodium dihydrogen phosphate. The analytical performance of this method is satisfactory for all amino acids, with low intra-assay coefficients of variation. Differences in the levels of amino acids before and after human embryo cultivation were observed. After embryo incubation, the levels of all amino acids in the medium were increased, apart from aspartate and asparagine. After the cultivation of some embryos, amino acids which were not part of the medium were detected. Low amino acid turnover was observed in some embryos. Copyright © 2016 John Wiley & Sons, Ltd.

  5. Aptamer Selection Express: A Novel Method for Rapid Single-Step Selection and Sensing of Aptamers

    National Research Council Canada - National Science Library

    Fan, Maomian; Roper, Shelly; Andrews, Carrie; Allman, Amity; Bruno, John; Kiel, Jonathan

    2008-01-01

    ...). This process has been used to select aptamers against different types of targets (Bacillus anthracis spores, Bacillus thuringiensis spores, MS-2 bacteriophage, ovalbumin, and botulinum neurotoxin...

  6. The effects of predictor method factors on selection outcomes: A modular approach to personnel selection procedures.

    Science.gov (United States)

    Lievens, Filip; Sackett, Paul R

    2017-01-01

    Past reviews and meta-analyses typically conceptualized and examined selection procedures as holistic entities. We draw on the product design literature to propose a modular approach as a complementary perspective to conceptualizing selection procedures. A modular approach means that a product is broken down into its key underlying components. Therefore, we start by presenting a modular framework that identifies the important measurement components of selection procedures. Next, we adopt this modular lens for reviewing the available evidence regarding each of these components in terms of affecting validity, subgroup differences, and applicant perceptions, as well as for identifying new research directions. As a complement to the historical focus on holistic selection procedures, we posit that the theoretical contributions of a modular approach include improved insight into the isolated workings of the different components underlying selection procedures and greater theoretical connectivity among different selection procedures and their literatures. We also outline how organizations can put a modular approach into operation to increase the variety in selection procedures and to enhance the flexibility in designing them. Overall, we believe that a modular perspective on selection procedures will provide the impetus for programmatic and theory-driven research on the different measurement components of selection procedures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  7. Genetic implication of Shorea johorensis Foxw propagation methods in selective cutting and line planting silvicultural system

    Directory of Open Access Journals (Sweden)

    PRIJANTO PAMOENGKAS

    2008-10-01

    Full Text Available Attempts to rehabilitate degraded natural forests in Indonesia have recently been carried out by applying the selective cutting and line planting (TPTJ) silvicultural system. One of the most important aspects of the TPTJ silvicultural system is the procurement of large numbers of planting stocks. Shorea johorensis Foxw was investigated in this regard as one of the promising Shorea species for rehabilitating degraded forests due to its fast-growing character. The species is usually propagated by three different propagation techniques, namely up-rooted seedlings, seeds and cuttings. The genetic consequences of the different propagation methods in this species are poorly known and need to be studied in order to determine genetic variation and differentiation. Material from five origins (populations), namely (i) up-rooted seedlings, (ii) seeds, (iii) cuttings, (iv) young plantation and (v) natural forest, was randomly taken in the field and subsequently assessed by RAPD using three previously tested random primers, OPO-11, OPO-13 and OPO-16. Results showed that natural tree populations had the highest levels of genetic variation, with mean values na = 1.2593, ne = 1.2070, PLP = 25.93% and He = 0.1109. Cutting populations showed the lowest levels of genetic variation, with mean values na = 1.1111, ne = 1.0773, PLP = 11.11% and He = 0.0445. Meanwhile, among the propagation techniques, the up-rooted seedling population revealed the highest levels of genetic variation, with mean values na = 1.2222, ne = 1.1613, PLP = 22.22% and He = 0.0886. The particular methods of plant propagation used here, especially the cutting method, significantly reduced the genetic variation of S. johorensis.

  8. The prevalence of symptoms associated with pulmonary tuberculosis in randomly selected children from a high burden community

    OpenAIRE

    Marais, B.; Obihara, C; Gie, R.; Schaaf, H; Hesseling, A.; Lombard, C.; Enarson, D; Bateman, E; Beyers, N

    2005-01-01

    Background: Diagnosis of childhood tuberculosis is problematic, and symptom-based diagnostic approaches are often promoted in high burden settings. This study aimed (i) to document the prevalence of symptoms associated with tuberculosis among randomly selected children living in a high burden community, and (ii) to compare the prevalence of these symptoms in children without tuberculosis with that in children with newly diagnosed tuberculosis.

  9. A Randomized Controlled Trial of Cognitive Debiasing Improves Assessment and Treatment Selection for Pediatric Bipolar Disorder

    Science.gov (United States)

    Jenkins, Melissa M.; Youngstrom, Eric A.

    2015-01-01

    Objective This study examined the efficacy of a new cognitive debiasing intervention in reducing decision-making errors in the assessment of pediatric bipolar disorder (PBD). Method The study was a randomized controlled trial using case vignette methodology. Participants were 137 mental health professionals working in different regions of the US (M=8.6±7.5 years of experience). Participants were randomly assigned to a (1) brief overview of PBD (control condition), or (2) the same brief overview plus a cognitive debiasing intervention (treatment condition) that educated participants about common cognitive pitfalls (e.g., base-rate neglect; search satisficing) and taught corrective strategies (e.g., mnemonics, Bayesian tools). Both groups evaluated four identical case vignettes. Primary outcome measures were clinicians’ diagnoses and treatment decisions. The vignette characters’ race/ethnicity was experimentally manipulated. Results Participants in the treatment group showed better overall judgment accuracy, p < .001, and committed significantly fewer decision-making errors, p < .001. Inaccurate and somewhat accurate diagnostic decisions were significantly associated with different treatment and clinical recommendations, particularly in cases where participants missed comorbid conditions, failed to detect the possibility of hypomania or mania in depressed youths, and misdiagnosed classic manic symptoms. In contrast, effects of patient race were negligible. Conclusions The cognitive debiasing intervention outperformed the control condition. Examining specific heuristics in cases of PBD may identify especially problematic mismatches between typical habits of thought and characteristics of the disorder. The debiasing intervention was brief and delivered via the Web; it has the potential to generalize and extend to other diagnoses as well as to various practice and training settings. PMID:26727411

  10. Clustering based gene expression feature selection method: A computational approach to enrich the classifier efficiency of differentially expressed genes

    KAUST Repository

    Abusamra, Heba

    2016-07-20

    The high-dimension, low-sample-size nature of gene expression data makes the classification task challenging, so feature (gene) selection becomes an apparent need. Selecting meaningful and relevant genes for a classifier not only decreases computational time and cost but also improves classification performance. However, most existing feature selection approaches suffer from problems such as lack of robustness and validation issues. Here, we present a new feature selection technique that takes advantage of clustering both samples and genes. Materials and methods: We used the leukemia gene expression dataset [1]. The effectiveness of the selected features was evaluated by four different classification methods: support vector machines, k-nearest neighbor, random forest, and linear discriminant analysis. The method evaluates the importance and relevance of each gene cluster by summing the expression levels of the genes belonging to that cluster. A gene cluster is considered important if it satisfies conditions depending on thresholds and percentages; otherwise it is eliminated. Results: Initial analysis identified 7120 differentially expressed genes of leukemia (Fig. 15a); after applying our feature selection methodology we ended up with 1117 specific genes discriminating the two classes of leukemia (Fig. 15b). Further applying the same method with more stringent (higher positive and lower negative) threshold conditions reduced the number to 58 genes, which were tested to evaluate the effectiveness of the method (Fig. 15c). The results of the four classification methods are summarized in Table 11. Conclusions: The feature selection method gave good results with minimum classification error. Our heat-map result shows a distinct pattern of refined genes discriminating between the two classes of leukemia.
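
    As a rough illustration of the cluster-then-threshold idea described above, the following sketch clusters genes, scores each cluster by summed expression per class, and keeps clusters passing a cutoff. The clustering algorithm (k-means), the cutoff value and the synthetic data are all assumptions standing in for the paper's unspecified choices.

      # Cluster-based gene filtering sketch (k-means and the cutoff below are
      # illustrative stand-ins for the paper's clustering and thresholds).
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)
      X = rng.normal(size=(72, 500))          # 72 samples x 500 genes (synthetic)
      y = rng.integers(0, 2, size=72)         # two classes (synthetic labels)

      # Cluster genes by their expression profiles across samples.
      k = 20
      gene_clusters = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X.T)

      selected = []
      for c in range(k):
          genes = np.where(gene_clusters == c)[0]
          # Score the cluster by its summed expression in each class.
          s0 = X[y == 0][:, genes].sum()
          s1 = X[y == 1][:, genes].sum()
          # Keep the cluster if the class-wise sums differ beyond a threshold
          # (hypothetical rule standing in for the positive/negative cutoffs).
          if abs(s0 - s1) > 50.0:
              selected.extend(genes)

      print(f"kept {len(selected)} of {X.shape[1]} genes")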

  11. A general symplectic method for the response analysis of infinitely periodic structures subjected to random excitations

    Directory of Open Access Journals (Sweden)

    You-Wei Zhang

    Full Text Available A general symplectic method for the random response analysis of infinitely periodic structures subjected to stationary/non-stationary random excitations is developed using symplectic mathematics in conjunction with variable separation and the pseudo-excitation method (PEM). Starting from the equation of motion for a single loaded substructure, symplectic analysis is first used to eliminate the dependent degrees of freedom through condensation. A Fourier expansion of the condensed equation of motion is then applied to separate the variables of time and wave number, thus enabling the necessary recurrence scheme to be developed. The random response is finally determined by implementing PEM. The proposed method is justified by comparison with results available in the literature and is then applied to a more complicated time-dependent coupled system.
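
    The core of PEM is easy to state for a single-degree-of-freedom oscillator: replace the stationary excitation of spectral density S(ω) by the deterministic pseudo-excitation √S(ω)·e^{iωt}, solve the resulting harmonic problem, and read the response PSD off as the squared magnitude of the pseudo-response. The sketch below shows only this ingredient, with assumed parameters; the paper's symplectic condensation of infinite periodic structures is not reproduced.

      # PEM core idea on a single-DOF oscillator (illustrative parameters).
      import numpy as np

      m, c, k = 1.0, 0.1, 100.0               # mass, damping, stiffness (assumed)
      omegas = np.linspace(0.1, 30.0, 300)
      S_ff = np.ones_like(omegas)             # white-noise input PSD (assumed)

      # Pseudo-excitation sqrt(S_ff)*exp(i*w*t) turns the random problem into a
      # harmonic one; the response PSD is the squared pseudo-response magnitude.
      H = 1.0 / (k - m * omegas**2 + 1j * c * omegas)
      S_yy = np.abs(H * np.sqrt(S_ff)) ** 2

      # Stationary response variance = area under the response PSD.
      var_y = np.sum(S_yy) * (omegas[1] - omegas[0])
      print(f"response variance ~ {var_y:.4f}")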

  12. Application of QMC methods to PDEs with random coefficients : a survey of analysis and implementation

    KAUST Repository

    Kuo, Frances

    2016-01-05

    In this talk I will provide a survey of recent research efforts on the application of quasi-Monte Carlo (QMC) methods to PDEs with random coefficients. Such PDE problems occur in the area of uncertainty quantification. In recent years many papers have been written on this topic using a variety of methods. QMC methods are relatively new to this application area. I will consider different models for the randomness (uniform versus lognormal) and contrast different QMC algorithms (single-level versus multilevel, first order versus higher order, deterministic versus randomized). I will give a summary of the QMC error analysis and proof techniques in a unified view, and provide a practical guide to the software for constructing QMC points tailored to the PDE problems.
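
    As a minimal illustration of the contrast the talk draws, the snippet below estimates a toy expectation with plain Monte Carlo and with randomized Sobol' points; the integrand is an assumed stand-in for a PDE quantity of interest, not one of the problems discussed in the talk.

      # Plain MC versus randomized QMC (Sobol') on a smooth toy integrand.
      import numpy as np
      from scipy.stats import qmc

      def quantity_of_interest(u):
          # toy "PDE output" depending smoothly on d uniform random coefficients
          return np.exp(-np.sum(u, axis=1) / u.shape[1])

      d, n = 8, 1024
      rng = np.random.default_rng(0)
      mc = quantity_of_interest(rng.random((n, d))).mean()

      sobol = qmc.Sobol(d=d, scramble=True, seed=0)   # randomized QMC points
      qmc_est = quantity_of_interest(sobol.random(n)).mean()

      print(f"MC estimate:  {mc:.6f}")
      print(f"QMC estimate: {qmc_est:.6f}")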

  13. Wavelength Selection Method Based on Differential Evolution for Precise Quantitative Analysis Using Terahertz Time-Domain Spectroscopy.

    Science.gov (United States)

    Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong

    2017-12-01

    Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial one. The raw spectrum consists of the signal from the sample together with scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
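
    A hedged sketch of how such a DE-based selection can be wired up: each candidate vector in [0,1]^p is thresholded into a wavelength mask and scored by the cross-validated error of a multivariate calibration model. The synthetic spectra, the PLS model and the 0.5 threshold are illustrative assumptions, not details taken from the paper.

      # DE over a continuous relaxation of a wavelength mask, scored by PLS CV error.
      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(1)
      X = rng.normal(size=(40, 60))            # 40 mixtures x 60 wavelengths (synthetic)
      beta = np.zeros(60); beta[10:20] = 1.0   # only some wavelengths are informative
      y = X @ beta + 0.1 * rng.normal(size=40)

      def fitness(v):
          mask = v > 0.5
          if mask.sum() < 3:
              return 1e6                        # require a minimum number of wavelengths
          model = PLSRegression(n_components=2)
          score = cross_val_score(model, X[:, mask], y, cv=5,
                                  scoring="neg_root_mean_squared_error").mean()
          return -score                         # DE minimizes; lower RMSE is better

      res = differential_evolution(fitness, bounds=[(0, 1)] * 60,
                                   maxiter=10, popsize=5, seed=0, polish=False)
      print("selected wavelengths:", np.where(res.x > 0.5)[0])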

  14. Bayesian dose selection design for a binary outcome using restricted response adaptive randomization.

    Science.gov (United States)

    Meinzer, Caitlyn; Martin, Renee; Suarez, Jose I

    2017-09-08

    In phase II trials, the most efficacious dose is usually not known. Moreover, given limited resources, it is difficult to robustly identify a dose while also testing for a signal of efficacy that would support a phase III trial. Recent designs have sought to be more efficient by exploring multiple doses through the use of adaptive strategies. However, the added flexibility may potentially increase the risk of making incorrect assumptions and reduce the total amount of information available across the dose range as a function of imbalanced sample size. To balance these challenges, a novel placebo-controlled design is presented in which a restricted Bayesian response adaptive randomization (RAR) is used to allocate a majority of subjects to the optimal dose of active drug, defined as the dose with the lowest probability of poor outcome. However, the allocation between subjects who receive active drug or placebo is held constant to retain the maximum possible power for a hypothesis test of overall efficacy comparing the optimal dose to placebo. The design properties and optimization of the design are presented in the context of a phase II trial for subarachnoid hemorrhage. For a fixed total sample size, a trade-off exists between the ability to select the optimal dose and the probability of rejecting the null hypothesis. This relationship is modified by the allocation ratio between active and control subjects, the choice of RAR algorithm, and the number of subjects allocated to an initial fixed allocation period. While a responsive RAR algorithm improves the ability to select the correct dose, there is an increased risk of assigning more subjects to a worse arm as a function of ephemeral trends in the data. A subarachnoid treatment trial is used to illustrate how this design can be customized for specific objectives and available data. Bayesian adaptive designs are a flexible approach to addressing multiple questions surrounding the optimal dose for treatment efficacy
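
    For intuition, a toy sketch of restricted response-adaptive randomization in this spirit: the placebo allocation stays fixed while active doses are allocated by posterior draws of the poor-outcome probability. The Thompson-style rule, priors and outcome rates below are assumptions, not the trial's actual algorithm.

      # Restricted Bayesian RAR sketch: fixed placebo fraction, adaptive doses.
      import numpy as np

      rng = np.random.default_rng(0)
      p_poor = np.array([0.45, 0.35, 0.30])   # true poor-outcome rates (assumed)
      a = np.ones(3); b = np.ones(3)          # Beta(1,1) priors on P(poor outcome)
      placebo_frac = 0.25                     # allocation to placebo held constant

      for subject in range(200):
          if rng.random() < placebo_frac:
              continue                        # placebo arm: fixed allocation
          draws = rng.beta(a, b)              # sampled poor-outcome probabilities
          arm = int(np.argmin(draws))         # dose most likely to be the best
          poor = rng.random() < p_poor[arm]
          a[arm] += poor                      # update the posterior counts
          b[arm] += 1 - poor

      print("posterior mean poor-outcome rate per dose:",
            np.round(a / (a + b), 3))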

  15. History of the production complex: The methods of site selection

    Energy Technology Data Exchange (ETDEWEB)

    1987-09-01

    Experience taught the Atomic Energy Commission how to select the best possible sites for its production facilities. AEC officials learned from the precedents set by the wartime Manhattan Project and from their own mistakes in the immediate postwar years. This volume discusses several site selections. The sites covered are: (1) the Hanford Reservation, (2) the Idaho reactor site, (3) the Savannah River Plant, (4) the Paducah Gaseous Diffusion Plant, (5) the Portsmouth Gaseous Diffusion Plant, (6) the Fernald Production Center, (7) the PANTEX and Spoon River Plants, (8) the Rocky Flats Fabrication Facility, and (9) the Miamisburg and Pinellas plants. (JDH)

  16. A Framework for Simulation Validation & Verification Method Selection

    NARCIS (Netherlands)

    Roungas, V.; Meijer, S.A.; Verbraeck, A.; Ramezani, Arash; Williams, Edward; Bauer, Marek

    2017-01-01

    Thirty years of research on validation and verification (V&V) has returned a plethora of methods, statistical techniques, and reported case studies. It is that abundance of methods that poses a major challenge. Because of overlap between methods and time and budget constraints, it is impossible

  17. Selecting Methods of the Weighting Factors of Local Criteria

    Directory of Open Access Journals (Sweden)

    V. M. Postnikov

    2015-01-01

    Full Text Available The paper considers calculation methods for the weight coefficients of local criteria used by the person making the decision (PMD). The methods can be used by a decision-maker to form integrated criteria in the form of an additive, multiplicative, minimax, nonlinear or combined convolution of local criteria of differing importance, as well as to carry out a comparative assessment of the studied alternative options on that basis, rank these options, and choose the best option among them. The paper classifies the calculation methods for the weight coefficients of criteria into three groups, namely: methods based on the paired comparison of criteria, methods based on an analytical interrelation of the indicators of criteria preference, and methods based on a formal approach. Among the methods based on paired comparison of criteria, the following are distinguished: the classical method of paired comparison of criteria, and methods of paired comparison based on fixed, floating, and exponential floating preferences of criteria. The last two comparison methods are, respectively, the basis for the practical use of the analytic hierarchy process and the multiplicative analytic hierarchy process. The paper considers in detail calculation methods for the weight coefficients of criteria that use analytical dependences of the interrelation between the indicators of criteria importance based on arithmetic and geometric progressions. The considered formal methods include the method of successive comparison of criteria, known as the Churchman–Ackoff method, and the method of the basic criterion. It is shown that when using methods of paired comparison of criteria, or methods based on weight coefficients obeying an arithmetic or geometric progression, with strictly ranked criteria K1 ≻ K2 ≻ K3 ≻ … ≻ Kn-1 ≻ Kn, a difference in the weight coefficients of the most important and least important
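
    A small sketch of the kind of weight vectors such analytical interrelations produce, using an arithmetic and a geometric progression over ranked criteria (the ratio q is an example value, not one prescribed by the paper):

      # Weights implied by ranked criteria K1 > K2 > ... > Kn under two progressions.
      n = 5

      # Arithmetic progression: w_i proportional to (n - i + 1), i.e. rank-sum weights
      arith = [n - i for i in range(n)]                  # n, n-1, ..., 1
      arith = [w / sum(arith) for w in arith]

      # Geometric progression: w_i proportional to q**i with ratio 0 < q < 1 (assumed)
      q = 0.5
      geom = [q**i for i in range(n)]
      geom = [w / sum(geom) for w in geom]

      print("arithmetic:", [round(w, 3) for w in arith])
      print("geometric: ", [round(w, 3) for w in geom])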

  18. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum...... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all...

  19. Model Selection Methods for Mixture Dichotomous IRT Models

    Science.gov (United States)

    Li, Feiming; Cohen, Allan S.; Kim, Seock-Ho; Cho, Sun-Joo

    2009-01-01

    This study examines model selection indices for use with dichotomous mixture item response theory (IRT) models. Five indices are considered: Akaike's information criterion (AIC), the Bayesian information criterion (BIC), the deviance information criterion (DIC), the pseudo-Bayes factor (PsBF), and posterior predictive model checks (PPMC). The five…

  20. Selection for Yield Improvement Using of Multivariate Statistical Methods

    Directory of Open Access Journals (Sweden)

    H Sabouri

    2012-06-01

    Full Text Available In order to provide selection indices using the heritability and correlation of traits affecting grain yield, and multiple regression, an experiment was conducted on 265 F3 families, as well as the parents and F1, of the Gharib × Khazar population in 2009 at the Gonbad Kavous University fields. Days to ripening (0.97) and panicle number and flag leaf length (0.66) had the maximum and minimum heritability, respectively. Positive and significant correlations were detected between plant yield and flag leaf width (0.265**), plant height (0.193**), panicle number (0.734**) and biomass (0.828**). Biomass, days to heading and plant height explained about 98% of the total variation of yield and were entered into the model, respectively. Different combinations of phenotypic and genotypic correlations, genetic and phenotypic direct effects in path analysis, and heritability with and without yield were used to construct selection vectors. According to this study, increasing the number of traits does not necessarily improve relative efficiency or the other comparison parameters. The selection indices showed that yield, significant genetic correlation with yield and high heritability are the three important parts of a selection index.

  1. Selection for Yield Improvement Using of Multivariate Statistical Methods

    Directory of Open Access Journals (Sweden)

    H Sabouri

    2012-07-01

    Full Text Available In order to provide selection indices using the heritability and correlation of traits affecting yield, and multiple regression, an experiment was conducted on 265 F3 families, as well as the parents and F1, of the Gharib × Khazar population in 2009 at the Gonbad High Education Center fields. Days to ripening (0.97) and panicle number and flag leaf length (0.66) had the maximum and minimum heritability, respectively. Positive and significant correlations were detected between plant yield and flag leaf width (0.265**), plant height (0.193**), panicle number (0.734**) and biomass (0.828**). Biomass, days to heading and plant height explained about 98% of the total variation of yield and were entered into the model, respectively. Phenotypic and genotypic correlations, genetic and phenotypic direct effects in path analysis, and heritability were used to construct selection vectors. According to this study, increasing the number of traits does not necessarily improve relative efficiency or the other comparison parameters. The selection indices showed that yield, significant genetic correlation with yield and high heritability are the three important parts of a selection index. The fifth, sixth and fourteenth indices are the most important among those discussed.

  2. A Careful Look at Modern Case Selection Methods

    Science.gov (United States)

    Herron, Michael C.; Quinn, Kevin M.

    2016-01-01

    Case studies appear prominently in political science, sociology, and other social science fields. A scholar employing a case study research design in an effort to estimate causal effects must confront the question, how should cases be selected for analysis? This question is important because the results derived from a case study research program…

  3. Enhancement web proxy cache performance using Wrapper Feature Selection methods with NB and J48

    Science.gov (United States)

    Mahmoud Al-Qudah, Dua’a.; Funke Olanrewaju, Rashidah; Wong Azman, Amelia

    2017-11-01

    The web proxy cache technique reduces response time by storing copies of pages between the client and server sides. If requested pages are cached in the proxy, there is no need to access the server. Due to the limited size and excessive cost of cache compared to other storage, a cache replacement algorithm is used to determine which page to evict when the cache is full. However, conventional replacement algorithms such as Least Recently Used (LRU), First In First Out (FIFO), Least Frequently Used (LFU), Randomized Policy, etc., may discard important pages just before they are used. Furthermore, conventional algorithms cannot be well optimized, since intelligently evicting a page before replacement requires some decision-making. Hence, most researchers propose integrating intelligent classifiers with the replacement algorithm to improve its performance. This research proposes using automated wrapper feature selection methods to choose the best subset of features that are relevant and influence the classifier's prediction accuracy. The results show that the wrapper feature selection methods, namely Best First (BFS), Incremental Wrapper Subset Selection (IWSS) embedded NB, and particle swarm optimization (PSO), reduce the number of features and have a good impact on reducing computation time. Using PSO enhances NB classifier accuracy by 1.1%, 0.43% and 0.22% over using NB with all features, using BFS, and using IWSS-embedded NB, respectively. PSO raises J48 accuracy by 0.03%, 1.91% and 0.04% over using the J48 classifier with all features, using IWSS-embedded NB, and using BFS, respectively. IWSS-embedded NB speeds up the NB and J48 classifiers much more than BFS and PSO, reducing the computation time of NB by 0.1383 and of J48 by 2.998.
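
    As a runnable stand-in for the wrappers compared above, the sketch below wraps a greedy forward search around Naive Bayes using scikit-learn; synthetic data replace the proxy-log features, and plain forward selection approximates (but is not identical to) the BFS/IWSS/PSO searches.

      # Greedy forward wrapper feature selection around a Naive Bayes classifier.
      from sklearn.datasets import make_classification
      from sklearn.feature_selection import SequentialFeatureSelector
      from sklearn.model_selection import cross_val_score
      from sklearn.naive_bayes import GaussianNB

      X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                                 random_state=0)
      nb = GaussianNB()
      sfs = SequentialFeatureSelector(nb, n_features_to_select=5,
                                      direction="forward", cv=5)
      sfs.fit(X, y)

      baseline = cross_val_score(nb, X, y, cv=5).mean()
      reduced = cross_val_score(nb, X[:, sfs.get_support()], y, cv=5).mean()
      print(f"all 20 features: {baseline:.3f}  selected 5: {reduced:.3f}")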

  4. Content analysis of a stratified random selection of JVME articles: 1974-2004.

    Science.gov (United States)

    Olson, Lynne E

    2011-01-01

    A content analysis was performed on a random sample (N = 168) of 25% of the articles published in the Journal of Veterinary Medical Education (JVME) per year from 1974 through 2004. Over time, there were increased numbers of authors per paper, more cross-institutional collaborations, greater prevalence of references or endnotes, and lengthier articles, which could indicate a trend toward publications describing more complex or complete work. The number of first authors that could be identified as female was greatest for the most recent time period studied (2000-2004). Two different categorization schemes were created to assess the content of the publications. The first categorization scheme identified the most frequently published topics as admissions, descriptions of courses, the effect of changing teaching methods, issues facing the profession, and examples of uses of technology. The second categorization scheme identified the subset of articles that described medical education research on the basis of the purpose of the research, which represented only 14% of the sample articles (24 of 168). Of that group, only three of 24, or 12%, represented studies based on a firm conceptual framework that could be confirmed or refuted by the study's results. The results indicate that JVME is meeting its broadly based mission and that publications in the veterinary medical education literature have features common to publications in medicine and medical education.

  5. Capturing the Flatness of a peer-to-peer lending network through random and selected perturbations

    Science.gov (United States)

    Karampourniotis, Panagiotis D.; Singh, Pramesh; Uparna, Jayaram; Horvat, Emoke-Agnes; Szymanski, Boleslaw K.; Korniss, Gyorgy; Bakdash, Jonathan Z.; Uzzi, Brian

    Null models are established tools that have been used in network analysis to uncover various structural patterns. They quantify the deviation of an observed network measure from that given by the null model. We construct a null model for weighted, directed networks to identify biased links (carrying significantly different weights than expected according to the null model) and thus quantify the flatness of the system. Using this model, we study the flatness of Kiva, a large international crowdfinancing network of borrowers and lenders, aggregated to the country level. The dataset spans the years 2006 to 2013. Our longitudinal analysis shows that the flatness of the system is decreasing over time, meaning the proportion of biased inter-country links is growing. We extend our analysis by testing the robustness of the flatness of the network under perturbations of the links' weights or the nodes themselves. Examples of such perturbations are event shocks (e.g. erecting walls) or regulatory shocks (e.g. Brexit). We find that flatness is unaffected by random shocks, but changes after shocks target links with a large weight or bias. The methods we use to capture the flatness are based on analytics, simulations, and numerical computations using Shannon's maximum entropy. Supported by ARL NS-CTA.

  6. Support Vector Machines Parameter Selection Based on Combined Taguchi Method and Staelin Method for E-mail Spam Filtering

    Directory of Open Access Journals (Sweden)

    Wei-Chih Hsu

    2012-04-01

    Full Text Available Support vector machines (SVM) are a powerful tool for building good spam filtering models. However, the performance of the model depends on parameter selection, which seriously affects classification performance during the training process. In this study, we use a combined Taguchi method and Staelin method to optimize the SVM-based e-mail spam filtering model and improve spam filtering accuracy. We compare it with other parameter optimization methods, such as grid search. Six real-world mail data sets are selected to demonstrate the effectiveness and feasibility of the method. The results show that our proposed methods can find an effective model with high classification accuracy.
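
    A sketch of the grid-search baseline the paper compares against, on synthetic data in place of the six mail corpora; a Taguchi/Staelin design would cover this (C, gamma) grid far more sparsely.

      # Exhaustive grid search over SVM parameters, scored by cross-validation.
      from sklearn.datasets import make_classification
      from sklearn.model_selection import GridSearchCV
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=400, n_features=20, random_state=0)
      grid = {"C": [0.1, 1, 10, 100], "gamma": [1e-3, 1e-2, 1e-1, 1]}
      search = GridSearchCV(SVC(kernel="rbf"), grid, cv=5).fit(X, y)
      print("best parameters:", search.best_params_)
      print(f"cross-validated accuracy: {search.best_score_:.3f}")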

  7. Correlated Random Systems Five Different Methods : CIRM Jean-Morlet Chair

    CERN Document Server

    Kistler, Nicola

    2015-01-01

    This volume presents five different methods recently developed to tackle the large scale behavior of highly correlated random systems, such as spin glasses, random polymers, local times and loop soups and random matrices. These methods, presented in a series of lectures delivered within the Jean-Morlet initiative (Spring 2013), play a fundamental role in the current development of probability theory and statistical mechanics. The lectures were: Random Polymers by E. Bolthausen, Spontaneous Replica Symmetry Breaking and Interpolation Methods by F. Guerra, Derrida's Random Energy Models by N. Kistler, Isomorphism Theorems by J. Rosen and Spectral Properties of Wigner Matrices by B. Schlein. This book is the first in a co-edition between the Jean-Morlet Chair at CIRM and the Springer Lecture Notes in Mathematics which aims to collect together courses and lectures on cutting-edge subjects given during the term of the Jean-Morlet Chair, as well as new material produced in its wake. It is targeted at researchers, i...

  8. Coupling Neumann development and component mode synthesis methods for stochastic analysis of random structures

    Directory of Open Access Journals (Sweden)

    Driss Sarsri

    2014-05-01

    Full Text Available In this paper, we propose a method to calculate the first two moments (mean and variance) of the structural dynamic response of a structure with uncertain variables subjected to random excitation. To this end, the Newmark method is used to transform the equation of motion of the structure into a quasi-static equilibrium equation in the time domain. The Neumann development method is coupled with Monte Carlo simulations to calculate the statistical values of the random response. The use of modal synthesis methods can reduce the dimensions of the model before integration of the equation of motion. Numerical applications have been developed to highlight the effectiveness of the method in analyzing the stochastic response of large structures.
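
    The Neumann-expansion ingredient can be sketched compactly: per random realization, approximate u = (K0 + ΔK)^(-1) f by a truncated series and accumulate Monte Carlo statistics. The small matrices below are toy stand-ins for a condensed structural model.

      # Truncated Neumann series inside a Monte Carlo loop for a random stiffness.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 6
      K0 = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) \
           + np.diag(np.full(n - 1, -1.0), -1)          # nominal stiffness
      f = np.ones(n)                                    # static load
      K0_inv = np.linalg.inv(K0)

      samples = []
      for _ in range(2000):
          dK = np.diag(0.3 * rng.standard_normal(n))    # random stiffness perturbation
          u, term = np.zeros(n), K0_inv @ f
          for _ in range(4):                            # truncated Neumann series
              u += term
              term = -K0_inv @ (dK @ term)
          samples.append(u)

      samples = np.array(samples)
      print("mean response:", np.round(samples.mean(axis=0), 4))
      print("variance:     ", np.round(samples.var(axis=0), 4))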

  9. A preliminary investigation of the jack-bean urease inhibition by randomly selected traditionally used herbal medicine.

    Science.gov (United States)

    Biglar, Mahmood; Soltani, Khadijeh; Nabati, Farzaneh; Bazl, Roya; Mojab, Faraz; Amanlou, Massoud

    2012-01-01

    Helicobacter pylori (H. pylori) infection leads to different clinical and pathological outcomes in humans, including chronic gastritis, peptic ulcer disease, gastric neoplasia and even gastric cancer, and its eradication depends upon multi-drug therapy. The most effective therapy is still unknown, which prompts great efforts to find better and more modern natural or synthetic anti-H. pylori agents. In this report, 21 randomly selected herbal methanolic extracts were evaluated for their inhibition of jack-bean urease using the indophenol method as described by Weatherburn. The inhibition potency was measured by UV spectroscopy at 630 nm, which is attributed to the released ammonium. Among these extracts, five showed potent inhibitory activities with IC50 values in the range of 18-35 μg/mL. These plants are Matricaria disciforme (IC50: 35 μg/mL), Nasturtium officinale (IC50: 18 μg/mL), Punica granatum (IC50: 30 μg/mL), Camelia sinensis (IC50: 35 μg/mL), and Citrus aurantifolia (IC50: 28 μg/mL).

  10. A relative entropy method to measure non-exponential random data

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yingjie; Chen, Wen, E-mail: chenwen@hhu.edu.cn

    2015-01-23

    This paper develops a relative entropy method to measure non-exponential random data in conjunction with the fractional order moment, logarithmic moment and tail statistics of the Mittag–Leffler distribution. The distribution of non-exponential random data follows neither the exponential distribution nor exponential decay. The proposed strategy is validated by analyzing experimental data generated by the Monte Carlo method using the Mittag–Leffler distribution. Compared with the traditional Shannon entropy, the relative entropy method is simple to implement, and its corresponding relative entropies, approximated by the fractional order moment, logarithmic moment and tail statistics, can easily and accurately detect non-exponential random data. - Highlights: • A relative entropy method is developed to measure non-exponential random data. • The fractional order moment, logarithmic moment and tail statistics are employed. • The three strategies of the Mittag–Leffler distribution can be accurately established. • Compared with Shannon entropy, the relative entropy method is easy to implement.

  11. Sustainable Supplier Performance Evaluation and Selection with Neofuzzy TOPSIS Method

    OpenAIRE

    Chaharsooghi, S. K.; Ashrafi, Mehdi

    2014-01-01

    Supplier selection plays an important role in supply chain management, and traditional criteria such as price, quality, and flexibility are considered for supplier performance evaluation in the research literature. In recent years, sustainability has received more attention in the supply chain management literature, with the triple bottom line (TBL) describing sustainability in supply chain management through social, environmental, and economic initiatives. This paper explores sustainability in supply chain...
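
    For contrast with the neofuzzy variant, a minimal classical (crisp) TOPSIS sketch; the decision matrix, weights and criteria senses below are made-up illustrations, not data from the paper.

      # Classical TOPSIS: normalize, weight, and rank by closeness to the ideal.
      import numpy as np

      X = np.array([[7.0, 9.0, 9.0],      # suppliers x criteria (illustrative)
                    [8.0, 7.0, 8.0],
                    [9.0, 6.0, 7.0]])
      w = np.array([0.5, 0.3, 0.2])       # criterion weights (assumed)
      benefit = np.array([True, True, True])  # all benefit-type criteria here

      R = X / np.linalg.norm(X, axis=0)   # vector normalization
      V = R * w                           # weighted normalized matrix

      ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
      anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

      d_plus = np.linalg.norm(V - ideal, axis=1)
      d_minus = np.linalg.norm(V - anti, axis=1)
      closeness = d_minus / (d_plus + d_minus)
      print("closeness to ideal:", np.round(closeness, 3))  # higher = better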

  12. Improvements in Sample Selection Methods for Image Classification

    Directory of Open Access Journals (Sweden)

    Thales Sehn Körting

    2014-08-01

    Full Text Available Traditional image classification algorithms are mainly divided into unsupervised and supervised paradigms. In the first paradigm, algorithms are designed to automatically estimate the classes’ distributions in the feature space. The second paradigm depends on the knowledge of a domain expert to identify representative examples from the image to be used for estimating the classification model. Recent improvements in human-computer interaction (HCI enable the construction of more intuitive graphic user interfaces (GUIs to help users obtain desired results. In remote sensing image classification, GUIs still need advancements. In this work, we describe our efforts to develop an improved GUI for selecting the representative samples needed to estimate the classification model. The idea is to identify changes in the common strategies for sample selection to create a user-driven sample selection, which focuses on different views of each sample, and to help domain experts identify explicit classification rules, which is a well-established technique in geographic object-based image analysis (GEOBIA. We also propose the use of the well-known nearest neighbor algorithm to identify similar samples and accelerate the classification.

  13. Selection of an Appropriate Interpolation Method for Rainfall Data In ...

    African Journals Online (AJOL)

    Interpolation technique can be used to establish the rainfall data at the location of interest from available data. There are many interpolation methods in use with various limitations and likelihood of errors. This study applied five interpolation methods to existing rainfall data in central Nigeria to determine the most appropriate ...

  14. EFFECT OF MCKENZIE METHOD WITH TENS ON LUMBAR RADICULOPATHY A RANDOMIZED CONTROLLED TRIAL

    Directory of Open Access Journals (Sweden)

    Jay Indravadan Patel

    2016-02-01

    Full Text Available Background: Lumbar radiculopathy is a disease of the spinal nerve root, generally accompanied by radicular pain in a dermatomal distribution and/or neurologic symptoms. Previous studies focused on the disability and pain caused by lumbar radiculopathy; this study focuses on disability, pain, range of motion of the spine and the SLR. The objective of the study is to evaluate the effectiveness of the McKenzie method with TENS in reducing the symptoms and disability of lumbar radiculopathy. Methods: In the present prospective study, patients with lumbar radicular pain due to disc herniation or prolapse at levels L4, L5 and S1 were randomized into two groups, Group A and Group B. The study included 40 patients, with 20 in each group. The selection criteria were: age 22-55 years, both sexes, radicular pain in the L4, L5 and S1 dermatomes, disabling leg pain of 6-12 weeks' duration, and evidence of disc herniation confirmed on MR imaging. Radicular pain was measured using the SLR test, pain using a VAS scale of 0-100, disability using the MODI, and lumbar spine ROM using the MMST. Group A was treated with the McKenzie method with TENS and Group B was treated with general exercise with TENS. Results: This study showed a significant reduction of pain on the VAS and improvements in the SLR, lumbar spine range of motion (MMST) and disability (MODI) for both groups. The statistical analysis found that the experimental group showed earlier control of all the outcome measures compared to the control group at the end of the 6th week. Conclusion: After 6 weeks of the McKenzie method with TENS, applied for 30 minutes 5 days a week, the statistical analysis concluded that the experimental group had significantly faster rates of reduction in the symptoms and disability of lumbar radiculopathy.

  15. Evaluation of methods and marker Systems in Genomic Selection of oil palm (Elaeis guineensis Jacq.).

    Science.gov (United States)

    Kwong, Qi Bin; Teh, Chee Keng; Ong, Ai Ling; Chew, Fook Tim; Mayes, Sean; Kulaveerasingam, Harikrishna; Tammi, Martti; Yeoh, Suat Hui; Appleton, David Ross; Harikrishna, Jennifer Ann

    2017-12-11

    Genomic selection (GS) uses genome-wide markers as an attempt to accelerate genetic gain in breeding programs of both animals and plants. This approach is particularly useful for perennial crops such as oil palm, which have long breeding cycles, and for which the optimal method for GS is still under debate. In this study, we evaluated the effect of different marker systems and modeling methods for implementing GS in an introgressed dura family derived from a Deli dura x Nigerian dura (Deli x Nigerian) with 112 individuals. This family is an important breeding source for developing new mother palms for superior oil yield and bunch characters. The traits of interest selected for this study were fruit-to-bunch (F/B), shell-to-fruit (S/F), kernel-to-fruit (K/F), mesocarp-to-fruit (M/F), oil per palm (O/P) and oil-to-dry mesocarp (O/DM). The marker systems evaluated were simple sequence repeats (SSRs) and single nucleotide polymorphisms (SNPs). RR-BLUP, Bayesian A, B, Cπ, LASSO, Ridge Regression and two machine learning methods (SVM and Random Forest) were used to evaluate GS accuracy of the traits. The kinship coefficient between individuals in this family ranged from 0.35 to 0.62. S/F and O/DM had the highest genomic heritability, whereas F/B and O/P had the lowest. The accuracies using 135 SSRs were low, with accuracies of the traits around 0.20. The average accuracy of machine learning methods was 0.24, as compared to 0.20 achieved by other methods. The trait with the highest mean accuracy was F/B (0.28), while the lowest were both M/F and O/P (0.18). By using whole genomic SNPs, the accuracies for all traits, especially for O/DM (0.43), S/F (0.39) and M/F (0.30) were improved. The average accuracy of machine learning methods was 0.32, compared to 0.31 achieved by other methods. Due to high genomic resolution, the use of whole-genome SNPs improved the efficiency of GS dramatically for oil palm and is recommended for dura breeding programs. Machine learning slightly
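
    A hedged sketch of the flavor of the ridge-regression (RR-BLUP-style) baseline in such comparisons: regress the trait on genome-wide SNPs and report the cross-validated correlation between predicted and observed values as the accuracy. The genotypes, effect sizes and ridge penalty below are synthetic assumptions, not the oil palm data.

      # Ridge regression on SNPs with cross-validated correlation as GS accuracy.
      import numpy as np
      from sklearn.linear_model import Ridge
      from sklearn.model_selection import KFold

      rng = np.random.default_rng(0)
      n, p = 112, 5000                          # family size as above; SNP count assumed
      X = rng.integers(0, 3, size=(n, p)).astype(float)   # 0/1/2 genotype codes
      effects = rng.normal(0, 0.05, size=p)
      y = X @ effects + rng.normal(0, 1.0, size=n)        # polygenic trait

      accs = []
      for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
          model = Ridge(alpha=100.0).fit(X[train], y[train])
          accs.append(np.corrcoef(model.predict(X[test]), y[test])[0, 1])

      print(f"GS accuracy (mean CV correlation): {np.mean(accs):.2f}")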

  16. A simple method for analyzing data from a randomized trial with a missing binary outcome

    Directory of Open Access Journals (Sweden)

    Freedman Laurence S

    2003-05-01

    Full Text Available Abstract Background: Many randomized trials involve missing binary outcomes. Although many previous adjustments for missing binary outcomes have been proposed, none of these makes explicit use of randomization to bound the bias when the data are not missing at random. Methods: We propose a novel approach that uses the randomization distribution to compute the anticipated maximum bias when missing at random does not hold due to an unobserved binary covariate (implying that missingness depends on outcome and treatment group). The anticipated maximum bias equals the product of two factors: (a) the anticipated maximum bias if there were complete confounding of the unobserved covariate with treatment group among subjects with an observed outcome, and (b) an upper bound factor that depends only on the fraction missing in each randomization group. If less than 15% of subjects are missing in each group, the upper bound factor is less than .18. Results: We illustrated the methodology using data from the Polyp Prevention Trial. We anticipated a maximum bias under complete confounding of .25. With only 7% and 9% missing in each arm, the upper bound factor, after adjusting for age and sex, was .10. The anticipated maximum bias of .25 × .10 = .025 would not have affected the conclusion of no treatment effect. Conclusion: This approach is easy to implement and is particularly informative when less than 15% of subjects are missing in each arm.

  17. ABCLS method for high-reliability aerospace mechanism with truncated random uncertainties

    Directory of Open Access Journals (Sweden)

    Peng Wensheng

    2015-08-01

    Full Text Available Random variables are often truncated in aerospace engineering, and a truncated distribution is more feasible and effective for such variables given the limited samples available. For high-reliability aerospace mechanisms with truncated random variables, a method based on the artificial bee colony (ABC) algorithm and line sampling (LS) is proposed. The artificial bee colony-based line sampling (ABCLS) method presents a multi-constrained optimization model to solve the potential non-convergence problem when calculating the design point (also known as the most probable point, MPP) of a performance function with truncated variables; by implementing the ABC algorithm to search for the MPP in the standard normal space, the optimization efficiency and global searching ability are dramatically increased. When calculating the reliability of an aerospace mechanism with a very small failure probability, the Monte Carlo simulation method requires a prohibitively large sample size; the ABCLS method overcomes this drawback. For reliability problems with implicit functions, this paper combines ABCLS with the Kriging response surface method, which can alleviate the computational burden of calculating the reliability of complex aerospace mechanisms. A numerical example and an engineering example are carried out to verify this method and prove its applicability.

  18. Sample Selected Averaging Method for Analyzing the Event Related Potential

    Science.gov (United States)

    Taguchi, Akira; Ono, Youhei; Kimura, Tomoaki

    The event related potential (ERP) is often measured through the oddball task, in which subjects are given a “rare stimulus” and a “frequent stimulus”, and the measured ERPs are analyzed by the averaging technique. In the results, the amplitude of the ERP P300 becomes large when the “rare stimulus” is given. However, the measured ERPs include samples without the original features of the ERP. Thus, it is necessary to reject unsuitable measured ERPs when using the averaging technique. In this paper, we propose a rejection method for unsuitable measured ERPs for the averaging technique. Moreover, we combine the proposed method with Woody's adaptive filter method.
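
    A toy sketch of averaging with trial rejection; the rejection rule used here (correlation with the grand average below a cutoff) is an assumed stand-in for the authors' selection criterion, and the epoch data are synthetic.

      # Trial averaging with rejection of unsuitable sweeps.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0, 0.8, 200)                       # 0-800 ms epoch
      p300 = np.exp(-((t - 0.3) ** 2) / 0.002)           # idealized P300 shape

      trials = p300 + 0.8 * rng.standard_normal((40, t.size))
      trials[::8] = 2.0 * rng.standard_normal((5, t.size))   # artifact-only sweeps

      grand = trials.mean(axis=0)
      corr = np.array([np.corrcoef(tr, grand)[0, 1] for tr in trials])
      keep = corr > 0.1                                  # assumed rejection cutoff

      naive = trials.mean(axis=0)
      selected = trials[keep].mean(axis=0)
      print(f"kept {keep.sum()} of {len(trials)} sweeps; "
            f"peak naive={naive.max():.2f}, selected={selected.max():.2f}")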

  19. The Stochastic Galerkin Method for Darcy Flow Problem with Log-Normal Random

    Czech Academy of Sciences Publication Activity Database

    Beres, Michal; Domesová, Simona

    2017-01-01

    Vol. 15, No. 2 (2017), pp. 267-279. ISSN 1336-1376. R&D Projects: GA MŠk LQ1602. Institutional support: RVO:68145535. Keywords: Darcy flow * Gaussian random field * Karhunen-Loeve decomposition * polynomial chaos * Stochastic Galerkin method. Subject RIV: BA - General Mathematics. http://advances.utc.sk/index.php/AEEE/article/view/2280

  20. Random Qualitative Validation: A Mixed-Methods Approach to Survey Validation

    Science.gov (United States)

    Van Duzer, Eric

    2012-01-01

    The purpose of this paper is to introduce the process and value of Random Qualitative Validation (RQV) in the development and interpretation of survey data. RQV is a method of gathering clarifying qualitative data that improves the validity of the quantitative analysis. This paper is concerned with validity in relation to the participants'…

  1. THE EFFICIENCY OF RANDOM FOREST METHOD FOR SHORELINE EXTRACTION FROM LANDSAT-8 AND GOKTURK-2 IMAGERIES

    Directory of Open Access Journals (Sweden)

    B. Bayram

    2017-11-01

    Full Text Available Coastal monitoring plays a vital role in environmental planning and hazard management. Since shorelines are fundamental data for environmental management, disaster management, coastal erosion studies, modelling of sediment transport and coastal morphodynamics, various techniques have been developed to extract shorelines. Random Forest, one of these techniques, is used in this study for shoreline extraction. This algorithm is a machine learning method based on decision trees; decision trees analyse the classes of training data and create rules for classification. In this study, the Terkos region was chosen for the proposed method within the scope of the TUBITAK Project (Project No: 115Y718) titled "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model – Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm was implemented to extract the shoreline of the Black Sea near the lake from LANDSAT-8 and GOKTURK-2 satellite imageries taken in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method was applied to the NIR bands of the LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imageries. Each image was digitized manually and shorelines were obtained for accuracy assessment. According to the accuracy assessment results, the Random Forest method is efficient for both medium and high resolution images in shoreline extraction studies.

  2. The Efficiency of Random Forest Method for Shoreline Extraction from LANDSAT-8 and GOKTURK-2 Imageries

    Science.gov (United States)

    Bayram, B.; Erdem, F.; Akpinar, B.; Ince, A. K.; Bozkurt, S.; Catal Reis, H.; Seker, D. Z.

    2017-11-01

    Coastal monitoring plays a vital role in environmental planning and hazard management. Since shorelines are fundamental data for environmental management, disaster management, coastal erosion studies, modelling of sediment transport and coastal morphodynamics, various techniques have been developed to extract shorelines. Random Forest, one of these techniques, is used in this study for shoreline extraction. This algorithm is a machine learning method based on decision trees; decision trees analyse the classes of training data and create rules for classification. In this study, the Terkos region was chosen for the proposed method within the scope of the TUBITAK Project (Project No: 115Y718) titled "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model - Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm was implemented to extract the shoreline of the Black Sea near the lake from LANDSAT-8 and GOKTURK-2 satellite imageries taken in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method was applied to the NIR bands of the LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imageries. Each image was digitized manually and shorelines were obtained for accuracy assessment. According to the accuracy assessment results, the Random Forest method is efficient for both medium and high resolution images in shoreline extraction studies.
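
    The pixel-classification core of this workflow is easy to sketch outside MATLAB; below, a random forest separates water from land on a single synthetic NIR band, with the class boundary playing the role of the shoreline. The reflectance values and labels are assumptions, not the LANDSAT-8/GOKTURK-2 data.

      # Random forest land/water classification on a single NIR band.
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(0)
      water = rng.normal(0.05, 0.02, 500)      # water: low NIR reflectance
      land = rng.normal(0.35, 0.10, 500)       # land: higher NIR reflectance
      X = np.concatenate([water, land]).reshape(-1, 1)
      y = np.concatenate([np.zeros(500), np.ones(500)])

      rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

      nir_scene = rng.uniform(0.0, 0.5, (4, 4))          # a tiny "image"
      mask = rf.predict(nir_scene.reshape(-1, 1)).reshape(4, 4)
      print(mask)   # 0 = water, 1 = land; the shoreline is the class boundary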

  3. Reference satellite selection method for GNSS high-precision relative positioning

    OpenAIRE

    Xiao Gao; Wujiao Dai; Zhiyong Song; Changsheng Cai

    2017-01-01

    Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and the positional dilution of precision (PDOP) value were compared. Results show that none of the above methods can select the optimal reference satellite. We introduce the condition number of the design matrix in the reference satellite selection ...
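
    The condition-number criterion can be sketched directly: for each candidate reference satellite, form the double-difference geometry matrix and compare condition numbers. The line-of-sight vectors below are synthetic, and real processing would build the full normal equations rather than this reduced geometry.

      # Compare design-matrix condition numbers over candidate reference satellites.
      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(7, 3))                      # line-of-sight vectors to
      A /= np.linalg.norm(A, axis=1, keepdims=True)    # 7 visible satellites (unit)

      def cond_for_reference(r):
          others = [i for i in range(A.shape[0]) if i != r]
          B = A[others] - A[r]                         # double-difference geometry
          return np.linalg.cond(B)

      conds = [cond_for_reference(r) for r in range(A.shape[0])]
      print("condition number per candidate reference:", np.round(conds, 2))
      print("best reference satellite index:", int(np.argmin(conds)))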

  4. Selection of suitable NDT methods for building inspection

    Science.gov (United States)

    Pauzi Ismail, Mohamad

    2017-11-01

    Construction of modern structures requires good quality concrete with adequate strength and durability. Several accidents have occurred in civil construction and were reported in the media; such accidents were due to poor workmanship and a lack of systematic monitoring during construction. In addition, water leakage and cracking in residential houses are commonly reported too. Based on these facts, monitoring the quality of concrete in structures is becoming an increasingly important subject. This paper describes the major non-destructive testing (NDT) methods for evaluating the structural integrity of concrete buildings. Some interesting findings from actual NDT inspections on site are presented. The NDT methods used are explained, compared and discussed, and suitable methods are suggested as the minimum set of NDT methods covering the parameters required in an inspection.

  5. Auto Regressive Moving Average (ARMA) Modeling Method for Gyro Random Noise Using a Robust Kalman Filter.

    Science.gov (United States)

    Huang, Lei

    2015-09-30

    To solve the problem that conventional ARMA modeling methods for gyro random noise require a large number of samples and converge slowly, an ARMA modeling method using robust Kalman filtering is developed. The ARMA model parameters are employed as state arguments. Unknown time-varying estimators of the observation noise are used to obtain the estimated mean and variance of the observation noise. Using robust Kalman filtering, the ARMA model parameters are estimated accurately. The developed ARMA modeling method has the advantages of rapid convergence and high accuracy, so the required sample size is reduced. It can be applied in modeling applications for gyro random noise in which a fast and accurate ARMA modeling method is required.
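
    A simplified sketch of the idea of carrying model parameters in the filter state, shown for an AR(2) rather than a full ARMA model: the Kalman filter state is the coefficient vector and the regressors are lagged outputs. The paper's robust handling of unknown observation noise, and the MA part, need more machinery than shown here.

      # Kalman filter with AR coefficients as the state (recursive estimation).
      import numpy as np

      rng = np.random.default_rng(0)
      a_true = np.array([1.2, -0.5])                 # true AR(2) coefficients
      y = np.zeros(500)
      for k in range(2, 500):
          y[k] = a_true @ y[k-2:k][::-1] + 0.1 * rng.standard_normal()

      theta = np.zeros(2)                            # parameter state estimate
      P = np.eye(2)                                  # state covariance
      R = 0.1 ** 2                                   # obs. noise variance (assumed known)

      for k in range(2, 500):
          h = y[k-2:k][::-1]                         # regressor [y_{k-1}, y_{k-2}]
          S = h @ P @ h + R
          K = P @ h / S                              # Kalman gain
          theta = theta + K * (y[k] - h @ theta)     # measurement update
          P = P - np.outer(K, h @ P)

      print("estimated AR coefficients:", np.round(theta, 3))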

  6. Method of preparing size-selected metal clusters

    Science.gov (United States)

    Elam, Jeffrey W.; Pellin, Michael J.; Stair, Peter C.

    2010-05-11

    The invention provides a method for depositing catalytic clusters on a surface, the method comprising confining the surface to a controlled atmosphere; contacting the surface with a catalyst-containing vapor for a first period of time; removing the vapor from the controlled atmosphere; and contacting the surface with a reducing agent for a second period of time so as to produce catalyst-containing nucleation sites.

  7. Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness

    OpenAIRE

    Chelouche, Doron; Nuñez, Francisco Pozo; Zucker, Shay

    2017-01-01

    A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann's mean-square successive-difference estimator are found ...
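
    The von Neumann estimator mentioned above admits a compact sketch: shift the echo light curve by a trial lag, merge it with the driving curve, and choose the lag that minimizes the mean-square successive difference (the "randomness") of the combined series. The light curves below are synthetic, with a true lag of 20 days built in; flux standardization here is a simplifying assumption.

      # Time-lag estimation by minimizing the von Neumann statistic.
      import numpy as np

      rng = np.random.default_rng(0)
      grid = np.arange(-50, 450)
      w = np.cumsum(rng.standard_normal(grid.size))     # underlying driving signal

      t1 = np.sort(rng.uniform(0, 400, 120))            # irregular sampling times
      t2 = np.sort(rng.uniform(0, 400, 120))
      f1 = np.interp(t1, grid, w)                       # driving light curve
      f2 = np.interp(t2 - 20.0, grid, w)                # echo, lagging by 20 days

      def von_neumann(tau):
          # merge the two curves on a common time axis after shifting by tau
          t = np.concatenate([t1, t2 - tau])
          x = np.concatenate([(f1 - f1.mean()) / f1.std(),
                              (f2 - f2.mean()) / f2.std()])
          x = x[np.argsort(t)]
          return np.mean(np.diff(x) ** 2)               # successive-difference statistic

      taus = np.arange(0.0, 40.0, 0.5)
      best = taus[np.argmin([von_neumann(tau) for tau in taus])]
      print(f"lag minimizing the von Neumann statistic: {best:.1f} days")  # true: 20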

  8. A Statistical Model Updating Method of Beam Structures with Random Parameters under Static Load

    Directory of Open Access Journals (Sweden)

    Zhifeng Wu

    2017-06-01

    Full Text Available This paper presents a new statistical model updating method for beam structures with random parameters under static load. The new updating method considers structural parameters and measurement errors to be random. To reduce the unmeasured degrees of freedom in the finite element model, a static condensation technique is used in this method. A statistical model updating equation with respect to the element updating factors is then established. The element updating factors are expanded as random multivariate power series. Using a high-order perturbation technique, the statistical model updating equation can be solved to obtain the coefficients of the power series expansions of the element updating factors. The results of two numerical examples show that, for the solution of the statistical model updating equation, the accuracy of the proposed method agrees very well with that of the Monte Carlo simulation method, and the static responses obtained by the updated finite element model coincide very well with the measured results. Finally, a series of static load tests on a concrete beam were conducted to verify the effectiveness of the proposed method.

  9. A two-level stochastic collocation method for semilinear elliptic equations with random coefficients

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Luoping; Zheng, Bin; Lin, Guang; Voulgarakis, Nikolaos

    2017-05-01

    In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_P$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximated solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.

  10. Nonlinear stability and time step selection for the MPM method

    Science.gov (United States)

    Berzins, Martin

    2018-01-01

    The Material Point Method (MPM) has been developed from the Particle in Cell (PIC) method over the last 25 years and has proved its worth in solving many challenging problems involving large deformations. Nevertheless, there are many open questions regarding the theoretical properties of MPM. For example, while Fourier methods as applied to PIC may provide useful insight, the non-linear nature of MPM makes it necessary to use a full non-linear stability analysis to determine a stable time step for MPM. In order to begin to address this, the stability analysis of Spigler and Vianello is adapted to MPM and used to derive a stable time step bound for a model problem. This bound is contrasted against traditional speed-of-sound and CFL bounds and shown to be a realistic stability bound for a model problem.

  11. Selected methods for reducing hazards of spontaneous coal oxidation

    Energy Technology Data Exchange (ETDEWEB)

    Skalicka, J.; Vydra, J.

    1988-01-01

    Discusses methods of preventing coal spontaneous combustion in underground mines. Injection methods and the spraying of inhibiting compounds are comparatively evaluated: injection is aimed at reducing air flow from a ventilation system to a center of spontaneous combustion; spraying reduces the active surface of coal exposed to oxidation. Injection of polyurethane foam and urea-formaldehyde foam is described. Spraying coal with the following compounds is discussed: a mixture of bentonite with calcium chloride, a mixture of water solution of water glass and limestone dust, the Plamor mixture developed by the Bratislava Technical University, the Neoxiplast mixture on the basis of acrylates. Combined use of injection methods, spraying inhibiting mixtures and reducing air access is discussed. Effects of mining schemes without leaving support pillars and reducing coal losses on hazards of coal spontaneous combustion are discussed. 2 refs.

  12. The basic science and mathematics of random mutation and natural selection.

    Science.gov (United States)

    Kleinman, Alan

    2014-12-20

    The mutation and natural selection phenomenon can and often does cause the failure of antimicrobial, herbicidal, pesticide and cancer treatment selection pressures. This phenomenon operates in a mathematically predictable way which, when understood, leads to approaches to reduce and prevent the failure of these selection pressures. The mathematical behavior of mutation and selection is derived using the principles given by probability theory. The derivation of the equations describing the mutation and selection phenomenon is carried out in the context of an empirical example. Copyright © 2014 John Wiley & Sons, Ltd.
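
    A piece of the kind of arithmetic such derivations rest on: the probability that a particular point mutation occurs at least once in n replications, at rate μ per replication, is 1 − (1 − μ)^n. The rate below is a typical order of magnitude, not a value from the paper.

      # Probability of at least one occurrence of a specific mutation.
      mu = 1e-9          # per-site, per-replication mutation rate (typical order)
      for n in (1e8, 1e9, 1e10):
          p = 1.0 - (1.0 - mu) ** n
          print(f"n = {n:.0e}: P(at least one mutation) = {p:.3f}")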

  13. An evaluation of the effectiveness of recruitment methods: the staying well after depression randomized controlled trial.

    Science.gov (United States)

    Krusche, Adele; Rudolf von Rohr, Isabelle; Muse, Kate; Duggan, Danielle; Crane, Catherine; Williams, J Mark G

    2014-04-01

    Randomized controlled trials (RCTs) are widely accepted as being the most efficient way of investigating the efficacy of psychological therapies. However, researchers conducting RCTs commonly report difficulties in recruiting an adequate sample within planned timescales. In an effort to overcome recruitment difficulties, researchers often are forced to expand their recruitment criteria or extend the recruitment phase, thus increasing costs and delaying publication of results. Research investigating the effectiveness of recruitment strategies is limited, and trials often fail to report sufficient details about the recruitment sources and resources utilized. We examined the efficacy of strategies implemented during the Staying Well after Depression RCT in Oxford to recruit participants with a history of recurrent depression. We describe eight recruitment methods utilized and two further sources not initiated by the research team and examine their efficacy in terms of (1) the return, including the number of potential participants who contacted the trial and the number who were randomized into the trial; (2) cost-effectiveness, comprising direct financial cost and manpower for initial contacts and randomized participants; and (3) comparison of sociodemographic characteristics of individuals recruited from different sources. Poster advertising, web-based advertising, and mental health worker referrals were the cheapest methods per randomized participant; however, the ratio of randomized participants to initial contacts differed markedly per source. Advertising online, via posters, and on a local radio station were the most cost-effective recruitment methods for soliciting participants who subsequently were randomized into the trial. Advertising across many sources (saturation) was found to be important. It may not be feasible to employ all the recruitment methods used in this trial to obtain participation from other populations, such as those currently unwell, or in

  14. Supplier Selection based on the Performance by using PROMETHEE Method

    Science.gov (United States)

    Sinaga, T. S.; Siregar, K.

    2017-03-01

    Generally, companies face the problem of identifying vendors that can provide excellent service in raw material availability and on-time delivery. The performance of a company's suppliers has to be monitored to ensure their ability to fulfill the company's needs. This research is intended to explain how to assess suppliers in order to improve manufacturing performance. The criteria considered in evaluating suppliers are Dickson's criteria. There are four main criteria, which are further split into seven sub-criteria, namely accuracy, consistency, on-time delivery, right order quantity, flexibility and negotiation, timely order confirmation, and responsiveness. This research uses the PROMETHEE methodology to assess supplier performance and identifies the best supplier from the degree of preference in pairwise comparisons among the alternatives.
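
    The abstract names PROMETHEE without showing its mechanics; the sketch below is a minimal PROMETHEE II ranking with the "usual" (binary) preference function. The supplier scores and equal weights are invented for illustration and are not the paper's data.

```python
import numpy as np

# Minimal PROMETHEE II sketch: pairwise preferences aggregated over criteria,
# then net outranking flows. Rows are suppliers, columns are the 7 sub-criteria
# (higher is better); all numbers are illustrative.
scores = np.array([
    [7, 8, 6, 9, 5, 7, 8],
    [6, 9, 8, 7, 6, 8, 6],
    [8, 6, 7, 8, 7, 6, 7],
], dtype=float)
weights = np.full(7, 1 / 7)            # equal weights, for illustration only

n = len(scores)
phi_plus, phi_minus = np.zeros(n), np.zeros(n)
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        pi_ab = np.sum(weights * (scores[a] > scores[b]))  # preference of a over b
        phi_plus[a] += pi_ab / (n - 1)                     # leaving flow
        phi_minus[b] += pi_ab / (n - 1)                    # entering flow

net_flow = phi_plus - phi_minus        # PROMETHEE II net flow
print("ranking, best first:", np.argsort(-net_flow))
```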

  15. Statistical methods and applications from a historical perspective: selected issues

    CERN Document Server

    Mignani, Stefania

    2014-01-01

    The book showcases a selection of peer-reviewed papers, the preliminary versions of which were presented at a conference held 11-13 June 2011 in Bologna and organized jointly by the Italian Statistical Society (SIS), the National Institute of Statistics (ISTAT) and the Bank of Italy. The theme of the conference was "Statistics in the 150 years of the Unification of Italy." The celebration of the anniversary of Italian unification provided the opportunity to examine and discuss the methodological aspects and applications from a historical perspective and both from a national and international point of view. The critical discussion on the issues of the past has made it possible to focus on recent advances, considering the studies of socio-economic and demographic changes in European countries.

  16. Application of the selected physical methods in biological research

    Directory of Open Access Journals (Sweden)

    Jaromír Tlačbaba

    2013-01-01

    This paper deals with the application of acoustic emission (AE), a non-destructive method that currently has extensive applications. The method is used for measuring internal defects of materials. AE has high potential for further research and development to extend its application to the field of process engineering, where the most elaborate use is acoustic emission monitoring under laboratory conditions with regard to external stimuli. The aim of the project is to apply acoustic emission to recording the activity of bees in different seasons, offering a new perspective on colony behavior by means of acoustic emission, which captures sound propagation in the material. Vibration is an integral part of communication in the community. Sensing colonies with the support of this method serves to understand the biological behavior of colonies in response to stimuli, clutches, colony development, etc. Simulated conditions supported by an acoustic emission monitoring system illustrate colony activity. The collected information will be used to present a comprehensive view of the life cycle and behavior of honey bees (Apis mellifera). Using information about the activity of bees gives a comprehensive perspective on the use of acoustic emission in the field of biological research.

  17. Selecting appropriate methods of knowledge synthesis to inform biodiversity policy

    NARCIS (Netherlands)

    Pullin, Andrew; Frampton, Geoff; Jongman, Rob; Kohl, Christian; Livoreil, Barbara; Lux, Alexandra; Pataki, György; Petrokofsky, Gillian; Podhora, Aranka; Saarikoski, Heli; Santamaria, Luis; Schindler, Stefan; Sousa-pinto, Isabel; Vandewalle, Marie; Wittmer, Heidi

    2016-01-01

    Responding to different questions generated by biodiversity and ecosystem services policy or management requires different forms of knowledge (e.g. scientific, experiential) and knowledge synthesis. Additionally, synthesis methods need to be appropriate to policy context (e.g. question types,

  18. LOMA: A fast method to generate efficient tagged-random primers despite amplification bias of random PCR on pathogens

    Directory of Open Access Journals (Sweden)

    Lee Wah

    2008-09-01

    Background: Pathogen detection using DNA microarrays has the potential to become a fast and comprehensive diagnostics tool. However, since pathogen detection chips currently utilize random primers rather than specific primers for the RT-PCR step, bias inherent in random PCR amplification becomes a serious problem that causes large inaccuracies in hybridization signals. Results: In this paper, we study how the efficiency of random PCR amplification affects hybridization signals. We describe a model that predicts the amplification efficiency of a given random primer on a target viral genome. The prediction allows us to filter false-negative probes of the genome that lie in regions of poor random PCR amplification and improves the accuracy of pathogen detection. Subsequently, we propose LOMA, an algorithm to generate random primers that have good amplification efficiency. Wet-lab validation showed that the generated random primers improve the amplification efficiency significantly. Conclusion: The blind use of a random primer with an attached universal tag (random-tagged primer) in a PCR reaction on a pathogen sample may not lead to a successful amplification. Thus, the design of random-tagged primers is an important consideration when performing PCR.

  19. Reference satellite selection method for GNSS high-precision relative positioning

    Directory of Open Access Journals (Sweden)

    Xiao Gao

    2017-03-01

    Selecting the optimal reference satellite is an important component of high-precision relative positioning because the reference satellite directly influences the strength of the normal equation. Reference satellite selection methods based on elevation and on positional dilution of precision (PDOP) were compared. Results show that neither method reliably selects the optimal reference satellite. We therefore introduce the condition number of the design matrix into the reference satellite selection method to improve the structure of the normal equation, since the condition number indicates how ill-conditioned the normal equation is. The experimental results show that the new method can improve positioning accuracy and reliability in precise relative positioning.
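
    A schematic sketch of the condition-number criterion, assuming single differences of unit line-of-sight vectors against each candidate reference satellite; the geometry is randomly fabricated, whereas a real design matrix would come from double-differenced GNSS observations.

```python
import numpy as np

# Pick the reference satellite whose differenced design matrix has the
# smallest condition number (least ill-conditioned normal equation).
rng = np.random.default_rng(0)
los = rng.normal(size=(8, 3))                      # 8 satellites, line-of-sight
los /= np.linalg.norm(los, axis=1, keepdims=True)  # unit vectors

def condition_for_reference(ref: int) -> float:
    rows = [los[i] - los[ref] for i in range(len(los)) if i != ref]
    return np.linalg.cond(np.array(rows))          # 2-norm condition number

best = min(range(len(los)), key=condition_for_reference)
print("reference satellite:", best,
      " cond:", round(condition_for_reference(best), 2))
```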

  20. On analysis-based two-step interpolation methods for randomly sampled seismic data

    Science.gov (United States)

    Yang, Pengliang; Gao, Jinghuai; Chen, Wenchao

    2013-02-01

    Interpolating the missing traces of regularly or irregularly sampled seismic record is an exceedingly important issue in the geophysical community. Many modern acquisition and reconstruction methods are designed to exploit the transform domain sparsity of the few randomly recorded but informative seismic data using thresholding techniques. In this paper, to regularize randomly sampled seismic data, we introduce two accelerated, analysis-based two-step interpolation algorithms, the analysis-based FISTA (fast iterative shrinkage-thresholding algorithm) and the FPOCS (fast projection onto convex sets) algorithm from the IST (iterative shrinkage-thresholding) algorithm and the POCS (projection onto convex sets) algorithm. A MATLAB package is developed for the implementation of these thresholding-related interpolation methods. Based on this package, we compare the reconstruction performance of these algorithms, using synthetic and real seismic data. Combined with several thresholding strategies, the accelerated convergence of the proposed methods is also highlighted.
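
    A minimal sketch of the two-step thresholding idea on a synthetic 1-D trace, using the FFT as a stand-in sparsifying transform; the sampling mask, threshold schedule, and iteration count are illustrative choices, not the paper's.

```python
import numpy as np

# FISTA-style interpolation: recover randomly decimated samples of a signal
# assumed sparse under the FFT (standing in for curvelets/wavelets).
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 256, endpoint=False)
x_true = np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)
mask = rng.random(256) < 0.4               # keep 40% of samples at random
y = mask * x_true                          # observed, zero-filled record

def soft(c, lam):                          # complex soft-thresholding
    mag = np.abs(c)
    return np.where(mag > lam, (1 - lam / np.maximum(mag, 1e-12)) * c, 0)

x, z, s = y.copy(), y.copy(), 1.0
for k in range(100):
    lam = 2.0 * 0.97 ** k                  # decreasing threshold schedule
    grad = mask * (z - y)                  # gradient of 0.5 * ||M z - y||^2
    x_new = np.real(np.fft.ifft(soft(np.fft.fft(z - grad), lam)))
    s_new = (1 + np.sqrt(1 + 4 * s * s)) / 2
    z = x_new + (s - 1) / s_new * (x_new - x)   # FISTA momentum step
    x, s = x_new, s_new

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```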

  1. Randomized gradient-free method for multiagent optimization over time-varying networks.

    Science.gov (United States)

    Yuan, Deming; Ho, Daniel W C

    2015-06-01

    In this brief, we consider the multiagent optimization over a network where multiple agents try to minimize a sum of nonsmooth but Lipschitz continuous functions, subject to a convex state constraint set. The underlying network topology is modeled as time varying. We propose a randomized derivative-free method, where in each update, the random gradient-free oracles are utilized instead of the subgradients (SGs). In contrast to the existing work, we do not require that agents are able to compute the SGs of their objective functions. We establish the convergence of the method to an approximate solution of the multiagent optimization problem within the error level depending on the smoothing parameter and the Lipschitz constant of each agent's objective function. Finally, a numerical example is provided to demonstrate the effectiveness of the method.
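
    A single-agent sketch of the random gradient-free oracle the brief builds on; the consensus averaging over a time-varying network is omitted, and the objective, constraint set, and step sizes are illustrative.

```python
import numpy as np

# Two-point random gradient-free oracle: g = (f(x + mu*u) - f(x)) / mu * u,
# with u a random Gaussian direction, followed by projection onto the
# convex constraint set (here, a Euclidean ball).
rng = np.random.default_rng(2)

def f(x):                              # nonsmooth but Lipschitz objective
    return np.sum(np.abs(x - 1.0))

def project_ball(x, radius=10.0):      # convex state constraint set
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

x, mu = np.zeros(5), 1e-4              # mu is the smoothing parameter
for k in range(1, 5001):
    u = rng.normal(size=5)
    g = (f(x + mu * u) - f(x)) / mu * u           # gradient-free oracle
    x = project_ball(x - (0.5 / np.sqrt(k)) * g)  # projected step

print("x ~", np.round(x, 2), " f(x) =", round(f(x), 3))  # approaches all-ones
```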

  2. An inversion method based on random sampling for real-time MEG neuroimaging

    CERN Document Server

    Pascarella, Annalisa

    2016-01-01

    The MagnetoEncephaloGraphy (MEG) has gained great interest in neurorehabilitation training due to its high temporal resolution. The challenge is to localize the active regions of the brain in a fast and accurate way. In this paper we use an inversion method based on random spatial sampling to solve the real-time MEG inverse problem. Several numerical tests on synthetic but realistic data show that the method takes just a few hundredths of a second on a laptop to produce an accurate map of the electric activity inside the brain. Moreover, it requires very little memory storage. For these reasons the random sampling method is particularly attractive in real-time MEG applications.

  3. SELECTION OF NON-CONVENTIONAL MACHINING PROCESSES USING THE OCRA METHOD

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2015-04-01

    Selection of the most suitable non-conventional machining process (NCMP) for a given machining application can be viewed as a multi-criteria decision making (MCDM) problem with many conflicting and diverse criteria. To aid these selection processes, different MCDM methods have been proposed. This paper introduces the use of an almost unexplored MCDM method, the operational competitiveness ratings analysis (OCRA) method, for solving NCMP selection problems. The applicability, suitability, and computational procedure of the OCRA method are demonstrated while solving three case studies dealing with selection of the most suitable NCMP. In each case study the obtained rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the OCRA method correlate well with those derived by past researchers, which validates its usefulness in solving complex NCMP selection problems.

  4. Analysis of cost data in a cluster-randomized, controlled trial: comparison of methods

    DEFF Research Database (Denmark)

    Sokolowski, Ineta; Ørnbøl, Eva; Rosendal, Marianne

    We consider health care data from a cluster-randomized intervention study in primary care to test whether the average health care costs among study patients differ between the two groups. The problems of analysing cost data are that most data are severely skewed. Median instead of mean is commonly used for skewed distributions. For health care data, however, we need to recover the total cost in a given patient population; thus, we focus on making inferences on population means. Furthermore, a problem of clustered data is added, as data related to patients in primary care are organized in clusters of general practices. There have been suggestions to apply different methods, e.g., the non-parametric bootstrap, to highly skewed data from pragmatic randomized trials without clusters, but there is very little information about how to analyse skewed data from cluster-randomized trials. Many...
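
    A minimal sketch of the cluster-level non-parametric bootstrap mentioned above: whole practices are resampled rather than individual patients, so the within-cluster correlation is preserved. The skewed cost data are simulated, not the trial's.

```python
import numpy as np

# Cluster bootstrap for the difference in mean cost between two arms.
rng = np.random.default_rng(3)

def make_arm(shift, n_practices=20):   # lognormal (skewed) patient costs
    return [rng.lognormal(6 + shift + rng.normal(0, 0.2), 1,
                          rng.integers(20, 60)) for _ in range(n_practices)]

arm0, arm1 = make_arm(0.0), make_arm(0.15)

def arm_mean(practices):               # pooled patient-level mean cost
    return np.concatenate(practices).mean()

def boot_diff(a0, a1, B=2000):
    diffs = np.empty(B)
    for b in range(B):                 # resample whole clusters, with replacement
        s0 = [a0[i] for i in rng.integers(0, len(a0), len(a0))]
        s1 = [a1[i] for i in rng.integers(0, len(a1), len(a1))]
        diffs[b] = arm_mean(s1) - arm_mean(s0)
    return diffs

d = boot_diff(arm0, arm1)
print("difference in mean cost:", round(arm_mean(arm1) - arm_mean(arm0), 1))
print("bootstrap 95% CI:", np.percentile(d, [2.5, 97.5]).round(1))
```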

  5. A new compound control method for sine-on-random mixed vibration test

    Science.gov (United States)

    Zhang, Buyun; Wang, Ruochen; Zeng, Falin

    2017-09-01

    Vibration environmental testing (VET) is one of the important and effective methods of supporting the strength design, reliability, and durability testing of mechanical products. A new separation control strategy is proposed for multiple-input multiple-output (MIMO) sine-on-random (SOR) mixed-mode vibration testing, an advanced and intensive type of VET. As the key element of the strategy, a correlation integral method is applied to separate the mixed signals, which include random and sinusoidal components. The feedback control formula of a MIMO linear random vibration system is systematically deduced in the frequency domain, and a Jacobi control algorithm is proposed in view of elements such as the self-spectrum, coherence, and phase of the power spectral density (PSD) matrix. Because excitation tends to be over-corrected in sine vibration testing, a compression factor is introduced to reduce the excitation correction, avoiding damage to the vibration table or other devices. The two methods are combined in a MIMO SOR vibration test system. Finally, a verification test system with the vibration of a cantilever beam as the control object was established to verify the reliability and effectiveness of the proposed methods. The test results show that exceedance values can be controlled accurately within the tolerance range of the references, and the method provides theoretical and practical support for mechanical engineering.

  6. A stochastic collocation method for the second order wave equation with a discontinuous random speed

    KAUST Repository

    Motamed, Mohammad

    2012-08-31

    In this paper we propose and analyze a stochastic collocation method for solving the second order wave equation with a random wave speed and subjected to deterministic boundary and initial conditions. The speed is piecewise smooth in the physical space and depends on a finite number of random variables. The numerical scheme consists of a finite difference or finite element method in the physical space and a collocation in the zeros of suitable tensor product orthogonal polynomials (Gauss points) in the probability space. This approach leads to the solution of uncoupled deterministic problems as in the Monte Carlo method. We consider both full and sparse tensor product spaces of orthogonal polynomials. We provide a rigorous convergence analysis and demonstrate different types of convergence of the probability error with respect to the number of collocation points for full and sparse tensor product spaces and under some regularity assumptions on the data. In particular, we show that, unlike in elliptic and parabolic problems, the solution to hyperbolic problems is not in general analytic with respect to the random variables. Therefore, the rate of convergence may only be algebraic. An exponential/fast rate of convergence is still possible for some quantities of interest and for the wave solution with particular types of data. We present numerical examples, which confirm the analysis and show that the collocation method is a valid alternative to the more traditional Monte Carlo method for this class of problems. © 2012 Springer-Verlag.
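
    A toy illustration of the collocation idea: the wave equation is replaced by an oscillator u'' = -c²u with a single uniformly distributed random speed, solved deterministically at Gauss-Legendre points and combined by quadrature. All numbers are illustrative.

```python
import numpy as np

# Each collocation point is one uncoupled deterministic solve, as in
# Monte Carlo, but the points are Gauss nodes in probability space.
def solve_deterministic(c, T=1.0, n=2000):   # leapfrog time stepping
    dt, u, v = T / n, 1.0, 0.0               # u(0) = 1, u'(0) = 0
    for _ in range(n):
        v -= 0.5 * dt * c * c * u
        u += dt * v
        v -= 0.5 * dt * c * c * u
    return u                                 # exact answer: cos(c * T)

# Gauss-Legendre nodes on [-1, 1], mapped to the random speed c ~ U(1, 2)
nodes, weights = np.polynomial.legendre.leggauss(5)
c_pts = 1.5 + 0.5 * nodes
q = np.array([solve_deterministic(c) for c in c_pts])
mean_q = np.sum(0.5 * weights * q)           # quadrature for E[u(T)]

exact = np.sin(2.0) - np.sin(1.0)            # E[cos(c)] = integral_1^2 cos(c) dc
print("collocation:", round(mean_q, 6), " exact:", round(exact, 6))
```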

  7. Dynamic ensemble selection methods for heterogeneous data mining

    OpenAIRE

    Ballard, Chris; Wang, Wenjia

    2016-01-01

    Big data is often collected from multiple sources with possibly different features, representations and granularity and hence is defined as heterogeneous data. Such multiple datasets need to be fused together in some ways for further analysis. Data fusion at feature level requires domain knowledge and can be time-consuming and ineffective, but it could be avoided if decision-level fusion is applied properly. Ensemble methods appear to be an appropriate paradigm to do just that as each subset ...

  8. The Comparison of Selected Risk Management Methods for Project Management

    OpenAIRE

    Obrová, Vladěna; Smolíková, Lenka

    2013-01-01

    Project management is a set of validated and described procedures that comprehensively address the implementation and management of defined activities relating to a specific project. In the Czech Republic, risk management in projects is often neglected; it began to see wider use in ESF projects, where risk management is required. Three methods are most often used for risk analysis: sensit...

  9. Method selection for mercury removal from hard coal

    OpenAIRE

    Dziok Tadeusz; Strugała Andrzej

    2017-01-01

    Mercury is commonly found in coal, and coal utilization processes constitute one of the main sources of mercury emission to the environment. This issue is particularly important for Poland, because the Polish energy production sector is based on brown and hard coal, and forecasts show that this trend in energy production will continue in the coming years. Once emission limits are introduced, methods of reducing mercury emissions will have to be implemented in Poland. Mercury...

  10. Indicators for Monitoring Water, Sanitation, and Hygiene: A Systematic Review of Indicator Selection Methods

    Science.gov (United States)

    Schwemlein, Stefanie; Cronk, Ryan; Bartram, Jamie

    2016-01-01

    Monitoring water, sanitation, and hygiene (WaSH) is important to track progress, improve accountability, and demonstrate impacts of efforts to improve conditions and services, especially in low- and middle-income countries. Indicator selection methods enable robust monitoring of WaSH projects and conditions. However, selection methods are not always used and there are no commonly-used methods for selecting WaSH indicators. To address this gap, we conducted a systematic review of indicator selection methods used in WaSH-related fields. We present a summary of indicator selection methods for environment, international development, and water. We identified six methodological stages for selecting indicators for WaSH: define the purpose and scope; select a conceptual framework; search for candidate indicators; determine selection criteria; score indicators against criteria; and select a final suite of indicators. This summary of indicator selection methods provides a foundation for the critical assessment of existing methods. It can be used to inform future efforts to construct indicator sets in WaSH and related fields. PMID:26999180

  11. Feature selection method based on multi-fractal dimension and harmony search algorithm and its application

    Science.gov (United States)

    Zhang, Chen; Ni, Zhiwei; Ni, Liping; Tang, Na

    2016-10-01

    Feature selection is an important method of data preprocessing in data mining. In this paper, a novel feature selection method based on multi-fractal dimension and harmony search algorithm is proposed. Multi-fractal dimension is adopted as the evaluation criterion of feature subset, which can determine the number of selected features. An improved harmony search algorithm is used as the search strategy to improve the efficiency of feature selection. The performance of the proposed method is compared with that of other feature selection algorithms on UCI data-sets. Besides, the proposed method is also used to predict the daily average concentration of PM2.5 in China. Experimental results show that the proposed method can obtain competitive results in terms of both prediction accuracy and the number of selected features.
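
    A sketch of binary harmony search for feature selection; the paper's multi-fractal-dimension criterion is replaced here by a stand-in fitness function, and the parameters (harmony memory size, HMCR, PAR) are illustrative.

```python
import random

random.seed(4)
N_FEATURES, HMS, HMCR, PAR, ITERS = 20, 10, 0.9, 0.3, 300
RELEVANT = {1, 4, 7, 13}                    # hypothetical ground truth

def fitness(subset):                        # stand-in for the MFD criterion
    return len(set(subset) & RELEVANT) - 0.1 * len(subset)

def random_subset():
    return [i for i in range(N_FEATURES) if random.random() < 0.5]

memory = sorted((random_subset() for _ in range(HMS)), key=fitness, reverse=True)
for _ in range(ITERS):
    new = []
    for i in range(N_FEATURES):
        if random.random() < HMCR:          # harmony memory consideration
            included = i in random.choice(memory)
            if random.random() < PAR:       # pitch adjustment: flip the bit
                included = not included
        else:                               # random consideration
            included = random.random() < 0.5
        if included:
            new.append(i)
    if fitness(new) > fitness(memory[-1]):  # replace the worst harmony
        memory[-1] = new
        memory.sort(key=fitness, reverse=True)

print("best subset:", sorted(memory[0]), " fitness:", round(fitness(memory[0]), 2))
```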

  12. Novel random peptide libraries displayed on AAV serotype 9 for selection of endothelial cell-directed gene transfer vectors.

    Science.gov (United States)

    Varadi, K; Michelfelder, S; Korff, T; Hecker, M; Trepel, M; Katus, H A; Kleinschmidt, J A; Müller, O J

    2012-08-01

    We have demonstrated the potential of random peptide libraries displayed on adeno-associated virus (AAV)2 to select for AAV2 vectors with improved efficiency for cell type-directed gene transfer. AAV9, however, may have advantages over AAV2 because of a lower prevalence of neutralizing antibodies in humans and more efficient gene transfer in vivo. Here we provide evidence that random peptide libraries can be displayed on AAV9 and can be utilized to select for AAV9 capsids redirected to the cell type of interest. We generated an AAV9 peptide display library, which ensures that the displayed peptides correspond to the packaged genomes and performed four consecutive selection rounds on human coronary artery endothelial cells in vitro. This screening yielded AAV9 library capsids with distinct peptide motifs enabling up to 40-fold improved transduction efficiencies compared with wild-type (wt) AAV9 vectors. Incorporating sequences selected from AAV9 libraries into AAV2 capsids could not increase transduction as efficiently as in the AAV9 context. To analyze the potential on endothelial cells in the intact natural vascular context, human umbilical veins were incubated with the selected AAV in situ and endothelial cells were isolated. Fluorescence-activated cell sorting analysis revealed a 200-fold improved transduction efficiency compared with wt AAV9 vectors. Furthermore, AAV9 vectors with targeting sequences selected from AAV9 libraries revealed an increased transduction efficiency in the presence of human intravenous immunoglobulins, suggesting a reduced immunogenicity. We conclude that our novel AAV9 peptide library is functional and can be used to select for vectors for future preclinical and clinical gene transfer applications.

  13. Selected asymptotic methods with applications to electromagnetics and antennas

    CERN Document Server

    Fikioris, George; Bakas, Odysseas N

    2013-01-01

    This book describes and illustrates the application of several asymptotic methods that have proved useful in the authors' research in electromagnetics and antennas. We first define asymptotic approximations and expansions and explain these concepts in detail. We then develop certain prerequisites from complex analysis such as power series, multivalued functions (including the concepts of branch points and branch cuts), and the all-important gamma function. Of particular importance is the idea of analytic continuation (of functions of a single complex variable); our discussions here include som

  14. Techno-economic method for evaluation and selection of flexible manufacturing systems (FMS)

    Directory of Open Access Journals (Sweden)

    V. Todić

    2012-07-01

    To find the best FMS solutions, experts use numerous multi-criteria methods for evaluation and ranking, methods based on artificial intelligence, and multi-criteria optimization methods. This paper presents a techno-economic method, developed by the authors, for the evaluation and selection of FMS based on productivity. The method is based on group technology (GT) process planning.

  15. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn [School of Information Science and Technology, ShanghaiTech University, Shanghai 200031 (China); Lin, Guang, E-mail: guanglin@purdue.edu [Department of Mathematics & School of Mechanical Engineering, Purdue University, West Lafayette, IN 47907 (United States)

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space by decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  16. Wave propagation through random media: A local method of small perturbations based on the Helmholtz equation

    Science.gov (United States)

    Grosse, Ralf

    1990-01-01

    Propagation of sound through the turbulent atmosphere is a statistical problem. The randomness of the refractive index field causes sound pressure fluctuations. Although no general theory to predict sound pressure statistics from given refractive index statistics exists, there are several approximate solutions to the problem. The most common approximation is the parabolic equation method. Results obtained by this method are restricted to small refractive index fluctuations and to small wave lengths. While the first condition is generally met in the atmosphere, it is desirable to overcome the second. A generalization of the parabolic equation method with respect to the small wave length restriction is presented.

  17. Automation in the high-throughput selection of random combinatorial libraries--different approaches for select applications.

    Science.gov (United States)

    Glökler, Jörn; Schütze, Tatjana; Konthur, Zoltán

    2010-04-08

    Automation in combination with high throughput screening methods has revolutionised molecular biology in the last two decades. Today, many combinatorial libraries as well as several systems for automation are available. Depending on scope, budget and time, a different combination of library and experimental handling might be most effective. In this review we will discuss several concepts of combinatorial libraries and provide information as what to expect from these depending on the given context.

  18. Stochastic finite element method for random harmonic analysis of composite plates with uncertain modal damping parameters

    Science.gov (United States)

    Sepahvand, K.

    2017-07-01

    Damping parameters of fiber-reinforced composite possess significant uncertainty due to the structural complexity of such materials. Considering the parameters as random variables, this paper uses the generalized polynomial chaos (gPC) expansion to capture the uncertainty in the damping and frequency response function of composite plate structures. A spectral stochastic finite element formulation for damped vibration analysis of laminate plates is employed. Experimental modal data for samples of plates is used to identify and realize the range and probability distributions of uncertain damping parameters. The constructed gPC expansions for the uncertain parameters are used as inputs to a deterministic finite element model to realize random frequency responses on a few numbers of collocation points generated in random space. The realizations then are employed to estimate the unknown deterministic functions of the gPC expansion approximating the responses. Employing modal superposition method to solve harmonic analysis problem yields an efficient sparse gPC expansion representing the responses. The results show while the responses are influenced by the damping uncertainties at the mid and high frequency ranges, the impact in low frequency modes can be safely ignored. Utilizing a few random collocation points, the method indicates also a very good agreement compared to the sampling-based Monte Carlo simulations with large number of realizations. As the deterministic finite element model serves as black-box solver, the procedure can be efficiently adopted to complex structural systems with uncertain parameters in terms of computational time.
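
    A minimal gPC sketch in the spirit of the approach described: a scalar damping ratio driven by one standard normal variable, deterministic solves at Gauss-Hermite collocation points, and projection onto probabilists' Hermite polynomials. The damping model and truncation order are assumptions, not the paper's.

```python
import numpy as np

# Response: peak FRF magnitude of a unit 1-DOF oscillator with random
# damping zeta = 0.05 * exp(0.2 * xi), xi ~ N(0, 1).
def peak_frf(zeta):
    return 1.0 / (2.0 * zeta * np.sqrt(1.0 - zeta**2))

def hermite_basis(x):        # probabilists' Hermite polynomials He_0..He_3
    return np.array([np.ones_like(x), x, x**2 - 1, x**3 - 3 * x])

nodes, weights = np.polynomial.hermite_e.hermegauss(7)  # weight exp(-x^2/2)
weights = weights / np.sqrt(2 * np.pi)                  # normalize to N(0, 1)
y = peak_frf(0.05 * np.exp(0.2 * nodes))                # black-box solves

Psi = hermite_basis(nodes)                              # shape (4, 7)
norms = np.array([1.0, 1.0, 2.0, 6.0])                  # E[He_k^2] = k!
coeffs = (Psi * weights * y).sum(axis=1) / norms        # spectral projection

print("gPC mean:", round(coeffs[0], 3))
print("gPC std :", round(np.sqrt(np.sum(coeffs[1:]**2 * norms[1:])), 3))
```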

  19. An Efficient Method of HOG Feature Extraction Using Selective Histogram Bin and PCA Feature Reduction

    National Research Council Canada - National Science Library

    LAI, C. Q; TEOH, S. S

    2016-01-01

    .... In this paper, a time-efficient HOG-based feature extraction method is proposed. The method uses a selective number of histogram bins to perform feature extraction on different regions of the image...

  20. Best Basis Selection Method Using Learning Weights for Face Recognition

    Directory of Open Access Journals (Sweden)

    Wonju Lee

    2013-09-01

    In the face recognition field, principal component analysis is essential to reducing the image dimension. Despite the frequent use of this analysis, it is commonly assumed that the basis faces with large eigenvalues should be chosen as the best subset for nearest-neighbor classifiers. We propose an alternative that can predict the classification error during the training steps and find the basis faces that are useful for the similarity metrics of classical pattern algorithms. In addition, we also show the need for an eye-aligned dataset to obtain pure face images. Experiments using face images verify that our method reduces the negative effect of misaligned face images and decreases the weights of the useful basis faces in order to improve the classification accuracy.

  1. A selective and mild glycosylation method of natural phenolic alcohols

    Directory of Open Access Journals (Sweden)

    Mária Mastihubová

    2016-03-01

    Several bioactive natural p-hydroxyphenylalkyl β-D-glucopyranosides, such as vanillyl β-D-glucopyranoside, salidroside and isoconiferin, and their glycosyl analogues were prepared by a simple reaction sequence. The highly efficient synthetic approach was achieved by utilizing acetylated glycosyl bromides as well as aromatic moieties and mild glycosylation promoters. The aglycones, p-O-acetylated arylalkyl alcohols, were prepared by the reduction of the corresponding acetylated aldehydes or acids. Various stereoselective 1,2-trans-O-glycosylation methods were studied, including the DDQ–iodine or ZnO–ZnCl2 catalyst combination. Among them, ZnO–iodine has been identified as a new glycosylation promoter and successfully applied to the stereoselective glycoside synthesis. The final products were obtained by conventional Zemplén deacetylation.

  2. Evaluation of Randomly Selected Completed Medical Records Sheets in Teaching Hospitals of Jahrom University of Medical Sciences, 2009

    Directory of Open Access Journals (Sweden)

    Mohammad Parsa Mahjob

    2011-06-01

    Background and objective: Medical record documentation is often used to protect patients' legal rights and also provides information for medical research, general studies, education of health care staff, and qualitative surveys. There is a need to control the amount of data entered in patients' medical record sheets, considering that these sheets are often completed after service delivery to the patient has finished. Therefore, this study analyzed the completeness of medical history, operation report, and physician order sheets by different documenters in Jahrom teaching hospitals during 2009. Methods and materials: In this descriptive, retrospective study, 400 medical record sheets of patients from two teaching hospitals affiliated with Jahrom University of Medical Sciences were randomly selected. The data collection tool was a checklist based on the content of the medical history, operation report, and physician order sheets. The data were analyzed with SPSS (version 10) and Microsoft Office Excel 2003. Results: The averages of personal (demographic) data entered by department secretaries in the medical history, physician order, and operation report sheets were 32.9, 35.8, and 40.18 percent, respectively. The average of clinical data entered by physicians in the medical history sheet was 38 percent. Surgical data entered by the surgeon in the operation report sheet reached 94.77 percent, the average of data entered by operating room nurses in the operation report sheet was 36.78 percent, and the average of physician order data entered by physicians in the physician order sheet was 99.3 percent. Conclusion: According to this study, the completeness of the record sheets reviewed in Jahrom teaching hospitals was not satisfactory and in some cases was very weak. This deficiency was due to different reasons, such as documenters' negligence, lack of adequate education for documenters, high work

  3. The Jackprot Simulation Couples Mutation Rate with Natural Selection to Illustrate How Protein Evolution Is Not Random

    Science.gov (United States)

    Espinosa, Avelina; Bai, Chunyan Y.

    2016-01-01

    Protein evolution is not a random process. Views which attribute randomness to molecular change, deleterious nature to single-gene mutations, insufficient geological time, or population size for molecular improvements to occur, or invoke “design creationism” to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequence. We used slot-machine probabilities and ion channels to show biological directionality on molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue location must be in balance with the membrane's hydrophobic/philic nature; a selective “pore” for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel, from Streptomyces lividans, with an under-selection scenario, the “jackprot,” which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation http://faculty.rwu.edu/cbai/JackprotSimulation.htm, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the “jackprot,” or highest-fitness complete-peptide sequence, required cumulative smaller “wins” (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons (“jackdons” that led to “jackacids” that led to the “jackprot”). The “jackprot” is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide

  4. The Jackprot Simulation Couples Mutation Rate with Natural Selection to Illustrate How Protein Evolution Is Not Random.

    Science.gov (United States)

    Paz-Y-Miño C, Guillermo; Espinosa, Avelina; Bai, Chunyan Y

    2011-09-01

    Protein evolution is not a random process. Views which attribute randomness to molecular change, deleterious nature to single-gene mutations, insufficient geological time, or population size for molecular improvements to occur, or invoke "design creationism" to account for complexity in molecular structures and biological processes, are unfounded. Scientific evidence suggests that natural selection tinkers with molecular improvements by retaining adaptive peptide sequence. We used slot-machine probabilities and ion channels to show biological directionality on molecular change. Because ion channels reside in the lipid bilayer of cell membranes, their residue location must be in balance with the membrane's hydrophobic/philic nature; a selective "pore" for ion passage is located within the hydrophobic region. We contrasted the random generation of DNA sequence for KcsA, a bacterial two-transmembrane-domain (2TM) potassium channel, from Streptomyces lividans, with an under-selection scenario, the "jackprot," which predicted much faster evolution than by chance. We wrote a computer program in JAVA APPLET version 1.0 and designed an online interface, The Jackprot Simulation http://faculty.rwu.edu/cbai/JackprotSimulation.htm, to model a numerical interaction between mutation rate and natural selection during a scenario of polypeptide evolution. Winning the "jackprot," or highest-fitness complete-peptide sequence, required cumulative smaller "wins" (rewarded by selection) at the first, second, and third positions in each of the 161 KcsA codons ("jackdons" that led to "jackacids" that led to the "jackprot"). The "jackprot" is a didactic tool to demonstrate how mutation rate coupled with natural selection suffices to explain the evolution of specialized proteins, such as the complex six-transmembrane (6TM) domain potassium, sodium, or calcium channels. Ancestral DNA sequences coding for 2TM-like proteins underwent nucleotide "edition" and gene duplications to generate the 6

  5. Methods of blinding in reports of randomized controlled trials assessing pharmacologic treatments: a systematic review.

    Directory of Open Access Journals (Sweden)

    Isabelle Boutron

    2006-10-01

    Full Text Available BACKGROUND: Blinding is a cornerstone of therapeutic evaluation because lack of blinding can bias treatment effect estimates. An inventory of the blinding methods would help trialists conduct high-quality clinical trials and readers appraise the quality of results of published trials. We aimed to systematically classify and describe methods to establish and maintain blinding of patients and health care providers and methods to obtain blinding of outcome assessors in randomized controlled trials of pharmacologic treatments. METHODS AND FINDINGS: We undertook a systematic review of all reports of randomized controlled trials assessing pharmacologic treatments with blinding published in 2004 in high impact-factor journals from Medline and the Cochrane Methodology Register. We used a standardized data collection form to extract data. The blinding methods were classified according to whether they primarily (1 established blinding of patients or health care providers, (2 maintained the blinding of patients or health care providers, and (3 obtained blinding of assessors of the main outcomes. We identified 819 articles, with 472 (58% describing the method of blinding. Methods to establish blinding of patients and/or health care providers concerned mainly treatments provided in identical form, specific methods to mask some characteristics of the treatments (e.g., added flavor or opaque coverage, or use of double dummy procedures or simulation of an injection. Methods to avoid unblinding of patients and/or health care providers involved use of active placebo, centralized assessment of side effects, patients informed only in part about the potential side effects of each treatment, centralized adapted dosage, or provision of sham results of complementary investigations. The methods reported for blinding outcome assessors mainly relied on a centralized assessment of complementary investigations, clinical examination (i.e., use of video, audiotape, or

  6. Bias in the prediction of genetic gain due to mass and half-sib selection in random mating populations

    Directory of Open Access Journals (Sweden)

    José Marcelo Soriano Viana

    2009-01-01

    The prediction of gains from selection allows the comparison of breeding methods and selection strategies, although these estimates may be biased. The objective of this study was to investigate the extent of such bias in predicting genetic gain. For this, we simulated 10 cycles of a hypothetical breeding program that involved seven traits, three population classes, three experimental conditions, and two breeding methods (mass and half-sib selection). Each combination of trait, population, heritability, method, and cycle was repeated 10 times. The predicted gains were biased even when the genetic parameters were estimated without error. Gain from selection in both genders is twice the gain from selection in a single gender only in the absence of dominance. The use of genotypic variance or broad-sense heritability in the predictions represented an additional source of bias. Predictions based on additive variance and narrow-sense heritability were equivalent, as were predictions based on genotypic variance and broad-sense heritability. The predictions based on mass and family selection were suitable for comparing selection strategies, whereas those based on selection within progenies showed the largest bias and the lowest association with the realized gain.
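
    The predictions discussed above rest on the classical breeder's equation; a minimal worked example with illustrative values (not the paper's simulated data) is shown below.

```python
# Breeder's equation: Delta_G = i * h^2 * sigma_P. Selecting in both genders
# doubles the gain relative to one gender only in the absence of dominance,
# as the abstract notes. All values are illustrative.
i = 1.755        # selection intensity for the top 10% of a normal distribution
h2 = 0.4         # narrow-sense heritability
sigma_p = 12.0   # phenotypic standard deviation

gain_both = i * h2 * sigma_p        # selection on both genders
gain_one = 0.5 * i * h2 * sigma_p   # selection on one gender only
print(f"predicted gain, both genders: {gain_both:.2f}")
print(f"predicted gain, one gender:   {gain_one:.2f}")
```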

  7. Supplier Selection Based On AHP Method : Supplier from China for Suomen Koristetuonti

    OpenAIRE

    Jounio, Chengjing

    2013-01-01

    As international purchasing becomes a common practice and markets increasingly competitive, supplier selection – as an important part of purchasing process and supply chain management – evolves to be more complex and attention-catching. Especially, supplier selection assumes a strategic role in determining the success of a start-up company. Due to its growing importance, supplier selection has gained much attention in research and studies. Many evaluation and selection methods have evolv...

  8. Selepressin, a novel selective vasopressin V1A agonist, is an effective substitute for norepinephrine in a phase IIa randomized, placebo-controlled trial in septic shock patients

    DEFF Research Database (Denmark)

    Russell, James A; Vincent, Jean-Louis; Kjølbye, Anne Louise

    2017-01-01

    BACKGROUND: Vasopressin is widely used for vasopressor support in septic shock patients, but experimental evidence suggests that selective V1A agonists are superior. The initial pharmacodynamic effects, pharmacokinetics, and safety of selepressin, a novel V1A-selective vasopressin analogue......, was examined in a phase IIa trial in septic shock patients. METHODS: This was a randomized, double-blind, placebo-controlled multicenter trial in 53 patients in early septic shock (aged ≥18 years, fluid resuscitation, requiring vasopressor support) who received selepressin 1.25 ng/kg/minute (n = 10), 2.5 ng...... for selepressin 2.5 ng/kg/minute and placebo. Two patients were infused at 3.75 ng/kg/minute, one of whom had the study drug infusion discontinued for possible safety reasons, with subsequent discontinuation of this dose group. CONCLUSIONS: In septic shock patients, selepressin 2.5 ng/kg/minute was able...

  9. Supplier Portfolio Selection and Optimum Volume Allocation: A Knowledge Based Method

    NARCIS (Netherlands)

    Aziz, Romana; Aziz, R.; van Hillegersberg, Jos; Kersten, W.; Blecker, T.; Luthje, C.

    2010-01-01

    Selection of suppliers and allocation of optimum volumes to suppliers is a strategic business decision. This paper presents a decision support method for supplier selection and the optimal allocation of volumes in a supplier portfolio. The requirements for the method were gathered during a case

  10. Validity of a structured method of selecting abstracts for a plastic surgical scientific meeting

    NARCIS (Netherlands)

    van der Steen, LPE; Hage, JJ; Kon, M; Monstrey, SJ

    In 1999, the European Association of Plastic Surgeons accepted a structured method to assess and select the abstracts that are submitted for its yearly scientific meeting. The two criteria used to evaluate whether such a selection method is accurate were reliability and validity. The authors

  11. The method of selection of leukocytes in images of preparations of peripheral blood and bone marrow

    Science.gov (United States)

    Zakharenko, Y. V.; Nikitaev, V. G.; Polyakov, E. V.; Seldyukov, S. O.

    2017-01-01

    In this paper, we study a segmentation method based on histogram analysis for the selection of leukocytes in images of blood and bone marrow preparations for the diagnosis of acute leukemia. A filtering method is proposed to eliminate the artifacts that result from the selection of leukocytes.

  12. Group Eigenvalue Method for Food Supplier Selection Model with Ordinal Interval Preference Information

    OpenAIRE

    Wanzhen Liu

    2014-01-01

    With economic globalization, market competition is increasingly fierce. Selecting the best food supplier is important for a food company to maintain a sustainable competitive advantage. The food supplier selection problem is a complex group decision-making problem. For the food supplier selection problem in which the evaluation information takes the form of ordinal interval preference information, a new decision-making method is proposed based on the concept of the group eigenvalue method. A practical example...

  13. Determination of slope safety factor with analytical solution and searching critical slip surface with genetic-traversal random method.

    Science.gov (United States)

    Niu, Wen-jie

    2014-01-01

    In current practice, to determine the safety factor of a slope with a two-dimensional circular potential failure surface, one of the methods for searching the critical slip surface is the Genetic Algorithm (GA), while the slope safety factor is calculated with Fellenius' slices method. However, GA needs to be validated with more numerical tests, and Fellenius' slices method is only an approximate method, like the finite element method. This paper proposes a new way to determine the minimum slope safety factor: computing the safety factor with an analytical solution and searching the critical slip surface with a Genetic-Traversal Random Method. The analytical solution is more accurate than Fellenius' slices method, and the Genetic-Traversal Random Method uses random picks to implement mutation. A computer program was developed to automate the search. Comparison with other methods, such as the Slope/W software, indicates that the Genetic-Traversal Random Search Method can give a very low safety factor, about half that of the other methods; the obtained minimum safety factor is, however, very close to the lower-bound solutions of the slope safety factor given by the Ansys software.

  14. A Novel Method of Failure Sample Selection for Electrical Systems Using Ant Colony Optimization.

    Science.gov (United States)

    Xiong, Jian; Tian, Shulin; Yang, Chenglin; Liu, Cheng

    2016-01-01

    The influence of failure propagation is ignored in failure sample selection based on the traditional testability demonstration experiment method. Traditional failure sample selection generally omits some failures during the selection, and this can create serious usage risks because those failures lead to severe propagation failures. This paper proposes a new failure sample selection method to solve the problem. First, the method uses a directed graph and ant colony optimization (ACO) to obtain a subsequent failure propagation set (SFPS) based on a failure propagation model, and then we propose a new failure sample selection method on the basis of the number of SFPS. Compared with the traditional sampling plan, this method is able to improve the coverage of tested failure samples, increase diagnostic capacity, and decrease the risk of use.

  15. Integration of MACBETH and COPRAS methods to select air compressor for a textile company

    Directory of Open Access Journals (Sweden)

    Nilsen Kundakcı

    2016-09-01

    The selection of an air compressor is a multiple criteria decision making (MCDM) problem including conflicting criteria and various alternatives. Selecting the appropriate air compressor is an important decision for the company, as it affects energy consumption and operating cost. To aid the decision-making process in companies, MCDM methods have been proposed in the literature. In all MCDM methods, the main goal is to select the best alternative or to rank a set of given alternatives. In this paper, an air compressor is selected for the spinning mill of a textile company with an integrated approach based on the MACBETH (Measuring Attractiveness by a Categorical Based Evaluation TecHnique) and COPRAS (COmplex PRoportional ASsessment) methods. The MACBETH method is used to determine the weights of the criteria, and the COPRAS method is then used to rank the alternatives and select the best one.
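
    A minimal COPRAS sketch for the ranking step; the decision matrix, criterion types, and MACBETH-style weights are invented for illustration.

```python
import numpy as np

# COPRAS: weighted sum-normalized scores are split into benefit (S+) and
# cost (S-) parts, then combined into the relative significance Q_i.
X = np.array([   # rows: compressors; cols: capacity, efficiency, price, noise
    [320., 0.82, 41000., 78.],
    [300., 0.85, 39000., 75.],
    [350., 0.80, 45000., 80.],
])
w = np.array([0.30, 0.30, 0.25, 0.15])         # e.g., weights from MACBETH
benefit = np.array([True, True, False, False])

R = X / X.sum(axis=0) * w                      # weighted normalized matrix
S_plus, S_minus = R[:, benefit].sum(axis=1), R[:, ~benefit].sum(axis=1)
Q = S_plus + S_minus.min() * S_minus.sum() / (
    S_minus * (S_minus.min() / S_minus).sum())

print("Q:", Q.round(4), " best alternative:", int(np.argmax(Q)))
```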

  16. Random walk-based similarity measure method for patterns in complex object

    Directory of Open Access Journals (Sweden)

    Liu Shihu

    2017-04-01

    This paper discusses the similarity of patterns in complex objects. A complex object is composed both of the attribute information of its patterns and of the relational information between patterns. Bearing in mind this specificity, a random walk-based similarity measurement method for patterns is constructed. In this method, the reachability of any two patterns with respect to the relational information is fully studied, so that the similarity of patterns with respect to the relational information can be calculated. On this basis, an integrated similarity measurement method is proposed; Algorithms 1 and 2 show the calculation procedure. This method makes full use of both the attribute information and the relational information. Finally, a synthetic example shows that the proposed similarity measurement method is valid.
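
    A small sketch of the idea, assuming a random-walk transition matrix over the relation graph blended with a cosine similarity over attributes; the graph, attributes, blending weight, and walk length are illustrative (the paper's Algorithms 1 and 2 are not reproduced here).

```python
import numpy as np

A = np.array([               # adjacency of the relational information
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)
attrs = np.array([[1.0, 0.0], [0.9, 0.1], [0.2, 0.8], [0.0, 1.0]])

P = A / A.sum(axis=1, keepdims=True)          # random-walk transition matrix
reach = sum(np.linalg.matrix_power(P, t) for t in (1, 2, 3)) / 3.0

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12)

def similarity(i, j, alpha=0.5):              # blend attribute and relation parts
    rel = 0.5 * (reach[i, j] + reach[j, i])   # symmetrized reachability
    return alpha * cosine(attrs[i], attrs[j]) + (1 - alpha) * rel

print("sim(0, 1):", round(similarity(0, 1), 3))
print("sim(0, 3):", round(similarity(0, 3), 3))
```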

  17. Presence of psychoactive substances in oral fluid from randomly selected drivers in Denmark

    DEFF Research Database (Denmark)

    Simonsen, K. Wiese; Steentoft, A.; Hels, Tove

    2012-01-01

    This roadside study is the Danish part of the EU-project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season... of narcotic drugs. It can be concluded that driving under the influence of drugs is as serious a road safety problem as drunk driving.

  18. Presence of psychoactive substances in oral fluid from randomly selected drivers in Denmark

    DEFF Research Database (Denmark)

    Simonsen, Kirsten Wiese; Steentoft, Anni; Hels, Tove

    2012-01-01

    This roadside study is the Danish part of the EU-project DRUID (Driving under the Influence of Drugs, Alcohol, and Medicines) and included three representative regions in Denmark. Oral fluid samples (n = 3002) were collected randomly from drivers using a sampling scheme stratified by time, season... It can be concluded that driving under the influence of drugs is as serious a road safety problem as drunk driving.

  19. Feature selection and classification of mechanical fault of an induction motor using random forest classifier

    Directory of Open Access Journals (Sweden)

    Raj Kumar Patel

    2016-09-01

    Fault detection and diagnosis is the most important technology in condition-based maintenance (CBM) systems for rotating machinery. This paper experimentally explores the development of a random forest (RF) classifier, a recently emerged machine learning technique, for multi-class mechanical fault diagnosis in the bearing of an induction motor. First, vibration signals are collected from the bearing using an accelerometer sensor. Parameters are extracted from the vibration signal in the form of statistical features and used as input features for the classification problem. These features are classified with RF classifiers for a four-class problem. The prime objective of this paper is to evaluate the effectiveness of the random forest classifier for bearing fault diagnosis. The obtained results are compared with an existing artificial intelligence technique, the neural network; the analysis shows better performance and higher accuracy than the existing techniques.
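
    A runnable sketch of the pipeline with synthetic vibration windows standing in for accelerometer records (three hypothetical fault classes rather than the paper's four); the features and model settings are illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)

def features(sig):                         # RMS, kurtosis, skewness, crest factor
    rms = np.sqrt(np.mean(sig**2))
    m2, m3, m4 = (np.mean((sig - sig.mean())**p) for p in (2, 3, 4))
    return [rms, m4 / m2**2, m3 / m2**1.5, np.max(np.abs(sig)) / rms]

def make_class(impulse_rate, label, n=100):  # bearing-style impulsive faults
    rows = []
    for _ in range(n):
        sig = rng.normal(0, 1, 1024)
        spikes = rng.random(1024) < impulse_rate
        sig[spikes] += rng.normal(0, 6, spikes.sum())
        rows.append(features(sig) + [label])
    return rows

data = np.array(make_class(0.0, 0) + make_class(0.01, 1) + make_class(0.05, 2))
X, y = data[:, :4], data[:, 4].astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```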

  20. A Quality Control Method Based on an Improved Random Forest Algorithm for Surface Air Temperature Observations

    Directory of Open Access Journals (Sweden)

    Xiaoling Ye

    2017-01-01

    A spatial quality control method, ARF, is proposed. The ARF method incorporates the optimization ability of the artificial fish swarm algorithm and the random forest regression function to provide quality control for multiple surface air temperature stations. Surface air temperature observations were recorded at stations in mountainous and plain regions and at neighboring stations to test the performance of the method. Observations from 2005 to 2013 were used as a training set, and observations from 2014 were used as a testing set. The results indicate that the ARF method is able to identify inaccurate observations; and it has a higher rate of detection, lower rate of change for the quality control parameters, and fewer type I errors than traditional methods. Notably, the ARF method yielded low performance indexes in areas with complex terrain, where traditional methods were considerably less effective. In addition, for stations near the ocean without sufficient neighboring stations, different neighboring stations were used to test the different methods. Whereas the traditional methods were affected by station distribution, the ARF method exhibited fewer errors and higher stability. Thus, the method is able to effectively reduce the effects of geographical factors on spatial quality control.

  1. Method of model reduction and multifidelity models for solute transport in random layered porous media

    Science.gov (United States)

    Xu, Zhijie; Tartakovsky, Alexandre M.

    2017-09-01

    This work presents a method of model reduction that leads to models with three solutions of increasing fidelity (multifidelity models) for solute transport in a bounded layered porous media with random permeability. The model generalizes the Taylor-Aris dispersion theory to stochastic transport in random layered porous media with a known velocity covariance function. In the reduced model, we represent (random) concentration in terms of its cross-sectional average and a variation function. We derive a one-dimensional stochastic advection-dispersion-type equation for the average concentration and a stochastic Poisson equation for the variation function, as well as expressions for the effective velocity and dispersion coefficient. In contrast to the linear scaling with the correlation length and the mean velocity from macrodispersion theory, our model predicts a nonlinear and a quadratic dependence of the effective dispersion on the correlation length and the mean velocity, respectively. We observe that velocity fluctuations enhance dispersion in a nonmonotonic fashion (a stochastic spike phenomenon): The dispersion initially increases with correlation length λ, reaches a maximum, and decreases to zero at infinity (correlation). Maximum enhancement in dispersion can be obtained at a correlation length about 0.25 the size of the porous media perpendicular to flow. This information can be useful for engineering such random layered porous media. Numerical simulations are implemented to compare solutions with varying fidelity.

  2. Method of glass selection for color correction in optical system design.

    Science.gov (United States)

    de Albuquerque, Bráulio Fonseca Carneiro; Sasian, Jose; de Sousa, Fabiano Luis; Montes, Amauri Silva

    2012-06-18

    A method of glass selection for the design of optical systems with reduced chromatic aberration is presented. This method is based on the unification of two previously published methods adding new contributions and using a multi-objective approach. This new method makes it possible to select sets of compatible glasses suitable for the design of super-apochromatic optical systems. As an example, we present the selection of compatible glasses and the effective designs for all-refractive optical systems corrected in five spectral bands, with central wavelengths going from 485 nm to 1600 nm.

  3. A NEW METHOD HIGHLIGHTING PSYCHOMOTOR SKILLS AND COGNITIVE ATTRIBUTES IN ATHLETE SELECTIONS

    Directory of Open Access Journals (Sweden)

    Engin Sagdilek

    2015-05-01

    Full Text Available Talents are extraordinary but not completely developed characteristics in a field. These attributes cover a relatively wide range in sports. Tests used in the selection of athletes are generally motoric sports tests and measure predominantly conditional attributes. It is known that in sports, performance is related to cognitive skills as well as physical features and motor skills. This study explored a new method that could be utilized in the selection of athletes and in tracking their level of improvement, evaluating attention, perception and learning levels in athlete and non-athlete female students. 9 female table tennis athletes who had trained for 16 hours per week for the last 5 years and 9 female students who had never played any sport, aged between 10 and 14 years, participated in our study. For the Selective Action Array, developed for this study, a table tennis robot was utilized. The robot was set up to send a total of 26 balls in 3 different colors (6 white, 10 yellow, 10 pink) to different areas of the table, in random color order and at a rate of 90 balls per minute. The participants were asked to ignore the white balls, to touch the yellow balls and to grab the pink balls using their dominant hands. After the task was explained to the participants, two consecutive trials were executed and recorded using a camera. Every action performed/not performed by the participants was transformed into points in the scoring system. First trial total points in the Selective Action Array were 104±17 for athletes and 102±19 for non-athletes, whereas on the second trial total points were 122±11 and 105±20, respectively. The higher scores obtained in the second trial were significant for the athletes; the difference in the scores for non-athletes was minor. Non-athletes scored 33% better for the white balls as compared to the table tennis athletes. For the yellow balls, athletes and non-athletes scored similar points on the first trial, whereas

  4. Comparison of fuzzy AHP and fuzzy TODIM methods for landfill location selection.

    Science.gov (United States)

    Hanine, Mohamed; Boutkhoum, Omar; Tikniouine, Abdessadek; Agouti, Tarik

    2016-01-01

    Landfill location selection is a multi-criteria decision problem and has a strategic importance for many regions. The conventional methods for landfill location selection are insufficient in dealing with the vague or imprecise nature of linguistic assessment. To resolve this problem, fuzzy multi-criteria decision-making methods are proposed. The aim of this paper is to use fuzzy TODIM (the acronym for Interactive and Multi-criteria Decision Making in Portuguese) and the fuzzy analytic hierarchy process (AHP) methods for the selection of landfill location. The proposed methods have been applied to a landfill location selection problem in the region of Casablanca, Morocco. After determining the criteria affecting the landfill location decisions, fuzzy TODIM and fuzzy AHP methods are applied to the problem and results are presented. The comparisons of these two methods are also discussed.

  5. Reflective Random Indexing and indirect inference: a scalable method for discovery of implicit connections.

    Science.gov (United States)

    Cohen, Trevor; Schvaneveldt, Roger; Widdows, Dominic

    2010-04-01

    The discovery of implicit connections between terms that do not occur together in any scientific document underlies the model of literature-based knowledge discovery first proposed by Swanson. Corpus-derived statistical models of semantic distance such as Latent Semantic Analysis (LSA) have been evaluated previously as methods for the discovery of such implicit connections. However, LSA in particular is dependent on a computationally demanding method of dimension reduction as a means to obtain meaningful indirect inference, limiting its ability to scale to large text corpora. In this paper, we evaluate the ability of Random Indexing (RI), a scalable distributional model of word associations, to draw meaningful implicit relationships between terms in general and biomedical language. Proponents of this method have achieved comparable performance to LSA on several cognitive tasks while using a simpler and less computationally demanding method of dimension reduction than LSA employs. In this paper, we demonstrate that the original implementation of RI is ineffective at inferring meaningful indirect connections, and evaluate Reflective Random Indexing (RRI), an iterative variant of the method that is better able to perform indirect inference. RRI is shown to lead to more clearly related indirect connections and to outperform existing RI implementations in the prediction of future direct co-occurrence in the MEDLINE corpus. 2009 Elsevier Inc. All rights reserved.
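
    The record describes RI and RRI only conceptually, but the core idea is compact enough to sketch. The following Python sketch (a toy corpus; all names, dimensions and parameters are illustrative, not the authors' implementation) assigns sparse ternary random index vectors to documents, accumulates term vectors from them, and then performs the reflective step of rebuilding document vectors from the learned term vectors. That reflective cycle is what propagates indirect connections between terms that never co-occur.

```python
import numpy as np

def random_index_vector(dim=200, nnz=10, rng=None):
    """Sparse ternary vector: half the nonzero entries +1, half -1."""
    if rng is None:
        rng = np.random.default_rng()
    v = np.zeros(dim)
    idx = rng.choice(dim, size=nnz, replace=False)
    v[idx[: nnz // 2]] = 1.0
    v[idx[nnz // 2:]] = -1.0
    return v

def reflective_random_indexing(docs, dim=200, iterations=2, seed=0):
    rng = np.random.default_rng(seed)
    vocab = sorted({t for d in docs for t in d})
    # Step 0: each document gets a sparse random index vector.
    doc_vecs = np.array([random_index_vector(dim, rng=rng) for _ in docs])
    for _ in range(iterations):
        # Term vector = sum of the vectors of the documents containing it.
        term_vecs = {t: np.zeros(dim) for t in vocab}
        for d, dv in zip(docs, doc_vecs):
            for t in d:
                term_vecs[t] += dv
        # Reflective step: rebuild document vectors from the learned term
        # vectors, propagating second-order co-occurrence information.
        doc_vecs = np.array([sum(term_vecs[t] for t in d) for d in docs])
        doc_vecs /= np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return term_vecs

docs = [["fish", "oil", "raynaud"], ["fish", "oil", "blood"],
        ["blood", "viscosity", "raynaud"]]
tv = reflective_random_indexing(docs)
cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
# "fish" and "viscosity" never co-occur, yet acquire related vectors.
print(cos(tv["fish"], tv["viscosity"]))
```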

  6. Random fields generation on the GPU with the spectral turning bands method

    Science.gov (United States)

    Hunger, L.; Cosenza, B.; Kimeswenger, S.; Fahringer, T.

    2014-08-01

    Random field (RF) generation algorithms are of paramount importance for many scientific domains, such as astrophysics, geostatistics, computer graphics and many others. Some examples are the generation of initial conditions for cosmological simulations or hydrodynamical turbulence driving. In the latter a new random field is needed every time-step. Current approaches commonly make use of 3D FFT (Fast Fourier Transform) and require the whole generated field to be stored in memory. Moreover, they are limited to regular rectilinear meshes and need an extra processing step to support non-regular meshes. In this paper, we introduce TBARF (Turning BAnd Random Fields), a RF generation algorithm based on the turning band method that is optimized for massively parallel hardware such as GPUs. Our algorithm replaces the 3D FFT with a lower order, one-dimensional FFT followed by a projection step, and is further optimized with loop unrolling and blocking. We show that TBARF can easily generate RF on non-regular (non uniform) meshes and can afford mesh sizes bigger than the available GPU memory by using a streaming, out-of-core approach. TBARF is 2 to 5 times faster than the traditional methods when generating RFs with more than 16M cells. It can also generate RF on non-regular meshes, and has been successfully applied to two real case scenarios: planetary nebulae and cosmological simulations.
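
    As a rough illustration of the turning-band principle behind TBARF (not the authors' GPU code), the sketch below sums 1D random cosine processes along random directions; a randomized spectral 1D process stands in for the paper's FFT-based line simulation, here targeting a Gaussian covariance. Because the field is evaluated pointwise, non-regular meshes are handled by simply passing their cell coordinates.

```python
import numpy as np

def turning_bands(points, n_lines=200, seed=0):
    """Approximate a standard Gaussian random field at arbitrary 3D points
    by summing 1D random cosine processes along random line directions."""
    rng = np.random.default_rng(seed)
    # Random unit direction for each line (turning band).
    u = rng.normal(size=(n_lines, 3))
    u /= np.linalg.norm(u, axis=1, keepdims=True)
    # 1D spectral samples: frequencies drawn from the spectral density of
    # the target covariance (standard normal -> Gaussian covariance),
    # plus uniform random phases.
    omega = rng.normal(size=n_lines)
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n_lines)
    # Project every point onto every line and sum the 1D processes.
    s = points @ u.T                      # (n_points, n_lines) projections
    return np.sqrt(2.0 / n_lines) * np.cos(omega * s + phi).sum(axis=1)

# Non-regular "mesh": just a cloud of cell-center coordinates.
pts = np.random.default_rng(1).uniform(0.0, 10.0, size=(5000, 3))
z = turning_bands(pts)
print(z.mean(), z.std())   # approximately 0 and 1
```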

  7. Sensor Selection method for IoT systems – focusing on embedded system requirements

    Directory of Open Access Journals (Sweden)

    Hirayama Masayuki

    2016-01-01

    Full Text Available Recently, various types of sensors have been developed. Using these sensors, IoT systems have become a hot topic in the embedded system domain. However, sensor selection for embedded systems has not been well discussed up to now. This paper focuses on embedded system features and architecture, and proposes a sensor selection method composed of seven steps. In addition, we applied the proposed method to a simple example: sensor selection for a computer-scored answer sheet reader unit. From this case study, the idea of using FTA in sensor selection is also discussed.

  8. Heuristic methods using variable neighborhood random local search for the clustered traveling salesman problem

    Directory of Open Access Journals (Sweden)

    Mário Mestria

    2014-11-01

    Full Text Available In this paper, we propose new heuristic methods for solving the Clustered Traveling Salesman Problem (CTSP). The CTSP is a generalization of the Traveling Salesman Problem (TSP) in which the set of vertices is partitioned into disjoint clusters and the objective is to find a minimum cost Hamiltonian cycle such that the vertices of each cluster are visited contiguously. We develop two Variable Neighborhood Random Descent with Iterated Local Search heuristics for solving the CTSP. The proposed heuristic methods were tested on instance types with data at different levels of granularity for the number of vertices and clusters. The computational results show that the heuristic methods outperform recent methods from the literature and are competitive with an exact algorithm using the parallel CPLEX software.

  9. Methods for synthesizing findings on moderation effects across multiple randomized trials.

    Science.gov (United States)

    Brown, C Hendricks; Sloboda, Zili; Faggiano, Fabrizio; Teasdale, Brent; Keller, Ferdinand; Burkhart, Gregor; Vigna-Taglianti, Federica; Howe, George; Masyn, Katherine; Wang, Wei; Muthén, Bengt; Stephens, Peggy; Grey, Scott; Perrino, Tatiana

    2013-04-01

    This paper presents new methods for synthesizing results from subgroup and moderation analyses across different randomized trials. We demonstrate that such a synthesis generally results in additional power to detect significant moderation findings above what one would find in a single trial. Three general methods for conducting synthesis analyses are discussed, with two methods, integrative data analysis and parallel analyses, sharing a large advantage over traditional methods available in meta-analysis. We present a broad class of analytic models to examine moderation effects across trials that can be used to assess their overall effect and explain sources of heterogeneity, and present ways to disentangle differences across trials due to individual differences, contextual level differences, intervention, and trial design.

  10. Convergence of quasi-optimal Stochastic Galerkin methods for a class of PDES with random coefficients

    KAUST Repository

    Beck, Joakim

    2014-03-01

    In this work we consider quasi-optimal versions of the Stochastic Galerkin method for solving linear elliptic PDEs with stochastic coefficients. In particular, we consider the case of a finite number N of random inputs and an analytic dependence of the solution of the PDE with respect to the parameters in a polydisc of C^N. We show that a quasi-optimal approximation is given by a Galerkin projection on a weighted (anisotropic) total degree space and prove a (sub)exponential convergence rate. As a specific application we consider a thermal conduction problem with non-overlapping inclusions of random conductivity. Numerical results show the sharpness of our estimates. © 2013 Elsevier Ltd. All rights reserved.

  11. Random noise de-noising and direct wave eliminating based on SVD method for ground penetrating radar signals

    Science.gov (United States)

    Liu, Cai; Song, Chao; Lu, Qi

    2017-09-01

    In this paper, we present a method using singular value decomposition (SVD) which aims at eliminating the random noise and direct wave from ground penetrating radar (GPR) signals. To demonstrate the validity and high efficiency of the SVD method in eliminating random noise, we compare the SVD de-noising method with wavelet threshold de-noising method and bandpass filtering method on both noisy synthetic data and field data. After that, we compare the SVD method with the mean trace deleting in eliminating direct wave on synthetic data and field data. We set general and quantitative criteria on choosing singular values to carry out the random noise de-noising and direct wave eliminating process. We find that by choosing appropriate singular values, SVD method can eliminate the random noise and direct wave in the GPR data validly and efficiently to improve the signal-to-noise ratio (SNR) of the GPR profiles and make effective reflection signals clearer.
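
    A minimal numpy sketch of the kind of SVD filtering the paper describes (the data, and the choice of which singular values to zero, are illustrative): with the profile arranged as a time-by-trace matrix, the laterally coherent direct wave concentrates in the first singular component, while incoherent random noise concentrates in the smallest ones.

```python
import numpy as np

def svd_filter(profile, drop_first=1, keep=20):
    """profile: 2D array (time samples x traces). Zeroing the first
    singular value(s) suppresses the laterally coherent direct wave;
    truncating the tail suppresses incoherent random noise."""
    U, s, Vt = np.linalg.svd(profile, full_matrices=False)
    s_f = s.copy()
    s_f[:drop_first] = 0.0      # direct-wave elimination
    s_f[keep:] = 0.0            # random-noise suppression
    return (U * s_f) @ Vt

# Toy profile: flat direct wave + dipping reflection + random noise.
rng = np.random.default_rng(0)
nt, nx = 400, 120
data = np.zeros((nt, nx))
data[50:55, :] = 1.0                 # direct wave, identical on all traces
for i in range(nx):
    data[150 + i, i] = 0.8           # dipping reflection event
noisy = data + 0.2 * rng.normal(size=data.shape)
clean = svd_filter(noisy)
```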

  12. New Interval-Valued Intuitionistic Fuzzy Behavioral MADM Method and Its Application in the Selection of Photovoltaic Cells

    Directory of Open Access Journals (Sweden)

    Xiaolu Zhang

    2016-10-01

    Full Text Available As one of the emerging renewable resources, the use of photovoltaic cells has become a promising way of offering clean and plentiful energy. The selection of the best photovoltaic cell plays a significant role for a promoter in terms of maximizing income, minimizing costs and conferring high maturity and reliability, which is a typical multiple attribute decision making (MADM) problem. Although many prominent MADM techniques have been developed, most of them select the optimal alternative under the hypothesis that the decision maker or expert is completely rational and the decision data are represented by crisp values. However, in the selection process for photovoltaic cells the decision maker is usually boundedly rational and the ratings of alternatives are usually imprecise and vague. To address these kinds of complex and common issues, in this paper we develop a new interval-valued intuitionistic fuzzy behavioral MADM method. We employ interval-valued intuitionistic fuzzy numbers (IVIFNs) to express the imprecise ratings of alternatives, and we construct LINMAP-based nonlinear programming models to identify the reference points under IVIFN contexts, which avoids the subjective randomness of selecting the reference points. Finally we develop a prospect theory-based ranking method to identify the optimal alternative, which fully takes into account the decision maker’s behavioral characteristics such as reference dependence, diminishing sensitivity and loss aversion in the decision making process.

  13. Statistical parameters of random heterogeneity estimated by analysing coda waves based on finite difference method

    Science.gov (United States)

    Emoto, K.; Saito, T.; Shiomi, K.

    2017-12-01

    Short-period (<2 s) seismograms are strongly affected by the scattering of seismic waves by randomly distributed small-scale heterogeneities. Statistical properties of the random heterogeneities have been estimated by analysing short-period seismograms. However, the small-scale random heterogeneity is generally not taken into account in the modelling of long-period (>2 s) seismograms. We found that the energy of the coda of long-period seismograms shows a spatially flat distribution. This phenomenon is well known in short-period seismograms and results from the scattering by small-scale heterogeneities. We estimate the statistical parameters that characterize the small-scale random heterogeneity by modelling the spatiotemporal energy distribution of long-period seismograms. We analyse three moderate-size earthquakes that occurred in southwest Japan. We calculate the spatial distribution of the energy density recorded by a dense seismograph network in Japan in the period bands of 8-16 s, 4-8 s and 2-4 s and model them by using 3-D finite difference (FD) simulations. Compared to conventional methods based on statistical theories, we can calculate more realistic synthetics by using the FD simulation: it is not necessary to assume a uniform background velocity, body or surface waves, or the scattering properties considered in general scattering theories. By taking the ratio of the energy of the coda area to that of the entire area, we can separately estimate the scattering and the intrinsic absorption effects. Our result reveals the spectrum of the random inhomogeneity in a wide wavenumber range, including the intensity around the corner wavenumber, as P(m) = 8πε^2 a^3/(1 + a^2 m^2)^2, where ε = 0.05 and a = 3.1 km, even though past studies analysing higher-frequency records could not detect the corner. Finally, we estimate the intrinsic attenuation by modelling the decay rate of the energy. The method proposed in this study is suitable for quantifying the statistical properties of long-wavelength subsurface random inhomogeneity, which

  14. A probabilistic method for testing and estimating selection differences between populations.

    Science.gov (United States)

    He, Yungang; Wang, Minxian; Huang, Xin; Li, Ran; Xu, Hongyang; Xu, Shuhua; Jin, Li

    2015-12-01

    Human populations around the world encounter various environmental challenges and, consequently, develop genetic adaptations to different selection forces. Identifying the differences in natural selection between populations is critical for understanding the roles of specific genetic variants in evolutionary adaptation. Although numerous methods have been developed to detect genetic loci under recent directional selection, a probabilistic solution for testing and quantifying selection differences between populations is lacking. Here we report the development of a probabilistic method for testing and estimating selection differences between populations. By use of a probabilistic model of genetic drift and selection, we showed that logarithm odds ratios of allele frequencies provide estimates of the differences in selection coefficients between populations. The estimates approximate a normal distribution, and variance can be estimated using genome-wide variants. This allows us to quantify differences in selection coefficients and to determine the confidence intervals of the estimate. Our work also revealed the link between genetic association testing and hypothesis testing of selection differences. It therefore supplies a solution for hypothesis testing of selection differences. This method was applied to a genome-wide data analysis of Han and Tibetan populations. The results confirmed that both the EPAS1 and EGLN1 genes are under statistically different selection in Han and Tibetan populations. We further estimated differences in the selection coefficients for genetic variants involved in melanin formation and determined their confidence intervals between continental population groups. Application of the method to empirical data demonstrated the outstanding capability of this novel approach for testing and quantifying differences in natural selection. © 2015 He et al.; Published by Cold Spring Harbor Laboratory Press.
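
    A stylized sketch of the paper's core statistic, using entirely simulated allele frequencies: the log odds ratio of frequencies between two populations estimates the difference in selection coefficients (up to a time scaling), and putatively neutral genome-wide variants supply the drift-only null distribution for the test.

```python
import numpy as np
from scipy import stats

def selection_difference_test(p1, p2, null_log_or):
    """Test a candidate variant's selection difference between populations.
    p1, p2: allele frequencies; null_log_or: log odds ratios of
    genome-wide (putatively neutral) variants, used as the null."""
    log_or = np.log(p1 / (1 - p1)) - np.log(p2 / (1 - p2))
    mu, sd = null_log_or.mean(), null_log_or.std()
    z = (log_or - mu) / sd
    return log_or, 2.0 * stats.norm.sf(abs(z))

# Simulated "neutral" background of 10,000 variants in two populations.
rng = np.random.default_rng(0)
p1 = rng.uniform(0.05, 0.95, 10_000)
p2 = np.clip(p1 + rng.normal(0.0, 0.05, 10_000), 0.01, 0.99)
null = np.log(p1 / (1 - p1)) - np.log(p2 / (1 - p2))
print(selection_difference_test(0.85, 0.30, null))   # candidate variant
```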

  15. Application of penalized linear regression methods to the selection of environmental enteropathy biomarkers.

    Science.gov (United States)

    Lu, Miao; Zhou, Jianhui; Naylor, Caitlin; Kirkpatrick, Beth D; Haque, Rashidul; Petri, William A; Ma, Jennie Z

    2017-01-01

    Environmental Enteropathy (EE) is a subclinical condition caused by constant fecal-oral contamination and resulting in blunting of intestinal villi and intestinal inflammation. Of primary interest in this clinical research is evaluating the association between non-invasive EE biomarkers and malnutrition in a cohort of Bangladeshi children. The challenges are that the number of biomarkers/covariates is relatively large and some of them are highly correlated. Many variable selection methods are available in the literature, but which are most appropriate for EE biomarker selection remains unclear. In this study, different variable selection approaches were applied and the performance of these methods was assessed numerically through simulation studies, assuming the correlations among covariates were similar to those in the Bangladesh cohort. The methods suggested by the simulations were applied to the Bangladesh cohort to select the most relevant biomarkers for the growth response, and bootstrapping methods were used to evaluate the consistency of the selection results. Through the simulation studies, SCAD (Smoothly Clipped Absolute Deviation), Adaptive LASSO (Least Absolute Shrinkage and Selection Operator) and MCP (Minimax Concave Penalty) emerged as the suggested variable selection methods, compared to the traditional stepwise regression method. In the Bangladesh data, predictors such as mother's weight, height-for-age z-score (HAZ) at week 18, and inflammation markers (myeloperoxidase (MPO) at week 12 and soluble CD14 at week 18) are informative biomarkers associated with children's growth. Penalized linear regression methods are plausible alternatives to traditional variable selection methods, and the suggested methods are applicable to other biomedical studies. The selected early-stage biomarkers offer a potential explanation for the burden of malnutrition problems in low-income countries, allow early identification of infants at risk, and suggest pathways for intervention. This
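
    SCAD and MCP are not available in scikit-learn (they live in R's ncvreg or Python packages such as skglm), so the sketch below illustrates the same workflow with cross-validated LASSO plus the bootstrap stability check the study describes; the data and dimensions are simulated.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.utils import resample

rng = np.random.default_rng(0)
n, p = 200, 30
X = rng.normal(size=(n, p))
X[:, 1] = 0.9 * X[:, 0] + rng.normal(scale=0.3, size=n)  # correlated markers
y = 1.5 * X[:, 0] - 2.0 * X[:, 5] + rng.normal(size=n)   # growth response

Xs = StandardScaler().fit_transform(X)
selected = np.flatnonzero(LassoCV(cv=5).fit(Xs, y).coef_)

# Bootstrap the selection to gauge its consistency, as the study does.
counts = np.zeros(p)
for b in range(100):
    Xb, yb = resample(Xs, y, random_state=b)
    counts[np.flatnonzero(LassoCV(cv=5).fit(Xb, yb).coef_)] += 1
print(selected, counts / 100)   # selected set and per-feature frequency
```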

  16. Determining optimal sample sizes for multi-stage randomized clinical trials using value of information methods.

    Science.gov (United States)

    Willan, Andrew; Kowgier, Matthew

    2008-01-01

    Traditional sample size calculations for randomized clinical trials depend on somewhat arbitrarily chosen factors, such as Type I and II errors. An effectiveness trial (otherwise known as a pragmatic trial or management trial) is essentially an effort to inform decision-making, i.e., should treatment be adopted over standard? Taking a societal perspective and using Bayesian decision theory, Willan and Pinto (Stat. Med. 2005; 24:1791-1806 and Stat. Med. 2006; 25:720) show how to determine the sample size that maximizes the expected net gain, i.e., the difference between the value of the information gained from the results and the cost of doing the trial. These methods are extended to include multi-stage adaptive designs, with a solution given for a two-stage design. The methods are applied to two examples. As demonstrated by the two examples, substantial increases in the expected net gain (ENG) can be realized by using multi-stage adaptive designs based on expected value of information methods. In addition, the expected sample size and total cost may be reduced. Exact solutions have been provided for the two-stage design. Solutions for higher-order designs may prove to be prohibitively complex, and approximate solutions may be required. The use of multi-stage adaptive designs for randomized clinical trials based on expected value of sample information methods leads to substantial gains in the ENG and reductions in the expected sample size and total cost.
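
    A stylized sketch of the expected-net-gain criterion (all monetary values and the saturating EVSI curve are hypothetical; the paper's Bayesian computation of the expected value of sample information is far richer): the optimal sample size is the one maximizing the value of the information minus the cost of the trial.

```python
import numpy as np

def expected_net_gain(n, v_max=5e6, k=300.0, fixed=2e5, per_patient=2e3):
    """ENG(n) = EVSI(n) - cost(n) for n patients per arm.
    EVSI is modeled as saturating with diminishing returns (hypothetical);
    cost grows linearly with total enrolment over two arms."""
    evsi = v_max * n / (n + k)
    cost = fixed + per_patient * 2 * n
    return evsi - cost

n = np.arange(10, 2000)
eng = expected_net_gain(n)
print("optimal n per arm:", n[np.argmax(eng)], "max ENG:", eng.max())
```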

  17. Multi-label spacecraft electrical signal classification method based on DBN and random forest.

    Science.gov (United States)

    Li, Ke; Yu, Nan; Li, Pengfei; Song, Shimin; Wu, Yalei; Li, Yang; Liu, Meng

    2017-01-01

    In spacecraft electrical signal characteristic data, there exists a large amount of data with high-dimensional features and a high degree of computational complexity, together with a low identification rate, which causes great difficulty in the fault diagnosis of spacecraft electronic load systems. This paper proposes a feature extraction method based on deep belief networks (DBN) and a classification method based on the random forest (RF) algorithm. The proposed algorithm mainly employs a multi-layer neural network to reduce the dimension of the original data before classification is applied. Firstly, wavelet denoising is used to pre-process the data. Secondly, the deep belief network is used to reduce the feature dimension and improve the classification rate for the electrical characteristics data. Finally, the random forest algorithm is used to classify the data and compare the results with other algorithms. The experimental results show that, compared with other algorithms, the proposed method shows excellent performance in terms of accuracy, computational efficiency, and stability in addressing spacecraft electrical signal data.

  18. Specific and selective probes for Staphylococcus aureus from phage-displayed random peptide libraries.

    Science.gov (United States)

    De Plano, Laura M; Carnazza, Santina; Messina, Grazia M L; Rizzo, Maria Giovanna; Marletta, Giovanni; Guglielmino, Salvatore P P

    2017-09-01

    Staphylococcus aureus is a major human pathogen causing health care-associated and community-associated infections. Early diagnosis is essential to prevent disease progression and to reduce complications that can be serious. In this study, we selected, from a 9-mer phage peptide library, a phage clone displaying a peptide capable of specific binding to the S. aureus cell surface, namely St.au9IVS5 (peptide sequence RVRSAPSSS). The ability of the isolated phage clone to interact specifically with S. aureus and the efficacy of its bacteria-binding properties were established by enzyme-linked immunosorbent assay (ELISA). We also demonstrated by Western blot analysis that the most reactive and selective phage peptide binds a 78 kDa protein on the bacterial cell surface. Furthermore, we observed selective phage-bacteria binding that allowed clinical isolates of S. aureus to be identified against a panel of other bacterial species. In order to explore the possibility of realizing a selective bacteria biosensor device based on the immobilization of affinity-selected phage, we studied physisorbed phage deposition onto a mica surface. Atomic force microscopy (AFM) was used to determine the organization of the phage on the mica surface, and the binding performance of the mica-physisorbed phage to the bacterial target was then evaluated over time by fluorescence microscopy. The system is able to bind specifically about 50% of S. aureus cells after 15 minutes and 90% after one hour. Due to its specificity and rapidity, this biosensing strategy paves the way to the further development of new cheap biosensors to be used in developing countries, such as lab-on-chip (LOC) devices to detect bacterial agents in clinical diagnostics applications. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. A new method for feature selection based on fuzzy similarity measures using multi objective genetic algorithm

    Directory of Open Access Journals (Sweden)

    Hassan Nosrati Nahook

    2014-06-01

    Full Text Available Feature selection (FS) is considered an important preprocessing step in machine learning and pattern recognition, and feature evaluation is the key issue in constructing a feature selection algorithm. The feature selection process can also reduce noise and thereby enhance classification accuracy. In this article, a feature selection method based on fuzzy similarity measures using a multi-objective genetic algorithm (FSFSM-MOGA) is introduced, and the performance of the proposed method is evaluated on published data sets from the UCI repository. The results show the efficiency of the method compared with the conventional version, indicating that the multi-objective genetic algorithm and fuzzy similarity measures used in this method can improve on the CFS method.

  20. Selection of locations of knots for linear splines in random regression test-day models.

    Science.gov (United States)

    Jamrozik, J; Bohmanova, J; Schaeffer, L R

    2010-04-01

    Using spline functions (segmented polynomials) in regression models requires the knowledge of the location of the knots. Knots are the points at which independent linear segments are connected. Optimal positions of knots for linear splines of different orders were determined in this study for different scenarios, using existing estimates of covariance functions and an optimization algorithm. The traits considered were test-day milk, fat and protein yields, and somatic cell score (SCS) in the first three lactations of Canadian Holsteins. Two ranges of days in milk (from 5 to 305 and from 5 to 365) were taken into account. In addition, four different populations of Holstein cows, from Australia, Canada, Italy and New Zealand, were examined with respect to first lactation (305 days) milk only. The estimates of genetic and permanent environmental covariance functions were based on single- and multiple-trait test-day models, with Legendre polynomials of order 4 as random regressions. A differential evolution algorithm was applied to find the best location of knots for splines of orders 4 to 7 and the criterion for optimization was the goodness-of-fit of the spline covariance function. Results indicated that the optimal position of knots for linear splines differed between genetic and permanent environmental effects, as well as between traits and lactations. Different populations also exhibited different patterns of optimal knot locations. With linear splines, different positions of knots should therefore be used for different effects and traits in random regression test-day models when analysing milk production traits.
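
    A minimal sketch of the optimization step, with an arbitrary smooth target curve standing in for the estimated covariance function: a differential evolution search places the interior knots so that a linear spline through the knots fits the target best, which mirrors the goodness-of-fit criterion described above.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Illustrative target over days in milk (stand-in for a covariance function).
dim = np.linspace(5, 305, 301)
target = np.exp(-((dim - 60.0) / 90.0) ** 2) + 0.3 * np.cos(dim / 50.0)

def spline_lack_of_fit(interior_knots):
    # Linear spline through the target values at the knots; end knots fixed.
    knots = np.sort(np.r_[dim[0], interior_knots, dim[-1]])
    approx = np.interp(dim, knots, np.interp(knots, dim, target))
    return np.sum((approx - target) ** 2)

# Optimize the positions of 3 interior knots (a 5-knot linear spline).
bounds = [(dim[0] + 1.0, dim[-1] - 1.0)] * 3
res = differential_evolution(spline_lack_of_fit, bounds, seed=0)
print("optimal knots:", np.sort(res.x), "SSE:", res.fun)
```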

  1. Zeta Sperm Selection Improves Pregnancy Rate and Alters Sex Ratio in Male Factor Infertility Patients: A Double-Blind, Randomized Clinical Trial

    Directory of Open Access Journals (Sweden)

    Nasr Esfahani Mohammad Hossein

    2016-07-01

    Full Text Available Background Selection of sperm for intra-cytoplasmic sperm injection (ICSI) is usually considered the ultimate technique to alleviate male-factor infertility. In routine ICSI, selection is based on morphology and viability, which does not necessarily preclude the chance injection of DNA-damaged or apoptotic sperm into the oocyte. Sperm with a high negative surface electrical charge, named the “Zeta potential”, are mature and more likely to have intact chromatin. In addition, X-bearing spermatozoa carry more negative charge. Therefore, we aimed to compare the clinical outcomes of the Zeta procedure with routine sperm selection in infertile men who were candidates for ICSI. Materials and Methods From a total of 203 ICSI cycles studied, 101 cycles were allocated to the density gradient centrifugation plus Zeta (DGC/Zeta) group and the remaining 102 were included in the DGC group in this prospective study. Clinical outcomes were compared between the two groups. The ratios of X- and Y-bearing sperm were assessed by fluorescence in situ hybridization (FISH) and quantitative polymerase chain reaction (qPCR) methods in 17 independent semen samples. Results In the present double-blind randomized clinical trial, a significant increase in top quality embryos and pregnancy rate was observed in the DGC/Zeta group compared to the DGC group. Moreover, the sex ratio (XY/XX) at birth was significantly lower in the DGC/Zeta group compared to the DGC group, despite a similar ratio of X/Y-bearing spermatozoa following Zeta selection. Conclusion The Zeta method not only improves the percentage of top quality embryos and pregnancy outcome but also alters the sex ratio compared to the conventional DGC method, despite no significant change in the ratio of X- and Y-bearing sperm populations (registration number: IRCT201108047223N1).

  2. Use of hyaluronan in the selection of sperm for intracytoplasmic sperm injection (ICSI): significant improvement in clinical outcomes--multicenter, double-blinded and randomized controlled trial.

    Science.gov (United States)

    Worrilow, K C; Eid, S; Woodhouse, D; Perloe, M; Smith, S; Witmyer, J; Ivani, K; Khoury, C; Ball, G D; Elliot, T; Lieberman, J

    2013-02-01

    Does the selection of sperm for ICSI based on their ability to bind to hyaluronan improve the clinical pregnancy rates (CPR) (primary end-point), implantation (IR) and pregnancy loss rates (PLR)? In couples where ≤ 65% of sperm bound hyaluronan, the selection of hyaluronan-bound (HB) sperm for ICSI led to a statistically significant reduction in PLR. HB sperm demonstrate enhanced developmental parameters which have been associated with successful fertilization and embryogenesis. Sperm selected for ICSI using a liquid source of hyaluronan achieved an improvement in IR. A pilot study by the primary author demonstrated that the use of HB sperm in ICSI was associated with improved CPR. The current study represents the single largest prospective, multicenter, double-blinded and randomized controlled trial to evaluate the use of hyaluronan in the selection of sperm for ICSI. Using the hyaluronan binding assay, an HB score was determined for the fresh or initial (I-HB) and processed or final semen specimen (F-HB). Patients were classified as >65% or ≤ 65% I-HB and stratified accordingly. Patients with I-HB scores ≤ 65% were randomized into control and HB selection (HYAL) groups whereas patients with I-HB >65% were randomized to non-participatory (NP), control or HYAL groups, in a ratio of 2:1:1. The NP group was included in the >65% study arm to balance the higher prevalence of patients with I-HB scores >65%. In the control group, oocytes received sperm selected via the conventional assessment of motility and morphology. In the HYAL group, HB sperm meeting the same visual criteria were selected for injection. Patient participants and clinical care providers were blinded to group assignment. Eight hundred two couples treated with ICSI in 10 private and hospital-based IVF programs were enrolled in this study. Of the 484 patients stratified to the I-HB > 65% arm, 115 participants were randomized to the control group, 122 participants were randomized to the HYAL group

  3. Aptamers and methods for their in vitro selection and uses thereof

    Energy Technology Data Exchange (ETDEWEB)

    Doyle, Sharon A [Walnut Creek, CA; Murphy, Michael B [Severna Park, MD

    2012-01-31

    The present method is an improved in vitro selection protocol for DNA aptamer production that relies on magnetic separations and is relatively easy and scalable without the need for expensive robotics. The ability of aptamers selected by this method to recognize and bind their target protein with high affinity and specificity is described, along with their uses in a number of assays. Specific TTF1 and His6 aptamers were selected using the method described and shown to be useful for enzyme-linked assays, Western blots, and affinity purification.

  4. Application of fuzzy TOPSIS and generalized Choquet integral methods to select the best supplier

    Directory of Open Access Journals (Sweden)

    Aytac Yildiz

    2017-04-01

    Full Text Available Supplier selection is a complex multi-criteria decision making (MCDM) problem. Various methods exist in the literature for choosing an appropriate supplier, but several criteria are involved in the complex decision-making process. Classical MCDM methods cannot effectively solve real-world problems; fuzzy MCDM methods, however, facilitate the solution fairly and enable decision-makers to reach accurate decisions in this selection process. In this study, a supplier selection problem is handled in a firm in the automotive industry of Turkey. Fuzzy TOPSIS (Technique for Order Performance by Similarity to Ideal Solution) and the generalized Choquet integral are used individually in the solution of the problem.

  5. Flow “Fine” Synthesis: High Yielding and Selective Organic Synthesis by Flow Methods

    Science.gov (United States)

    2015-01-01

    Abstract The concept of flow “fine” synthesis, that is, high yielding and selective organic synthesis by flow methods, is described. Some examples of the flow “fine” synthesis of natural products and APIs are discussed. Flow methods have several advantages over batch methods in terms of environmental compatibility, efficiency, and safety. However, synthesis by flow methods is more difficult than synthesis by batch methods. Indeed, it has been considered that flow methods are applicable to the production of simple gases but difficult to apply to the synthesis of complex molecules such as natural products and APIs. Therefore, organic synthesis of such complex molecules has been conducted by batch methods. On the other hand, syntheses and reactions that attain high yields and high selectivities by flow methods are increasingly being reported. Flow methods are leading candidates for the next generation of manufacturing methods that can mitigate environmental concerns toward a sustainable society. PMID:26337828

  6. H-DROP: an SVM based helical domain linker predictor trained with features optimized by combining random forest and stepwise selection.

    Science.gov (United States)

    Ebina, Teppei; Suzuki, Ryosuke; Tsuji, Ryotaro; Kuroda, Yutaka

    2014-08-01

    Domain linker prediction is attracting much interest as it can help identifying novel domains suitable for high throughput proteomics analysis. Here, we report H-DROP, an SVM-based Helical Domain linker pRediction using OPtimal features. H-DROP is, to the best of our knowledge, the first predictor for specifically and effectively identifying helical linkers. This was made possible first because a large training dataset became available from IS-Dom, and second because we selected a small number of optimal features from a huge number of potential ones. The training helical linker dataset, which included 261 helical linkers, was constructed by detecting helical residues at the boundary regions of two independent structural domains listed in our previously reported IS-Dom dataset. 45 optimal feature candidates were selected from 3,000 features by random forest, which were further reduced to 26 optimal features by stepwise selection. The prediction sensitivity and precision of H-DROP were 35.2 and 38.8%, respectively. These values were over 10.7% higher than those of control methods including our previously developed DROP, which is a coil linker predictor, and PPRODO, which is trained with un-differentiated domain boundary sequences. Overall, these results indicated that helical linkers can be predicted from sequence information alone by using a strictly curated training data set for helical linkers and carefully selected set of optimal features. H-DROP is available at http://domserv.lab.tuat.ac.jp.
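
    The two-stage feature optimization (random forest importance ranking followed by greedy stepwise selection) can be sketched generically; the simulated data, classifier settings, and the shortlist of 45 candidates below are illustrative, not H-DROP's actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=300,
                           n_informative=15, random_state=0)

# Stage 1: random forest importance ranking to shortlist 45 candidates.
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
candidates = np.argsort(rf.feature_importances_)[::-1][:45]

# Stage 2: greedy forward stepwise selection over the shortlist, scored
# by cross-validated accuracy of the downstream SVM classifier.
selected, best = [], 0.0
improved = True
while improved:
    improved = False
    for f in candidates:
        if f in selected:
            continue
        score = cross_val_score(SVC(), X[:, selected + [f]], y, cv=3).mean()
        if score > best:
            best, best_f, improved = score, f, True
    if improved:
        selected.append(best_f)
print(selected, best)
```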

  7. Evaluation of Strip Footing Bearing Capacity Built on the Anthropogenic Embankment by Random Finite Element Method

    Science.gov (United States)

    Pieczynska-Kozlowska, Joanna

    2014-05-01

    One of the geotechnical problems in the area of Wroclaw is an anthropogenic embankment layer extending to a depth of 4-5 m, which arose as a result of historical incidents. In such a case, estimating the bearing capacity of a strip footing can be difficult. The standard solution is to use a deep foundation or foundation soil replacement; however, both methods generate significant costs. In the present paper the authors focus their attention on the influence of the variability of the anthropogenic embankment on the bearing capacity. Soil parameters were defined on the basis of CPT tests and modeled as 2D anisotropic random fields, and the bearing capacity was evaluated with deterministic finite element analyses for each realization. Many repetitions over different realizations of the random fields lead to a stable expected value of the bearing capacity. The algorithm used to estimate the bearing capacity of the strip footing is the random finite element method (e.g. [1]). In the traditional approach, the bearing capacity is given by the formula proposed by [2]: q_f = c'N_c + qN_q + 0.5γBN_γ (1), where q_f is the ultimate bearing stress, c' is the cohesion, q is the overburden load due to foundation embedment, γ is the soil unit weight, B is the footing width, and N_c, N_q and N_γ are the bearing capacity factors. The evaluation of the bearing capacity of a strip footing based on the finite element method incorporates five parameters: Young's modulus (E), Poisson's ratio (ν), dilation angle (ψ), cohesion (c), and friction angle (φ). In the present study E, ν and ψ are held constant while c and φ are randomized. Although the Young's modulus does not affect the bearing capacity, it governs the initial elastic response of the soil. Plastic stress redistribution is accomplished using a viscoplastic algorithm merged with an elastic-perfectly plastic (Mohr-Coulomb) failure criterion. In this paper a typical finite element mesh was assumed, with 8-node elements in 50 columns and 20 rows. Footing width B
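
    A point-statistics simplification of the study's Monte Carlo idea (the paper models c and φ as 2D anisotropic random fields inside a finite element analysis, whereas here they are independent draws and closed-form Vesic-style factors replace the FE solution): randomizing c and φ in equation (1) and averaging over many realizations yields a stable expected bearing capacity.

```python
import numpy as np

def bearing_capacity(c, phi_deg, gamma=18.0, B=1.0, q=10.0):
    """q_f = c*N_c + q*N_q + 0.5*gamma*B*N_gamma with Vesic-style factors."""
    phi = np.radians(phi_deg)
    Nq = np.exp(np.pi * np.tan(phi)) * np.tan(np.pi / 4.0 + phi / 2.0) ** 2
    Nc = (Nq - 1.0) / np.tan(phi)
    Ng = 2.0 * (Nq + 1.0) * np.tan(phi)
    return c * Nc + q * Nq + 0.5 * gamma * B * Ng

# Randomize c and phi only, as in the study (E, nu, psi held constant);
# the lognormal/normal choices and their parameters are assumptions.
rng = np.random.default_rng(0)
c = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=100_000)      # kPa
phi = np.clip(rng.normal(28.0, 3.0, size=100_000), 15.0, 40.0)     # degrees
qf = bearing_capacity(c, phi)
print("E[q_f] =", qf.mean(), "kPa, sd =", qf.std())
```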

  8. An Improved Hybrid Method for Enhanced Road Feature Selection in Map Generalization

    Directory of Open Access Journals (Sweden)

    Jianchen Zhang

    2017-07-01

    Full Text Available Road selection is a critical component of road network generalization that directly affects its accuracy. However, most conventional selection methods are based solely on either a linear or an areal representation mode, often resulting in low selection accuracy and biased structural selection. In this paper we propose an improved hybrid method combining the linear and areal representation modes to increase the accuracy of road selection. The proposed method offers two primary advantages. First, it improves the stroke generation algorithm in the linear representation mode by using an ordinary least squares (OLS) model to consider the overall information of the roads to be connected. Second, by taking advantage of the areal representation mode, the proposed method partitions road networks and calculates road density based on weighted Voronoi diagrams. Roads were selected using stroke importance and a density threshold. Finally, experiments were conducted comparing the proposed technique with conventional single-representation methods. The results demonstrate the increased stroke generation accuracy and improved road selection achieved by this method.

  9. A new method for wavelength interval selection that intelligently optimizes the locations, widths and combinations of the intervals.

    Science.gov (United States)

    Deng, Bai-Chuan; Yun, Yong-Huan; Ma, Pan; Lin, Chen-Chen; Ren, Da-Bing; Liang, Yi-Zeng

    2015-03-21

    In this study, a new algorithm for wavelength interval selection, known as interval variable iterative space shrinkage approach (iVISSA), is proposed based on the VISSA algorithm. It combines global and local searches to iteratively and intelligently optimize the locations, widths and combinations of the spectral intervals. In the global search procedure, it inherits the merit of soft shrinkage from VISSA to search the locations and combinations of informative wavelengths, whereas in the local search procedure, it utilizes the information of continuity in spectroscopic data to determine the widths of wavelength intervals. The global and local search procedures are carried out alternatively to realize wavelength interval selection. This method was tested using three near infrared (NIR) datasets. Some high-performing wavelength selection methods, such as synergy interval partial least squares (siPLS), moving window partial least squares (MW-PLS), competitive adaptive reweighted sampling (CARS), genetic algorithm PLS (GA-PLS) and interval random frog (iRF), were used for comparison. The results show that the proposed method is very promising with good results both on prediction capability and stability. The MATLAB codes for implementing iVISSA are freely available on the website: .

  10. Gamma/hadron segregation for a ground based imaging atmospheric Cherenkov telescope using machine learning methods: Random Forest leads

    Science.gov (United States)

    Sharma, Mradul; Nayak, Jitadeepa; Krishna Koul, Maharaj; Bose, Smarajit; Mitra, Abhas

    2014-11-01

    A detailed case study of γ-hadron segregation for a ground based atmospheric Cherenkov telescope is presented. We have evaluated and compared various supervised machine learning methods such as the Random Forest method, Artificial Neural Network, Linear Discriminant method, Naive Bayes Classifiers and Support Vector Machines, as well as the conventional dynamic supercut method, by simulating triggering events with the Monte Carlo method and applying the results to a Cherenkov telescope. It is demonstrated that the Random Forest method is the most sensitive machine learning method for γ-hadron segregation.

  11. Principal Feature Analysis: A Multivariate Feature Selection Method for fMRI Data

    Directory of Open Access Journals (Sweden)

    Lijun Wang

    2013-01-01

    Full Text Available Brain decoding with functional magnetic resonance imaging (fMRI requires analysis of complex, multivariate data. Multivoxel pattern analysis (MVPA has been widely used in recent years. MVPA treats the activation of multiple voxels from fMRI data as a pattern and decodes brain states using pattern classification methods. Feature selection is a critical procedure of MVPA because it decides which features will be included in the classification analysis of fMRI data, thereby improving the performance of the classifier. Features can be selected by limiting the analysis to specific anatomical regions or by computing univariate (voxel-wise or multivariate statistics. However, these methods either discard some informative features or select features with redundant information. This paper introduces the principal feature analysis as a novel multivariate feature selection method for fMRI data processing. This multivariate approach aims to remove features with redundant information, thereby selecting fewer features, while retaining the most information.
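
    A sketch of the PFA recipe as commonly formulated (assumed to match the paper's variant): each feature is represented by its row in the principal-component loading matrix, the rows are clustered, and the feature nearest each cluster centre is kept, so features carrying redundant information collapse to one representative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def principal_feature_analysis(X, n_features=10, q=5):
    """Select original features (e.g. voxels) via clustering of their
    loading vectors in the first q principal components."""
    A_q = PCA(n_components=q).fit(X).components_.T   # one row per feature
    km = KMeans(n_clusters=n_features, n_init=10, random_state=0).fit(A_q)
    chosen = []
    for k in range(n_features):
        members = np.flatnonzero(km.labels_ == k)
        d = np.linalg.norm(A_q[members] - km.cluster_centers_[k], axis=1)
        chosen.append(members[np.argmin(d)])  # feature nearest the centroid
    return np.array(chosen)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 500))       # e.g. 120 scans x 500 voxels
print(principal_feature_analysis(X, n_features=8))
```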

  12. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Full Text Available Currently, with the rapid increase of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, their execution time remains unsatisfactory because of the numerous iterative computations involved. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to explore subsets. The optimal feature subset is then selected through continuous iterations of the Spark computing framework. The implementation demonstrates that, while preserving classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
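
    The Fisher-score preprocessing step is easy to show in plain numpy (the Spark parallelization and the sequential forward search are omitted here): features whose class means separate widely relative to their within-class variance rank highest.

```python
import numpy as np

def fisher_score(X, y):
    """Per-feature Fisher score: between-class variance of class means
    divided by the pooled within-class variance."""
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 3] + 0.5 * X[:, 7] + rng.normal(size=1000) > 0).astype(int)
scores = fisher_score(X, y)
print(np.argsort(scores)[::-1][:5])   # top-ranked candidate features
```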

  13. Application of empirical mode decomposition method for characterization of random vibration signals

    Directory of Open Access Journals (Sweden)

    Setyamartana Parman

    2016-07-01

    Full Text Available The characterization of finite measured signals is of great importance in dynamical modeling and system identification. This paper addresses an approach for the characterization of measured random vibration signals, where the approach rests on a method called empirical mode decomposition (EMD). The applicability of the proposed approach is tested on numerical and experimental data from a structural system, namely a spar platform. The results are three main signal components: noise embedded in the measured signal as the first component, the first intrinsic mode function (IMF), called the wave frequency response (WFR), as the second component, and the second IMF, called the low frequency response (LFR), as the third component, while the residue is the trend. The band-pass filter (BPF) method is taken as a benchmark for the results obtained from the EMD method.
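
    A minimal sketch of the decomposition on a synthetic vibration record, assuming the third-party PyEMD package (PyPI name EMD-signal) is available; the mapping of components to noise, WFR, LFR and trend follows the abstract's description.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal (assumed available)

# Synthetic "measured" random vibration: trend + low-frequency response
# + wave-frequency response + measurement noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 60.0, 6000)
signal = (0.02 * t                                 # trend
          + 1.0 * np.sin(2 * np.pi * 0.05 * t)     # low frequency response
          + 0.5 * np.sin(2 * np.pi * 0.8 * t)      # wave frequency response
          + 0.2 * rng.normal(size=t.size))         # noise

emd = EMD()
imfs = emd(signal)
# Rows are IMFs, fastest oscillations first: noise in the earliest IMFs,
# then WFR and LFR; the slowly varying residue approximates the trend.
print(imfs.shape)
```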

  14. Method selection for sustainability assessments: The case of recovery of resources from waste water.

    Science.gov (United States)

    Zijp, M C; Waaijers-van der Loop, S L; Heijungs, R; Broeren, M L M; Peeters, R; Van Nieuwenhuijzen, A; Shen, L; Heugens, E H W; Posthuma, L

    2017-07-15

    Sustainability assessments provide scientific support in decision procedures towards sustainable solutions. However, in order to contribute in identifying and choosing sustainable solutions, the sustainability assessment has to fit the decision context. Two complicating factors exist. First, different stakeholders tend to have different views on what a sustainability assessment should encompass. Second, a plethora of sustainability assessment methods exist, due to the multi-dimensional characteristic of the concept. Different methods provide other representations of sustainability. Based on a literature review, we present a protocol to facilitate method selection together with stakeholders. The protocol guides the exploration of i) the decision context, ii) the different views of stakeholders and iii) the selection of pertinent assessment methods. In addition, we present an online tool for method selection. This tool identifies assessment methods that meet the specifications obtained with the protocol, and currently contains characteristics of 30 sustainability assessment methods. The utility of the protocol and the tool are tested in a case study on the recovery of resources from domestic waste water. In several iterations, a combination of methods was selected, followed by execution of the selected sustainability assessment methods. The assessment results can be used in the first phase of the decision procedure that leads to a strategic choice for sustainable resource recovery from waste water in the Netherlands. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Application of the ROV method for the selection of cutting fluids

    Directory of Open Access Journals (Sweden)

    Miloš Madić

    2016-06-01

    Full Text Available Production engineers are frequently faced with multi-criteria selection problems in the manufacturing environment. Over the years, many multi-criteria decision making (MCDM) methods have been proposed to help decision makers solve different complex selection problems. This paper introduces the use of an almost unexplored MCDM method, the range of value (ROV) method, for solving cutting fluid selection problems. The main motivation for using the ROV method is that it offers a very simple computational procedure compared to other MCDM methods. The applicability and effectiveness of the ROV method are demonstrated by solving four case studies dealing with the selection of the most suitable cutting fluid for a given machining application. In each case study the obtained complete rankings were compared with those derived by past researchers using different MCDM methods. The results obtained using the ROV method correlate excellently with those derived by past researchers, which validates the usefulness and effectiveness of this simple MCDM method for solving cutting fluid selection problems.
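
    A compact sketch of the ROV procedure as it is commonly described in the MCDM literature (the paper's exact normalization and aggregation may differ); the decision matrix for four hypothetical cutting fluids and the criteria weights are invented for illustration.

```python
import numpy as np

def rov_rank(X, weights, benefit):
    """Range-of-value ranking: min-max normalize each criterion, sum the
    weighted benefit criteria into u+, the weighted cost criteria into u-,
    and rank alternatives by the midpoint (u+ + u-)/2."""
    X = np.asarray(X, float)
    lo, hi = X.min(axis=0), X.max(axis=0)
    r = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))
    u_plus = (r * weights * benefit).sum(axis=1)
    u_minus = (r * weights * ~benefit).sum(axis=1)
    u = (u_plus + u_minus) / 2.0
    return np.argsort(u)[::-1], u

# Hypothetical fluids x criteria: tool wear (cost), surface roughness
# (cost), cooling ability (benefit), price (cost).
X = [[0.12, 1.8, 82, 5.2], [0.10, 2.1, 78, 4.8],
     [0.15, 1.6, 85, 6.0], [0.11, 1.9, 80, 4.5]]
w = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([False, False, True, False])
order, scores = rov_rank(X, w, benefit)
print(order, scores)   # alternatives ranked best to worst
```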

  16. Modelling and Simulation of Photosynthetic Microorganism Growth: Random Walk vs. Finite Difference Method

    Czech Academy of Sciences Publication Activity Database

    Papáček, Š.; Matonoha, Ctirad; Štumbauer, V.; Štys, D.

    2012-01-01

    Roč. 82, č. 10 (2012), s. 2022-2032 ISSN 0378-4754. [Modelling 2009. IMACS Conference on Mathematical Modelling and Computational Methods in Applied Sciences and Engineering /4./. Rožnov pod Radhoštěm, 22.06.2009-26.06.2009] Grant - others:CENAKVA(CZ) CZ.1.05/2.1.00/01.0024; GA JU(CZ) 152//2010/Z Institutional research plan: CEZ:AV0Z10300504 Keywords : multiscale modelling * distributed parameter system * boundary value problem * random walk * photosynthetic factory Subject RIV: EI - Biotechnology ; Bionics Impact factor: 0.836, year: 2012

  17. Recommended Minimum Test Requirements and Test Methods for Assessing Durability of Random-Glass-Fiber Composites

    Energy Technology Data Exchange (ETDEWEB)

    Battiste, R.L.; Corum, J.M.; Ren, W.; Ruggles, M.B.

    1999-06-01

    This report provides recommended minimum test requirements and suggested test methods for establishing the durability properties and characteristics of candidate random-glass-fiber polymeric composites for automotive structural applications. The recommendations and suggestions are based on experience and results developed at Oak Ridge National Laboratory (ORNL) under a US Department of Energy Advanced Automotive Materials project entitled 'Durability of Lightweight Composite Structures,' which is closely coordinated with the Automotive Composites Consortium. The report is intended as an aid to suppliers offering new structural composites for automotive applications and to testing organizations that are called on to characterize the composites.

  18. TESTING BRAND VALUE MEASUREMENT METHODS IN A RANDOM COEFFICIENT MODELING FRAMEWORK

    Directory of Open Access Journals (Sweden)

    Szõcs Attila

    2014-07-01

    Full Text Available Our objective is to provide a framework for measuring brand equity, that is, the added value endowed to the product by the brand. Based on a demand and supply model, we propose a structural model that enables testing the structural effect of brand equity (demand side effect) on brand value (supply side effect), using Monte Carlo simulation. Our main research question is which of the three brand value measurement methods (price premium, revenue premium and profit premium) is more suitable from the perspective of the structural link between brand equity and brand value. Our model is based on recent developments in applications of random coefficient models.

  19. Optimization of Wind Farm Layout: A Refinement Method by Random Search

    DEFF Research Database (Denmark)

    Feng, Ju; Shen, Wen Zhong

    2013-01-01

    Wind farm layout optimization is to find the optimal positions of wind turbines inside a wind farm, so as to maximize and/or minimize a single objective or multiple objectives, while satisfying certain constraints. Most of the works in the literature divide the wind farm into cells in which turbines can be placed, hence simplifying the problem from one with continuous variables to one with discrete variables. In this paper, a refinement method based on a continuous formulation, using random search, is proposed to improve the optimization results based on discrete formulations. Two sets
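
    A toy version of the refinement idea, with an invented proximity penalty standing in for a real wake and power model such as the one used in the paper: starting from a cell-based (discrete) layout, one turbine at a time is randomly perturbed in continuous space, and the move is kept only when the objective improves.

```python
import numpy as np

def proximity_penalty(pos):
    """Toy surrogate for wake losses: penalizes close turbine pairs
    (a stand-in for a real wake model, e.g. a Jensen-type model)."""
    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.sum(1.0 / d ** 2)

def refine_by_random_search(pos, lo, hi, step=50.0, iters=5000, seed=0):
    """Continuous refinement of a discrete layout by random search."""
    rng = np.random.default_rng(seed)
    best = proximity_penalty(pos)
    for _ in range(iters):
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] = np.clip(trial[i] + rng.normal(0.0, step, 2), lo, hi)
        f = proximity_penalty(trial)
        if f < best:                  # keep only improving moves
            pos, best = trial, f
    return pos, best

# Initial cell-based layout: a regular 5 x 5 grid in a 2 km square farm.
grid = np.array([[x, y] for x in range(0, 2001, 500)
                        for y in range(0, 2001, 500)], dtype=float)
print(refine_by_random_search(grid, 0.0, 2000.0)[1])
```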

  20. Effects of advanced selection methods on sperm quality and ART outcome : a systematic review

    NARCIS (Netherlands)

    Said, Tamer M.; Land, Jolande A.

    2011-01-01

    BACKGROUND: Current routine semen preparation techniques do not inclusively target all intrinsic sperm characteristics that may impact the fertilization potential. In order to address these characteristics, several methods have been recently developed and applied to sperm selection. The objective of

  1. Basic Information for EPA's Selected Analytical Methods for Environmental Remediation and Recovery (SAM)

    Science.gov (United States)

    Contains basic information on the role and origins of the Selected Analytical Methods including the formation of the Homeland Security Laboratory Capacity Work Group and the Environmental Evaluation Analytical Process Roadmap for Homeland Security Events

  2. Recent use of condoms and emergency contraception by women who selected condoms as their contraceptive method

    National Research Council Canada - National Science Library

    Nelson, Anita L

    2006-01-01

    This study was undertaken to determine how consistently indigent, predominantly Hispanic women who had previously selected male condoms as their contraceptive method had used condoms and emergency contraception (EC...

  3. Comparison of selection methods to deduce natural background levels for groundwater units

    NARCIS (Netherlands)

    Griffioen, J.; Passier, H.F.; Klein, J.

    2008-01-01

    Establishment of natural background levels (NBL) for groundwater is commonly performed to serve as a reference when assessing the contamination status of groundwater units. We compare various selection methods to establish NBLs using groundwater quality data for four hydrogeologically different areas

  4. Application of fuzzy TOPSIS and generalized Choquet integral methods to select the best supplier

    National Research Council Canada - National Science Library

    Yildiz, Aytac; Yesim Yayla, A

    2017-01-01

    Supplier selection is a complex multi-criteria decision making (MCDM) problem. Various methods exist in the literature for choosing an appropriate supplier, but several criteria are involved in the complex decision making process...

  5. Description of selected structural elements of composite foams using statistical methods

    Directory of Open Access Journals (Sweden)

    K. Gawdzińska

    2011-04-01

    Full Text Available This article makes use of images from a computer tomograph to describe selected structural elements of metal and composite foams by means of statistical methods. In addition, the compression stress of the tested materials has been determined.

  6. A transdisciplinary model to inform randomized clinical trial methods for electronic cigarette evaluation

    Directory of Open Access Journals (Sweden)

    Alexa A. Lopez

    2016-03-01

    Full Text Available Abstract Background This study is a systematic evaluation of a novel tobacco product, electronic cigarettes (ECIGs), using a two-site, four-arm, 6-month, parallel-group randomized controlled trial (RCT) with a follow-up to 9 months. Virginia Commonwealth University is the primary site and Penn State University is the secondary site. This RCT design is important because it is informed by analytical work, clinical laboratory results, and qualitative/quantitative findings regarding the specific ECIG products used. Methods Participants (N = 520) will be randomized across sites and must be healthy smokers of >9 cigarettes for at least one year, who have not had a quit attempt in the prior month, are not planning to quit in the next 6 months, and are interested in reducing cigarette intake. Participants will be randomized into one of four 24-week conditions: a cigarette substitute that does not produce an inhalable aerosol, or one of three ECIG conditions that differ by nicotine concentration (0, 8, or 36 mg/ml). Blocked randomization will be accomplished with a 1:1:1:1 ratio of condition assignments at each site. The specific aims are to: characterize the ECIG influence on toxicants, biomarkers, health indicators, and disease risk; determine the tobacco abstinence symptom and adverse event profile associated with real-world ECIG use; and examine the influence of ECIG use on conventional tobacco product use. Liquid nicotine concentration-related differences in these study outcomes are predicted. Participants and research staff in contact with participants will be blinded to the nicotine concentration in the ECIG conditions. Discussion Results from this study will inform knowledge concerning ECIG use as well as demonstrate a model that may be applied to other novel tobacco products. The model of using prior empirical testing of ECIG devices should be considered in other RCT evaluations. Trial registration TRN: NCT02342795, registered December 16, 2014.

  7. Design and methods for a randomized clinical trial treating comorbid obesity and major depressive disorder

    Directory of Open Access Journals (Sweden)

    Crawford Sybil

    2008-09-01

    Background Obesity is often comorbid with depression, and individuals with this comorbidity fare worse in behavioral weight loss treatment. Treating depression directly prior to behavioral weight loss treatment might bolster weight loss outcomes in this population, but this has not yet been tested in a randomized clinical trial. Methods and design This randomized clinical trial will examine whether behavior therapy for depression administered prior to standard weight loss treatment produces greater weight loss than standard weight loss treatment alone. Obese women with major depressive disorder (N = 174) will be recruited from primary care clinics and the community and randomly assigned to one of the two treatment conditions. Treatment will last 2 years and will include a 6-month intensive treatment phase followed by an 18-month maintenance phase. Follow-up assessments will occur at 6 months and at 1 and 2 years following randomization. The primary outcome is weight loss. The study was designed to provide 90% power for detecting a weight change difference between conditions of 3.1 kg (standard deviation of 5.5 kg) at 1 year, assuming a 25% rate of loss to follow-up. Secondary outcomes include depression, physical activity, dietary intake, psychosocial variables, and cardiovascular risk factors. Potential mediators (e.g., adherence, depression, physical activity, and caloric intake) of the intervention effect on weight change will also be examined. Discussion Treating depression before administering intensive health behavior interventions could potentially boost the impact on both mental and physical health outcomes. Trial registration NCT00572520
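
    The stated power calculation (90% power, 3.1 kg difference, 5.5 kg standard deviation, 25% loss to follow-up) can be approximately reproduced with a standard two-sample t-test power routine. A sketch using statsmodels, assuming a two-sided alpha of 0.05, which the abstract does not state:

        from statsmodels.stats.power import TTestIndPower

        effect_size = 3.1 / 5.5            # difference in kg / SD in kg (Cohen's d)
        n_per_arm = TTestIndPower().solve_power(effect_size=effect_size,
                                                power=0.90, alpha=0.05)
        n_inflated = n_per_arm / (1 - 0.25)  # inflate for 25% loss to follow-up
        # completers vs. randomized per arm; ~66 and ~88, i.e. ~176 total
        print(round(n_per_arm), round(n_inflated))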

  8. Echocardiographic methods, quality review, and measurement accuracy in a randomized multicenter clinical trial of Marfan syndrome.

    Science.gov (United States)

    Selamet Tierney, Elif Seda; Levine, Jami C; Chen, Shan; Bradley, Timothy J; Pearson, Gail D; Colan, Steven D; Sleeper, Lynn A; Campbell, M Jay; Cohen, Meryl S; De Backer, Julie; Guey, Lin T; Heydarian, Haleh; Lai, Wyman W; Lewin, Mark B; Marcus, Edward; Mart, Christopher R; Pignatelli, Ricardo H; Printz, Beth F; Sharkey, Angela M; Shirali, Girish S; Srivastava, Shubhika; Lacro, Ronald V

    2013-06-01

    The Pediatric Heart Network is conducting a large international randomized trial to compare aortic root growth and other cardiovascular outcomes in 608 subjects with Marfan syndrome randomized to receive atenolol or losartan for 3 years. The authors report here the echocardiographic methods and baseline echocardiographic characteristics of the randomized subjects, describe the interobserver agreement of aortic measurements, and identify factors influencing agreement. Individuals aged 6 months to 25 years who met the original Ghent criteria and had body surface area-adjusted maximum aortic root diameter (ROOTmax) Z scores > 3 were eligible for inclusion. The primary outcome measure for the trial is the change over time in ROOTmax Z score. A detailed echocardiographic protocol was established and implemented across 22 centers, with an extensive training and quality review process. Interobserver agreement for the aortic measurements was excellent, with intraclass correlation coefficients ranging from 0.921 to 0.989. Lower interobserver percentage error in ROOTmax measurements was independently associated (model R^2 = 0.15) with better image quality (P = .002) and later study reading date (P < .001). Echocardiographic characteristics of the randomized subjects did not differ by treatment arm. Subjects with ROOTmax Z scores ≥ 4.5 (36%) were more likely to have mitral valve prolapse and dilation of the main pulmonary artery and left ventricle, but there were no differences in aortic regurgitation, aortic stiffness indices, mitral regurgitation, or left ventricular function compared with subjects with ROOTmax Z scores < 4.5. The echocardiographic methodology, training, and quality review process resulted in a robust evaluation of aortic root dimensions, with excellent reproducibility. Copyright © 2013 American Society of Echocardiography. Published by Mosby, Inc. All rights reserved.

  9. Computational Inference Methods for Selective Sweeps Arising in Acute HIV Infection

    OpenAIRE

    Leviyang, Sivan

    2013-01-01

    During the first weeks of human immunodeficiency virus-1 (HIV-1) infection, cytotoxic T-lymphocytes (CTLs) select for multiple escape mutations in the infecting HIV population. In recent years, methods that use escape mutation data to estimate rates of HIV escape have been developed, thereby providing a quantitative framework for exploring HIV escape from CTL response. Current methods for escape-rate inference focus on a specific HIV mutant selected by a single CTL response. However, recent s...

  10. Role of selective V2-receptor-antagonism in septic shock: a randomized, controlled, experimental study

    OpenAIRE

    Rehberg, Sebastian; Ertmer, Christian; Lange, Matthias; Morelli, Andrea; Whorton, Elbert; Strohhäcker, Anne-Katrin; Dünser, Martin Wolfgang; Lipke, Erik; Kampmeier, Tim G; Aken, Hugo; Traber, Daniel L; Westphal, Martin

    2010-01-01

    Introduction: V2-receptor (V2R) stimulation potentially aggravates sepsis-induced vasodilation, fluid accumulation and microvascular thrombosis. Therefore, the present study was performed to determine the effects of a first-line therapy with the selective V2R-antagonist (Propionyl1-D-Tyr(Et)2-Val4-Abu6-Arg8,9)-Vasopressin on cardiopulmonary hemodynamics and organ function vs. the mixed V1aR/V2R-agonist arginine vasopressin (AVP) or placebo in an established ovine model of septic s...

  11. A sequence-based method to predict the impact of regulatory variants using random forest.

    Science.gov (United States)

    Liu, Qiao; Gan, Mingxin; Jiang, Rui

    2017-03-14

    Most disease-associated variants identified by genome-wide association studies (GWAS) exist in noncoding regions. In spite of the common agreement that such variants may disrupt biological functions of their hosting regulatory elements, it remains a great challenge to characterize the risk of a genetic variant within the implicated genome sequence. Therefore, it is essential to develop an effective computational model that is not only capable of predicting the potential risk of a genetic variant but is also able to interpret how genome function is affected by the occurrence of the variant. We developed a method named kmerForest that uses a random forest classifier with k-mer counts to predict accessible chromatin regions purely from DNA sequences. We demonstrated that our method outperforms existing methods in distinguishing known accessible chromatin regions from random genomic sequences. Furthermore, the performance of our method can be further improved by incorporating sequence conservation features. Based on this model, we assessed the importance of the k-mer features in a series of permutation experiments, and we characterized the risk of a single nucleotide polymorphism (SNP) on the function of the genome using the difference between the importance of the k-mer features affected by the occurrence of the SNP. We conducted a series of experiments and showed that our model can well discriminate between pathogenic and normal SNPs. In particular, our model correctly prioritized SNPs that are proved to be enriched for the binding sites of FOXA1 in breast cancer cell lines from previous studies. We presented a novel method to interpret functional genetic variants purely based on DNA sequences. The proposed k-mer based score offers an effective means of measuring the impact of SNPs on the function of the genome, thus shedding light on the identification of genetic risk factors underlying complex traits and diseases.
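
    The core of the kmerForest idea, representing each sequence by its k-mer counts and feeding the counts to a random forest whose feature importances score the k-mers, can be sketched as follows. This is a toy illustration with scikit-learn, not the authors' implementation; k, the sequences, and the labels are placeholders.

        from itertools import product
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def kmer_counts(seq, k=3):
            """Count occurrences of every possible k-mer in a DNA sequence."""
            kmers = ["".join(p) for p in product("ACGT", repeat=k)]
            index = {km: i for i, km in enumerate(kmers)}
            counts = np.zeros(len(kmers))
            for i in range(len(seq) - k + 1):
                counts[index[seq[i:i + k]]] += 1
            return counts

        # Toy data: "accessible" (1) vs. "random" (0) sequences
        seqs = ["ACGTACGTAC", "TTTTGGGGCC", "ACGTTTACGA", "GGGCCCGGGC"]
        labels = [1, 0, 1, 0]
        X = np.array([kmer_counts(s) for s in seqs])
        clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
        # Feature importances play the role of the paper's k-mer importance scores
        print(clf.feature_importances_.argmax())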

  12. Selection of similar single domain antibodies from two immune VHH libraries obtained from two alpacas by using different selection methods.

    Science.gov (United States)

    Li, Tengfei; Vandesquille, Matthias; Bay, Sylvie; Dhenain, Marc; Delatour, Benoît; Lafaye, Pierre

    2017-08-01

    The two most widely used methods to select camelid single-domain antibody fragments (VHHs) are displaying their repertoires on the surface of filamentous bacteriophages (phage display) and linking them to ribosomes (ribosome display). In this study, we compared specific VHHs isolated from two different immune libraries coming from two different alpacas by using these two selection methods. Three anti-GFAP (glial fibrillary acidic protein) VHHs were derived from an immune library obtained by ribosome display after immunization of one alpaca with purified GFAP, a protein expressed by astroglial cells. In parallel, three other anti-GFAP VHHs were derived from an immune library by phage display after immunization of another alpaca with a human brain tissue extract containing GFAP. All the VHHs were closely related and one VHH was found to be strictly identical in both studies. This highlights the selection pressure exerted by the camelid immune system to shape the paratope of an antibody against a defined antigen. Copyright © 2017 European Federation of Immunological Societies. Published by Elsevier B.V. All rights reserved.

  13. Conic sampling: an efficient method for solving linear and quadratic programming by randomly linking constraints within the interior.

    Science.gov (United States)

    Serang, Oliver

    2012-01-01

    Linear programming (LP) problems are commonly used in analysis and resource allocation, frequently surfacing as approximations to more difficult problems. Existing approaches to LP have been dominated by a small group of methods, and randomized algorithms have not enjoyed popularity in practice. This paper introduces a novel randomized method of solving LP problems by moving along the facets and within the interior of the polytope along rays randomly sampled from the polyhedral cones defined by the bounding constraints. This conic sampling method is then applied to randomly sampled LPs, and its runtime performance is shown to compare favorably to the simplex and primal affine-scaling algorithms, especially on polytopes with certain characteristics. The conic sampling method is then adapted and applied to solve a certain quadratic program, which computes a projection onto a polytope; the proposed method is shown to outperform the proprietary software Mathematica on large, sparse QP problems constructed from mass spectrometry-based proteomics.
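
    The quadratic program mentioned above, projecting a point onto a polytope, has the generic form: minimize ||x - y||^2 subject to Ax <= b. A small sketch of that baseline problem with a general-purpose SciPy solver (not the paper's conic sampling algorithm):

        import numpy as np
        from scipy.optimize import minimize

        def project_onto_polytope(y, A, b):
            """Solve min ||x - y||^2 s.t. A x <= b with a general NLP solver."""
            cons = {"type": "ineq", "fun": lambda x: b - A @ x}  # b - Ax >= 0
            res = minimize(lambda x: np.sum((x - y) ** 2), x0=np.zeros_like(y),
                           constraints=[cons])
            return res.x

        # Project the point (2, 2) onto the unit box [0, 1]^2
        A = np.array([[1.0, 0], [-1, 0], [0, 1], [0, -1]])
        b = np.array([1.0, 0, 1, 0])
        print(project_onto_polytope(np.array([2.0, 2.0]), A, b))  # ~ [1, 1]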

  14. Improvements in recall and food choices using a graphical method to deliver information of select nutrients.

    Science.gov (United States)

    Pratt, Nathan S; Ellison, Brenna D; Benjamin, Aaron S; Nakamura, Manabu T

    2016-01-01

    Consumers have difficulty using nutrition information. We hypothesized that graphically delivering information of select nutrients relative to a target would allow individuals to process information in time-constrained settings more effectively than numerical information. Objectives of the study were to determine the efficacy of the graphical method in (1) improving memory of nutrient information and (2) improving consumer purchasing behavior in a restaurant. Values of fiber and protein per calorie were 2-dimensionally plotted alongside a target box. First, a randomized cued recall experiment was conducted (n=63). Recall accuracy of nutrition information improved by up to 43% when shown graphically instead of numerically. Second, the impact of graphical nutrition signposting on diner choices was tested in a cafeteria. Saturated fat and sodium information was also presented using color coding. Nutrient content of meals (n=362) was compared between 3 signposting phases: graphical, nutrition facts panels (NFP), or no nutrition label. Graphical signposting improved nutrient content of purchases in the intended direction, whereas NFP had no effect compared with the baseline. Calories ordered from total meals, entrées, and sides were significantly less during graphical signposting than no-label and NFP periods. For total meal and entrées, protein per calorie purchased was significantly higher and saturated fat significantly lower during graphical signposting than the other phases. Graphical signposting remained a predictor of calories and protein per calorie purchased in regression modeling. These findings demonstrate that graphically presenting nutrition information makes that information more available for decision making and influences behavior change in a realistic setting. Copyright © 2016 Elsevier Inc. All rights reserved.
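
    The plotting scheme described, nutrient density per calorie shown as points next to a target box, is easy to reproduce. A matplotlib sketch; the menu items and target thresholds below are invented for illustration, since the study's actual targets are not given in the abstract:

        import matplotlib.pyplot as plt
        from matplotlib.patches import Rectangle

        # Hypothetical nutrient densities (g per 100 kcal) for a few menu items
        items = {"salad": (3.0, 6.0), "burger": (0.8, 4.5), "lentil soup": (4.2, 5.5)}
        fig, ax = plt.subplots()
        # Target box: illustrative fiber and protein ranges per 100 kcal
        ax.add_patch(Rectangle((2.0, 5.0), 3.0, 4.0, alpha=0.2, label="target"))
        for name, (fiber, protein) in items.items():
            ax.scatter(fiber, protein)
            ax.annotate(name, (fiber, protein))
        ax.set_xlabel("fiber (g per 100 kcal)")
        ax.set_ylabel("protein (g per 100 kcal)")
        ax.legend()
        plt.show()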

  15. Participant-selected music and physical activity in older adults following cardiac rehabilitation: a randomized controlled trial.

    Science.gov (United States)

    Clark, Imogen N; Baker, Felicity A; Peiris, Casey L; Shoebridge, Georgie; Taylor, Nicholas F

    2017-03-01

    To evaluate effects of participant-selected music on older adults' achievement of activity levels recommended in the physical activity guidelines following cardiac rehabilitation. A parallel group randomized controlled trial with measurements at Weeks 0, 6 and 26. A multisite outpatient rehabilitation programme of a publicly funded metropolitan health service. Adults aged 60 years and older who had completed a cardiac rehabilitation programme. Experimental participants selected music to support walking with guidance from a music therapist. Control participants received usual care only. The primary outcome was the proportion of participants achieving activity levels recommended in physical activity guidelines. Secondary outcomes compared amounts of physical activity, exercise capacity, cardiac risk factors, and exercise self-efficacy. A total of 56 participants, mean age 68.2 years (SD = 6.5), were randomized to the experimental (n = 28) and control groups (n = 28). There were no differences between groups in proportions of participants achieving activity recommended in physical activity guidelines at Week 6 or 26. Secondary outcomes demonstrated between-group differences in male waist circumference at both measurements (Week 6 difference -2.0 cm, 95% CI -4.0 to 0; Week 26 difference -2.8 cm, 95% CI -5.4 to -0.1), and observed effect sizes favoured the experimental group for amounts of physical activity (d = 0.30), exercise capacity (d = 0.48), and blood pressure (d = -0.32). Participant-selected music did not increase the proportion of participants achieving recommended amounts of physical activity, but may have contributed to exercise-related benefits.

  16. Methods of learning in statistical education: Design and analysis of a randomized trial

    Science.gov (United States)

    Boyd, Felicity Turner

    Background. Recent psychological and technological advances suggest that active learning may enhance understanding and retention of statistical principles. A randomized trial was designed to evaluate the addition of innovative instructional methods within didactic biostatistics courses for public health professionals. Aims. The primary objectives were to evaluate and compare the addition of two active learning methods (cooperative and internet) on students' performance; assess their impact on performance after adjusting for differences in students' learning style; and examine the influence of learning style on trial participation. Methods. Consenting students enrolled in a graduate introductory biostatistics course were randomized to cooperative learning, internet learning, or control after completing a pretest survey. The cooperative learning group participated in eight small-group active learning sessions on key statistical concepts, while the internet learning group accessed interactive mini-applications on the same concepts. Controls received no intervention. Students completed evaluations after each session and a post-test survey. The study outcome was performance quantified by examination scores. Intervention effects were analyzed by generalized linear models using intent-to-treat analysis and marginal structural models accounting for reported participation. Results. Of 376 enrolled students, 265 (70%) consented to randomization; 69, 100, and 96 students were randomized to the cooperative, internet, and control groups, respectively. Intent-to-treat analysis showed no differences between study groups; however, 51% of students in the intervention groups had dropped out after the second session. After accounting for reported participation, expected examination scores were 2.6 points higher (of 100 points) after completing one cooperative learning session (95% CI: 0.3, 4.9) and 2.4 points higher after one internet learning session (95% CI: 0.0, 4.7), versus ...

  17. Benefits of Selected Physical Exercise Programs in Detention: A Randomized Controlled Study

    Directory of Open Access Journals (Sweden)

    Claudia Battaglia

    2013-10-01

    The aim of the study was to determine which kind of physical activity could be useful to inmate populations to improve their health status and fitness levels. A repeated-measures design was used to evaluate the effects of two different training protocols on subjects in a state of detention, tested pre- and post-experimental protocol. Seventy-five male subjects were enrolled in the study and randomly allocated to three groups: the cardiovascular plus resistance training protocol group (CRT; n = 25; mean age 30.9 ± 8.9 years), the high-intensity strength training protocol group (HIST; n = 25; mean age 33.9 ± 6.8 years), and a control group (C; n = 25; mean age 32.9 ± 8.9 years) receiving no treatment. All subjects underwent a clinical assessment and fitness tests. MANOVA revealed significant multivariate effects of group (p < 0.01) and group-training interaction (p < 0.05). The CRT protocol proved the most effective for reaching the best outcome in the fitness tests. Both CRT and HIST protocols produced significant gains in functional capacity (cardio-respiratory capacity) and decreases in cardiovascular disease risk of incarcerated males. The significant gains obtained in functional capacity reflect the great potential of supervised exercise interventions for improving the health status of incarcerated people.

  18. The Influence of Endmember Selection Method in Extracting Impervious Surface from Airborne Hyperspectral Imagery

    Science.gov (United States)

    Wang, J.; Feng, B.

    2016-12-01

    Impervious surface area (ISA) has long been studied as an important input into moisture flux models. In general, ISA impedes groundwater recharge, increases stormflow/flood frequency, and alters in-stream and riparian habitats. Urban areas are recognized as among the richest ISA environments, and urban ISA mapping assists flood prevention and urban planning. Hyperspectral imagery (HI), with its ability to detect subtle spectral signatures, is an ideal candidate for urban ISA mapping. Mapping ISA from HI involves endmember (EM) selection, and the high degree of spatial and spectral heterogeneity of the urban environment makes this task difficult: a compromise is needed between the degree of automation and the representativeness of the method. The study tested one manual and two semi-automatic EM selection strategies. The manual method and the first semi-automatic method have been widely used in EM selection; the second semi-automatic method is rather new and had previously been proposed only for moderate-spatial-resolution satellites. The manual method visually selected the EM candidates from eight landcover types in the original image. The first semi-automatic method chose the EM candidates using a threshold over the pixel purity index (PPI) map. The second semi-automatic method used the triangle shape of the HI scatter plot in the n-dimension visualizer to identify the V-I-S (vegetation-impervious surface-soil) EM candidates: the pixels located at the triangle's corners. The initial EM candidates from the three methods were further refined by three indexes (EM average RMSE, minimum average spectral angle, and count-based EM selection), generating three spectral libraries that were used to classify the test image with the spectral angle mapper. Accuracy reports were generated for the classification results: the overall accuracies are 85% for the manual method, 81% for the PPI method, and 87% for the V-I-S method. The V-I-S EM selection method performs best in ...
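
    The spectral angle mapper used for the final classification assigns each pixel to the endmember whose spectrum subtends the smallest angle with the pixel spectrum. A minimal numpy sketch; the endmember spectra and pixel values are placeholders:

        import numpy as np

        def spectral_angle(pixel, endmember):
            """Angle between two spectra, in radians; smaller = more similar."""
            cos = pixel @ endmember / (np.linalg.norm(pixel) * np.linalg.norm(endmember))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        def sam_classify(pixels, endmembers):
            """Assign each pixel (rows of `pixels`) to its nearest endmember."""
            angles = np.array([[spectral_angle(px, em) for em in endmembers]
                               for px in pixels])
            return angles.argmin(axis=1)

        # Toy 4-band spectra for V-I-S endmembers and two pixels
        endmembers = np.array([[0.10, 0.40, 0.30, 0.60],    # vegetation
                               [0.30, 0.30, 0.30, 0.30],    # impervious surface
                               [0.20, 0.25, 0.30, 0.35]])   # soil
        pixels = np.array([[0.12, 0.38, 0.31, 0.58], [0.29, 0.31, 0.30, 0.31]])
        print(sam_classify(pixels, endmembers))  # -> [0, 1]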

  19. Research on Big Data Attribute Selection Method in Submarine Optical Fiber Network Fault Diagnosis Database

    Directory of Open Access Journals (Sweden)

    Chen Ganlang

    2017-11-01

    At present, in the fault diagnosis database of submarine optical fiber networks, the attribute selection of big data is completed by detecting the attributes of the data, so the accuracy of big data attribute selection cannot be guaranteed. In this paper, a big data attribute selection method based on support vector machines (SVM) for the fault diagnosis database of submarine optical fiber networks is proposed. Big data in the fault diagnosis database are mined and their attribute weights calculated; attribute classification is then completed according to attribute weight, thereby completing the attribute selection. Experimental results prove that the proposed method can improve the accuracy of big data attribute selection in the fault diagnosis database of submarine optical fiber networks and has high practical value.
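
    One common way to derive attribute weights from an SVM, in the spirit of the method described, is to rank attributes by the magnitude of a linear SVM's coefficients. A scikit-learn sketch on synthetic data (a stand-in for the fault-diagnosis database, which is not public):

        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.datasets import make_classification

        # Synthetic stand-in for fault-diagnosis records: 200 samples, 10 attributes
        X, y = make_classification(n_samples=200, n_features=10, n_informative=3,
                                   random_state=0)
        svm = LinearSVC(dual=False).fit(X, y)
        weights = np.abs(svm.coef_).ravel()      # attribute weights
        selected = weights.argsort()[::-1][:3]   # keep the 3 heaviest attributes
        print("selected attribute indices:", selected)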

  20. Financial and non-financial methods of motivating employees in selected company

    OpenAIRE

    Nguyen, Hai Anh

    2015-01-01

    This bachelor thesis focuses on the financial and non-financial methods used to motivate employees in a selected company. The objective is to find out whether employees are satisfied with the existing methods, what the weaknesses of those methods are, and how they affect employees' motivation and thus their productivity at work. Findings on motivational methods and employee satisfaction are based on the author's observations as an employee in the company and on the results of an electronic...

  1. A Combined Weighting Method Based on Hybrid of Interval Evidence Fusion and Random Sampling

    Directory of Open Access Journals (Sweden)

    Ying Yan

    2017-01-01

    Due to the complexity of systems and lack of expertise, epistemic uncertainties may be present in experts' judgments on the importance of certain indices during group decision-making. A novel combination weighting method is proposed to solve the index weighting problem when various uncertainties are present in expert comments. Based on the ideas of evidence theory, various types of uncertain evaluation information are uniformly expressed through interval evidence structures. A similarity matrix between interval evidences is constructed, and the experts' information is fused. Comment grades are quantified using interval numbers, and a cumulative probability function for evaluating the importance of indices is constructed based on the fused information. Finally, index weights are obtained by Monte Carlo random sampling. The method can process expert information with varying degrees of uncertainty and possesses good compatibility. It avoids the difficulty of effectively fusing high-conflict group decision-making information and the large information loss that follows fusion, and original expert judgments are retained objectively throughout the processing procedure. Constructing the cumulative probability function and random sampling require no human intervention or judgment, and the method can easily be implemented as a computer program, giving it a clear advantage in evaluation practice for large index systems.
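
    The final step, turning fused interval-valued judgments into point weights by Monte Carlo sampling, can be illustrated as follows. The intervals are invented and the evidence-fusion stage is omitted; this is only a sketch of the sampling idea:

        import numpy as np

        rng = np.random.default_rng(0)
        # Fused importance intervals (lower, upper) for four indices; illustrative
        intervals = np.array([[0.20, 0.40], [0.10, 0.30], [0.30, 0.50], [0.05, 0.15]])

        n_draws = 100_000
        samples = rng.uniform(intervals[:, 0], intervals[:, 1],
                              size=(n_draws, len(intervals)))
        samples /= samples.sum(axis=1, keepdims=True)  # normalize each draw to sum 1
        weights = samples.mean(axis=0)                 # expected index weights
        print(weights.round(3))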

  2. The supersymmetry method for chiral random matrix theory with arbitrary rotation-invariant weights

    Science.gov (United States)

    Kaymak, Vural; Kieburg, Mario; Guhr, Thomas

    2014-07-01

    In the past few years, the supersymmetry method has been generalized to real symmetric, Hermitian, and Hermitian self-dual random matrices drawn from ensembles invariant under the orthogonal, unitary, and unitary symplectic groups, respectively. We extend this supersymmetry approach to chiral random matrix theory invariant under the three chiral unitary groups in a unifying way. Thereby we generalize a projection formula providing a direct link and, hence, a ‘short cut’ between the probability density in ordinary space and that in superspace. We emphasize that this point was one of the main problems and shortcomings of the supersymmetry method, since only implicit dualities between ordinary space and superspace were known before. To provide examples, we apply this approach to the calculation of the supersymmetric analogue of a Lorentzian (Cauchy) ensemble and an ensemble with a quartic potential. Moreover, we consider the partially quenched partition function of the three chiral Gaussian ensembles corresponding to four-dimensional continuum quantum chromodynamics. We identify a natural splitting of the chiral Lagrangian in its lowest order into a part for the physical mesons and a part associated with source terms generating the observables, e.g. the level density of the Dirac operator.

  3. Evaluation of Selection Methods for use in Life Cycle Impact Assessment

    DEFF Research Database (Denmark)

    Larsen, Henrik Fred; Birkved, Morten; Hauschild, Michael Zwicky

    2003-01-01

    Today very few LCA studies include ecotoxicity and human toxicity in the impact assessment, and those that do are typically highly incomplete. The reason seems to be that in many cases an extremely high number of chemical emissions from the inventory potentially contribute to the toxicity-related impact categories, and characterisation factors are provided by the applied impact assessment method for only a small part of them. This calls for a method that is able to select/prioritise those chemical emissions that contribute significantly to the toxicity-related impact categories. Such a method is called a selection method, and its overall aim is to focus the effort on significant chemical emissions when Life Cycle Impact Assessment is done on toxic releases. Today, experience from application of the few existing selection methods is very sparse, and the need for research within this area...

  4. Impedance measurement using a two-microphone, random-excitation method

    Science.gov (United States)

    Seybert, A. F.; Parrott, T. L.

    1978-01-01

    The feasibility of using a two-microphone, random-excitation technique for the measurement of acoustic impedance was studied. Equations were developed, including the effect of mean flow, which show that acoustic impedance is related to the pressure ratio and phase difference between two points in a duct carrying plane waves only. The impedances of a honeycomb ceramic specimen and a Helmholtz resonator were measured and compared with impedances obtained using the conventional standing-wave method. Agreement between the two methods was generally good. A sensitivity analysis was performed to pinpoint possible error sources and recommendations were made for future study. The two-microphone approach evaluated in this study appears to have some advantages over other impedance measuring techniques.
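
    For reference, the no-flow form of the two-microphone relation later standardized as the transfer-function method (ISO 10534-2) is sketched below; the paper's own equations additionally include mean-flow terms, which are omitted here. With microphone spacing $s$, distance $x_1$ from the specimen to the farther microphone, wavenumber $k$, and measured transfer function $H_{12} = p_2/p_1$:

        R = \frac{H_{12} - e^{-jks}}{e^{jks} - H_{12}} \, e^{2jkx_1},
        \qquad
        \frac{Z}{\rho c} = \frac{1 + R}{1 - R}.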

  5. Method and apparatus for monitoring a hydrocarbon-selective catalytic reduction device

    Science.gov (United States)

    Schmieg, Steven J; Viola, Michael B; Cheng, Shi-Wai S; Mulawa, Patricia A; Hilden, David L; Sloane, Thompson M; Lee, Jong H

    2014-05-06

    A method for monitoring a hydrocarbon-selective catalytic reactor device of an exhaust aftertreatment system of an internal combustion engine operating lean of stoichiometry includes injecting a reductant into an exhaust gas feedstream upstream of the hydrocarbon-selective catalytic reactor device at a predetermined mass flowrate of the reductant, and determining a space velocity associated with a predetermined forward portion of the hydrocarbon-selective catalytic reactor device. When the space velocity exceeds a predetermined threshold space velocity, a temperature differential across the predetermined forward portion of the hydrocarbon-selective catalytic reactor device is determined, and a threshold temperature as a function of the space velocity and the mass flowrate of the reductant is determined. If the temperature differential across the predetermined forward portion of the hydrocarbon-selective catalytic reactor device is below the threshold temperature, operation of the engine is controlled to regenerate the hydrocarbon-selective catalytic reactor device.

  6. Reduced plasma aldosterone concentrations in randomly selected patients with insulin-dependent diabetes mellitus.

    LENUS (Irish Health Repository)

    Cronin, C C

    2012-02-03

    Abnormalities of the renin-angiotensin system have been reported in patients with diabetes mellitus and with diabetic complications. In this study, plasma concentrations of prorenin, renin, and aldosterone were measured in a stratified random sample of 110 insulin-dependent (Type 1) diabetic patients attending our outpatient clinic. Fifty-four age- and sex-matched control subjects were also examined. Plasma prorenin concentration was higher in patients without complications than in control subjects when upright (geometric mean (95% CI): 75.9 (55.0-105.6) vs 45.1 (31.6-64.3) mU l-1, p < 0.05). There was no difference in plasma prorenin concentration between patients without and with microalbuminuria and between patients without and with background retinopathy. Plasma renin concentration, both when supine and upright, was similar in control subjects, in patients without complications, and in patients with varying degrees of diabetic microangiopathy. Plasma aldosterone was suppressed in patients without complications in comparison to control subjects (74 (58-95) vs 167 (140-199) ng l-1, p < 0.001) and was also suppressed in patients with microvascular disease. Plasma potassium was significantly higher in patients than in control subjects (mean ± standard deviation: 4.10 ± 0.36 vs 3.89 ± 0.26 mmol l-1; p < 0.001) and plasma sodium was significantly lower (138 ± 4 vs 140 ± 2 mmol l-1; p < 0.001). We conclude that plasma prorenin is not a useful early marker for diabetic microvascular disease. Despite apparently normal plasma renin concentrations, plasma aldosterone is suppressed in insulin-dependent diabetic patients.

  7. An improved random walk algorithm for the implicit Monte Carlo method

    Energy Technology Data Exchange (ETDEWEB)

    Keady, Kendra P., E-mail: keadyk@lanl.gov; Cleveland, Mathew A.

    2017-01-01

    In this work, we introduce a modified Implicit Monte Carlo (IMC) Random Walk (RW) algorithm, which increases simulation efficiency for multigroup radiative transfer problems with strongly frequency-dependent opacities. To date, the RW method has only been implemented in “fully-gray” form; that is, the multigroup IMC opacities are group-collapsed over the full frequency domain of the problem to obtain a gray diffusion problem for RW. This formulation works well for problems with large spatial cells and/or opacities that are weakly dependent on frequency; however, the efficiency of the RW method degrades when the spatial cells are thin or the opacities are a strong function of frequency. To address this inefficiency, we introduce a RW frequency group cutoff in each spatial cell, which divides the frequency domain into optically thick and optically thin components. In the modified algorithm, opacities for the RW diffusion problem are obtained by group-collapsing IMC opacities below the frequency group cutoff. Particles with frequencies above the cutoff are transported via standard IMC, while particles below the cutoff are eligible for RW. This greatly increases the total number of RW steps taken per IMC time-step, which in turn improves the efficiency of the simulation. We refer to this new method as Partially-Gray Random Walk (PGRW). We present numerical results for several multigroup radiative transfer problems, which show that the PGRW method is significantly more efficient than standard RW for several problems of interest. In general, PGRW decreases runtimes by a factor of ∼2–4 compared to standard RW, and a factor of ∼3–6 compared to standard IMC. While PGRW is slower than frequency-dependent Discrete Diffusion Monte Carlo (DDMC), it is also easier to adapt to unstructured meshes and can be used in spatial cells where DDMC is not applicable. This suggests that it may be optimal to employ both DDMC and PGRW in a single simulation.
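
    The dispatch logic at the heart of PGRW, routing a particle to random walk only when its frequency group lies below the cell's cutoff, can be sketched as follows. This is an illustrative toy in Python, not the authors' code; the names and data layout are assumptions.

        from dataclasses import dataclass

        @dataclass
        class Cell:
            rw_group_cutoff: int        # groups below this index are optically thick
            gray_opacity: float         # opacity group-collapsed below the cutoff
            group_opacities: list       # full multigroup opacities

        def choose_transport(group, cell):
            """PGRW-style dispatch: random walk for groups below the cutoff,
            standard IMC transport otherwise (toy illustration)."""
            if group < cell.rw_group_cutoff:
                return ("random_walk", cell.gray_opacity)
            return ("imc", cell.group_opacities[group])

        cell = Cell(rw_group_cutoff=3, gray_opacity=12.0,
                    group_opacities=[20.0, 15.0, 10.0, 0.5, 0.1])
        for g in range(5):
            print(g, choose_transport(g, cell))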

  8. Methods, computer readable media, and graphical user interfaces for analysis of frequency selective surfaces

    Science.gov (United States)

    Kotter, Dale K [Shelley, ID; Rohrbaugh, David T [Idaho Falls, ID

    2010-09-07

    A frequency selective surface (FSS) and associated methods for modeling, analyzing and designing the FSS are disclosed. The FSS includes a pattern of conductive material formed on a substrate to form an array of resonance elements. At least one aspect of the frequency selective surface is determined by defining a frequency range including multiple frequency values, determining a frequency dependent permittivity across the frequency range for the substrate, determining a frequency dependent conductivity across the frequency range for the conductive material, and analyzing the frequency selective surface using a method of moments analysis at each of the multiple frequency values for an incident electromagnetic energy impinging on the frequency selective surface. The frequency dependent permittivity and the frequency dependent conductivity are included in the method of moments analysis.

  9. Effectiveness of a selective, personality-targeted prevention program for adolescent alcohol use and misuse: a cluster randomized controlled trial.

    Science.gov (United States)

    Conrod, Patricia J; O'Leary-Barrett, Maeve; Newton, Nicola; Topper, Lauren; Castellanos-Ryan, Natalie; Mackie, Clare; Girard, Alain

    2013-03-01

    Selective school-based alcohol prevention programs targeting youth with personality risk factors for addiction and mental health problems have been found to reduce substance use and misuse in those with elevated personality profiles. To report 24-month outcomes of the Teacher-Delivered Personality-Targeted Interventions for Substance Misuse Trial (Adventure trial), in which school staff were trained to provide interventions to students with 1 of 4 high-risk (HR) profiles: anxiety sensitivity, hopelessness, impulsivity, and sensation seeking, and to examine the indirect herd effects of this program on the broader low-risk (LR) population of students who were not selected for intervention. Cluster randomized controlled trial. Secondary schools in London, United Kingdom. A total of 1210 HR and 1433 LR students in the ninth grade (mean [SD] age, 13.7 [0.33] years). Schools were randomized to provide brief personality-targeted interventions to HR youth or treatment as usual (statutory drug education in class). Participants were assessed for drinking, binge drinking, and problem drinking before randomization and at 6-monthly intervals for 2 years. Two-part latent growth models indicated long-term effects of the intervention on drinking rates (β = -0.320, SE = 0.145, P = .03) and binge drinking rates (β = -0.400, SE = 0.179, P = .03) and on growth in binge drinking (β = -0.716, SE = 0.274, P = .009) and problem drinking (β = -0.452, SE = 0.193, P = .02) for HR youth. The HR youth were also found to benefit from the interventions during the 24-month follow-up on drinking quantity (β = -0.098, SE = 0.047, P = .04), growth in drinking quantity (β = -0.176, SE = 0.073, P = .02), and growth in binge drinking frequency (β = -0.183, SE = 0.092, P = .047). Some herd effects in LR youth were observed, specifically on drinking rates (β = -0.259, SE = 0.132, P = .049) and growth of binge drinking (β = -0.244, SE = 0.073, P = .001), during the 24-month follow-up. Findings further ...

  10. Appropriate statistical methods were infrequently used in cluster-randomized crossover trials.

    Science.gov (United States)

    Arnup, Sarah J; Forbes, Andrew B; Kahan, Brennan C; Morgan, Katy E; McKenzie, Joanne E

    2016-06-01

    To assess the design and statistical methods used in cluster-randomized crossover (CRXO) trials. We undertook a systematic review of CRXO trials. Searches of MEDLINE, EMBASE, and CINAHL Plus; and citation searches of CRXO methodological articles were conducted to December 2014. We extracted data on design characteristics and statistical methods for sample size, data analysis, and handling of missing data. Ninety-one trials including 139 end point analyses met the inclusion criteria. Trials had a median of nine clusters [interquartile range (IQR), 4-21] and median cluster-period size of 30 individuals (IQR, 14-77); 58 (69%) trials had two periods, and 27 trials (30%) included the same individuals in all periods. A rationale for the design was reported in only 25 trials (27%). A sample size justification was provided in 53 (58%) trials. Only nine (10%) trials accounted appropriately for the design in their sample size calculation. Ten of the 12 cluster-level analyses used a method that accounted for the clustering and multiple-period aspects of the design. In contrast, only 4 of the 127 individual-level analyses used a potentially appropriate method. There is a need for improved application of appropriate analysis and sample size methods, and reporting, in CRXO trials. Copyright © 2015 Elsevier Inc. All rights reserved.
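
    An individual-level analysis that does account for both the clustering and the multiple-period structure is, for example, a linear mixed model with a cluster random intercept plus a cluster-period variance component. A statsmodels sketch on simulated data; the variable names and effect sizes are invented:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        rows = []
        for cluster in range(10):
            u = rng.normal(0, 0.5)                 # cluster random effect
            for period in (0, 1):
                treat = (cluster + period) % 2     # crossover: arms swap by period
                v = rng.normal(0, 0.3)             # cluster-period random effect
                for _ in range(30):                # 30 individuals per cluster-period
                    rows.append((cluster, period, treat,
                                 1.0 * treat + u + v + rng.normal()))
        df = pd.DataFrame(rows, columns=["cluster", "period", "treat", "y"])
        df["cp"] = df["cluster"].astype(str) + "_" + df["period"].astype(str)

        # Random intercept for cluster, plus cluster-period as a variance component
        model = smf.mixedlm("y ~ treat + period", df, groups="cluster",
                            vc_formula={"cp": "0 + C(cp)"})
        print(model.fit().summary())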

  11. Random Vibration Analysis of Train Moving over Slab Track on Bridge under Track Irregularities and Earthquakes by Pseudoexcitation Method

    National Research Council Canada - National Science Library

    Zeng, Zhiping; Zhu, Kunteng; He, Xianfeng; Xu, Wentao; Chen, Lingkun; Lou, Ping

    2015-01-01

      This paper investigates the random vibration and the dynamic reliability of operation stability of train moving over slab track on bridge under track irregularities and earthquakes by the pseudoexcitation method (PEM...

  12. Selection of single blastocysts for fresh transfer via standard morphology assessment alone and with array CGH for good prognosis IVF patients: results from a randomized pilot study

    Science.gov (United States)

    2012-01-01

    Background Single embryo transfer (SET) remains underutilized as a strategy to reduce multiple gestation risk in IVF, and its overall lower pregnancy rate underscores the need for improved techniques to select one embryo for fresh transfer. This study explored use of comprehensive chromosomal screening by array CGH (aCGH) to provide this advantage and improve pregnancy rate from SET. Methods First-time IVF patients with a good prognosis (age ...) ... IVF program to select single blastocysts for fresh SET in good prognosis patients. The observed aneuploidy rate (44.9%) among biopsied blastocysts highlights the inherent imprecision of SET when conventional morphology is used alone. Embryos randomized to the aCGH group implanted with greater efficiency, resulted in clinical pregnancy more often, and yielded a lower miscarriage rate than those selected without aCGH. Additional studies are needed to verify our pilot data and confirm a role for on-site, rapid aCGH for IVF patients contemplating fresh SET. PMID: 22551456

  13. A high order regularisation method for solving the Poisson equation and selected applications using vortex methods

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm

    A regularisation method for solving the Poisson equation using Green's functions is presented. The method is shown to obtain a convergence rate which corresponds to the design of the regularised Green's function, and a spectral-like convergence rate is obtained using a spectrally ideal regularisation. It is shown that the regularised Poisson solver can be extended to handle mixed periodic and free-space boundary conditions. This is done by solving the equation spectrally in the periodic directions, which yields a modified Helmholtz equation for the free-space directions that in turn is solved by deriving the appropriate regularised Green's functions. Using an analogy to the particle-particle particle-mesh method, a framework for calculating multi-resolution solutions using local refinement patches is presented. The regularised Poisson solver is shown to maintain a high-order converging solution for different...

  14. Preference option randomized design (PORD) for comparative effectiveness research: Statistical power for testing comparative effect, preference effect, selection effect, intent-to-treat effect, and overall effect.

    Science.gov (United States)

    Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith

    2017-01-01

    Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it forces participants to comply with a randomly assigned intervention regardless of their preference. Therefore, the randomized clinical trial may impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences if they are expressed, and to maintain randomization, we propose an alternative design that allows participants' preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed consent process before randomization. Specifically, the consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered to not have an alternate preference. In sum, the preference option randomized design intends to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme makes it possible to define five effects that are interconnected through common design parameters (comparative, preference, selection, intent-to-treat, and overall/as-treated) to collectively guide decision making between interventions. Statistical power functions for testing...

  15. Porosity testing methods for the quality assessment of selective laser melted parts

    NARCIS (Netherlands)

    Wits, Wessel W.; Carmignato, Simone; Zanini, Filippo; Vaneker, Thomas H.J.

    2016-01-01

    This study focuses on the comparison of porosity testing methods for the quality assessment of selective laser melted parts. Porosity is regarded as an important quality indicator in metal additive manufacturing. Various destructive and non-destructive testing methods are compared, ranging from global...

  16. Determination of Selected Volatiles in Cigarette Mainstream Smoke. The CORESTA 2009 Collaborative Study and Recommended Method

    Directory of Open Access Journals (Sweden)

    Intorp M

    2014-12-01

    A recommended method has been developed and published by CORESTA, applicable to the quantification of selected volatiles (1,3-butadiene, isoprene, acrylonitrile, benzene, and toluene) in the gas phase of cigarette mainstream smoke. The method involves smoke collection in impinger traps and detection and measurement using gas chromatography/mass spectrometry techniques.

  17. Comparison of Selected Methods for the Enumeration of Fecal Coliforms and Escherichia coli in Shellfish

    Science.gov (United States)

    Grabow, W. O. K.; De Villiers, J. C.; Schildhauer, C. I.

    1992-01-01

    In a comparison of five selected methods for the enumeration of fecal coliforms and Escherichia coli in naturally contaminated and sewage-seeded mussels (Choromytilus spp.) and oysters (Ostrea spp.), a spread-plate procedure with mFC agar without rosolic acid and preincubation proved the method of choice for routine quality assessment. PMID:1444438

  18. Contraceptive Method Initiation: Using the Centers for Disease Control and Prevention Selected Practice Guidelines.

    Science.gov (United States)

    Wu, Wan-Ju; Edelman, Alison

    2015-12-01

    The US Selected Practice Recommendations is a companion document to the Medical Eligibility Criteria for Contraceptive Use that focuses on how providers can use contraceptive methods most effectively as well as problem-solve common issues that may arise. These guidelines serve to help clinicians provide contraception safely as well as to decrease barriers that prevent or delay a woman from obtaining a desired method. This article summarizes the Selected Practice Recommendations on timing of contraceptive initiation, examinations, and tests needed prior to starting a method and any necessary follow-up. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. A Dynamic and Adaptive Selection Radar Tracking Method Based on Information Entropy

    Directory of Open Access Journals (Sweden)

    Ge Jianjun

    2017-12-01

    Nowadays, the battlefield environment has become much more complex and variable. Based on the principle of information entropy, this paper presents a quantitative measure, with a lower bound, of the amount of target information acquired from multiple radar observations, in order to adaptively and dynamically organize the detection of battlefield resources. Furthermore, to minimize the given lower bound on the information entropy of the target measurement at each moment, a method is proposed that dynamically and adaptively selects radars carrying a high amount of target information for tracking. The simulation results indicate that the proposed method has higher tracking accuracy than tracking without entropy-based adaptive radar selection.
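
    The selection rule sketched in the abstract, choosing the radars whose measurements carry the most target information, can be illustrated by ranking radars on the differential entropy of their measurement errors; for Gaussian errors, lower entropy means more information. A toy numpy sketch with invented covariances:

        import numpy as np

        def gaussian_entropy(cov):
            """Differential entropy (nats) of a Gaussian with covariance `cov`."""
            d = cov.shape[0]
            return 0.5 * (d * (1 + np.log(2 * np.pi)) + np.log(np.linalg.det(cov)))

        # Invented 2-D measurement-error covariances for three radars
        covs = [np.diag([4.0, 4.0]), np.diag([1.0, 2.0]), np.diag([0.5, 9.0])]
        entropies = [gaussian_entropy(c) for c in covs]
        best = int(np.argmin(entropies))   # select the most informative radar
        print(entropies, "-> select radar", best)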

  20. Computational Experiment Study on Selection Mechanism of Project Delivery Method Based on Complex Factors

    Directory of Open Access Journals (Sweden)

    Xiang Ding

    2014-01-01

    Project delivery planning is a key stage used by the project owner (or project investor) for organizing design, construction, and other operations in a construction project. The main task in this stage is to select an appropriate project delivery method (PDM). In order to analyze the different factors affecting PDM selection, this paper establishes a multiagent model mainly to show how project complexity, governance strength, and market environment affect the project owner's decision on PDM. Experiment results show that project owners usually choose the Design-Build method when the project is complex within a certain range. The paper also points out that the Design-Build method will be the preferred choice when the potential contractors develop quickly. This paper provides owners with methods and suggestions by showing how these factors affect PDM selection, and it may improve project performance.

  1. HIV-1 protease cleavage site prediction based on two-stage feature selection method.

    Science.gov (United States)

    Niu, Bing; Yuan, Xiao-Cheng; Roeper, Preston; Su, Qiang; Peng, Chun-Rong; Yin, Jing-Yuan; Ding, Juan; Li, HaiPeng; Lu, Wen-Cong

    2013-03-01

    Knowledge of the mechanism of HIV protease cleavage specificity is critical to the design of specific and effective HIV inhibitors. Searching for an accurate, robust, and rapid method to correctly predict the cleavage sites in proteins is crucial when searching for possible HIV inhibitors. In this article, HIV-1 protease specificity was studied using the correlation-based feature subset (CfsSubset) selection method combined with a genetic algorithm. Thirty important biochemical features were found based on a jackknife test from the original data set containing 4,248 features. Using the AdaBoost method with the thirty selected features, the prediction model yields an accuracy of 96.7% for the jackknife test and 92.1% for an independent set test, an increase in accuracy over the original dataset of 6.7% and 77.4%, respectively. Our feature selection scheme could be a useful technique for finding effective competitive inhibitors of HIV protease.
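
    The two-stage pipeline, filter-style feature selection followed by an AdaBoost classifier, can be sketched with scikit-learn. Here univariate mutual-information selection stands in for the paper's CfsSubset plus genetic algorithm stage, and the data are synthetic:

        from sklearn.datasets import make_classification
        from sklearn.ensemble import AdaBoostClassifier
        from sklearn.feature_selection import SelectKBest, mutual_info_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline

        # Synthetic stand-in for the 4,248-feature cleavage-site data
        X, y = make_classification(n_samples=300, n_features=500, n_informative=30,
                                   random_state=0)
        # Stage 1: select 30 features; Stage 2: AdaBoost on the reduced set
        pipe = make_pipeline(SelectKBest(mutual_info_classif, k=30),
                             AdaBoostClassifier(n_estimators=200, random_state=0))
        print(cross_val_score(pipe, X, y, cv=5).mean())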

  2. A Novel Technique of Error Concealment Method Selection in Texture Images Using ALBP Classifier

    Directory of Open Access Journals (Sweden)

    Z. Tothova

    2010-06-01

    There are many error concealment techniques for image processing. This paper focuses on the restoration of images with missing blocks or macroblocks. Different methods can be optimal for different kinds of images. In recent years, great attention has been dedicated to textures, and specific methods have been developed for their processing. Many of them use classification of textures as an integral part, and knowing the texture class is also an advantage when selecting the best restoration technique. In this paper, selection based on texture classification with advanced local binary patterns (ALBP) and spatial distribution of dominant patterns is proposed. It is shown that, for classified textures, the optimal error concealment method can be selected from predefined ones, resulting in better restoration. For testing, three methods of extrapolation and texture synthesis were used.
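
    The classification features in such methods are typically histograms of local binary pattern (LBP) codes; the paper's ALBP is an advanced variant of these. A minimal sketch with scikit-image on a synthetic texture (a nearest-neighbour comparison of such histograms would then pick the concealment method):

        import numpy as np
        from skimage.feature import local_binary_pattern

        rng = np.random.default_rng(0)
        texture = rng.integers(0, 256, size=(64, 64)).astype(np.uint8)

        # Uniform LBP codes over an 8-neighbour, radius-1 pattern
        codes = local_binary_pattern(texture, P=8, R=1, method="uniform")
        hist, _ = np.histogram(codes, bins=int(codes.max()) + 1, density=True)
        print(hist.round(3))   # pattern histogram used as the texture descriptor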

  3. Single-chain lipopeptide vaccines for the induction of virus-specific cytotoxic T cell responses in randomly selected populations.

    Science.gov (United States)

    Gras-Masse, H

    2001-12-01

    Effective vaccine development is now taking advantage of the rapidly accumulating information concerning the molecular basis of a protective immune response. Analysts and medicinal chemists have joined forces with immunologists and taken up the clear challenge of identifying immunologically active structural elements and synthesizing them in pure, reproducible forms. Current literature reveals the growing interest for extremely reductionist approaches aiming at producing totally synthetic vaccines that would be fully defined at the molecular level and particularly safe. The sequential information contained in these formulations tends to be minimized to those epitopes which elicit neutralizing antibodies, or cell-mediated responses. In the following review, we describe some of our results in developing fully synthetic, clinically acceptable lipopeptide vaccines for inducing cytotoxic T lymphocytes (CTL) responses in randomly selected populations.

  4. Psychotropic medication in a randomly selected group of citizens receiving residential or home care

    DEFF Research Database (Denmark)

    Futtrup, Tina Bergmann; Helnæs, Ann Kathrine; Schultz, Hanne

    2014-01-01

    INTRODUCTION: Treatment with one or more psychotropic medications (PMs), especially in the elderly, is associated with risk, and the effects of treatment are poorly validated. The aim of this article was to describe the use of PM in a population of citizens receiving either residential care or home care, with focus on the prevalence of drug use, the combination of different PMs, and doses in relation to current recommendations. METHODS: The medication lists of 214 citizens receiving residential care (122) and home care (92) were collected together with information on age, gender and residential...

  5. Mode selection of modal expansion method estimating vibration field of washing machine

    Science.gov (United States)

    Jung, B. K.; Jeong, W. B.

    2015-03-01

    This paper presents a study estimating the vibration and radiated noise of a washing machine using a modal expansion method (MEM) with mode selection. MEM is a technique that identifies the vibration field from a portion of the eigenvectors (or mode shapes) of a structure, so the selection of the eigenvectors has a large impact on the identified vibration results. However, there have been few studies on selecting the eigenvectors with respect to structural vibration and radiated noise estimation. Accordingly, this paper proposes a new mode selection method to identify the vibration based on the MEM and then calculates the radiated noise of a washing machine. The results were also compared with measurements: the vibration and noise results of numerical analysis using the proposed selection method are in line with the measured results. The proposed selection method corresponds well with the MEM, and the process seems applicable to the estimation of vibration and radiated noise for various structures.
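
    The modal expansion step itself, estimating the full vibration field from a few measured points via a least-squares fit of selected mode shapes, can be shown in a few lines. A 1-D toy problem in numpy; the paper's actual contribution, the mode selection criterion, is not reproduced here:

        import numpy as np

        n_dof, n_modes = 100, 5
        x = np.linspace(0, 1, n_dof)
        # Toy mode shapes: sine modes of a pinned-pinned beam
        Phi = np.stack([np.sin((m + 1) * np.pi * x) for m in range(n_modes)], axis=1)

        true_q = np.array([1.0, 0.0, 0.4, 0.0, 0.1])   # true modal coordinates
        field = Phi @ true_q                           # full vibration field

        measured_idx = [7, 23, 41, 66, 88, 95]         # a few sensor locations
        y = field[measured_idx] + 0.01 * np.random.default_rng(0).normal(size=6)

        # MEM core: least-squares modal coordinates from the measured subset
        q_hat, *_ = np.linalg.lstsq(Phi[measured_idx], y, rcond=None)
        field_hat = Phi @ q_hat                        # identified full field
        print(np.max(np.abs(field_hat - field)))       # small reconstruction error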

  6. Historical background: Why is it important to improve automated particle selection methods?

    Energy Technology Data Exchange (ETDEWEB)

    Glaeser, Robert M.

    2003-08-14

    A current trend in single-particle electron microscopy is to compute three-dimensional reconstructions with ever-increasing numbers of particles in the data sets. Since manual, or even semi-automated, selection of particles represents a major bottleneck when the data set exceeds several thousand particles, there is growing interest in developing automatic methods for selecting images of individual particles. Except in special cases, however, it has proven difficult to achieve the degree of efficiency and reliability that would make fully automated particle selection a useful tool. The simplest methods such as cross correlation (i.e., matched filtering) do not perform well enough to be used for fully automated particle selection. Geometric properties (area, perimeter-to-area ratio, etc.) and the integrated "mass" of candidate particles are additional factors that could improve automated particle selection if suitable methods of contouring particles could be developed. Another suggestion is that data be always collected as pairs of images, the first taken at low defocus (to capture information at the highest possible resolution) and the second at very high defocus (to improve the visibility of the particle). Finally, it is emphasized that well-annotated, open-access data sets need to be established in order to encourage the further development and validation of methods for automated particle selection.
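
    The cross-correlation (matched filter) baseline mentioned above is easy to state: correlate the micrograph with a particle template and take peaks as candidate particles. A minimal 2-D sketch with SciPy on a synthetic image:

        import numpy as np
        from scipy.signal import correlate2d

        rng = np.random.default_rng(0)
        image = rng.normal(size=(64, 64))
        template = np.ones((5, 5))          # toy "particle" template
        image[30:35, 20:25] += 3.0          # embed one particle

        score = correlate2d(image, template, mode="same")
        peak = np.unravel_index(score.argmax(), score.shape)
        print(peak)   # near (32, 22): the embedded particle's center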

  7. A fast gene selection method for multi-cancer classification using multiple support vector data description.

    Science.gov (United States)

    Cao, Jin; Zhang, Li; Wang, Bangjun; Li, Fanzhang; Yang, Jiwen

    2015-02-01

    For cancer classification problems based on gene expression, the data usually contain only a few dozen samples but thousands to tens of thousands of genes, which may include a large number of irrelevant genes. A robust feature selection algorithm is required to remove irrelevant genes and choose the informative ones. Support vector data description (SVDD) has been applied to gene selection for many years. However, SVDD cannot address problems with multiple classes, since it only considers the target class. In addition, applying SVDD to gene selection is time-consuming. This paper proposes a novel fast feature selection method based on multiple SVDD and applies it to multi-class microarray data. A recursive feature elimination (RFE) scheme is introduced to iteratively remove irrelevant features, so the proposed method is called multiple SVDD-RFE (MSVDD-RFE). To make full use of all classes for a given task, MSVDD-RFE independently selects a relevant gene subset for each class. The final selected gene subset is the union of these relevant gene subsets. The effectiveness and accuracy of MSVDD-RFE are validated by experiments on five publicly available microarray datasets. Our proposed method is faster and more effective than other methods. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. Comparison of statistical methods for identification of Streptococcus thermophilus, Enterococcus faecalis, and Enterococcus faecium from randomly amplified polymorphic DNA patterns.

    Science.gov (United States)

    Moschetti, G; Blaiotta, G; Villani, F; Coppola, S; Parente, E

    2001-05-01

    Thermophilic streptococci play an important role in the manufacture of many European cheeses, and a rapid and reliable method for their identification is needed. Randomly amplified polymorphic DNA (RAPD) PCR (RAPD-PCR) with two different primers coupled to hierarchical cluster analysis has proven to be a powerful tool for the classification and typing of Streptococcus thermophilus, Enterococcus faecium, and Enterococcus faecalis (G. Moschetti, G. Blaiotta, M. Aponte, P. Catzeddu, F. Villani, P. Deiana, and S. Coppola, J. Appl. Microbiol. 85:25-36, 1998). In order to develop a fast and inexpensive method for the identification of thermophilic streptococci, RAPD-PCR patterns were generated with a single primer (XD9), and the results were analyzed using artificial neural networks (Multilayer Perceptron, Radial Basis Function network, and Bayesian network) and multivariate statistical techniques (cluster analysis, linear discriminant analysis, and classification trees). Cluster analysis allowed the identification of S. thermophilus but not of enterococci. A Bayesian network proved to be more effective than a Multilayer Perceptron or a Radial Basis Function network for the identification of S. thermophilus, E. faecium, and E. faecalis using simplified RAPD-PCR patterns (obtained by summing the bands in selected areas of the patterns). The Bayesian network also significantly outperformed two multivariate statistical techniques (linear discriminant analysis and classification trees) and proved to be less sensitive to the size of the training set and more robust in the response to patterns belonging to unknown species.
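
    The structure of the paper's Bayesian network is not given in the abstract; as a loose stand-in, the sketch below classifies simplified patterns (band intensities summed over fixed gel regions) with a naive Bayes model, which illustrates the pattern-simplification step rather than the exact classifier.

        # Simplify RAPD-PCR densitometric profiles and classify them.
        import numpy as np
        from sklearn.naive_bayes import GaussianNB

        def simplify_patterns(profiles, region_edges):
            """Sum each lane profile over fixed regions of the gel."""
            return np.array([[p[a:b].sum() for a, b in region_edges]
                             for p in profiles])

        # Hypothetical layout: 600-point profiles summed into 4 regions
        regions = [(0, 150), (150, 300), (300, 450), (450, 600)]
        # X = simplify_patterns(profiles, regions); y = species labels
        # clf = GaussianNB().fit(X, y); predictions = clf.predict(X_new)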

  9. Fuzzy norm method for evaluating random vibration of airborne platform from limited PSD data

    Directory of Open Access Journals (Sweden)

    Wang Zhongyu

    2014-12-01

    For the random vibration of an airborne platform, accurate evaluation is a key indicator for ensuring normal operation of airborne equipment in flight. However, only limited power spectral density (PSD) data can be obtained at the flight-test stage, so conventional evaluation methods cannot be employed when the distribution characteristics and prior information are unknown. In this paper, the fuzzy norm method (FNM) is proposed, which combines the advantages of fuzzy theory and norm theory. The proposed method can extract system information from limited data without taking the probability distribution into account. Firstly, the FNM is employed to evaluate the variable interval and expanded uncertainty from limited PSD data, and the performance of the FNM is demonstrated by the confidence level, reliability, and computing accuracy of the expanded uncertainty. In addition, the optimal fuzzy parameters are discussed to meet the requirements of aviation standards and metrological practice. Finally, computer simulation is used to prove the adaptability of the FNM. Compared with statistical methods, the FNM is superior for evaluating expanded uncertainty from limited data. The results show that the reliability of the calculation and evaluation exceeds 95%.

  10. Methods of Reverberation Mapping. I. Time-lag Determination by Measures of Randomness

    Energy Technology Data Exchange (ETDEWEB)

    Chelouche, Doron; Pozo-Nuñez, Francisco [Department of Physics, Faculty of Natural Sciences, University of Haifa, Haifa 3498838 (Israel); Zucker, Shay, E-mail: doron@sci.haifa.ac.il, E-mail: francisco.pozon@gmail.com, E-mail: shayz@post.tau.ac.il [Department of Geosciences, Raymond and Beverly Sackler Faculty of Exact Sciences, Tel Aviv University, Tel Aviv 6997801 (Israel)

    2017-08-01

    A class of methods for measuring time delays between astronomical time series is introduced in the context of quasar reverberation mapping, which is based on measures of randomness or complexity of the data. Several distinct statistical estimators are considered that do not rely on polynomial interpolations of the light curves nor on their stochastic modeling, and do not require binning in correlation space. Methods based on von Neumann’s mean-square successive-difference estimator are found to be superior to those using other estimators. An optimized von Neumann scheme is formulated, which better handles sparsely sampled data and outperforms current implementations of discrete correlation function methods. This scheme is applied to existing reverberation data of varying quality, and consistency with previously reported time delays is found. In particular, the size–luminosity relation of the broad-line region in quasars is recovered with a scatter comparable to that obtained by other works, yet with fewer assumptions made concerning the process underlying the variability. The proposed method for time-lag determination is particularly relevant for irregularly sampled time series, and in cases where the process underlying the variability cannot be adequately modeled.
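
    A minimal version of the core idea, that the correct lag makes the merged light curve least "random", can be sketched as below; this is our simplified reading of the scheme, without the optimized weighting the paper introduces, and it assumes the two light curves are already on a common flux scale.

        # Time-lag estimation by minimizing von Neumann's mean-square
        # successive difference on the combined, time-sorted light curve.
        import numpy as np

        def von_neumann(t1, f1, t2, f2, lag):
            """Randomness measure of the two series merged at a trial lag."""
            t = np.concatenate([t1, t2 - lag])
            f = np.concatenate([f1, f2])
            fs = f[np.argsort(t)]               # one time-ordered series
            return np.mean(np.diff(fs) ** 2)    # mean-square successive difference

        def best_lag(t1, f1, t2, f2, trial_lags):
            """Lag at which the merged series is smoothest."""
            scores = [von_neumann(t1, f1, t2, f2, lag) for lag in trial_lags]
            return trial_lags[int(np.argmin(scores))]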

  12. SORIOS – A method for evaluating and selecting environmental certificates and labels

    DEFF Research Database (Denmark)

    Kikkenborg Pedersen, Dennis; Dukovska-Popovska, Iskra; Ola Strandhagen, Jan

    2012-01-01

    This paper presents a general method for evaluating and selecting environmental certificates and labels for companies to use on products and services. The method is developed based on a case study using a Grounded Theory approach. The result is a generalized six-step method that features an initial searching strategy and an evaluation model that weighs the prerequisites, rewards, and organization of a certificate or label against the strategic needs of a company.

  13. Comparison of injection methods in myofascial pain syndrome: a randomized controlled trial.

    Science.gov (United States)

    Ay, Saime; Evcik, Deniz; Tur, Birkan Sonel

    2010-01-01

    In this study, we aimed to compare the efficacy of local anesthetic injection and dry needling on pain, cervical range of motion (ROM), and depression in patients with myofascial pain syndrome (MPS). This study was designed as a prospective randomized controlled study. Eighty patients (52 female/28 male) admitted to a physical medicine and rehabilitation outpatient clinic and diagnosed with MPS were included. Patients were randomly assigned to two groups: group 1 (n = 40) received local anesthetic injection (2 ml of 1% lidocaine) and group 2 (n = 40) received dry needling on trigger points. Both groups were given stretching exercises for the trapezius muscle to be performed at home. Patients were evaluated for pain, cervical ROM, and depression. Pain was assessed using a Visual Analog Scale (VAS), active cervical ROM was measured using goniometry, and the Beck Depression Inventory (BDI) was used to assess the level of depression. There were no statistically significant differences in the pre-treatment evaluation parameters between the groups. There were statistically significant improvements in VAS, cervical ROM, and BDI scores after 4 and 12 weeks in both groups compared with pre-treatment results (p < 0.05). Our study indicated that exercise combined with either local anesthetic or dry needling injections was effective in decreasing pain levels and depressive mood and increasing cervical ROM in MPS.

  14. A computational method for selecting short peptide sequences for inorganic material binding.

    Science.gov (United States)

    Nayebi, Niloofar; Cetinel, Sibel; Omar, Sara Ibrahim; Tuszynski, Jack A; Montemagno, Carlo

    2017-11-01

    Discovering or designing biofunctionalized materials of improved quality depends strongly on the ability to manipulate and control the peptide-inorganic interaction. Various peptides can be used as assemblers, synthesizers, and linkers in material syntheses. In another context, specific and selective material-binding peptides can be used as recognition blocks in mining applications. In this study, we propose a new in silico method to select short 4-mer peptides with high affinity and selectivity for a given target material. The method is illustrated with the calcite (104) surface as an example and has been experimentally validated. A calcite-binding peptide can play an important role in our understanding of biomineralization; a practical application is the need to selectively depress calcite at mining sites.

  15. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many algorithms have been proposed for this problem, either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches significantly outperform the state-of-the-art algorithm.
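
    The factorized information criterion itself is not detailed in the abstract; as an analogy only, the sketch below automates model selection by minimizing BIC over candidate cluster counts for the attribute part of the data, ignoring the graph structure that the paper's methods also exploit.

        # Automatic choice of the cluster number via an information criterion.
        import numpy as np
        from sklearn.mixture import GaussianMixture

        def select_k_by_bic(node_attributes, k_max=10):
            """Return the cluster count whose fitted mixture minimizes BIC."""
            best_k, best_bic = 1, np.inf
            for k in range(1, k_max + 1):
                gm = GaussianMixture(n_components=k, random_state=0)
                bic = gm.fit(node_attributes).bic(node_attributes)
                if bic < best_bic:
                    best_k, best_bic = k, bic
            return best_k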

  16. A rapid and scalable method for selecting recombinant mouse monoclonal antibodies

    Directory of Open Access Journals (Sweden)

    Wright Gavin J

    2010-06-01

    Background: Monoclonal antibodies with high affinity and selectivity that work on wholemount fixed tissues are valuable reagents to the cell and developmental biologist, and yet isolating them remains a long and unpredictable process. Here we report a rapid and scalable method to select and express recombinant mouse monoclonal antibodies that are essentially equivalent to those secreted by parental IgG-isotype hybridomas. Results: Increased throughput was achieved by immunizing mice with pools of antigens and cloning, from small numbers of hybridoma cells, the functionally rearranged light and heavy chains into a single expression plasmid. By immunizing with the ectodomains of zebrafish cell surface receptor proteins expressed in mammalian cells and screening for formalin-resistant epitopes, we selected antibodies that gave the expected staining patterns on wholemount fixed zebrafish embryos. Conclusions: This method can be used to quickly select several high-quality monoclonal antibodies from a single immunized mouse and facilitates their distribution using plasmids.

  17. The MCDM Model for Personnel Selection Based on SWARA and ARAS Methods

    Directory of Open Access Journals (Sweden)

    Darjan Karabasevic

    2015-05-01

    Competent employees are the key resource of an organization for achieving success and, therefore, competitiveness in the market. The aim of the recruitment and selection process is to acquire personnel with the competencies required for a particular position within the company. Bearing in mind that decision-makers often underuse formal decision-making methods, this paper aims to establish an MCDM model for the evaluation and selection of candidates in the recruitment process based on the SWARA and ARAS methods. In addition to the MCDM model, the paper provides a set of evaluation criteria for the position of a sales manager (middle management) in the telecommunications industry, which is also used in the numerical example. On the basis of the numerical example, the proposed MCDM model can be successfully used for selecting candidates in the employment process.
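
    Both methods have standard closed forms, which the sketch below implements: SWARA turns expert comparative-importance judgments into weights, and ARAS ranks alternatives by their utility degree relative to an ideal alternative. Any real criteria values would come from the paper's sales-manager case study; nothing in the sketch is specific to it.

        # SWARA weighting and ARAS ranking in their standard formulations.
        import numpy as np

        def swara_weights(s):
            """s[j] is the comparative importance of criterion j relative to
            criterion j-1, for criteria sorted by rank, with s[0] = 0."""
            k = 1.0 + np.asarray(s, dtype=float)   # k_j = s_j + 1
            q = np.cumprod(1.0 / k)                # q_j = q_{j-1} / k_j
            return q / q.sum()                     # normalized weights

        def aras_utility(matrix, weights, benefit):
            """Utility degrees K_i of alternatives (rows) over criteria (cols);
            `benefit` marks benefit criteria, cost criteria are inverted."""
            X = np.asarray(matrix, dtype=float).copy()
            benefit = np.asarray(benefit, dtype=bool)
            X[:, ~benefit] = 1.0 / X[:, ~benefit]
            X = np.vstack([X.max(axis=0), X])      # prepend optimal alternative
            X = X / X.sum(axis=0)                  # sum-normalize each criterion
            S = X @ np.asarray(weights, dtype=float)
            return S[1:] / S[0]                    # K_i in (0, 1]; larger is better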

  18. Using the AHP Method to Select an ERP System for an SME Manufacturing Company

    Directory of Open Access Journals (Sweden)

    Kłos Sławomir

    2014-09-01

    This paper proposes the application of the Analytic Hierarchy Process (AHP) method to support decision making regarding the selection of an Enterprise Resource Planning (ERP) system in a manufacturing company. The main assumption of the work is that the most important selection criteria concern the functionality of the ERP system; beyond this, total cost of ownership, technical support, implementation time, and vendor experience are taken into consideration to ensure a successful ERP implementation. The proposed ERP selection procedure is intended for small and medium manufacturing enterprises. A structure of attributes for the AHP method is proposed on the basis of an analysis and identification of critical success factors. Different kinds of production (make-to-stock, make-to-order, and engineer-to-order) are taken into consideration. Illustrative examples are also given.
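
    The numerical core of AHP is the same regardless of the attribute structure: priorities are the principal eigenvector of a reciprocal pairwise comparison matrix, and judgments are accepted only if the consistency ratio is below about 0.1. The sketch below shows that computation with hypothetical judgments, not the paper's full attribute hierarchy.

        # AHP priorities and consistency ratio from a comparison matrix.
        import numpy as np

        RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24,
              7: 1.32, 8: 1.41, 9: 1.45}            # Saaty's random indices

        def ahp_priorities(A):
            """Weights and consistency ratio of a reciprocal matrix A."""
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            i = int(np.argmax(eigvals.real))         # principal eigenpair
            w = np.abs(eigvecs[:, i].real)
            w /= w.sum()
            ci = (eigvals[i].real - n) / (n - 1)     # consistency index
            return w, ci / RI[n]                     # CR < 0.1 is acceptable

        # Hypothetical judgments for three ERP criteria, e.g. functionality,
        # total cost of ownership, and vendor support
        A = [[1, 3, 5],
             [1/3, 1, 2],
             [1/5, 1/2, 1]]
        # weights, cr = ahp_priorities(A)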

  19. Improved ion-selective detection method using nanopipette with poly(vinyl chloride)-based membrane.

    Science.gov (United States)

    Kang, Eun Ji; Takami, Tomohide; Deng, Xiao Long; Son, Jong Wan; Kawai, Tomoji; Park, Bae Ho

    2014-05-15

    Ion-selective electrodes (ISEs) are widely used to detect targeted ions in solution selectively. Applying an ISE to a small-area detection system with a nanopipette requires a special measurement method to avoid the enhanced background signal caused by the cation-rich layer near the charged inner surface of the nanopipette and the change in selectivity due to the relatively fast saturation of the ISE inside the nanopipette. We developed a novel ion-selective detection system using a nanopipette that measures an alternating current (AC) signal mediated by saturated ionophores in a poly(vinyl chloride) (PVC) membrane located at the conical shank of the nanopipette, solving the above problems. Small but reliable K+ and Na+ ionic currents passing through a PVC membrane containing saturated bis(benzo-15-crown-5) and bis(12-crown-4) ionophores, respectively, could be selectively detected using the AC measurement system equipped with a lock-in amplifier.
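
    The lock-in step at the end is a standard signal-processing trick: multiplying the measured current by in-phase and quadrature references at the excitation frequency and averaging rejects everything except the component at that frequency. The numbers below are hypothetical, chosen only to show a small tone recovered from much larger noise.

        # Numerical illustration of lock-in detection of a small AC current.
        import numpy as np

        def lockin_amplitude(signal, t, f_ref):
            """Amplitude of the component of `signal` at frequency f_ref."""
            i_comp = np.mean(signal * np.sin(2 * np.pi * f_ref * t))  # in-phase
            q_comp = np.mean(signal * np.cos(2 * np.pi * f_ref * t))  # quadrature
            return 2.0 * np.hypot(i_comp, q_comp)

        t = np.arange(0, 1.0, 1e-5)                    # 1 s sampled at 100 kHz
        tone = 10e-12 * np.sin(2 * np.pi * 1e3 * t)    # 10 pA signal at 1 kHz
        noise = 1e-10 * np.random.randn(t.size)        # noise ten times larger
        # lockin_amplitude(tone + noise, t, 1e3) returns roughly 1e-11 (10 pA)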

  20. An Integrated Intuitionistic Fuzzy Multi Criteria Decision Making Method for Facility Location Selection

    OpenAIRE

    Boran, Fatih

    2011-01-01

    Facility location selection, one of the important activities in strategic planning for a wide range of private and public companies, is a multi-criteria decision-making problem that includes both quantitative and qualitative criteria. Traditional methods for facility location selection cannot handle the problem effectively because, under many conditions, the relevant information cannot be represented precisely. This paper proposes the integration of intuitionistic fuzzy preference relation a...
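
    Since the abstract is truncated, the sketch below is only a generic illustration of intuitionistic fuzzy aggregation for a location problem: each rating is a (membership, non-membership) pair, criterion ratings are combined with the standard IFWA operator, and alternatives are ranked by the score function s = mu - nu. None of it is specific to this paper's method.

        # Intuitionistic fuzzy weighted averaging (IFWA) and score ranking.
        import numpy as np

        def ifwa(ratings, weights):
            """Aggregate (mu, nu) criterion ratings for one alternative."""
            mu = np.array([r[0] for r in ratings])
            nu = np.array([r[1] for r in ratings])
            w = np.asarray(weights, dtype=float)
            return 1.0 - np.prod((1.0 - mu) ** w), np.prod(nu ** w)

        def rank_locations(alternatives, weights):
            """Indices of alternatives sorted by descending score mu - nu."""
            scores = [m - n for m, n in (ifwa(r, weights) for r in alternatives)]
            return list(np.argsort(scores)[::-1])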