WorldWideScience

Sample records for model selection based

  1. Uniform design based SVM model selection for face recognition

    Science.gov (United States)

    Li, Weihong; Liu, Lijuan; Gong, Weiguo

    2010-02-01

    Support vector machines (SVMs) have proven to be a powerful tool for face recognition. The generalization capacity of an SVM depends on a model with optimal hyperparameters, but the computational cost of SVM model selection makes it difficult to apply in face recognition. To overcome this shortcoming, we exploit the advantages of uniform design, a space-filling design grounded in uniform-scattering theory, to search for optimal SVM hyperparameters. We then propose a face recognition scheme based on an SVM with the optimal model, obtained by replacing grid and gradient-based methods with uniform design. Experimental results on the Yale and PIE face databases show that the proposed method significantly improves the efficiency of SVM model selection.
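
    The search idea above can be sketched compactly. The snippet below evaluates SVM hyperparameters over a space-filling design; a Sobol low-discrepancy set stands in for the uniform design table, and scikit-learn's digits dataset stands in for a face database (both stand-ins are assumptions, not the authors' setup).

```python
# Sketch: uniform-design-style SVM model selection. A Sobol low-discrepancy
# set stands in for a uniform design table, and scikit-learn's digits data
# stands in for a face database; both stand-ins are assumptions.
import numpy as np
from scipy.stats import qmc
from sklearn.datasets import load_digits
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Space-filling design over log10(C) in [-2, 4] and log10(gamma) in [-5, 1].
design = qmc.Sobol(d=2, seed=0).random(16)
lo, hi = np.array([-2.0, -5.0]), np.array([4.0, 1.0])
points = 10 ** (lo + design * (hi - lo))

# Evaluate each design point once by cross-validation and keep the best.
best = max(points, key=lambda p: cross_val_score(
    SVC(C=p[0], gamma=p[1]), X, y, cv=3).mean())
print("selected C=%.4g, gamma=%.4g" % (best[0], best[1]))
```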

  2. Numerical Model based Reliability Estimation of Selective Laser Melting Process

    DEFF Research Database (Denmark)

    Mohanty, Sankhya; Hattel, Jesper Henri

    2014-01-01

    Selective laser melting is developing into a standard manufacturing technology with applications in various sectors. However, the process is still far from being on par with conventional processes such as welding and casting, the primary reason being the unreliability of the process. While [...] of the selective laser melting process. A validated 3D finite-volume alternating-direction-implicit numerical technique is used to model the selective laser melting process, and is calibrated against results from single-track formation experiments. Correlation coefficients are determined for process input parameters such as laser power, speed, beam profile, etc. Subsequently, uncertainties in the processing parameters are utilized to predict a range for the various outputs, using a Monte Carlo method based uncertainty analysis methodology, and the reliability of the process is established.
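
    As a rough illustration of the final step, the sketch below propagates processing-parameter uncertainty with a Monte Carlo loop; the melt-depth surrogate and the scatter assumed for laser power and scan speed are invented placeholders for the paper's calibrated finite-volume model.

```python
# Sketch: Monte Carlo propagation of processing-parameter uncertainty.
# The melt-depth surrogate and the assumed parameter scatter are invented
# placeholders for the calibrated finite-volume model of the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
power = rng.normal(200.0, 5.0, n)   # laser power [W], assumed scatter
speed = rng.normal(1.0, 0.05, n)    # scan speed [m/s], assumed scatter

melt_depth = 0.05 * power / np.sqrt(speed)  # hypothetical surrogate model

lo, hi = np.percentile(melt_depth, [2.5, 97.5])
print(f"95% output interval: [{lo:.2f}, {hi:.2f}] (arbitrary units)")
```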

  3. Auditory-model based robust feature selection for speech recognition.

    Science.gov (United States)

    Koniaris, Christos; Kuropatwinski, Marcin; Kleijn, W Bastiaan

    2010-02-01

    It is shown that robust dimension-reduction of a feature set for speech recognition can be based on a model of the human auditory system. Whereas conventional methods optimize classification performance, the proposed method exploits knowledge implicit in the auditory periphery, inheriting its robustness. Features are selected to maximize the similarity of the Euclidean geometry of the feature domain and the perceptual domain. Recognition experiments using mel-frequency cepstral coefficients (MFCCs) confirm the effectiveness of the approach, which does not require labeled training data. For noisy data the method outperforms commonly used discriminant-analysis based dimension-reduction methods that rely on labeling. The results indicate that selecting MFCCs in their natural order results in subsets with good performance.

  4. A Reliability Based Model for Wind Turbine Selection

    Directory of Open Access Journals (Sweden)

    A.K. Rajeevan

    2013-06-01

    Full Text Available A wind turbine generator's output at a specific site depends on many factors, particularly the cut-in, rated, and cut-out wind speed parameters. Hence power output varies from turbine to turbine. The objective of this paper is to develop a mathematical relationship between reliability and wind power generation. The analytical computation of monthly wind power is obtained from a Weibull statistical model using the cubic mean cube root of wind speed. The reliability calculation is based on failure probability analysis. Many different types of wind turbines are commercially available on the market. From a reliability point of view, to obtain optimum reliability in power generation, it is desirable to select the wind turbine generator best suited to a site. The mathematical relationship developed in this paper can be used for site-matching turbine selection from a reliability point of view.
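
    The monthly power computation described here can be illustrated directly: for a Weibull wind-speed distribution with shape k and scale c, E[v^3] = c^3 Γ(1 + 3/k), so the cubic mean cube root speed follows in closed form. All parameter values in the sketch below are illustrative assumptions.

```python
# Sketch: monthly mean wind power from a Weibull wind-speed model via the
# cubic mean cube root of wind speed. For a Weibull distribution with
# shape k and scale c, E[v^3] = c^3 * Gamma(1 + 3/k). All parameter
# values below (k, c, air density, rotor area, power coefficient) are
# illustrative assumptions.
import math

k, c = 2.0, 7.5                      # Weibull shape [-] and scale [m/s]
rho, area, cp = 1.225, 2827.0, 0.40  # air density, rotor area (~60 m dia.), Cp

v_ccr = (c**3 * math.gamma(1.0 + 3.0 / k)) ** (1.0 / 3.0)
power_w = 0.5 * rho * area * cp * v_ccr**3
print(f"cubic mean cube root speed: {v_ccr:.2f} m/s, "
      f"mean power: {power_w / 1e3:.0f} kW")
```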

  5. Fuzzy Investment Portfolio Selection Models Based on Interval Analysis Approach

    Directory of Open Access Journals (Sweden)

    Haifeng Guo

    2012-01-01

    Full Text Available This paper employs fuzzy set theory to address the unintuitive character of the Markowitz mean-variance (MV) portfolio model and extends it to a fuzzy investment portfolio selection model. Our model establishes intervals for expected returns and risk preference, which can take into account investors' different investment appetites and thus find the optimal resolution for each interval. In the empirical part, we test this model on Chinese stock investments and find that it can fulfill different kinds of investors' objectives. Finally, investment risk can be decreased when we add an investment limit to each stock in the portfolio, which indicates that our model is useful in practice.

  6. Novel web service selection model based on discrete group search.

    Science.gov (United States)

    Zhai, Jie; Shao, Zhiqing; Guo, Yi; Zhang, Haiteng

    2014-01-01

    In our earlier work, we presented a novel formal method for the semiautomatic verification of specifications and for describing web service composition components using abstract concepts. After verification, instantiations of the components are selected to satisfy the complex service performance constraints. However, selecting an optimal instantiation, which comprises different candidate services for each generic service, from a large number of instantiations is difficult. Therefore, we present a new evolutionary approach based on the discrete group search service (D-GSS) model. With regard to obtaining the optimal multiconstraint instantiation of the complex component, the D-GSS model has competitive performance compared with other service selection models in terms of accuracy, efficiency, and ability to solve high-dimensional service composition component problems. We propose the cost function and the discrete group search optimizer (D-GSO) algorithm and study the convergence of the D-GSS model through verification and test cases.

  7. Variable Selection in Model-based Clustering: A General Variable Role Modeling

    OpenAIRE

    Maugis, Cathy; Celeux, Gilles; Martin-Magniette, Marie-Laure

    2008-01-01

    The currently available variable selection procedures in model-based clustering assume that the irrelevant clustering variables are all independent or are all linked with the relevant clustering variables. We propose a more versatile variable selection model which describes three possible roles for each variable: The relevant clustering variables, the irrelevant clustering variables dependent on a part of the relevant clustering variables and the irrelevant clustering variables totally indepe...

  8. Selecting an Appropriate Upscaled Reservoir Model Based on Connectivity Analysis

    Directory of Open Access Journals (Sweden)

    Preux Christophe

    2016-09-01

    Full Text Available Reservoir engineers aim to build reservoir models to investigate fluid flows within hydrocarbon reservoirs. These models consist of three-dimensional grids populated by petrophysical properties. In this paper, we focus on permeability, which is known to significantly influence fluid flow. Reservoir models usually encompass a very large number of fine grid blocks to better represent heterogeneities. However, performing fluid flow simulations for such fine models is extremely CPU-time consuming. A common practice consists in converting the fine models into coarse models with fewer grid blocks: this is the upscaling process. Many upscaling methods have been proposed in the literature, all of which lead to distinct coarse models. The problem is how to choose the appropriate upscaling method. Various criteria have been established to evaluate the information loss due to upscaling, but none of them investigate connectivity. In this paper, we propose to first perform a connectivity analysis for the fine and candidate coarse models. This makes it possible to identify shortest paths connecting wells. Then, we introduce two indicators to quantify the length and trajectory mismatch between the paths for the fine and the coarse models. The upscaling technique to be recommended is the one that provides the coarse model for which the shortest paths are the closest to the shortest paths determined for the fine model, both in terms of length and trajectory. Last, the potential of this methodology is investigated on two test cases. We show that the two indicators help select suitable upscaling techniques as long as gravity is not a prominent factor that drives fluid flows.
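
    A minimal version of the proposed connectivity check might look as follows: build a weighted graph over each grid, compute well-to-well shortest paths on the fine and coarse models, and compare path costs. The inverse-permeability edge weights, the naive block-averaging "upscaler", the synthetic permeability field and the crude rescaling of the coarse path cost are all assumptions for illustration.

```python
# Sketch: well-to-well shortest-path comparison between a fine and a
# coarse model. Inverse-permeability edge weights, the naive block
# averaging used as an "upscaler", the synthetic permeability field and
# the crude rescaling of the coarse path cost are all assumptions.
import networkx as nx
import numpy as np

def path_cost(perm, src, dst):
    rows, cols = perm.shape
    g = nx.grid_2d_graph(rows, cols)
    for u, v in g.edges:
        g[u][v]["w"] = 2.0 / (perm[u] + perm[v])  # short where permeable
    return nx.dijkstra_path_length(g, src, dst, weight="w")

rng = np.random.default_rng(1)
fine = rng.lognormal(mean=0.0, sigma=1.0, size=(40, 40))
coarse = fine.reshape(10, 4, 10, 4).mean(axis=(1, 3))  # 4x4 block average

cost_fine = path_cost(fine, (0, 0), (39, 39))          # well A to well B
cost_coarse = path_cost(coarse, (0, 0), (9, 9)) * 4.0  # rescale step count
print("length-mismatch indicator: %.3f"
      % (abs(cost_coarse - cost_fine) / cost_fine))
```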

  9. Patch-based generative shape model and MDL model selection for statistical analysis of archipelagos

    DEFF Research Database (Denmark)

    Ganz, Melanie; Nielsen, Mads; Brandt, Sami

    2010-01-01

    We propose a statistical generative shape model for archipelago-like structures. These kinds of structures occur, for instance, in medical images, where our intention is to model the appearance and shapes of calcifications in x-ray radiographs. The generative model is constructed by (1) learning a patch-based dictionary for possible shapes, (2) building up a time-homogeneous Markov model to model the neighbourhood correlations between the patches, and (3) automatic selection of the model complexity by the minimum description length principle. The generative shape model is proposed as a probability distribution of a binary image where the model is intended to facilitate sequential simulation. Our results show that a relatively simple model is able to generate structures visually similar to calcifications. Furthermore, we used the shape model as a shape prior in the statistical segmentation...

  10. Performance Measurement Model for the Supplier Selection Based on AHP

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2015-10-01

    Full Text Available The performance of suppliers is a crucial factor for the success or failure of any company. Rational and effective decision making in the supplier selection process can help the organization optimize cost and quality functions. The nature of supplier selection processes is generally complex, especially when the company has a large variety of products and vendors. Over the years, several solutions and methods have emerged for addressing the supplier selection problem (SSP). Experience and studies have shown that there is no single best way of evaluating and selecting suppliers; the process varies from one organization to another. The aim of this research is to demonstrate how a multiple attribute decision making approach can be effectively applied to the supplier selection process.
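
    Although the record does not spell out the computation, a typical AHP step derives criterion weights from the principal eigenvector of a pairwise comparison matrix and checks Saaty's consistency ratio. A generic sketch (the 4x4 matrix is illustrative, not from the paper):

```python
# Sketch: AHP criterion weights from the principal eigenvector of a
# pairwise comparison matrix, plus Saaty's consistency ratio. The 4x4
# matrix below is illustrative, not taken from the paper.
import numpy as np

A = np.array([[1.0, 3.0, 5.0, 1.0],
              [1/3, 1.0, 3.0, 1/3],
              [1/5, 1/3, 1.0, 1/5],
              [1.0, 3.0, 5.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
i = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, i].real)
w /= w.sum()                          # criterion weights (priority vector)

n = A.shape[0]
ci = (eigvals[i].real - n) / (n - 1)  # consistency index
cr = ci / 0.90                        # random index RI = 0.90 for n = 4
print("weights:", w.round(3), " consistency ratio: %.3f" % cr)
```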

  11. Generalized Selectivity Description for Polymeric Ion-Selective Electrodes Based on the Phase Boundary Potential Model.

    Science.gov (United States)

    Bakker, Eric

    2010-02-15

    A generalized description of the response behavior of potentiometric polymer membrane ion-selective electrodes is presented on the basis of ion-exchange equilibrium considerations at the sample-membrane interface. This paper includes and extends on previously reported theoretical advances in a more compact yet more comprehensive form. Specifically, the phase boundary potential model is used to derive the origin of the Nernstian response behavior in a single expression, which is valid for a membrane containing any charge type and complex stoichiometry of ionophore and ion-exchanger. This forms the basis for a generalized expression of the selectivity coefficient, which may be used for the selectivity optimization of ion-selective membranes containing electrically charged and neutral ionophores of any desired stoichiometry. It is shown to reduce to expressions published previously for specialized cases, and may be effectively applied to problems relevant in modern potentiometry. The treatment is extended to mixed ion solutions, offering a comprehensive yet formally compact derivation of the response behavior of ion-selective electrodes to a mixture of ions of any desired charge. It is compared to predictions by the less accurate Nicolsky-Eisenman equation. The influence of ion fluxes or any form of electrochemical excitation is not considered here, but may be readily incorporated if an ion-exchange equilibrium at the interface may be assumed in these cases.
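
    For reference, the Nicolsky-Eisenman equation against which the generalized treatment is compared takes the standard textbook form (quoted from the general literature, not from this record):

$$E = E_I^0 + \frac{RT}{z_I F}\,\ln\!\left(a_I + \sum_{J \neq I} K_{IJ}^{\mathrm{pot}}\, a_J^{\,z_I/z_J}\right)$$

    where $a_I$ and $a_J$ are the activities of the primary and interfering ions, $z_I$ and $z_J$ their charge numbers, and $K_{IJ}^{\mathrm{pot}}$ the potentiometric selectivity coefficient.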

  12. Selection bias in species distribution models: An econometric approach on forest trees based on structural modeling

    Science.gov (United States)

    Martin-StPaul, N. K.; Ay, J. S.; Guillemot, J.; Doyen, L.; Leadley, P.

    2014-12-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global changes on species. In human-dominated ecosystems the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e. based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates simultaneously land use choices and species responses to bioclimatic variables. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the outputs of the SSDM with the outputs of a classical SDM (i.e. Biomod ensemble modelling) in terms of bioclimatic response curves and potential distributions under current climate and climate change scenarios. The shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs, with contrasting patterns according to species and spatial resolutions. The magnitude and directions of these differences were dependent on the correlations between the errors from both equations and were highest for higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to serious misestimation of the actual and future probabilities of presence modelled. Beyond this selection bias, the SSDM we propose represents...

  13. Discounting model selection with area-based measures: A case for numerical integration.

    Science.gov (United States)

    Gilroy, Shawn P; Hantula, Donald A

    2018-03-01

    A novel method for analyzing delay discounting data is proposed. This newer metric, a model-based Area Under Curve (AUC) combining approximate Bayesian model selection and numerical integration, was compared to the point-based AUC methods developed by Myerson, Green, and Warusawitharana (2001) and extended by Borges, Kuang, Milhorn, and Yi (2016). Using data from computer simulation and a published study, comparisons of these methods indicated that a model-based form of AUC offered a more consistent and statistically robust measurement of area than provided by using point-based methods alone. Beyond providing a form of AUC directly from a discounting model, numerical integration methods permitted a general calculation in cases when the Effective Delay 50 (ED50) measure could not be calculated. This allowed discounting model selection to proceed in conditions where data are traditionally more challenging to model and measure, a situation where point-based AUC methods are often enlisted. Results from simulation and existing data indicated that numerical integration methods extended both the area-based interpretation of delay discounting as well as the discounting model selection approach. Limitations of point-based AUC as a first-line analysis of discounting and additional extensions of discounting model selection were also discussed. © 2018 Society for the Experimental Analysis of Behavior.
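
    A minimal sketch of the model-based AUC idea: fit a discounting model, then integrate it numerically. Mazur's one-parameter hyperbolic model and the synthetic indifference points below are assumptions; the paper's approximate Bayesian model selection step is not reproduced.

```python
# Sketch: model-based AUC for delay discounting by numerical integration,
# assuming Mazur's hyperbolic model V(d) = 1 / (1 + k*d). The delays and
# indifference points are synthetic; the paper's approximate Bayesian
# model selection step is not reproduced here.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import curve_fit

delays = np.array([1.0, 7.0, 30.0, 90.0, 180.0, 365.0])
values = np.array([0.95, 0.85, 0.60, 0.45, 0.35, 0.25])

def hyperbolic(d, k):
    return 1.0 / (1.0 + k * d)

(k_hat,), _ = curve_fit(hyperbolic, delays, values, p0=[0.01])

area, _ = quad(hyperbolic, 0.0, delays.max(), args=(k_hat,))
print("k = %.4f, model-based normalized AUC = %.3f"
      % (k_hat, area / delays.max()))
```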

  14. A Site Selection Model for a Straw-Based Power Generation Plant with CO2 Emissions

    Directory of Open Access Journals (Sweden)

    Hao Lv

    2014-10-01

    Full Text Available The decision on the location of a straw-based power generation plant has a great influence on the plant's operation and performance. This study explores traditional theories for site selection. Using integer programming, the study optimizes the economic and carbon emission outcomes of straw-based power generation as two objectives, with the supply and demand of straw as constraints. It provides a multi-objective mixed-integer programming model to solve the site selection problem for a straw-based power generation plant. It then provides a case study to demonstrate the application of the model to the site selection decision for a straw-based power generation plant within a Chinese region. Finally, the paper discusses the result of the model in the context of the wider aspects of straw-based power generation.

  15. Traditional and robust vector selection methods for use with similarity based models

    International Nuclear Information System (INIS)

    Hines, J. W.; Garvey, D. R.

    2006-01-01

    Vector selection, or instance selection as it is often called in the data mining literature, performs a critical task in the development of nonparametric, similarity based models. Nonparametric, similarity based modeling (SBM) is a form of 'lazy learning' which constructs a local model 'on the fly' by comparing a query vector to historical, training vectors. For large training sets the creation of local models may become cumbersome, since each training vector must be compared to the query vector. To alleviate this computational burden, varying forms of training vector sampling may be employed with the goal of selecting a subset of the training data such that the samples are representative of the underlying process. This paper describes one such SBM, namely auto-associative kernel regression (AAKR), and presents five traditional vector selection methods and one robust vector selection method that may be used to select prototype vectors from a larger data set in model training. The five traditional vector selection methods considered are min-max, vector ordering, combination min-max and vector ordering, fuzzy c-means clustering, and Adeli-Hung clustering. Each method is described in detail and compared using artificially generated data and data collected from the steam system of an operating nuclear power plant. (authors)
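
    Of the five traditional methods, min-max is the simplest to illustrate: keep every training vector that holds the minimum or maximum value of at least one signal. A sketch on synthetic data standing in for plant measurements:

```python
# Sketch: the "min-max" vector selection method, keeping every training
# vector that holds the minimum or maximum of at least one signal. The
# synthetic data stand in for plant measurements.
import numpy as np

def min_max_select(X):
    """X: (n_samples, n_signals) array; returns indices of kept vectors."""
    keep = set(np.argmin(X, axis=0)) | set(np.argmax(X, axis=0))
    return sorted(keep)

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))        # 500 observations of 6 signals
prototypes = min_max_select(X)
print(f"kept {len(prototypes)} of {X.shape[0]} training vectors")
```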

  16. Selecting representative climate models for climate change impact studies : An advanced envelope-based selection approach

    NARCIS (Netherlands)

    Lutz, Arthur F.; ter Maat, Herbert W.; Biemans, Hester; Shrestha, Arun B.; Wester, Philippus; Immerzeel, Walter W.

    2016-01-01

    Climate change impact studies depend on projections of future climate provided by climate models. The number of climate models is large and increasing, yet limitations in computational capacity make it necessary to compromise the number of climate models that can be included in a climate change

  18. Order Selection for General Expression of Nonlinear Autoregressive Model Based on Multivariate Stepwise Regression

    Science.gov (United States)

    Shi, Jinfei; Zhu, Songqing; Chen, Ruwen

    2017-12-01

    An order selection method based on multivariate stepwise regression is proposed for the general expression of the nonlinear autoregressive (GNAR) model, which converts the model order problem into variable selection for a multiple linear regression equation. The partial autocorrelation function is adopted to define the linear terms in the GNAR model. The result is set as the initial model, and the nonlinear terms are then introduced gradually. Statistics are chosen to measure the improvement that both the newly introduced and the originally existing variables bring to the model characteristics, and these are used to determine which model variables to retain or eliminate. The optimal model is then obtained through measurement of the data-fitting effect or significance testing. Simulation and classic time-series data experiments show that the proposed method is simple, reliable, and applicable to practical engineering.
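
    The flavor of the procedure can be sketched as greedy forward selection of candidate autoregressive terms under an information criterion. BIC is used below as the fitting-effect measure, and the synthetic series and candidate term set are illustrative assumptions:

```python
# Sketch: greedy forward selection of candidate autoregressive terms under
# BIC, echoing the stepwise order-selection idea. The synthetic series and
# the candidate term set are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
y = np.zeros(200)
for t in range(2, 200):              # synthetic nonlinear AR(2) series
    y[t] = 0.5 * y[t - 1] - 0.3 * y[t - 1] * y[t - 2] + rng.normal(0, 0.1)

Y = y[2:]
terms = {"y(t-1)": y[1:-1], "y(t-2)": y[:-2],
         "y(t-1)^2": y[1:-1] ** 2, "y(t-1)y(t-2)": y[1:-1] * y[:-2]}

def bic(X, Y):
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ beta) ** 2)
    n = Y.size
    return n * np.log(rss / n) + X.shape[1] * np.log(n)

selected, pool, best = [], dict(terms), np.inf
while pool:
    cols = lambda extra: np.column_stack([terms[s] for s in selected] + [extra])
    name, score = min(((k, bic(cols(v), Y)) for k, v in pool.items()),
                      key=lambda kv: kv[1])
    if score >= best:                # stop when BIC no longer improves
        break
    best = score
    selected.append(name)
    del pool[name]

print("selected terms:", selected, " BIC: %.1f" % best)
```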

  19. Supplier selection based on a neural network model using genetic algorithm.

    Science.gov (United States)

    Golmohammadi, Davood; Creese, Robert C; Valian, Haleh; Kolassa, John

    2009-09-01

    In this paper, a decision-making model was developed to select suppliers using neural networks (NNs). This model used historical supplier performance data for the selection of vendors. Input and output were designed in a unique manner for training purposes. The managers' judgments about suppliers were simulated by using a pairwise comparison matrix for output estimation in the NN. To obtain the benefit of a search technique for the model structure and training, a genetic algorithm (GA) was applied to the initial weights and architecture of the network. The suppliers' database information (input) can be updated over time to change the suppliers' score estimation based on their performance. The case study illustrates how the model can be applied for supplier selection.

  20. PARAMETER ESTIMATION AND MODEL SELECTION FOR INDOOR ENVIRONMENTS BASED ON SPARSE OBSERVATIONS

    Directory of Open Access Journals (Sweden)

    Y. Dehbi

    2017-09-01

    Full Text Available This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.

  1. Parameter Estimation and Model Selection for Indoor Environments Based on Sparse Observations

    Science.gov (United States)

    Dehbi, Y.; Loch-Dehbi, S.; Plümer, L.

    2017-09-01

    This paper presents a novel method for the parameter estimation and model selection for the reconstruction of indoor environments based on sparse observations. While most approaches for the reconstruction of indoor models rely on dense observations, we predict scenes of the interior with high accuracy in the absence of indoor measurements. We use a model-based top-down approach and incorporate strong but profound prior knowledge. The latter includes probability density functions for model parameters and sparse observations such as room areas and the building footprint. The floorplan model is characterized by linear and bi-linear relations with discrete and continuous parameters. We focus on the stochastic estimation of model parameters based on a topological model derived by combinatorial reasoning in a first step. A Gauss-Markov model is applied for estimation and simulation of the model parameters. Symmetries are represented and exploited during the estimation process. Background knowledge as well as observations are incorporated in a maximum likelihood estimation and model selection is performed with AIC/BIC. The likelihood is also used for the detection and correction of potential errors in the topological model. Estimation results are presented and discussed.
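
    The AIC/BIC selection step mentioned in both records reduces to a one-line criterion once the maximized log-likelihood is known. A generic sketch (the candidate models, log-likelihood values and observation count are placeholders, not values from the paper):

```python
# Sketch: the AIC/BIC selection step, computed from a maximized
# log-likelihood. The candidate models, log-likelihood values and
# observation count are placeholders, not values from the paper.
import math

def aic(loglik, n_params):
    return 2 * n_params - 2 * loglik

def bic(loglik, n_params, n_obs):
    return n_params * math.log(n_obs) - 2 * loglik

candidates = {"4-room floorplan": (-120.3, 9),   # (loglik, #parameters)
              "5-room floorplan": (-117.8, 12)}
n_obs = 60
for name, (ll, p) in candidates.items():
    print(f"{name}: AIC={aic(ll, p):.1f}  BIC={bic(ll, p, n_obs):.1f}")
```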

  2. On dynamic selection of households for direct marketing based on Markov chain models with memory

    NARCIS (Netherlands)

    Otter, Pieter W.

    A simple, dynamic selection procedure is proposed, based on conditional, expected profits using Markov chain models with memory. The method is easy to apply, only frequencies and mean values have to be calculated or estimated. The method is empirically illustrated using a data set from a charitable
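
    The procedure's core, conditional expected profit under a Markov transition model, can be sketched in a few lines; the two-state transition matrix, response margin, and mailing cost below are invented for illustration:

```python
# Sketch: conditional expected profit per household under a first-order
# Markov response model. The two-state transition matrix, response margin
# and mailing cost are invented for illustration.
import numpy as np

P = np.array([[0.90, 0.10],    # row = current state, column = next state
              [0.60, 0.40]])   # states: 0 = no response, 1 = response
margin, cost = 25.0, 1.5       # profit per response, cost per mailing

def expected_profit(state):
    return P[state, 1] * margin - cost

for s in (0, 1):
    ep = expected_profit(s)
    print(f"state {s}: E[profit] = {ep:+.2f} -> "
          + ("mail" if ep > 0 else "skip"))
```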

  3. An Improved Test Selection Optimization Model Based on Fault Ambiguity Group Isolation and Chaotic Discrete PSO

    Directory of Open Access Journals (Sweden)

    Xiaofeng Lv

    2018-01-01

    Full Text Available Sensor data-based test selection optimization is the basis for designing a test plan, which ensures that the system is tested under the constraints of conventional indexes such as the fault detection rate (FDR) and the fault isolation rate (FIR). From the perspective of equipment maintenance support, ambiguity in fault isolation has a significant effect on the result of test selection. In this paper, an improved test selection optimization model is proposed that considers the ambiguity degree of fault isolation. In the new model, the fault-test dependency matrix is adopted to model the correlation between system faults and the test group. The objective function of the proposed model minimizes the test cost under the FDR and FIR constraints. An improved chaotic discrete particle swarm optimization (PSO) algorithm is adopted to solve the improved test selection optimization model. The new test selection optimization model is more consistent with real, complicated engineering systems. Experimental results verify the effectiveness of the proposed method.

  4. Sensor selection of helicopter transmission systems based on physical model and sensitivity analysis

    Directory of Open Access Journals (Sweden)

    Lyu Kehong

    2014-06-01

    Full Text Available In helicopter transmission systems, it is important to monitor and track tooth damage evolution using a variety of sensors and detection methods. This paper develops a novel approach for sensor selection based on a physical model and sensitivity analysis. Firstly, a physical model of tooth damage and mesh stiffness is built. Secondly, some effective condition indicators (CIs) are presented, and the optimal CI set is selected by comparing their test statistics according to the Mann-Kendall test. Afterwards, the selected CIs are used to generate a health indicator (HI) through the Sen's slope estimator. Then, the sensors are selected according to their monotonic relevance and sensitivity to the damage levels. Finally, the proposed method is verified on simulation and experimental data. The results show that the approach can provide a guide for health monitoring of helicopter transmission systems, and that it is effective in reducing test cost and improving the system's reliability.

  5. The MCDM Model for Personnel Selection Based on SWARA and ARAS Methods

    Directory of Open Access Journals (Sweden)

    Darjan Karabasevic

    2015-05-01

    Full Text Available Competent employees are the key resource in an organization for achieving success and, therefore, competitiveness in the market. The aim of the recruitment and selection process is to acquire personnel with the competencies required for a particular position within the company. Bearing in mind the fact that decision-makers often underuse formal decision-making methods, this paper aims to establish an MCDM model for the evaluation and selection of candidates in the process of the recruitment and selection of personnel based on the SWARA and ARAS methods. Apart from providing an MCDM model, the paper additionally provides a set of evaluation criteria for the position of a sales manager (middle management) in the telecommunication industry, which is also used in the numerical example. On the basis of the numerical example, the proposed MCDM model can be successfully used in selecting candidates in the process of employment.

  6. A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method

    Directory of Open Access Journals (Sweden)

    Jun-He Yang

    2017-01-01

    Full Text Available Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model, based on estimating missing values followed by variable selection, to forecast a reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets are concatenated into an integrated dataset based on the ordering of the data as a research dataset. The proposed time-series forecasting model has three main foci. First, this study uses five imputation methods to handle missing values rather than directly deleting them. Second, we identify the key variables via factor analysis and then delete the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a random forest to build the forecasting model of the reservoir's water level, which is compared with the listed methods in terms of forecasting error. The experimental results indicate that the random forest forecasting model combined with variable selection over the full set of variables has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecast methods used here improve their forecasting capability.

  7. Genome-wide selection by mixed model ridge regression and extensions based on geostatistical models.

    Science.gov (United States)

    Schulz-Streeck, Torben; Piepho, Hans-Peter

    2010-03-31

    The success of genome-wide selection (GS) approaches will depend crucially on the availability of efficient and easy-to-use computational tools. Therefore, approaches that can be implemented using mixed models hold particular promise and deserve detailed study. A particular class of mixed models suitable for GS is given by geostatistical mixed models, where genetic distance is treated analogously to spatial distance in geostatistics. We consider various spatial mixed models for use in GS. The analyses presented for the QTL-MAS 2009 dataset pay particular attention to the modelling of residual errors as well as of polygenic effects. It is shown that geostatistical models are viable alternatives to ridge regression, one of the common approaches to GS. Correlations between genome-wide estimated breeding values and true breeding values were between 0.879 and 0.889. In the example considered, we did not find a large effect of the residual error variance modelling, largely because error variances were very small. A variance components model reflecting the pedigree of the crosses did not provide an improved fit. We conclude that geostatistical models deserve further study as a tool for GS that is easily implemented in a mixed model package.
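
    Ridge regression for GS, the baseline against which the geostatistical models are compared, can be written as beta = (Z'Z + lambda*I)^(-1) Z'y over the marker matrix Z. A self-contained sketch on simulated genotypes (the shrinkage value and data are assumptions, not the QTL-MAS data):

```python
# Sketch: ridge-regression marker effects for genome-wide selection,
# beta = (Z'Z + lambda*I)^(-1) Z'y. Genotypes, effects and the shrinkage
# parameter are simulated/assumed, not taken from the QTL-MAS data.
import numpy as np

rng = np.random.default_rng(0)
n, m = 300, 1000                                    # individuals, markers
Z = rng.integers(0, 3, size=(n, m)).astype(float)   # 0/1/2 allele counts
true_beta = rng.normal(0.0, 0.05, m)
y = Z @ true_beta + rng.normal(0.0, 1.0, n)

lam = 50.0   # shrinkage, in practice roughly sigma_e^2 / sigma_beta^2
beta_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(m), Z.T @ y)
gebv = Z @ beta_hat                # genomic estimated breeding values
print("corr(GEBV, true BV): %.3f" % np.corrcoef(gebv, Z @ true_beta)[0, 1])
```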

  8. Spatial enhancement of ECG using diagnostic similarity score based lead selective multi-scale linear model.

    Science.gov (United States)

    Nallikuzhy, Jiss J; Dandapat, S

    2017-06-01

    In this work, a new patient-specific approach to enhance the spatial resolution of ECG is proposed and evaluated. The proposed model transforms a three-lead ECG into a standard twelve-lead ECG, thereby enhancing its spatial resolution. The three leads used for prediction are obtained from the standard twelve-lead ECG. The proposed model takes advantage of the improved inter-lead correlation in the wavelet domain. Since the model is patient-specific, it also selects the optimal predictor leads for a given patient using a lead selection algorithm. The lead selection algorithm is based on a new diagnostic similarity score which computes the diagnostic closeness between the original and the spatially enhanced leads. Standard closeness measures are used to assess the performance of the model. The similarity in diagnostic information between the original and the spatially enhanced leads is evaluated using various diagnostic measures. Repeatability and diagnosability analyses are performed to quantify the applicability of the model. A comparison of the proposed model is performed with existing models that transform a subset of the standard twelve-lead ECG into the standard twelve-lead ECG. From the analysis of the results, it is evident that the proposed model preserves diagnostic information better than other models. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Progressive sampling-based Bayesian optimization for efficient and automatic machine learning model selection.

    Science.gov (United States)

    Zeng, Xueqiang; Luo, Gang

    2017-12-01

    Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the barrier to machine learning, miscellaneous automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and the standard deviation of the error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.

  10. Trust-Enhanced Cloud Service Selection Model Based on QoS Analysis.

    Science.gov (United States)

    Pan, Yuchen; Ding, Shuai; Fan, Wenjuan; Li, Jing; Yang, Shanlin

    2015-01-01

    Cloud computing technology plays a very important role in many areas, such as the construction and development of smart cities. Meanwhile, numerous cloud services appear on cloud-based platforms. Therefore, how to select trustworthy cloud services remains a significant problem on such platforms, and has been extensively investigated owing to the ever-growing needs of users. However, trust relationships in social networks have not been taken into account in existing methods of cloud service selection and recommendation. In this paper, we propose a cloud service selection model based on trust-enhanced similarity. Firstly, the direct, indirect, and hybrid trust degrees are measured based on the interaction frequencies among users. Secondly, we estimate the overall similarity by combining the experience usability, measured using Jaccard's coefficient, with the numerical distance computed by the Pearson correlation coefficient. Then, by using the trust degree to modify the basic similarity, we obtain a trust-enhanced similarity. Finally, we utilize the trust-enhanced similarity to find similar trusted neighbors and predict the missing QoS values as the basis of cloud service selection and recommendation. The experimental results show that our approach is able to obtain optimal results by adjusting parameters and exhibits high effectiveness. The cloud service rankings produced by our model also have better QoS properties than those of other methods in the comparison experiments.
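
    A minimal sketch of the similarity construction described: Jaccard overlap of co-invoked services combined with a Pearson term on co-observed QoS values, then scaled by a trust degree. The multiplicative combination used below is an assumption; the paper's exact weighting may differ.

```python
# Sketch: trust-enhanced similarity between two users, combining Jaccard
# overlap of co-invoked services with Pearson correlation on co-observed
# QoS values, scaled by a trust degree. The multiplicative combination is
# an assumption; the paper's exact weighting may differ.
import numpy as np

def enhanced_similarity(qos_u, qos_v, trust_uv):
    rated_u, rated_v = ~np.isnan(qos_u), ~np.isnan(qos_v)
    both = rated_u & rated_v
    jaccard = both.sum() / (rated_u | rated_v).sum()   # experience usability
    pearson = np.corrcoef(qos_u[both], qos_v[both])[0, 1]
    return trust_uv * jaccard * pearson

u = np.array([0.9, 0.8, np.nan, 0.6, 0.7])   # QoS observed by user u
v = np.array([0.8, 0.7, 0.5, 0.5, np.nan])   # QoS observed by user v
print("trust-enhanced similarity: %.3f" % enhanced_similarity(u, v, 0.8))
```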

  11. Leukocyte Motility Models Assessed through Simulation and Multi-objective Optimization-Based Model Selection.

    Directory of Open Access Journals (Sweden)

    Mark N Read

    2016-09-01

    Full Text Available The advent of two-photon microscopy now reveals unprecedented, detailed spatio-temporal data on cellular motility and interactions in vivo. Understanding cellular motility patterns is key to gaining insight into the development and possible manipulation of the immune response. Computational simulation has become an established technique for understanding immune processes and evaluating hypotheses in the context of experimental data, and there is clear scope to integrate microscopy-informed motility dynamics. However, determining which motility model best reflects in vivo motility is non-trivial: 3D motility is an intricate process requiring several metrics to characterize. This complicates model selection and parameterization, which must be performed against several metrics simultaneously. Here we evaluate Brownian motion, Lévy walk and several correlated random walks (CRWs) against the motility dynamics of neutrophils and lymph node T cells under inflammatory conditions by simultaneously considering cellular translational and turn speeds, and meandering indices. Heterogeneous cells exhibiting a continuum of inherent translational speeds and directionalities comprise both datasets, a feature significantly improving capture of in vivo motility when simulated as a CRW. Furthermore, translational and turn speeds are inversely correlated, and the corresponding CRW simulation again improves capture of our in vivo data, albeit to a lesser extent. In contrast, Brownian motion poorly reflects our data. Lévy walk is competitive in capturing some aspects of neutrophil motility, but T cell directional persistence only, therein highlighting the importance of evaluating models against several motility metrics simultaneously. This we achieve through novel application of multi-objective optimization, wherein each model is independently implemented and then parameterized to identify optimal trade-offs in performance against each metric. The resultant Pareto...

  12. A GMM-Based Test for Normal Disturbances of the Heckman Sample Selection Model

    Directory of Open Access Journals (Sweden)

    Michael Pfaffermayr

    2014-10-01

    Full Text Available The Heckman sample selection model relies on the assumption of normal and homoskedastic disturbances. However, before considering more general, alternative semiparametric models that do not need the normality assumption, it seems useful to test this assumption. Following Meijer and Wansbeek (2007), the present contribution derives a GMM-based pseudo-score LM test of whether the third and fourth moments of the disturbances of the outcome equation of the Heckman model conform to those implied by the truncated normal distribution. The test is easy to calculate, and in Monte Carlo simulations it shows good performance for sample sizes of 1000 or larger.

  13. Observability analysis for model-based fault detection and sensor selection in induction motors

    International Nuclear Information System (INIS)

    Nakhaeinejad, Mohsen; Bryant, Michael D

    2011-01-01

    Sensors in different types and configurations provide information on the dynamics of a system. For a specific task, the question is whether measurements have enough information or whether the sensor configuration can be changed to improve the performance or to reduce costs. Observability analysis may answer the questions. This paper presents a general algorithm of nonlinear observability analysis with application to model-based diagnostics and sensor selection in three-phase induction motors. A bond graph model of the motor is developed and verified with experiments. A nonlinear observability matrix based on Lie derivatives is obtained from state equations. An observability index based on the singular value decomposition of the observability matrix is obtained. Singular values and singular vectors are used to identify the most and least observable configurations of sensors and parameters. A complex step derivative technique is used in the calculation of Jacobians to improve the computational performance of the observability analysis. The proposed algorithm of observability analysis can be applied to any nonlinear system to select the best configuration of sensors for applications of model-based diagnostics, observer-based controller, or to determine the level of sensor redundancy. Observability analysis on induction motors provides various sensor configurations with corresponding observability indices. Results show the redundancy levels for different sensors, and provide a sensor selection guideline for model-based diagnostics, and for observer-based controllers. The results can also be used for sensor fault detection and to improve the reliability of the system by increasing the redundancy level in measurements

  14. Log-linear model based behavior selection method for artificial fish swarm algorithm.

    Science.gov (United States)

    Huang, Zhehuang; Chen, Yidong

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is a population-based optimization technique inspired by the social behavior of fishes. In the past several years, AFSA has been successfully applied in many research and application areas. The behavior of the fishes has a crucial impact on the performance of AFSA, for example on its global exploration ability and convergence speed, so how to construct and select the behaviors of the fishes is an important task. To solve these problems, an improved artificial fish swarm algorithm based on a log-linear model is proposed and implemented in this paper. There are three main contributions. Firstly, we propose a new behavior selection algorithm based on a log-linear model which can enhance the decision-making ability of behavior selection. Secondly, an adaptive movement behavior based on adaptive weights is presented, which can adjust dynamically according to the diversity of the fishes. Finally, some new behaviors are defined and introduced into the artificial fish swarm algorithm for the first time to improve its global optimization capability. Experiments on high-dimensional function optimization show that the improved algorithm has more powerful global exploration ability and a reasonable convergence speed compared with the standard artificial fish swarm algorithm.
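
    The log-linear selection mechanism amounts to a softmax over weighted behavior features. A generic sketch (the behavior features and weights are invented; this is not the paper's parameterization):

```python
# Sketch: log-linear (softmax) selection over candidate behaviors, i.e.
# P(behavior) proportional to exp(w . f). The behavior features and
# weights are invented; this is not the paper's parameterization.
import math
import random

def select_behavior(features, weights):
    logits = {b: sum(w * x for w, x in zip(weights, f))
              for b, f in features.items()}
    z = sum(math.exp(v) for v in logits.values())
    probs = {b: math.exp(v) / z for b, v in logits.items()}
    choice = random.choices(list(probs), weights=list(probs.values()))[0]
    return choice, probs

features = {"prey": [0.9, 0.2], "swarm": [0.5, 0.7], "follow": [0.3, 0.9]}
random.seed(1)
print(select_behavior(features, weights=[1.0, 0.5]))
```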

  15. Peer selection and influence effects on adolescent alcohol use: a stochastic actor-based model.

    Science.gov (United States)

    Mundt, Marlon P; Mercken, Liesbeth; Zakletskaia, Larissa

    2012-08-06

    Early adolescent alcohol use is a major public health challenge. Without clear guidance on the causal pathways between peers and alcohol use, adolescent alcohol interventions may be incomplete. The objective of this study is to disentangle selection and influence effects associated with the dynamic interplay of adolescent friendships and alcohol use. The study analyzes data from Add Health, a longitudinal survey of seventh through eleventh grade U.S. students enrolled between 1995 and 1996. A stochastic actor-based model is used to model the co-evolution of alcohol use and friendship connections. Selection effects play a significant role in the creation of peer clusters with similar alcohol use. Friendship nominations between two students who shared the same alcohol use frequency were 3.60 (95% CI: 2.01-9.62) times more likely than between otherwise identical students with differing alcohol use frequency. The model controlled for alternative pathways to friendship nomination including reciprocity, transitivity, and similarities in age, gender, and race/ethnicity. The simulation model did not support a significant friends' influence effect on alcohol behavior. The findings suggest that peer selection plays a major role in alcohol use behavior among adolescent friends. Our simulation results would lend themselves to adolescent alcohol abuse interventions that leverage adolescent social network characteristics.

  16. Peer selection and influence effects on adolescent alcohol use: a stochastic actor-based model

    Directory of Open Access Journals (Sweden)

    Mundt Marlon P

    2012-08-01

    Full Text Available Abstract Background Early adolescent alcohol use is a major public health challenge. Without clear guidance on the causal pathways between peers and alcohol use, adolescent alcohol interventions may be incomplete. The objective of this study is to disentangle selection and influence effects associated with the dynamic interplay of adolescent friendships and alcohol use. Methods The study analyzes data from Add Health, a longitudinal survey of seventh through eleventh grade U.S. students enrolled between 1995 and 1996. A stochastic actor-based model is used to model the co-evolution of alcohol use and friendship connections. Results Selection effects play a significant role in the creation of peer clusters with similar alcohol use. Friendship nominations between two students who shared the same alcohol use frequency were 3.60 (95% CI: 2.01-9.62) times more likely than between otherwise identical students with differing alcohol use frequency. The model controlled for alternative pathways to friendship nomination including reciprocity, transitivity, and similarities in age, gender, and race/ethnicity. The simulation model did not support a significant friends’ influence effect on alcohol behavior. Conclusions The findings suggest that peer selection plays a major role in alcohol use behavior among adolescent friends. Our simulation results would lend themselves to adolescent alcohol abuse interventions that leverage adolescent social network characteristics.

  17. A comparison of regression methods for model selection in individual-based landscape genetic analysis.

    Science.gov (United States)

    Shirk, Andrew J; Landguth, Erin L; Cushman, Samuel A

    2018-01-01

    Anthropogenic migration barriers fragment many populations and limit the ability of species to respond to climate-induced biome shifts. Conservation actions designed to conserve habitat connectivity and mitigate barriers are needed to unite fragmented populations into larger, more viable metapopulations, and to allow species to track their climate envelope over time. Landscape genetic analysis provides an empirical means to infer landscape factors influencing gene flow and thereby inform such conservation actions. However, there are currently many methods available for model selection in landscape genetics, and considerable uncertainty as to which provide the greatest accuracy in identifying the true landscape model influencing gene flow among competing alternative hypotheses. In this study, we used population genetic simulations to evaluate the performance of seven regression-based model selection methods on a broad array of landscapes that varied by the number and type of variables contributing to resistance, the magnitude and cohesion of resistance, as well as the functional relationship between variables and resistance. We also assessed the effect of transformations designed to linearize the relationship between genetic and landscape distances. We found that linear mixed effects models had the highest accuracy in every way we evaluated model performance; however, other methods also performed well in many circumstances, particularly when landscape resistance was high and the correlation among competing hypotheses was limited. Our results provide guidance for which regression-based model selection methods provide the most accurate inferences in landscape genetic analysis and thereby best inform connectivity conservation actions. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.

  18. Research on Methods for Discovering and Selecting Cloud Infrastructure Services Based on Feature Modeling

    Directory of Open Access Journals (Sweden)

    Huamin Zhu

    2016-01-01

    Full Text Available Nowadays more and more cloud infrastructure service providers are providing large numbers of service instances which are a combination of diversified resources, such as computing, storage, and network. However, for cloud infrastructure services, the lack of a description standard and the inadequate research of systematic discovery and selection methods have exposed difficulties in discovering and choosing services for users. First, considering the highly configurable properties of a cloud infrastructure service, the feature model method is used to describe such a service. Second, based on the description of the cloud infrastructure service, a systematic discovery and selection method for cloud infrastructure services are proposed. The automatic analysis techniques of the feature model are introduced to verify the model’s validity and to perform the matching of the service and demand models. Finally, we determine the critical decision metrics and their corresponding measurement methods for cloud infrastructure services, where the subjective and objective weighting results are combined to determine the weights of the decision metrics. The best matching instances from various providers are then ranked by their comprehensive evaluations. Experimental results show that the proposed methods can effectively improve the accuracy and efficiency of cloud infrastructure service discovery and selection.

  19. Prediction of selected Indian stock using a partitioning–interpolation based ARIMA–GARCH model

    Directory of Open Access Journals (Sweden)

    C. Narendra Babu

    2015-07-01

    Full Text Available Accurate long-term prediction of time-series data (TSD) is a very useful research challenge in diversified fields. As financial TSD are highly volatile, multi-step prediction of financial TSD is a major research problem in TSD mining. The two challenges encountered are maintaining high prediction accuracy and preserving the data trend across the forecast horizon. Linear traditional models such as the autoregressive integrated moving average (ARIMA) and generalized autoregressive conditional heteroscedastic (GARCH) models preserve the data trend to some extent, at the cost of prediction accuracy. Non-linear models like ANNs maintain prediction accuracy by sacrificing the data trend. In this paper, a linear hybrid model, which maintains prediction accuracy while preserving the data trend, is proposed. A quantitative reasoning analysis justifying the accuracy of the proposed model is also presented. A moving-average (MA) filter based pre-processing, partitioning and interpolation (PI) technique is incorporated into the proposed model. Some existing models and the proposed model are applied to selected NSE India stock market data. Performance results show that for multi-step-ahead prediction, the proposed model outperforms the others in terms of both prediction accuracy and preservation of the data trend.
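
    The MA-filter pre-processing stage followed by a linear model fit can be illustrated with statsmodels (an assumption; the partitioning and interpolation stages and exact model orders of the paper are not reproduced):

```python
# Sketch: moving-average pre-filtering followed by an ARIMA fit, echoing
# the MA-filter pre-processing stage. The synthetic price series, model
# order and the use of statsmodels are assumptions; the partitioning and
# interpolation stages of the paper are not reproduced.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
price = pd.Series(100.0 + np.cumsum(rng.normal(0.0, 1.0, 300)))

# MA filter extracts the smoothed trend component before modelling.
trend = price.rolling(window=5, center=True).mean().dropna().to_numpy()

result = ARIMA(trend, order=(1, 1, 1)).fit()
print(result.forecast(steps=5))      # 5-step-ahead prediction
```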

  20. A Quality Function Deployment-Based Model for Cutting Fluid Selection

    Directory of Open Access Journals (Sweden)

    Kanika Prasad

    2016-01-01

    Full Text Available Cutting fluid is applied for numerous reasons while machining a workpiece, like increasing tool life, minimizing workpiece thermal deformation, enhancing surface finish, flushing away chips from cutting surface, and so on. Hence, choosing a proper cutting fluid for a specific machining application becomes important for enhanced efficiency and effectiveness of a manufacturing process. Cutting fluid selection is a complex procedure as the decision depends on many complicated interactions, including work material’s machinability, rigorousness of operation, cutting tool material, metallurgical, chemical, and human compatibility, reliability and stability of fluid, and cost. In this paper, a decision making model is developed based on quality function deployment technique with a view to respond to the complex character of cutting fluid selection problem and facilitate judicious selection of cutting fluid from a comprehensive list of available alternatives. In the first example, HD-CUTSOL is recognized as the most suitable cutting fluid for drilling holes in titanium alloy with tungsten carbide tool and in the second example, for performing honing operation on stainless steel alloy with cubic boron nitride tool, CF5 emerges out as the best honing fluid. Implementation of this model would result in cost reduction through decreased manpower requirement, enhanced workforce efficiency, and efficient information exploitation.

  1. Inference Based on the Best-Fitting Model can Contribute to the Replication Crisis: Assessing Model Selection Uncertainty Using a Bootstrap Approach

    Science.gov (United States)

    Lubke, Gitta H.; Campbell, Ian

    2016-01-01

    Inference and conclusions drawn from model fitting analyses are commonly based on a single “best-fitting” model. If model selection and inference are carried out using the same data, model selection uncertainty is ignored. We illustrate the Type I error inflation that can result from using the same data for model selection and inference, and we then propose a simple bootstrap-based approach to quantify model selection uncertainty in terms of model selection rates. A selection rate can be interpreted as an estimate of the replication probability of a fitted model. The benefits of bootstrapping model selection uncertainty are demonstrated in a growth mixture analysis of data from the National Longitudinal Study of Youth, and a two-group measurement invariance analysis of the Holzinger-Swineford data. PMID:28663687
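
    The proposed bootstrap is straightforward to emulate: refit each candidate model on resampled data and report how often each is selected. The sketch below uses two polynomial candidates and BIC on synthetic data; the paper's growth mixture setting is not reproduced.

```python
# Sketch: bootstrap model-selection rates. Two polynomial candidates are
# refit on resampled data and we count how often each wins on BIC; the
# synthetic data stand in for the paper's growth mixture setting.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 80)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.3, x.size)   # truth: degree 1

def bic(y_obs, y_hat, n_params):
    n = y_obs.size
    rss = np.sum((y_obs - y_hat) ** 2)
    return n * np.log(rss / n) + n_params * np.log(n)

n_boot, wins = 500, {1: 0, 2: 0}
for _ in range(n_boot):
    idx = rng.integers(0, x.size, x.size)          # resample with replacement
    xb, yb = x[idx], y[idx]
    scores = {deg: bic(yb, np.polyval(np.polyfit(xb, yb, deg), xb), deg + 2)
              for deg in (1, 2)}
    wins[min(scores, key=scores.get)] += 1

print({f"degree {d}": w / n_boot for d, w in wins.items()})  # selection rates
```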

  2. Designing Organizational Effectiveness Model of Selected Iraq’s Sporting Federations Based on Competing Values Framework

    Directory of Open Access Journals (Sweden)

    Hossein Eydi

    2013-01-01

    Full Text Available The aim of the present study was to design an effectiveness model for selected Iraqi sport federations based on the competing values framework. The statistical population of the study included 221 subjects, ranging from chairmen and expert staff to national adolescent athletes and national referees. 180 subjects (81.4 percent) answered the standard questionnaire of Eydi et al (2011) with a five-point Likert scale. The content and face validity of this instrument were confirmed by 12 academic professors, and its reliability was validated by Cronbach's alpha (r = 0.97). Results of a Structural Equation Model (SEM) based on the path analysis method showed that the factors of expert human resources (0.88), organizational interaction (0.88), productivity (0.87), employees' cohesion (0.84), planning (0.84), organizational stability (0.81), flexibility (0.78), and organizational resources (0.74) had the greatest effects on organizational effectiveness. Also, the findings of the factor analysis showed that the patterns of internal procedures and rational goals were the main patterns of the competing values framework and determinants of organizational effectiveness in Iraq's selected sport federations. Moreover, the federations of football, track and field, weightlifting, and basketball had the highest mean organizational effectiveness, respectively. Hence, Iraqi sport federations mainly focused on organizational control and internal attention as indexes of OE.

  3. An Appraisal Model Based on a Synthetic Feature Selection Approach for Students’ Academic Achievement

    Directory of Open Access Journals (Sweden)

    Ching-Hsue Cheng

    2017-11-01

    Full Text Available Obtaining necessary information (and even extracting hidden messages) from existing big data, and then transforming it into knowledge, is an important skill. Data mining technology has received increased attention in various fields in recent years because it can be used to find historical patterns and employ machine learning to aid in decision-making. When we find unexpected rules or patterns from the data, they are likely to be of high value. This paper proposes a synthetic feature selection approach (SFSA), which is combined with a support vector machine (SVM) to extract patterns and find the key features that influence students’ academic achievement. To verify the proposed model, two databases, namely “Student Profile” and “Tutorship Record”, were collected from an elementary school in Taiwan and concatenated into an integrated dataset based on students’ names as a research dataset. The results indicate the following: (1) the accuracy of the proposed feature selection approach is better than that of the Minimum-Redundancy-Maximum-Relevance (mRMR) approach; (2) the proposed model is better than the listed methods when the six least influential features have been deleted; and (3) the proposed model can enhance the accuracy and facilitate the interpretation of patterns from a hybrid-type dataset of students’ academic achievement.

  4. Environmental risk assessment of selected organic chemicals based on TOC test and QSAR estimation models.

    Science.gov (United States)

    Chi, Yulang; Zhang, Huanteng; Huang, Qiansheng; Lin, Yi; Ye, Guozhu; Zhu, Huimin; Dong, Sijun

    2018-02-01

    Environmental risks of organic chemicals are largely determined by their persistence, bioaccumulation, and toxicity (PBT) and by their physicochemical properties. Major regulations in different countries and regions identify chemicals according to their bioconcentration factor (BCF) and octanol-water partition coefficient (Kow), which frequently displays a substantial correlation with the sediment sorption coefficient (Koc). Half-life or degradability is crucial for the persistence evaluation of chemicals. Quantitative structure-activity relationship (QSAR) estimation models are indispensable for predicting environmental fate and health effects in the absence of field- or laboratory-based data. In this study, 39 chemicals of high concern were chosen for half-life testing based on total organic carbon (TOC) degradation, and two widely accepted and frequently used QSAR estimation models (i.e., EPI Suite and PBT Profiler) were adopted for environmental risk evaluation. The experimental results and estimated data, as well as the results from the two models, were compared with respect to water solubility, Kow, Koc, BCF and half-life. Environmental risk assessment of the selected compounds was achieved by combining experimental data and estimation models. It was concluded that both EPI Suite and PBT Profiler were fairly accurate in estimating the physicochemical properties and degradation half-lives for water, soil, and sediment. However, the experimental and estimated half-lives were still not fully consistent. This points to deficiencies of the prediction models in some respects, and to the necessity of combining experimental data and predicted results when evaluating the environmental fate and risks of pollutants. Copyright © 2016. Published by Elsevier B.V.

  5. Modeling Multilevel Supplier Selection Problem Based on Weighted-Directed Network and Its Solution

    Directory of Open Access Journals (Sweden)

    Chia-Te Wei

    2017-01-01

    Full Text Available With the rapid development of the economy, supplier networks are becoming more and more complicated. Choosing the right suppliers is important for improving the efficiency of the supply chain, so how to choose them is one of the important research directions of supply chain management. This paper studies the partner selection problem from the perspective of global optimization of the supplier network. Firstly, it develops an evaluation system that rates each supplier on two indicators, risk and greenness, and applies the resulting value as the weight of the network edge between two nodes to build a weighted-directed supplier network; secondly, it establishes an optimal combination model for supplier selection from the global network perspective and solves the model with a dynamic programming-tabu search algorithm and an improved ant colony algorithm, respectively; finally, simulation examples of different scales are given to verify the efficiency of the two algorithms. The results show that the ant colony algorithm is superior to the tabu search algorithm overall, but the latter is slightly better when the network is small.
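
    As a toy illustration of the weighted-directed supplier network, the sketch below uses networkx with hypothetical edge weights (combined risk/greenness scores) and plain shortest-path search in place of the paper's dynamic programming-tabu search and ant colony algorithms, which matter only at realistic scales.

```python
# Choosing a supplier chain on a weighted-directed network by minimum
# total weight; all suppliers and weights are invented for illustration.
import networkx as nx

G = nx.DiGraph()
# Levels: raw-material suppliers -> component suppliers -> assembler.
edges = [
    ("raw_A", "comp_X", 0.4), ("raw_A", "comp_Y", 0.7),
    ("raw_B", "comp_X", 0.6), ("raw_B", "comp_Y", 0.3),
    ("comp_X", "assembler", 0.5), ("comp_Y", "assembler", 0.2),
]
G.add_weighted_edges_from(edges)

# A virtual source ties all first-level suppliers together.
for s in ("raw_A", "raw_B"):
    G.add_edge("source", s, weight=0.0)

path = nx.shortest_path(G, "source", "assembler", weight="weight")
cost = nx.shortest_path_length(G, "source", "assembler", weight="weight")
print("best supplier chain:", path[1:], "total weight:", cost)
```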

  6. GIS based site and structure selection model for groundwater recharge: a hydrogeomorphic approach.

    Science.gov (United States)

    Vijay, Ritesh; Sohony, R A

    2009-10-01

    Groundwater in India is facing a critical situation due to overexploitation, reduction in recharge potential caused by changes in land use and land cover, and improper planning and management. A groundwater development plan needs a large volume of multidisciplinary data from various sources. A geographic information system (GIS) based hydrogeomorphic approach can provide an appropriate platform for spatial analysis of diverse data sets for decision making in groundwater recharge. This paper presents the development of a GIS-based model that improves the accuracy of identification and suitability analysis for delineating zones and locating suitable sites, with suggested structures, for artificial recharge. Satellite images were used to prepare the geomorphological and land use maps. For site selection, layers such as slope, surface infiltration, and drainage order were generated and integrated in GIS using Weighted Index Overlay Analysis and Boolean logic. Similarly, for the identification of suitable structures, a complex matrix was programmed based on local climatic, topographic, hydrogeologic and land use conditions, as per the artificial recharge manual of the Central Ground Water Board, India. The GIS-based algorithm is implemented in a user-friendly way using Arc Macro Language on the Arc/Info platform.
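
    A minimal sketch of Weighted Index Overlay Analysis with a Boolean constraint, using hypothetical raster ranks and weights rather than the CGWB manual's actual criteria:

```python
# Weighted Index Overlay on tiny toy rasters; ranks and weights invented.
import numpy as np

# Reclassified suitability ranks (1 = poor ... 4 = excellent) per cell.
slope        = np.array([[4, 3], [2, 1]])   # gentler slope -> higher rank
infiltration = np.array([[3, 4], [2, 2]])
drainage     = np.array([[2, 3], [4, 1]])

weights = {"slope": 0.40, "infiltration": 0.35, "drainage": 0.25}

suitability = (weights["slope"] * slope
               + weights["infiltration"] * infiltration
               + weights["drainage"] * drainage)

# Boolean logic: exclude cells failing a hard constraint (e.g., built-up).
built_up = np.array([[False, False], [True, False]])
suitability[built_up] = 0

print(suitability)   # higher score = more suitable recharge site
```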

  7. An Empirical Study of Wrappers for Feature Subset Selection based on a Parallel Genetic Algorithm: The Multi-Wrapper Model

    KAUST Repository

    Soufan, Othman

    2012-09-01

    Feature selection is the first task of any learning approach applied in the major fields of biomedicine, bioinformatics, robotics, natural language processing and social networking. In the feature subset selection problem, a search methodology with a proper criterion seeks the best subset of features describing the data (relevance) and achieving better performance (optimality). Wrapper approaches are feature selection methods that are wrapped around a classification algorithm and use a performance measure to select the best subset of features. We analyze the proper design of the objective function for the wrapper approach and highlight an objective based on several classification algorithms. We compare the wrapper approaches to different feature selection methods based on distance and information-based criteria. Significant improvements in performance, computational time, and the selection of minimally sized feature subsets are achieved by combining different objectives for the wrapper model. In addition, considering various classification methods in the feature selection process can lead to a global solution with desirable characteristics.

  8. Optimizing radiology peer review: a mathematical model for selecting future cases based on prior errors.

    Science.gov (United States)

    Sheu, Yun Robert; Feder, Elie; Balsim, Igor; Levin, Victor F; Bleicher, Andrew G; Branstetter, Barton F

    2010-06-01

    Peer review is an essential process for physicians because it facilitates improved quality of patient care and continuing physician learning and improvement. However, peer review often is not well received by radiologists, who note that it is time intensive, is subjective, and lacks a demonstrable impact on patient care. Current advances in peer review include the RADPEER system, with its standardization of discrepancies and incorporation of the peer-review process into the PACS itself. The purpose of this study was to build on RADPEER and similar systems by using a mathematical model to optimally select the types of cases to be reviewed, for each radiologist undergoing review, on the basis of the past frequency of interpretive error, the likelihood of morbidity from an error, the financial cost of an error, and the time required for the reviewing radiologist to interpret the study. The investigators compiled 612,890 preliminary radiology reports authored by residents and attending radiologists at a large tertiary care medical center from 1999 to 2004. Discrepancies between preliminary and final interpretations were classified by severity and validated by repeat review of major discrepancies. A mathematical model was then used to calculate, for each author of a preliminary report, the combined morbidity and financial costs of expected errors across 3 modalities (MRI, CT, and conventional radiography) and 4 departmental divisions (neuroradiology, abdominal imaging, musculoskeletal imaging, and thoracic imaging). A customized report was generated for each on-call radiologist that determined the category (modality and body part) with the highest total cost function. A universal total cost based on probability data from all radiologists was also compiled. The use of mathematical models to guide case selection could optimize the efficiency and effectiveness of physician time spent on peer review and produce more concrete and meaningful feedback to radiologists
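
    The case-selection logic described above can be illustrated with a small sketch; the categories and all numbers are hypothetical, and the cost function used here (error frequency times combined morbidity and financial cost, normalized by review time) is only one plausible reading of the model:

```python
# Pick the (modality, division) category whose expected error cost per
# minute of reviewer time is highest; all figures are invented.
categories = {
    # (error_rate, morbidity_cost, financial_cost, review_minutes)
    ("CT", "neuroradiology"):   (0.020, 8.0, 5.0, 10),
    ("MRI", "musculoskeletal"): (0.015, 6.0, 4.0, 15),
    ("XR", "thoracic"):         (0.030, 3.0, 2.0, 4),
}

def expected_cost_per_minute(stats):
    error_rate, morbidity, financial, minutes = stats
    # Expected cost of an unreviewed error, normalized by review time.
    return error_rate * (morbidity + financial) / minutes

best = max(categories, key=lambda c: expected_cost_per_minute(categories[c]))
print("review cases from:", best)
```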

  9. Accounting for selection bias in species distribution models: An econometric approach on forested trees based on structural modeling

    Science.gov (United States)

    Ay, Jean-Sauveur; Guillemot, Joannès; Martin-StPaul, Nicolas K.; Doyen, Luc; Leadley, Paul

    2015-04-01

    Species distribution models (SDMs) are widely used to study and predict the outcome of global change on species. In human-dominated ecosystems, the presence of a given species is the result of both its ecological suitability and the human footprint on nature, such as land use choices. Land use choices may thus be responsible for a selection bias in the presence/absence data used in SDM calibration. We present a structural modelling approach (i.e., based on structural equation modelling) that accounts for this selection bias. The new structural species distribution model (SSDM) estimates land use choices and species responses to bioclimatic variables simultaneously. A land use equation based on an econometric model of landowner choices was joined to an equation of species response to bioclimatic variables. The SSDM allows the residuals of both equations to be dependent, taking into account the possibility of shared omitted variables and measurement errors. We provide a general description of the statistical theory and a set of applications on forest trees over France, using databases of climate and forest inventory at different spatial resolutions (from 2 km to 8 km). We also compared the output of the SSDM with the outputs of a classical SDM in terms of bioclimatic response curves and potential distribution under the current climate. Depending on the species and the spatial resolution of the calibration dataset, the shapes of the bioclimatic response curves and the modelled species distribution maps differed markedly between the SSDM and classical SDMs. The magnitude and direction of these differences depended on the correlations between the errors from both equations and were highest at higher spatial resolutions. A first conclusion is that the use of classical SDMs can potentially lead to strong mis-estimation of the actual and future probability of presence. Beyond this selection bias, the SSDM we propose represents a crucial step toward accounting for economic constraints on tree

  10. Bayesian model selection: Evidence estimation based on DREAM simulation and bridge sampling

    Science.gov (United States)

    Volpi, Elena; Schoups, Gerrit; Firmani, Giovanni; Vrugt, Jasper A.

    2017-04-01

    Bayesian inference has found widespread application in Earth and Environmental Systems Modeling, providing an effective tool for prediction, data assimilation, parameter estimation, uncertainty analysis and hypothesis testing. Under multiple competing hypotheses, the Bayesian approach also provides an attractive alternative to traditional information criteria (e.g., AIC, BIC) for model selection. The key variable for Bayesian model selection is the evidence (or marginal likelihood), the normalizing constant in the denominator of Bayes' theorem; while it is fundamental for model selection, the evidence is not required for Bayesian inference. It is computed for each hypothesis (model) by averaging the likelihood function over the prior parameter distribution, rather than maximizing it as information criteria do; the larger a model's evidence, the more support it receives among a collection of hypotheses, as the simulated values assign relatively high probability density to the observed data. Hence, the evidence naturally acts as an Occam's razor, preferring simpler and more constrained models over the over-fitted ones selected by information criteria that incorporate only the likelihood maximum. Since the evidence is not particularly easy to estimate in practice, Bayesian model selection via the marginal likelihood has not yet found mainstream use. We illustrate here the properties of a new estimator of the Bayesian model evidence, which provides robust and unbiased estimates of the marginal likelihood; the method is coined Gaussian Mixture Importance Sampling (GMIS). GMIS uses multidimensional numerical integration of the posterior parameter distribution via bridge sampling (a generalization of importance sampling) of a mixture distribution fitted to samples of the posterior distribution derived from the DREAM algorithm (Vrugt et al., 2008; 2009). Some illustrative examples are presented to show the robustness and superiority of the GMIS estimator with
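
    A minimal sketch of the underlying idea, evidence estimation by importance sampling with a Gaussian-mixture proposal fitted to posterior samples; a conjugate normal model stands in for DREAM output here, so this illustrates the mechanics rather than the GMIS implementation:

```python
# Marginal likelihood via importance sampling from a GMM proposal fitted
# to posterior samples (assumed toy model: normal data, normal prior).
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
y = rng.normal(0.5, 1.0, 50)                     # observed data

def log_prior(theta):                            # theta ~ N(0, 10)
    return stats.norm.logpdf(theta, 0, 10)

def log_like(theta):
    return stats.norm.logpdf(y[:, None], theta, 1.0).sum(axis=0)

# Stand-in for DREAM: exact posterior samples of the conjugate model.
post_var = 1.0 / (1.0 / 10**2 + len(y) / 1.0)
post_mean = post_var * y.sum()
post = rng.normal(post_mean, np.sqrt(post_var), 2000)

gmm = GaussianMixture(n_components=2).fit(post.reshape(-1, 1))
prop = gmm.sample(5000)[0].ravel()               # proposal draws
log_q = gmm.score_samples(prop.reshape(-1, 1))

log_w = log_like(prop) + log_prior(prop) - log_q
log_Z = log_w.max() + np.log(np.exp(log_w - log_w.max()).mean())
print("estimated log marginal likelihood:", log_Z)
```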

  11. Bioinformatics-based selection of a model cell type for in vitro biomaterial testing.

    Science.gov (United States)

    Groen, Nathalie; van de Peppel, Jeroen; Yuan, Huipin; van Leeuwen, Johannes P T M; van Blitterswijk, Clemens A; de Boer, Jan

    2013-07-01

    Biomaterial properties can be tailored for specific applications via systematic and high-throughput screening of biomaterial-cell interactions. However, progress in material development is often hampered by the lack of adequate in vitro testing methods, frequently due to incomplete understanding of the in vivo mechanisms involved. In line with drug discovery in pharmacology, a crucial step in assay development for biomaterial screening is the identification of a target to direct the screen against. Here, the cell type used for screening is of essential importance and has to be carefully chosen. So far, little attention has been paid to selecting a cell type specifically suited to in vitro testing of materials for predefined applications. In this manuscript, we describe an approach to define a suitable cell type for screening assays, with biomaterials for bone regeneration serving as the example. Using a bioinformatics methodology, different cell lines are compared on three well-characterized model materials. The transcriptional profiles of MG63, iMSC, SV-HFO, hPPCT, hBPCT and SW480 cells were assessed on 3 calcium phosphate-based materials to evaluate their potential application in a screening assay. We show that MG63 is the most suitable cell line for evaluating biomaterials for bone regeneration applications, evidenced by its robust gene expression differences between the 3 model materials. The gene expression differences between the cell lines were assessed based on the overall gene expression profiles and on specific subsets of genes and pathways related to osteogenesis and bone homeostasis in response to the 3 materials tested. In the next phase, this cell line will be used to identify a target correlating with in vivo biomaterial performance and henceforth to develop an in vitro screening system. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    Science.gov (United States)

    Xiang, Lin

    This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th-grade students' model-based inquiry (MBI), by examining the students' agent-based programmable modeling (ABPM) processes and learning outcomes. The context of the study was a biology unit on natural selection implemented in a charter school of a major California city during the spring semester of 2009. Eight 8th-grade students, two boys and six girls, participated. All of them were of low socioeconomic status (SES). English was a second language for all of them, but they had been identified as fluent English speakers at least a year before the study. None of them had learned either natural selection or programming before the study. The study spanned 7 weeks and comprised two phases. In phase one, the students learned natural selection in the science classroom and learned programming in NetLogo, an ABPM tool, in a computer lab; in phase two, they were asked to program a simulation of adaptation based on the natural selection model in NetLogo. Both qualitative and quantitative data were collected. The data sources included (1) pre- and post-test questionnaires, (2) student in-class worksheets, (3) programming planning sheets, (4) code-conception matching sheets, (5) student NetLogo projects, (6) videotaped programming processes, (7) final interviews, and (8) the investigator's field notes. Both qualitative and quantitative approaches were applied to analyze the gathered data. The findings suggested that students made progress in understanding adaptation phenomena and natural selection at the end of the ABPM-supported MBI learning, but the progress was limited. These students still held some misconceptions in their conceptual models, such as the idea that animals need to "learn" to adapt to the environment. Besides, their models of natural selection appeared to be

  13. IRaPPA: information retrieval based integration of biophysical models for protein assembly selection.

    Science.gov (United States)

    Moal, Iain H; Barradas-Bautista, Didier; Jiménez-García, Brian; Torchala, Mieczyslaw; van der Velde, Arjan; Vreven, Thom; Weng, Zhiping; Bates, Paul A; Fernández-Recio, Juan

    2017-06-15

    In order to function, proteins frequently bind to one another and form 3D assemblies. Knowledge of the atomic details of these structures helps our understanding of how proteins work together and how mutations can lead to disease, and facilitates the design of drugs which prevent or mimic the interaction. Atomic modeling of protein-protein interactions requires the selection of near-native structures from a set of docked poses based on their calculable properties. By considering this as an information retrieval problem, we have adapted methods developed for Internet search ranking and electoral voting into IRaPPA, a pipeline integrating biophysical properties. The approach enhances the identification of near-native structures when applied to four docking methods, resulting in a near-native appearing in the top 10 solutions for up to 50% of complexes benchmarked, and up to 70% in the top 100. IRaPPA has been implemented in the SwarmDock server (http://bmm.crick.ac.uk/~SwarmDock/), pyDock server (http://life.bsc.es/pid/pydockrescoring/) and ZDOCK server (http://zdock.umassmed.edu/), with code available on request. Contact: moal@ebi.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com

  14. Dopamine selectively remediates 'model-based' reward learning: a computational approach.

    Science.gov (United States)

    Sharp, Madeleine E; Foerde, Karin; Daw, Nathaniel D; Shohamy, Daphna

    2016-02-01

    Patients with loss of dopamine due to Parkinson's disease are impaired at learning from reward. However, it remains unknown precisely which aspect of learning is impaired. In particular, learning from reward, or reinforcement learning, can be driven by two distinct computational processes. One involves habitual stamping-in of stimulus-response associations, hypothesized to arise computationally from 'model-free' learning. The other, 'model-based' learning, involves learning a model of the world that is believed to support goal-directed behaviour. Much work has pointed to a role for dopamine in model-free learning. But recent work suggests model-based learning may also involve dopamine modulation, raising the possibility that model-based learning may contribute to the learning impairment in Parkinson's disease. To directly test this, we used a two-step reward-learning task which dissociates model-free versus model-based learning. We evaluated learning in patients with Parkinson's disease tested ON versus OFF their dopamine replacement medication and in healthy controls. Surprisingly, we found no effect of disease or medication on model-free learning. Instead, we found that patients tested OFF medication showed a marked impairment in model-based learning, and that this impairment was remediated by dopaminergic medication. Moreover, model-based learning was positively correlated with a separate measure of working memory performance, raising the possibility of common neural substrates. Our results suggest that some learning deficits in Parkinson's disease may be related to an inability to pursue reward based on complete representations of the environment. © The Author (2015). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  15. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure-activity relationship (QSAR) method is an effective tool for studying the placental transfer of drugs, while in vitro human placental perfusion is the most widely used experimental method. In this study, a partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors for 65 compounds and simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization, and to external validation by predicting the CI values of 19 compounds. It was shown that the model developed is robust and has good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable-importance-in-projection values of the descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for high-throughput screening of the placental barrier permeability of drugs.
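
    A minimal sketch of PLS-based descriptor selection via variable-importance-in-projection (VIP) scores, on invented data with the common VIP > 1 retention rule (the paper's exact selection procedure may differ):

```python
# PLS variable selection via VIP scores on synthetic descriptor data.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(2)
X = rng.normal(size=(65, 20))                      # 65 compounds, 20 descriptors
y = X[:, 0] - 2 * X[:, 3] + rng.normal(0, 0.5, 65) # CI driven by 2 descriptors

pls = PLSRegression(n_components=3).fit(X, y)

def vip_scores(pls):
    t = pls.x_scores_           # (n, h) latent scores
    w = pls.x_weights_          # (p, h) descriptor weights
    q = pls.y_loadings_         # (1, h) response loadings
    p, h = w.shape
    ssy = (q ** 2).ravel() * (t ** 2).sum(axis=0)  # y-variance per component
    wnorm = (w / np.linalg.norm(w, axis=0)) ** 2
    return np.sqrt(p * (wnorm * ssy).sum(axis=1) / ssy.sum())

vip = vip_scores(pls)
print("descriptors retained by VIP > 1:", np.where(vip > 1.0)[0])
```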

  16. Hybrid Model Based on Genetic Algorithms and SVM Applied to Variable Selection within Fruit Juice Classification

    Science.gov (United States)

    Fernandez-Lozano, C.; Canto, C.; Gestal, M.; Andrade-Garda, J. M.; Rabuñal, J. R.; Dorado, J.; Pazos, A.

    2013-01-01

    Given the background of the use of neural networks in apple juice classification problems, this paper aims at implementing a newly developed method in the field of machine learning: the Support Vector Machine (SVM). A hybrid model that combines genetic algorithms and support vector machines is therefore suggested in such a way that, when using SVM as the fitness function of the Genetic Algorithm (GA), the most representative variables for a specific classification problem can be selected. PMID:24453933
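
    A minimal sketch of the hybrid idea, a tiny genetic algorithm whose fitness is the cross-validated accuracy of an SVM on the selected variables; the data, population size and GA operators are all invented:

```python
# GA wrapper for variable selection with SVM accuracy as fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=150, n_features=15, n_informative=4,
                           random_state=0)

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

pop = rng.random((12, X.shape[1])) < 0.5          # random bitmask population
for gen in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:6]]   # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, 6, 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])    # one-point crossover
        flip = rng.random(X.shape[1]) < 0.05          # bit-flip mutation
        children.append(child ^ flip)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.where(best)[0])
```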

  17. Optimization of multi-environment trials for genomic selection based on crop models.

    Science.gov (United States)

    Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J

    2017-08-01

    We propose a statistical criterion for optimizing multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy across environments is to combine ecophysiological and genetic modelling via crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method for optimizing the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and evaluated on simulated and real data, with wheat phenology as the example. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracy. A MET defined with OptiMET was, on average, more efficient than a random MET composed of twice as many environments, in terms of the quality of the parameter estimates. OptiMET is thus a valuable tool for determining optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.

  18. Variable selection for confounder control, flexible modeling and Collaborative Targeted Minimum Loss-based Estimation in causal inference

    Science.gov (United States)

    Schnitzer, Mireille E.; Lok, Judith J.; Gruber, Susan

    2015-01-01

    This paper investigates the appropriateness of integrating flexible propensity score modeling (nonparametric or machine learning approaches) into semiparametric models for the estimation of a causal quantity, such as the mean outcome under treatment. We begin with an overview of some of the issues involved in knowledge-based and statistical variable selection in causal inference and the potential pitfalls of automated selection based on the fit of the propensity score. Using a simple example, we directly show the consequences of adjusting for pure causes of the exposure when using inverse probability of treatment weighting (IPTW); such variables are likely to be selected when using a naive approach to model selection for the propensity score. We describe how the method of Collaborative Targeted minimum loss-based estimation (C-TMLE; van der Laan and Gruber, 2010) capitalizes on the collaborative double robustness property of semiparametric efficient estimators to select covariates for the propensity score based on the error in the conditional outcome model. Finally, we compare several approaches to automated variable selection in low- and high-dimensional settings through a simulation study. From this simulation study, we conclude that using IPTW with flexible prediction for the propensity score can result in inferior estimation, while Targeted minimum loss-based estimation and C-TMLE may benefit from flexible prediction and remain robust to the presence of variables that are highly correlated with treatment. However, in our study, standard influence-function-based methods for the variance underestimated the standard errors, resulting in poor coverage under certain data-generating scenarios. PMID:26226129
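
    The "pure cause of exposure" pitfall mentioned above can be reproduced in a few lines; the simulation below is an invented example, and the point (no added bias, but inflated variance when the instrument-like variable enters the propensity score) shows up when the last two lines are repeated over many simulated datasets:

```python
# IPTW with and without an instrument-like covariate in the propensity score.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 2000
z = rng.normal(size=n)                       # pure cause of exposure only
c = rng.normal(size=n)                       # true confounder
p = 1 / (1 + np.exp(-(1.5 * z + c)))
a = rng.binomial(1, p)                       # treatment assignment
y = 1.0 * a + c + rng.normal(size=n)         # outcome, true effect = 1

def iptw(features):
    ps = LogisticRegression().fit(features, a).predict_proba(features)[:, 1]
    w = a / ps + (1 - a) / (1 - ps)          # inverse probability weights
    return (np.average(y, weights=a * w)
            - np.average(y, weights=(1 - a) * w))

print("PS on confounder only     :", iptw(c.reshape(-1, 1)))
print("PS on confounder + instr. :", iptw(np.column_stack([c, z])))
```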

  19. A review of selected topics in physics based modeling for tunnel field-effect transistors

    Science.gov (United States)

    Esseni, David; Pala, Marco; Palestri, Pierpaolo; Alper, Cem; Rollo, Tommaso

    2017-08-01

    The research field on tunnel-FETs (TFETs) has been developing rapidly over the last ten years, driven by the quest for a new electronic switch operating at a supply voltage well below 1 V and thus delivering substantial improvements in the energy efficiency of integrated circuits. This paper reviews several aspects related to physics-based modeling of TFETs, and shows how the description of these transistors implies remarkable innovation and poses new challenges compared to conventional MOSFETs. A hierarchy of numerical models exists for TFETs, covering a wide range of predictive capabilities and computational complexities. We start by reviewing seminal contributions on direct and indirect band-to-band tunneling (BTBT) modeling in semiconductors, from which most TCAD models have actually been derived. We then move to the features and limitations of TCAD models themselves and to the discussion of what we define as non-self-consistent quantum models, where BTBT is computed with rigorous quantum-mechanical models starting from frozen potential profiles and closed-boundary Schrödinger equation problems. We then address models that solve the open-boundary Schrödinger equation problem, based either on the non-equilibrium Green's function (NEGF) formalism or on the quantum-transmitting-boundary formalism, and show how the computational burden of these models may vary over a wide range depending on the Hamiltonian employed in the calculations. A specific section is devoted to TFETs based on 2D crystals and van der Waals hetero-structures. The main goal of this paper is to provide the reader with an introduction to the most important physics-based models for TFETs, and with a possible guide to the wide and rapidly developing literature in this exciting research field.

  20. TU-CD-BRA-05: Atlas Selection for Multi-Atlas-Based Image Segmentation Using Surrogate Modeling

    International Nuclear Information System (INIS)

    Zhao, T; Ruan, D

    2015-01-01

    Purpose: The growing size and heterogeneity of training atlases necessitate sophisticated schemes to identify only the most relevant atlases for a specific multi-atlas-based image segmentation problem. This study aims to develop a model to infer the inaccessible oracle geometric relevance metric from surrogate image similarity metrics and, based on such a model, to provide guidance for atlas selection in multi-atlas-based image segmentation. Methods: We relate the oracle geometric relevance metric in label space to the surrogate metric in image space by a monotonically non-decreasing function with additive random perturbations. Subsequently, a surrogate's ability to prognosticate the oracle order for atlas subset selection is quantified probabilistically. Finally, important insights and guidance are provided for the design of the fusion set size, balancing the competing demands to include the most relevant atlases and to exclude the most irrelevant ones. A systematic solution is derived based on an optimization framework. Model verification and performance assessment are performed on clinical prostate MR images. Results: The proposed surrogate model was exemplified by a linear map with normally distributed perturbations and verified with several commonly used surrogates, including MSD, NCC and (N)MI. The derived behaviors of the different surrogates in atlas selection and their corresponding performance in the ultimate label estimate were validated. The performance of NCC and (N)MI was similarly superior to MSD, with a 10% higher atlas selection probability and a segmentation performance increase in DSC of 0.10, with first and third quartiles of (0.83, 0.89) compared to (0.81, 0.89). The derived optimal fusion set sizes, valued at 7/8/8/7 for MSD/NCC/MI/NMI, agreed well with the appropriate range [4, 9] from empirical observation. Conclusion: This work has developed an efficacious probabilistic model to characterize the image-based surrogate metric on atlas selection

  1. A Heckman Selection- t Model

    KAUST Repository

    Marchenko, Yulia V.

    2012-03-01

    Sample selection often arises in practice as a result of the partial observability of the outcome of interest in a study. In the presence of sample selection, the observed data do not represent a random sample from the population, even after controlling for explanatory variables; that is, data are missing not at random. Thus, standard analysis using only complete cases will lead to biased results. Heckman introduced a sample selection model to analyze such data and proposed a full maximum likelihood estimation method under the assumption of normality. The method was criticized in the literature because of its sensitivity to the normality assumption. In practice, data such as income or expenditure data often violate the normality assumption because of heavier tails. We first establish a new link between sample selection models and recently studied families of extended skew-elliptical distributions. This allows us to introduce a selection-t (SLt) model, which models the error distribution using a Student's t distribution. We study its properties and investigate the finite-sample performance of the maximum likelihood estimators for this model. We compare the performance of the SLt model to the conventional Heckman selection-normal (SLN) model and apply it to analyze ambulatory expenditures. Unlike the SLN model, our analysis using the SLt model provides statistical evidence for the existence of sample selection bias in these data. We also investigate the performance of the test for sample selection bias based on the SLt model and compare it with the performance of several tests used with the SLN model. Our findings indicate that the latter tests can be misleading in the presence of heavy-tailed data. © 2012 American Statistical Association.
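
    For orientation, here is a minimal sketch of the classical Heckman two-step estimator on simulated data (probit selection plus inverse-Mills-ratio correction); the paper's SLt model instead fits a Student's t error distribution by full maximum likelihood:

```python
# Heckman two-step on simulated selected data.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

rng = np.random.default_rng(5)
n = 3000
x = rng.normal(size=n)                        # outcome covariate
w = rng.normal(size=n)                        # selection-only covariate
e, u = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T
selected = (0.5 + w + x + u) > 0              # selection equation
y = 1.0 + 2.0 * x + e                         # outcome, observed if selected

# Step 1: probit for selection, then the inverse Mills ratio.
probit = sm.Probit(selected.astype(int),
                   sm.add_constant(np.column_stack([w, x]))).fit(disp=0)
xb = probit.fittedvalues                      # linear predictor
imr = norm.pdf(xb) / norm.cdf(xb)

# Step 2: OLS on the selected sample with the IMR as extra regressor.
Xs = sm.add_constant(np.column_stack([x[selected], imr[selected]]))
ols = sm.OLS(y[selected], Xs).fit()
print(ols.params)   # slope on x should recover ~2 despite selection
```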

  2. Models selection and fitting

    International Nuclear Information System (INIS)

    Martin Llorente, F.

    1990-01-01

    Atmospheric pollutant dispersion models are based on mathematical algorithms that describe the transport, diffusion, removal and chemical reactions of atmospheric contaminants. These models operate on contaminant emission data and produce an estimate of air quality in the area. Such models can be applied to several aspects of atmospheric contamination.
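
    As a concrete example of such an algorithm, here is a minimal sketch of a Gaussian plume calculation, the classic dispersion formula; the dispersion-coefficient fits are rough stability-class-D values and all inputs are invented:

```python
# Gaussian plume with ground reflection; illustrative numbers only.
import numpy as np

def plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Concentration (g/m^3) at crosswind y and height z.
    Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m).
    """
    lateral = np.exp(-y**2 / (2 * sigma_y**2))
    vertical = (np.exp(-(z - H)**2 / (2 * sigma_z**2))
                + np.exp(-(z + H)**2 / (2 * sigma_z**2)))  # ground reflection
    return Q / (2 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# Dispersion coefficients grow with downwind distance x (rough class-D fit).
x = 1000.0   # metres downwind
sigma_y = 0.08 * x / np.sqrt(1 + 0.0001 * x)
sigma_z = 0.06 * x / np.sqrt(1 + 0.0015 * x)
print(plume_concentration(Q=10.0, u=4.0, y=0.0, z=0.0, H=50.0,
                          sigma_y=sigma_y, sigma_z=sigma_z))
```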

  3. Process-based models of feeding and prey selection in larval fish

    DEFF Research Database (Denmark)

    Fiksen, O.; MacKenzie, Brian

    2002-01-01

    believed to be important to prey selectivity and environmental regulation of feeding in fish. We include the sensitivity of prey to the hydrodynamic signal generated by approaching larval fish and a simple model of the potential loss of prey due to turbulence, whereby prey is lost if it leaves... µg dry wt l(-1). The spatio-temporal fluctuation of turbulence (tidal cycle) and light (sun height) over the bank generates complex structure in the patterns of food intake of larval fish, with different patterns emerging for small and large larvae.

  4. APPLICATION OF RANKING BASED ATTRIBUTE SELECTION FILTERS TO PERFORM AUTOMATED EVALUATION OF DESCRIPTIVE ANSWERS THROUGH SEQUENTIAL MINIMAL OPTIMIZATION MODELS

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-10-01

    Full Text Available In this paper, we study the performance of various models for the automated evaluation of descriptive answers, using rank-based feature selection filters for dimensionality reduction. We quantitatively identify the best of five rank-based feature selection techniques, namely the Chi-squared filter, Information gain filter, Gain ratio filter, Relief filter and Symmetrical uncertainty filter. We use Sequential Minimal Optimization with a polynomial kernel to build models, and we evaluate the models on parameters such as accuracy, time to build the model, Kappa, Mean Absolute Error and Root Mean Squared Error. Except with the Relief filter, the accuracies obtained with filtered models are at least 4% better than those obtained with no filter applied. The recorded accuracies are the same across the Chi-squared, Information gain, Gain ratio and Symmetrical uncertainty filters, so accuracy alone is not the determinant in selecting the best filter. The time taken to build the models, Kappa, Mean Absolute Error and Root Mean Squared Error played a major role in determining the effectiveness of the filters. The overall rank aggregation metric of the Symmetrical uncertainty filter is 45, better by 1 rank than that of the Information gain filter, its nearest contender. The Symmetrical uncertainty rank aggregation metric is better by 3, 6 and 112 ranks, respectively, than those of the Chi-squared, Gain ratio and Relief filters. Through these quantitative measurements, we conclude that Symmetrical uncertainty attribute evaluation is the overall best-performing rank-based feature selection algorithm for the automated evaluation of descriptive answers.
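
    A minimal sketch of the filter-then-classify pipeline, with scikit-learn stand-ins: chi-squared and mutual information replace the Weka filters, and an SVC with a polynomial kernel replaces SMO; the data are synthetic:

```python
# Rank-based filters feeding a polynomial-kernel SVM.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, n_informative=6,
                           random_state=0)

# chi2 requires non-negative inputs, hence the MinMax scaling step.
filters = {"chi-squared": chi2, "mutual info (~info gain)": mutual_info_classif}
for name, score_fn in filters.items():
    pipe = make_pipeline(MinMaxScaler(),
                         SelectKBest(score_fn, k=10),
                         SVC(kernel="poly", degree=2))
    acc = cross_val_score(pipe, X, y, cv=5).mean()
    print(f"{name:25s} accuracy: {acc:.3f}")
```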

  5. A Codon-Based Model of Host-Specific Selection in Parasites, with an Application to the Influenza A Virus

    DEFF Research Database (Denmark)

    Forsberg, Ronald; Christiansen, Freddy Bugge

    2003-01-01

    Parasites sometimes expand their host range by acquiring a new host species. Following a host change event, the selective regime acting on a given parasite gene may change due to host-specific adaptive alterations of protein functionality or host-specific immune-mediated selection. We present... a codon-based model that attempts to include these effects by allowing the position-specific substitution process to change in conjunction with a host change event. Following maximum-likelihood parameter estimation, we employ an empirical Bayesian procedure to identify candidate sites, potentially... involved in host-specific adaptation. We discuss the applicability of the model to the more general problem of ascertaining whether the selective regime differs between two groups of related organisms. The utility of the model is illustrated on a dataset of nucleoprotein sequences from the influenza A virus...

  6. A quantitative quasispecies theory-based model of virus escape mutation under immune selection.

    Science.gov (United States)

    Woo, Hyung-June; Reifman, Jaques

    2012-08-07

    Viral infections involve a complex interplay of the immune response and escape mutation of the virus quasispecies inside a single host. Although fundamental aspects of such a balance of mutation and selection pressure have been established by the quasispecies theory decades ago, its implications have largely remained qualitative. Here, we present a quantitative approach to model the virus evolution under cytotoxic T-lymphocyte immune response. The virus quasispecies dynamics are explicitly represented by mutations in the combined sequence space of a set of epitopes within the viral genome. We stochastically simulated the growth of a viral population originating from a single wild-type founder virus and its recognition and clearance by the immune response, as well as the expansion of its genetic diversity. Applied to the immune escape of a simian immunodeficiency virus epitope, model predictions were quantitatively comparable to the experimental data. Within the model parameter space, we found two qualitatively different regimes of infectious disease pathogenesis, each representing alternative fates of the immune response: It can clear the infection in finite time or eventually be overwhelmed by viral growth and escape mutation. The latter regime exhibits the characteristic disease progression pattern of human immunodeficiency virus, while the former is bounded by maximum mutation rates that can be suppressed by the immune response. Our results demonstrate that, by explicitly representing epitope mutations and thus providing a genotype-phenotype map, the quasispecies theory can form the basis of a detailed sequence-specific model of real-world viral pathogens evolving under immune selection.

  7. A Modified Feature Selection and Artificial Neural Network-Based Day-Ahead Load Forecasting Model for a Smart Grid

    Directory of Open Access Journals (Sweden)

    Ashfaq Ahmad

    2015-12-01

    Full Text Available In the operation of a smart grid (SG), day-ahead load forecasting (DLF) is an important task. An SG can enhance the management of its conventional and renewable resources with a more accurate DLF model. However, DLF model development is highly challenging due to the non-linear characteristics of load time series in SGs. DLF models do exist in the literature; however, these models trade off execution time against forecast accuracy. The newly proposed DLF model is able to accurately predict the load of the next day within a fair execution time. Our proposed model consists of three modules: a data preparation module, a feature selection module and a forecast module. The first module makes the historical load curve compatible with the feature selection module. The second module removes redundant and irrelevant features from the input data. The third module, which consists of an artificial neural network (ANN), predicts future load on the basis of the selected features. Moreover, the forecast module uses a sigmoid function for activation and a multivariate auto-regressive model for weight updating during training. Simulations are conducted in MATLAB to validate the performance of the newly proposed DLF model in terms of accuracy and execution time. Results show that our proposed modified feature selection and modified ANN (m(FS + ANN))-based model for SGs is able to capture the non-linearities in the historical load curve with 97.11% accuracy. Moreover, this accuracy is achieved at the cost of a fair execution time, i.e., we have decreased the average execution time of the existing FS + ANN-based model by 38.50%.

  8. Analysis of habitat-selection rules using an individual-based model

    Science.gov (United States)

    Steven F. Railsback; Bret C. Harvey

    2002-01-01

    Abstract - Despite their promise for simulating natural complexity, individual-based models (IBMs) are rarely used for ecological research or resource management. Few IBMs have been shown to reproduce realistic patterns of behavior by individual organisms. To test our IBM of stream salmonids and draw conclusions about foraging theory, we analyzed the IBM's ability to...

  9. Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models

    Science.gov (United States)

    Dickes, Amanda Catherine; Sengupta, Pratim

    2013-01-01

    In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these…

  10. A data mining based model for selecting type of treatment for kidney stone patients

    Directory of Open Access Journals (Sweden)

    Sepehri MM

    2009-09-01

    Full Text Available "n Normal 0 false false false EN-US X-NONE AR-SA MicrosoftInternetExplorer4 /* Style Definitions */ table.MsoNormalTable {mso-style-name:"Table Normal"; mso-tstyle-rowband-size:0; mso-tstyle-colband-size:0; mso-style-noshow:yes; mso-style-priority:99; mso-style-qformat:yes; mso-style-parent:""; mso-padding-alt:0in 5.4pt 0in 5.4pt; mso-para-margin:0in; mso-para-margin-bottom:.0001pt; mso-pagination:widow-orphan; font-size:11.0pt; font-family:"Calibri","sans-serif"; mso-ascii-font-family:Calibri; mso-ascii-theme-font:minor-latin; mso-fareast-font-family:"Times New Roman"; mso-fareast-theme-font:minor-fareast; mso-hansi-font-family:Calibri; mso-hansi-theme-font:minor-latin; mso-bidi-font-family:Arial; mso-bidi-theme-font:minor-bidi;} Background: Data mining as a multidisciplinary field is rooted in the fields such as statistics, mathematics, computer science and artificial intelligence and has been gaining momentum in scientific, managerial, and executive applications in health care. Data mining can be defined as the automated extraction of valuable, practical and hidden knowledge and information from large data. Applying data mining in medical records and data is of utmost importance for health care givers and providers and brings vital and valuable outcomes. Data mining can help doctors come up with better recommendations and plans for treatment which actually in many respects have significant impact on patients' life and satisfaction In this paper we have proposed and utilized data mining methods to extract hidden information in medical records of pelvis stone patients with ureteral stone. We have tried to design a decision support system model to be applicable for selecting type of treatment for these groups of patients."n"nMethods: We gathered needed information from Shahid Hashemi Nejad hospital. In this research we have used decision tree as a data mining tool, for selecting suitable treatment for patients with ureteral stone. This

  11. Location-based Mobile Relay Selection and Impact of Inaccurate Path Loss Model Parameters

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2010-01-01

    by simulations. The SNR-measurement-based relay selection scheme proposed previously is unsuitable for fast-moving users in, e.g., vehicular scenarios due to its large signaling overhead. The proposed location-based scheme is shown to work well with fast-moving users thanks to a lower signaling overhead.... The required location accuracy was found to be comparable to that of standard GPS. As the scheme was found to be highly sensitive to NLOS situations with unknown attenuation, knowledge of obstacle locations, obtained either by online sensing or from a map of obstacles, was identified as a prerequisite... to be wide enough to allow them to be estimated in practical systems....

  12. Automatic selection of localized region-based active contour models using image content analysis applied to brain tumor segmentation.

    Science.gov (United States)

    Ilunga-Mbuyamba, Elisee; Avina-Cervantes, Juan Gabriel; Cepeda-Negrete, Jonathan; Ibarra-Manzano, Mario Alberto; Chalopin, Claire

    2017-12-01

    Brain tumor segmentation is a routine process in a clinical setting and provides useful information for diagnosis and treatment planning. Manual segmentation, performed by physicians or radiologists, is a time-consuming task due to the large quantity of medical data generated at present. Hence, automatic segmentation methods are needed, and several approaches have been introduced in recent years, including Localized Region-based Active Contour Models (LRACMs). There are many popular LRACMs, but each presents strong and weak points. In this paper, the automatic selection of an LRACM based on image content, and its application to brain tumor segmentation, is presented. Thereby, a framework to select one of three LRACMs, i.e., Local Gaussian Distribution Fitting (LGDF), localized Chan-Vese (C-V) and the Localized Active Contour Model with Background Intensity Compensation (LACM-BIC), is proposed. Twelve visual features are extracted to properly select the method that should process a given input image. The system is based on a supervised approach. Applied specifically to Magnetic Resonance Imaging (MRI) images, the experiments showed that the proposed system is able to correctly select the suitable LRACM to handle a specific image. Consequently, the selection framework achieves better accuracy than the three LRACMs separately. Copyright © 2017 Elsevier Ltd. All rights reserved.
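
    The core of the selection framework, a supervised classifier mapping image features to the best of the three LRACMs, can be sketched as follows; the 12 features and the labels here are random placeholders that only demonstrate the wiring, not the paper's feature extraction:

```python
# Supervised model selector: features -> best active-contour model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.normal(size=(200, 12))          # 12 visual features per MRI image
best_model = rng.integers(0, 3, 200)    # 0=LGDF, 1=localized C-V, 2=LACM-BIC
# (Random labels here, so accuracy is at chance; real labels would come
# from ground-truth segmentations scored against each LRACM.)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("selector accuracy:", cross_val_score(clf, X, best_model, cv=5).mean())
# At inference time: run only the LRACM predicted by clf on the image.
```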

  13. Variable Selection and Updating In Model-Based Discriminant Analysis for High Dimensional Data with Food Authenticity Applications*

    Science.gov (United States)

    Murphy, Thomas Brendan; Dean, Nema; Raftery, Adrian E.

    2010-01-01

    Food authenticity studies are concerned with determining if food samples have been correctly labelled or not. Discriminant analysis methods are an integral part of the methodology for food authentication. Motivated by food authenticity applications, a model-based discriminant analysis method that includes variable selection is presented. The discriminant analysis model is fitted in a semi-supervised manner using both labeled and unlabeled data. The method is shown to give excellent classification performance on several high-dimensional multiclass food authenticity datasets with more variables than observations. The variables selected by the proposed method provide information about which variables are meaningful for classification purposes. A headlong search strategy for variable selection is shown to be efficient in terms of computation and achieves excellent classification performance. In applications to several food authenticity datasets, our proposed method outperformed default implementations of Random Forests, AdaBoost, transductive SVMs and Bayesian Multinomial Regression by substantial margins. PMID:20936055

  14. An expert-based model for selecting the most suitable substrate material type for antenna circuits

    Science.gov (United States)

    AL-Oqla, Faris M.; Omar, Amjad A.

    2015-06-01

    Quality and properties of microwave circuits depend on all of the circuit components, one of which is the substrate. The process of substrate material selection is a decision-making problem involving multiple criteria with diverse and conflicting objectives. The aim of this work was to select the substrate material type for antennas in the microwave frequency range that gives the best performance and reliability. For this purpose, a model with hierarchical alternatives and criteria was built to ease the decision-making. The substrate material options considered were limited to fiberglass-reinforced epoxy laminates (FR4, εr = 4.8), aluminium (III) oxide (alumina, εr = 9.6), the gallium arsenide III-V compound (GaAs, εr = 12.8) and PTFE composites reinforced with glass microfibers (Duroid, εr = 2.2-2.3). To assist in building the model and making decisions, the analytic hierarchy process (AHP) was used. The decision-making process revealed that the alumina substrate was the most suitable choice for antennas in the microwave frequency range, yielding the best performance and reliability. In addition, both the size of the circuit and the loss tangent of the substrate were found to be the most influential subfactors in the antenna circuit specifications criterion. Experimental assessments were conducted utilising the Expert Choice™ software. The judgments were tested and found to be precise, consistent and justifiable, and the marginal inconsistency values were found to be very narrow. A sensitivity analysis was also presented to demonstrate confidence in the drawn conclusions.

  15. Model for Selection of the Best Location Based on Fuzzy AHP and Hurwitz Methods

    Directory of Open Access Journals (Sweden)

    Slavko Arsovski

    2017-01-01

    Full Text Available The evaluation and selection of parking lots is a significant issue in public transport management in cities. As the population and urban areas expand, solving this issue affects employees, the security and safety of citizens, and quality of life over the long term. The aim of this paper is to propose a multicriteria decision model, including both quantitative and qualitative criteria of either benefit or cost type, to evaluate locations. The criteria values and the importance of the criteria are either precise values or linguistic expressions defined by trapezoidal fuzzy numbers, since human judgments of the relative importance of evaluation criteria and uncertain criteria values are often vague and cannot be expressed exactly. The ranking of locations with respect to all criteria and their weights is performed for various degrees of the pessimistic-optimistic index. The proposed model is tested on an illustrative example with real-life data, which shows its practical implications for public communal enterprises.

  16. Modelling of ecosystem change on rehabilitated ash disposal sites based on selected bio-indicators / A. Snyman

    OpenAIRE

    Snyman, Anchen

    2006-01-01

    Finding a common language for describing and interpreting multivariate data associated with rehabilitation and disturbance ecology has become a major challenge. The main objective of this study is to find and evaluate mathematical models that describe ecosystem change based on selected indicators of change. Existing data from a previous rehabilitation project at Hendrina Power Station (Mpumalanga, South Africa) were used as the database for this study, and this study aims to re...

  17. Automatic feature selection for model-based reinforcement learning in factored MDPs

    NARCIS (Netherlands)

    Kroon, M.; Whiteson, S.; Wani, M.A.; Kantardzic, M.; Palade, V.; Kurgan, L.; Qi, A.

    2009-01-01

    Feature selection is an important challenge in machine learning. Unfortunately, most methods for automating feature selection are designed for supervised learning tasks and are thus either inapplicable or impractical for reinforcement learning. This paper presents a new approach to feature selection

  18. Selection of comparative cases in land appraisal based on cloud model and gray relevancy theory

    Science.gov (United States)

    Yang, Liu; Liu, Yanfang; Liu, Wei; Song, Xinying; Xu, Xiongjie

    2008-10-01

    There is a traditional approach, the market comparison method, which is used to appraise land prices by comparing cases. A review of the development of this method highlights some persistent challenges. In this paper, the existing approach is extended by coupling the cloud model, a data-mining technique, with gray relevancy theory. This approach allows the construction of quantitative measurements from qualitative concepts (attributes), simulating the human cognitive process. The method admits hierarchical description and exploration of the relationships and proximity between district factors and individual factors. Using a 1-D cloud generator, we obtain several rules in terms of linguistic atoms that confirm the number of overlapping clouds. Each rule corresponds to a rule rear that achieves uncertainty ratiocination. To demonstrate the applicability of this method, it is applied to the selection of comparative cases in the land appraisal of Wuhan City. Experimental results show that most cases can be correctly discriminated and the better comparative cases are acceptable. Compared with other approaches, this method performs better in land appraisal.

  19. MicroRNA-based Therapy in Animal Models of Selected Gastrointestinal Cancers

    Directory of Open Access Journals (Sweden)

    Jana Merhautova

    2016-09-01

    Full Text Available Gastrointestinal cancers account for 20 of the most frequent cancer diseases worldwide, and there is a constant urge to bring new therapeutics with new mechanisms of action into clinical practice. A quantity of in vitro and in vivo evidence indicates that an exogenous change in pathologically imbalanced microRNAs (miRNAs) is capable of transforming the cancer cell phenotype. This review analyzed preclinical miRNA-based therapy attempts in animal models of gastric, pancreatic, gallbladder, and colorectal cancer. Of more than 400 original articles, 26 were found to assess the effect of miRNA mimics, precursors, expression vectors, or inhibitors administered locally or systemically, an approach with relatively high translational potential. We have focused on mapping the available information on the animal models used (animal strain, cell line, xenograft method), pharmacological aspects (oligonucleotide chemistry, delivery system, posology, route of administration) and toxicology assessments. We also summarize findings on the pharmacokinetics and toxicity of miRNA-based therapy.

  20. Profit-Based Model Selection for Customer Retention Using Individual Customer Lifetime Values.

    Science.gov (United States)

    Óskarsdóttir, María; Baesens, Bart; Vanthienen, Jan

    2018-03-01

    The goal of customer retention campaigns, by design, is to add value and enhance the operational efficiency of businesses. For organizations that strive to retain their customers in saturated, and sometimes fast-moving, markets such as the telecommunication and banking industries, implementing customer churn prediction models that perform well and in accordance with the business goals is vital. The expected maximum profit (EMP) measure is tailored toward this problem by taking into account the costs and benefits of a retention campaign and estimating its worth for the organization. Unfortunately, the measure assumes a fixed and equal customer lifetime value (CLV) for all customers, which has been shown not to correspond well with reality. In this article, we extend the EMP measure to take into account the variability in the lifetime values of customers, thereby basing it on individual characteristics. We demonstrate how to incorporate the heterogeneity of CLVs when CLVs are known, when their prior distribution is known, and when neither is known. By taking individual CLVs into account, our proposed approach to measuring model performance gives novel insights when deciding on a customer retention campaign. The method is dependent on the characteristics of the customer base, is compliant with modern business analytics, and accommodates the data-driven culture that has manifested itself within organizations.
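
    A minimal sketch of profit-based targeting with individual CLVs; the acceptance rate, costs and score distribution are hypothetical, and the loop evaluates expected campaign profit directly rather than the EMP measure's closed form:

```python
# Expected campaign profit with per-customer lifetime values.
import numpy as np

rng = np.random.default_rng(6)
n = 10_000
churn_prob = rng.beta(2, 8, n)           # model scores
will_churn = rng.random(n) < churn_prob
clv = rng.lognormal(4.0, 0.8, n)         # individual customer lifetime values

delta, f = 10.0, 1.0                     # incentive cost, contact cost
gamma = 0.3                              # acceptance rate among churners

order = np.argsort(-churn_prob)          # target highest-risk first
profits = []
for frac in np.linspace(0.01, 0.5, 50):
    t = order[: int(frac * n)]
    saved = will_churn[t] * gamma * clv[t]   # retained value
    # Incentive paid to accepting churners and to all contacted non-churners.
    cost = delta * (will_churn[t] * gamma + ~will_churn[t]) + f
    profits.append((frac, (saved - cost).sum()))

best = max(profits, key=lambda p: p[1])
print(f"best targeted fraction: {best[0]:.2f}, expected profit: {best[1]:,.0f}")
```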

  1. A Cercla-Based Decision Model to Support Remedy Selection for an Uncertain Volume of Contaminants at a DOE Facility

    Energy Technology Data Exchange (ETDEWEB)

    Christine E. Kerschus

    1999-03-31

    The Paducah Gaseous Diffusion Plant (PGDP) operated by the Department of Energy is challenged with selecting the appropriate remediation technology to clean up contaminants at Waste Area Group (WAG) 6. This research utilizes value-focused thinking and multiattribute preference theory concepts to produce a decision analysis model designed to aid the decision makers in their selection process. The model is based on CERCLA's five primary balancing criteria, is tailored specifically to WAG 6 and the contaminants of concern, utilizes expert opinion and the best available engineering, cost, and performance data, and accounts for uncertainty in contaminant volume. The model ranks 23 remediation technologies (trains) in their ability to achieve the CERCLA criteria at various contaminant volumes. A sensitivity analysis is performed to examine the effects of changes in expert opinion and uncertainty in volume. Further analysis reveals how volume uncertainty is expected to affect technology cost, time, and ability to meet the CERCLA criteria. The model provides the decision makers with a CERCLA-based decision analysis methodology that is objective, traceable, and robust to support the WAG 6 Feasibility Study. In addition, the model can be adjusted to address other DOE contaminated sites.

  2. A Cercla-Based Decision Model to Support Remedy Selection for an Uncertain Volume of Contaminants at a DOE Facility

    International Nuclear Information System (INIS)

    Christine E. Kerschus

    1999-01-01

    The Paducah Gaseous Diffusion Plant (PGDP) operated by the Department of Energy is challenged with selecting the appropriate remediation technology to clean up contaminants at Waste Area Group (WAG) 6. This research utilizes value-focused thinking and multiattribute preference theory concepts to produce a decision analysis model designed to aid the decision makers in their selection process. The model is based on CERCLA's five primary balancing criteria, is tailored specifically to WAG 6 and the contaminants of concern, utilizes expert opinion and the best available engineering, cost, and performance data, and accounts for uncertainty in contaminant volume. The model ranks 23 remediation technologies (trains) in their ability to achieve the CERCLA criteria at various contaminant volumes. A sensitivity analysis is performed to examine the effects of changes in expert opinion and uncertainty in volume. Further analysis reveals how volume uncertainty is expected to affect technology cost, time, and ability to meet the CERCLA criteria. The model provides the decision makers with a CERCLA-based decision analysis methodology that is objective, traceable, and robust to support the WAG 6 Feasibility Study. In addition, the model can be adjusted to address other DOE contaminated sites.

  3. Spatiotemporal Context Awareness for Urban Traffic Modeling and Prediction: Sparse Representation Based Variable Selection.

    Directory of Open Access Journals (Sweden)

    Su Yang

    Spatial-temporal correlations among the data play an important role in traffic flow prediction. Correspondingly, traffic modeling and prediction based on big data analytics emerges due to the city-scale interactions among traffic flows. A new methodology based on sparse representation is proposed to reveal the spatial-temporal dependencies among traffic flows so as to simplify the correlations among traffic data for the prediction task at a given sensor. Three important findings are observed in the experiments: (1) Only traffic flows immediately prior to the present time affect the formation of current traffic flows, which implies the possibility of reducing the traditional high-order predictors to a first-order model. (2) The spatial context relevant to a given prediction task is more complex than what is assumed to exist locally, and can spread out to the whole city. (3) The spatial context varies with the target sensor undergoing prediction and enlarges with the increment of the time lag for prediction. Because the scope of human mobility is subject to travel time, identifying the varying spatial context against time lag is crucial for prediction. Since sparse representation can capture the varying spatial context and adapt to the prediction task, it outperforms traditional methods, whose inputs are confined to data from a fixed number of nearby sensors. As the spatial-temporal context for any prediction task is fully detected from the traffic data in an automated manner, with no additional information regarding network topology needed, the method scales well to large networks.
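
    A minimal sketch of the idea, assuming nothing about the authors' exact formulation: regress the flow at a target sensor on one-step-lagged flows at all sensors with an l1 (sparsity) penalty, so the nonzero coefficients mark the spatial context.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_sensors, n_steps = 50, 500
flows = rng.random((n_steps, n_sensors))            # stand-in for measured flows

X = flows[:-1, :]                                   # all sensors at time t-1
# Simulated target whose flow depends on two other sensors only.
y = 0.6 * X[:, 2] + 0.3 * X[:, 17] + rng.normal(0, 0.01, n_steps - 1)

model = Lasso(alpha=0.01).fit(X, y)
# The recovered spatial context should include sensors 2 and 17.
print("spatial context:", np.flatnonzero(model.coef_))
```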

  4. Gis-Based Wind Farm Site Selection Model Offshore Abu Dhabi Emirate, Uae

    Science.gov (United States)

    Saleous, N.; Issa, S.; Mazrouei, J. Al

    2016-06-01

    The United Arab Emirates (UAE) government has declared the increased use of alternative energy a strategic goal and has invested in identifying and developing various sources of such energy. This study aimed at assessing the viability of establishing wind farms offshore the Emirate of Abu Dhabi, UAE, and at identifying favourable sites for such farms using Geographic Information Systems (GIS) procedures and algorithms. Based on previous studies and on local requirements, a set of suitability criteria was developed, including ocean currents, reserved areas, seabed topography, and wind speed. GIS layers were created, and a weighted overlay GIS model based on the above-mentioned criteria was built to identify suitable sites for hosting a new offshore wind energy farm. Results showed that most of Abu Dhabi's offshore areas were unsuitable, largely due to the presence of restricted zones (marine protected areas, oil extraction platforms, and oil pipelines in particular). However, some suitable sites could be identified, especially around Delma Island and north of Jabal Barakah in the Western Region. The environmental impact of potential wind farm locations and associated cables on the marine ecology was examined to ensure minimal disturbance to marine life. Further research is needed to specify wind turbine characteristics that suit the study area, especially given the heavy traffic from oil production and shipping activities in the Arabian Gulf for most of the year.

  5. Selection and Validation of Predictive Models of Radiation Effects on Tumor Growth Based on Noninvasive Imaging Data.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Wohlmuth, B; Shahmoradi, A; Hormuth, D A; Yankeelov, T E; Scarabosio, L; Horger, T

    2017-12-01

    The use of mathematical and computational models for reliable predictions of tumor growth and decline in living organisms is one of the foremost challenges in modern predictive science, as it must cope with uncertainties in observational data, model selection, model parameters, and model inadequacy, all for very complex physical and biological systems. In this paper, large classes of parametric models of tumor growth in vascular tissue are discussed, including models for radiation therapy. Observational data are obtained from MRI of a murine model of glioma observed over a period of about three weeks, with X-ray radiation administered 14.5 days into the experimental program. Parametric models of tumor proliferation and decline are presented, based on the balance laws of continuum mixture theory, particularly mass balance, and on accepted biological hypotheses of tumor growth. Among these are new model classes that include characterizations of the effects of radiation and simple models of mechanical deformation of tumors. The Occam Plausibility Algorithm (OPAL) is implemented to provide a Bayesian statistical calibration of the model classes, 39 models in all, to determine the most plausible models in these classes relative to the observational data, and to assess model inadequacy through statistical validation processes. Discussions are provided of the numerical analysis of finite element approximations of the system of stochastic, nonlinear partial differential equations characterizing the model classes, and of the Monte Carlo and Markov chain Monte Carlo (MCMC) sampling algorithms employed in solving the forward stochastic problem and in computing posterior distributions of parameters and model plausibilities. The results of the analyses described suggest that the general framework developed can provide a useful approach for predicting tumor growth and the effects of radiation.
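
    The model-plausibility step of such an Occam-style analysis has a compact generic form: given a log evidence per model class, posterior plausibilities follow from Bayes' rule. A minimal sketch with invented evidence values (not the article's 39-model computation):

```python
import numpy as np

# Convert per-model log evidences into posterior model plausibilities under
# equal prior model probabilities; the log evidences here are hypothetical.
log_evidence = np.array([-410.2, -398.7, -399.1, -405.5])
shifted = log_evidence - log_evidence.max()      # stabilize the exponentials
plausibility = np.exp(shifted) / np.exp(shifted).sum()
print(plausibility.round(3))                     # the most plausible class wins
```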

  6. Multi-Collinearity Based Model Selection for Landslide Susceptibility Mapping: A Case Study from Ulus District of Karabuk, Turkey

    Science.gov (United States)

    Sahin, E. K.; Colkesen, I.; Kavzoglu, T.

    2017-12-01

    Identification of localities prone to landslides plays an important role in emergency planning, disaster management, and recovery planning. Due to its great importance for disaster management, producing accurate and up-to-date landslide susceptibility maps is essential for hazard mitigation purposes and regional planning. The main objective of the present study was to apply a multi-collinearity based model selection approach for the production of a landslide susceptibility map of the Ulus district of Karabuk, Turkey. When factors are highly correlated with each other, the data do not contain enough information to describe the problem under consideration; in such cases, choosing a subset of the original features will often lead to better performance. This paper presents a multi-collinearity based model selection approach to deal with high correlation within the dataset. Two collinearity diagnostic factors, Tolerance (TOL) and the Variance Inflation Factor (VIF), are commonly used to identify multi-collinearity; VIF values that exceed 10.0 and TOL values below 0.1 are often regarded as indicating multi-collinearity. Five causative factors (slope length, curvature, plan curvature, profile curvature, and topographical roughness index) were found to be highly correlated with each other among the 15 factors available for the study area. As a result, the five correlated factors were removed from the model estimation, and the performance of the model including the remaining 10 factors (aspect, drainage density, elevation, lithology, land use/land cover, NDVI, slope, sediment transport index, topographical position index and topographical wetness index) was evaluated using logistic regression. The performance of the prediction model constructed with 10 factors was compared to that of the 15-factor model. The prediction performance of the two susceptibility maps was evaluated by overall accuracy and the area under the ROC curve (AUC) values. Results showed that overall
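
    The screening described above is easy to reproduce: compute each factor's VIF (with TOL = 1/VIF) by regressing it on the remaining factors, then drop factors with VIF > 10. A minimal sketch on a hypothetical factor table (real inputs would be slope, curvature, and so on):

```python
import numpy as np
import pandas as pd

def vif(df: pd.DataFrame) -> pd.Series:
    """Variance Inflation Factor of each column, regressed on the others."""
    out = {}
    for col in df.columns:
        X = np.column_stack([np.ones(len(df)), df.drop(columns=col).values])
        y = df[col].values
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - (y - X @ beta).var() / y.var()
        out[col] = 1.0 / max(1.0 - r2, 1e-12)
    return pd.Series(out)

rng = np.random.default_rng(2)
df = pd.DataFrame(rng.random((200, 4)), columns=["f1", "f2", "f3", "f4"])
df["f5"] = 0.9 * df["f1"] + rng.normal(0, 0.05, 200)   # deliberately collinear
vifs = vif(df)
print(vifs.round(2))                                   # f1 and f5 get flagged
print("retained:", list(vifs[vifs <= 10].index))
```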

  7. Development of Physics-Based Numerical Models for Uncertainty Quantification of Selective Laser Melting Processes

    Data.gov (United States)

    National Aeronautics and Space Administration — The goal of the proposed research is to characterize the influence of process parameter variability inherent to Selective Laser Melting (SLM) and performance effect...

  8. Model-Based Design of Stimulus Trains for Selective Microstimulation of Targeted Neuronal Populations

    National Research Council Canada - National Science Library

    McIntyre, Cameron

    2001-01-01

    ... that accurately reproduced the dynamic firing properties of mammalian neurons, The neuron models were coupled to a three-dimensional finite element model of the spinal cord that solved for the potentials...

  9. Mechanism-based PK/PD modeling of selective serotonin reuptake inhibitors

    NARCIS (Netherlands)

    Geldof, Marian

    2007-01-01

    The main objective of the investigations was to explore the PK/PD correlations of fluvoxamine, as a prototype for the Selective Serotonin Reuptake Inhibitors (SSRIs). In the various investigations, a spectrum of different biomarkers was used, each reflecting a specific process on the causal path

  10. A cloud model based multi-attribute decision making approach for selection and evaluation of groundwater management schemes

    Science.gov (United States)

    Lu, Hongwei; Ren, Lixia; Chen, Yizhong; Tian, Peipei; Liu, Jia

    2017-12-01

    Due to the uncertainty (i.e., fuzziness, stochasticity, and imprecision) that exists throughout the process of groundwater remediation, the accuracy of ranking results obtained by traditional methods has been limited. This paper proposes a cloud model based multi-attribute decision making framework (CM-MADM) with Monte Carlo simulation for the selection of contaminated-groundwater remediation strategies. The cloud model is used to handle imprecise numerical quantities and can describe the fuzziness and stochasticity of the information fully and precisely. In the proposed approach, the contaminated concentrations are aggregated via the backward cloud generator, and the weights of the attributes are calculated by the weight cloud module. A case study on remedial alternative selection for a contaminated site suffering from a 1,1,1-trichloroethylene leakage problem in Shanghai, China is conducted to illustrate the efficiency and applicability of the developed approach. In total, an attribute system consisting of ten attributes, including daily total pumping rate, total cost, and cloud model based health risk, was used to evaluate each alternative through the developed method under uncertainty. Results indicated that A14 was evaluated to be the most preferred alternative for the 5-year remediation, A5 for the 10-year, A4 for the 15-year, and A6 for the 20-year.
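
    The backward cloud generator used for aggregation has a standard estimator: recover the concept's digital features (Ex, En, He) from samples. A minimal sketch on stand-in concentration data (the weighting and ranking layers of CM-MADM are not reproduced here):

```python
import numpy as np

def backward_cloud(x):
    """Estimate (Ex, En, He) of a normal cloud from samples x."""
    ex = x.mean()                                       # expectation
    en = np.sqrt(np.pi / 2.0) * np.abs(x - ex).mean()   # entropy
    he = np.sqrt(max(x.var(ddof=1) - en ** 2, 0.0))     # hyper-entropy
    return ex, en, he

samples = np.random.default_rng(7).normal(5.0, 1.2, 1000)  # stand-in data
print(tuple(round(v, 3) for v in backward_cloud(samples)))
```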

  11. Search for Potent and Selective Aurora A Inhibitors Based on General Ser/Thr Kinase Pharmacophore Model

    Directory of Open Access Journals (Sweden)

    Natalya I. Vasilevich

    2016-04-01

    Based on data for compounds known from the literature to be active against various types of Ser/Thr kinases, a general pharmacophore model for these types of kinases was developed. A search for molecules fitting this pharmacophore in the ASINEX proprietary library revealed a number of compounds, which were tested and appeared to possess some activity against Ser/Thr kinases such as Aurora A, Aurora B, and Haspin. Our work on optimizing these molecules against Aurora A kinase allowed us to achieve several hits in the 3–5 nM activity range with rather good selectivity, good Absorption, Distribution, Metabolism, and Excretion (ADME) properties, and cytotoxicity against 16 cancer cell lines. Thus, we showed the possibility of fine-tuning the general Ser/Thr pharmacophore to design active and selective compounds against desired types of kinases.

  12. A Collective Case Study of Secondary Students' Model-Based Inquiry on Natural Selection through Programming in an Agent-Based Modeling Environment

    Science.gov (United States)

    Xiang, Lin

    2011-01-01

    This is a collective case study seeking to develop detailed descriptions of how programming an agent-based simulation influences a group of 8th grade students' model-based inquiry (MBI) by examining students' agent-based programmable modeling (ABPM) processes and the learning outcomes. The context of the present study was a biology unit on…

  13. A Quantitative Quasispecies Theory-Based Model of Virus Escape Mutation Under Immune Selection

    Science.gov (United States)

    2012-01-01

    extinction when its mutation rate exceeds a threshold. The existence of such a threshold is a central prediction of the quasispecies theory...and describe such experimental data, it is important to realistically specify the nature of selection pressure. Viruses in animal hosts evolve under...Explicit fitness measurements of viral clones (35, 36) and biochemical assays of proteins (37) both indicate that single-nucleotide substitutions lead to

  14. SELECTION VIA MIXED MODELS IN SEGREGATING GUAVA FAMILIES BASED ON YIELD AND QUALITY TRAITS

    Directory of Open Access Journals (Sweden)

    SILVANA SILVA RED QUINTAL

    Aiming at the generation of new guava varieties with superior attributes, we conducted this study adopting the REML/BLUP procedure at the individual level. Seventeen segregating guava families were evaluated in a randomized-block design with two replicates and 12 plants per plot. Families were obtained after controlled biparental pollination. The studied individuals showed high genotypic variance for fruit weight (FW), total yield (YLD), and ascorbic acid content (AAC). The heritability coefficients of the mean of progenies led to high progeny-selection accuracy for pulp yield (PY) and soluble solids content (SSC), in addition to FW, YLD, and AAC; moderate accuracy for fruit acidity (FA) and the SSC/FA ratio; and low accuracy for mesocarp thickness (MT) and pH. Selection among families (h2mp) indicated the highest values for FW, PY, YLD, SSC, and AAC, revealing that, for the present study, this practice would be effective, since these traits allowed for the highest selection-accuracy values among families. As for the ranking of individuals, families originating from the crosses UENF 1835 × UENF 1834, UENF 1831 × UENF 1832, and UENF 1831 × UENF 3739 stood out, occupying the first positions for most traits.

  15. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed...

  16. Preparatory selection of sterilization regime for canned Natural Atlantic Mackerel with oil based on developed mathematical models of the process

    Directory of Open Access Journals (Sweden)

    Maslov A. A.

    2016-12-01

    Defining the preparatory parameters for the sterilization regime of canned "Natural Atlantic Mackerel with Oil" is the aim of the current study. The PRSC software developed at the department of automation and computer engineering is used for the preparatory selection. To determine the parameters of the process model, the pre-trial process of sterilization and cooling in water with backpressure of canned "Natural Atlantic Mackerel with Oil" in can No. 3 was performed in the laboratory autoclave AVK-30M. Information about the temperature in the autoclave sterilization chamber and in the can with product was gathered using Ellab TrackSense PRO loggers. From the obtained information, three transfer functions for the product model were identified: in the least heated area of the autoclave, the average heated area, and the most heated area. In the PRSC programme, temporary temperature dependences in the sterilization chamber were built using this information. The model of the sterilization process of canned "Natural Atlantic Mackerel with Oil" was obtained after the pre-trial process. Then, in automatic mode, the sterilization regime was selected using a value of the actual effect close to the normative sterilizing effect (5.9 conditional minutes). Furthermore, in this study a step-mode sterilization of canned "Natural Atlantic Mackerel with Oil" was selected. Utilization of step-mode sterilization with a maximum temperature of 125 °C in the sterilization chamber allows the process duration to be reduced by 10 %. However, the application of this regime in practice requires additional research. The described approach, based on the developed mathematical models of the process, makes it possible to obtain optimal step and variable canned food sterilization regimes with high energy efficiency and product quality.

  17. Model based estimation for multi-modal user interface component selection

    CSIR Research Space (South Africa)

    Coetzee, L

    2009-12-01

    ...for interaction are well understood, the general problem of integrated multi-modal systems is yet to be understood to the same level. User modelling plays an important role within user-adaptive systems. Kobsa [5] presents a review on the devel... and providers of services, and therefore user modelling tools will continue to play an important role in computer systems. Even though the utilisation of multiple modalities to break down the access barrier has been addressed by several researchers...

  18. A hybrid feature selection and health indicator construction scheme for delay-time-based degradation modelling of rolling element bearings

    Science.gov (United States)

    Zhang, Bin; Deng, Congying; Zhang, Yi

    2018-03-01

    Rolling element bearings are mechanical components used frequently in most rotating machinery, and they are also vulnerable links representing the main source of failures in such systems. Thus, health condition monitoring and fault diagnosis of rolling element bearings have long been studied to improve the operational reliability and maintenance efficiency of rotary machines. Over the past decade, prognosis, which enables forewarning of failure and estimation of residual life, has attracted increasing attention. To accurately and efficiently predict failure of a rolling element bearing, its degradation needs to be well represented and modelled. For this purpose, degradation of the rolling element bearing is analysed with the delay-time-based model in this paper. In addition, a hybrid feature selection and health indicator construction scheme is proposed for extracting bearing-health-relevant information from condition monitoring sensor data. The effectiveness of the presented approach is validated through case studies on rolling element bearing run-to-failure experiments.

  19. Model Selection for Equating Testlet-Based Tests in the NEAT Design: An Empirical Study

    Science.gov (United States)

    He, Wei; Li, Feifei; Wolfe, Edward W.; Mao, Xia

    2012-01-01

    For tests solely composed of testlets, the local item independence assumption tends to be violated. This study, using empirical data from a large-scale state assessment program, investigated the effects of using different models on equating results under the non-equivalent group anchor-test (NEAT) design. Specifically, the…

  20. Selection Input Output by Restriction Using DEA Models Based on a Fuzzy Delphi Approach and Expert Information

    Science.gov (United States)

    Arsad, Roslah; Nasir Abdullah, Mohammad; Alias, Suriana; Isa, Zaidi

    2017-09-01

    Stock evaluation has always been an interesting problem for investors. In this paper, a comparison regarding the efficiency stocks of listed companies in Bursa Malaysia were made through the application of estimation method of Data Envelopment Analysis (DEA). One of the interesting research subjects in DEA is the selection of appropriate input and output parameter. In this study, DEA was used to measure efficiency of stocks of listed companies in Bursa Malaysia in terms of the financial ratio to evaluate performance of stocks. Based on previous studies and Fuzzy Delphi Method (FDM), the most important financial ratio was selected. The results indicated that return on equity, return on assets, net profit margin, operating profit margin, earnings per share, price to earnings and debt to equity were the most important ratios. Using expert information, all the parameter were clarified as inputs and outputs. The main objectives were to identify most critical financial ratio, clarify them based on expert information and compute the relative efficiency scores of stocks as well as rank them in the construction industry and material completely. The methods of analysis using Alirezaee and Afsharian’s model were employed in this study, where the originality of Charnes, Cooper and Rhodes (CCR) with the assumption of Constant Return to Scale (CSR) still holds. This method of ranking relative efficiency of decision making units (DMUs) was value-added by the Balance Index. The interested data was made for year 2015 and the population of the research includes accepted companies in stock markets in the construction industry and material (63 companies). According to the ranking, the proposed model can rank completely for 63 companies using selected financial ratio.

  1. Electrical Resistivity Based Empirical Model For Delineating Some Selected Soil Properties On Sandy-Loam Soil

    Directory of Open Access Journals (Sweden)

    Joshua

    2015-08-01

    An Electrical Resistivity (ER) survey was conducted on a sandy-loam soil with a view to evaluating some selected soil properties. ER was measured from the soil surface at 0–0.3 m (ER30) and 0–0.9 m (ER90) soil depths using a multi-electrode Wenner array and a Miller 400D resistance meter. Soil samples were collected to a depth of 0.3 m at the points where ER was measured and analyzed for properties such as Organic Matter (OM), Cation Exchange Capacity (CEC), Soil Water Content (SWC), and Sand, Silt, and Clay contents using standard methods. The results indicated that areas of lower ER exhibit higher contents of these soil properties than areas of higher ER. ER90 correlates insignificantly with the soil properties, while ER30 correlates significantly with the soil properties except clay (r = 0.63–0.75). The relationship between ER30 and the soil properties was best fitted by multiple linear regression (R² = 0.90) and the Boltzmann distribution (R² = 0.80–0.84). The study indicates the ability of ER to delineate some soil properties influencing yield on sandy-loam soil. This will help farmers take decisions that can improve yields.

  2. Proposing a New Approach for Supplier Selection Based on Kraljic’s Model Using FMEA and Integer Linear Programming

    Directory of Open Access Journals (Sweden)

    S. Mohammad Arabzad

    2012-06-01

    In recent years, numerous methods have been proposed to deal with the supplier evaluation and selection problem, but a point that has usually been neglected by researchers is the role of purchasing items. The aim of this paper is to propose an integrated approach to select suppliers and allocate orders on the basis of the nature of the purchasing items, which plays an important role in supplier selection and order allocation. Therefore, items are first categorized according to Kraljic's model by use of the FMEA technique. Then, suppliers are categorized and evaluated in four phases with respect to the different types of purchasing items (Strategic, Bottleneck, Leverage, and Routine). Finally, an integer linear programming model is utilized to allocate purchasing orders to suppliers. Furthermore, an empirical example is conducted to illustrate the stages of the proposed approach. Results imply that ranking suppliers and allocating purchasing items based on the nature of the purchasing items creates more capability in managing purchasing items and suppliers.
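
    As a toy illustration of the final order-allocation step, the sketch below solves a small integer linear program: integer order quantities x_ij from supplier i for item j that minimize cost subject to demand and capacity constraints. All figures are invented, and the article's actual constraint set (e.g., per-category rules from the Kraljic matrix) is richer.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

cost = np.array([[4.0, 7.0], [5.0, 6.0], [6.0, 5.0]])   # supplier x item unit cost
demand = np.array([100.0, 80.0])                        # per-item demand
capacity = np.array([90.0, 90.0, 90.0])                 # per-supplier capacity

n_sup, n_item = cost.shape
c = cost.ravel()                                        # x_ij laid out row-major
A_dem = np.tile(np.eye(n_item), (1, n_sup))             # sum_i x_ij == demand_j
A_cap = np.kron(np.eye(n_sup), np.ones((1, n_item)))    # sum_j x_ij <= capacity_i

res = milp(c=c,
           constraints=[LinearConstraint(A_dem, demand, demand),
                        LinearConstraint(A_cap, -np.inf, capacity)],
           integrality=np.ones_like(c),                 # all variables integer
           bounds=Bounds(0, np.inf))
print(res.x.reshape(n_sup, n_item))                     # integer order quantities
```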

  3. Developing a suitable model for supplier selection based on supply chain risks: an empirical study from Iranian pharmaceutical companies.

    Science.gov (United States)

    Mehralian, Gholamhossein; Rajabzadeh Gatari, Ali; Morakabati, Mohadese; Vatanpour, Hossein

    2012-01-01

    The supply chain represents the critical link between the development of new products and the market in the pharmaceutical industry. Over the years, improvements made in supply chain operations have focused largely on ways to reduce cost and gain efficiencies of scale. In addition, powerful regulatory and market forces have provided new incentives for pharmaceutical firms to fundamentally rethink the way they produce and distribute products, and also to re-imagine the role of the supply chain in driving strategic growth, brand differentiation, and economic value in the health continuum. The purpose of this paper is to formulate the basic factors involved in risk analysis of the pharmaceutical industry, and also to determine the effective factors involved in supplier selection and their priorities. This paper is based on the results of a literature review, expert opinion acquisition, statistical analysis, and the use of MADM models on data gathered from distributed questionnaires. The model consists of the following steps and components: first, the factors involved in supply chain risks are determined; based on them, a framework is considered; then, according to the results of the statistical analysis and the MADM models, the risk factors are formulated. The paper determines the main components and influential factors involved in supply chain risks. Results showed that delivery risk can make an important contribution to mitigating the risk of the pharmaceutical industry.

  4. Developing a Suitable Model for Supplier Selection Based on Supply Chain Risks: An Empirical Study from Iranian Pharmaceutical Companies

    Science.gov (United States)

    Mehralian, Gholamhossein; Rajabzadeh Gatari, Ali; Morakabati, Mohadese; Vatanpour, Hossein

    2012-01-01

    The supply chain represents the critical link between the development of new products and the market in the pharmaceutical industry. Over the years, improvements made in supply chain operations have focused largely on ways to reduce cost and gain efficiencies of scale. In addition, powerful regulatory and market forces have provided new incentives for pharmaceutical firms to fundamentally rethink the way they produce and distribute products, and also to re-imagine the role of the supply chain in driving strategic growth, brand differentiation, and economic value in the health continuum. The purpose of this paper is to formulate the basic factors involved in risk analysis of the pharmaceutical industry, and also to determine the effective factors involved in supplier selection and their priorities. This paper is based on the results of a literature review, expert opinion acquisition, statistical analysis, and the use of MADM models on data gathered from distributed questionnaires. The model consists of the following steps and components: first, the factors involved in supply chain risks are determined; based on them, a framework is considered; then, according to the results of the statistical analysis and the MADM models, the risk factors are formulated. The paper determines the main components and influential factors involved in supply chain risks. Results showed that delivery risk can make an important contribution to mitigating the risk of the pharmaceutical industry. PMID:24250442

  5. Modeling Natural Selection

    Science.gov (United States)

    Bogiages, Christopher A.; Lotter, Christine

    2011-01-01

    In their research, scientists generate, test, and modify scientific models. These models can be shared with others and demonstrate a scientist's understanding of how the natural world works. Similarly, students can generate and modify models to gain a better understanding of the content, process, and nature of science (Kenyon, Schwarz, and Hug…

  6. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    International Nuclear Information System (INIS)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan

    2012-01-01

    Selection of the most informative molecular descriptors from the original data set is a key step for the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.
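
    A minimal sketch of MI-ranked descriptor selection with a collinearity check, in the spirit of (though simpler than) MIMRCV, which replaces collinear variables rather than merely skipping them; the data are simulated stand-ins for molecular descriptors.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def select_descriptors(X, y, k=5, corr_cap=0.9):
    """Pick up to k descriptors by MI with y, skipping collinear candidates."""
    mi = mutual_info_regression(X, y, random_state=0)
    order = np.argsort(mi)[::-1]                 # most informative first
    chosen = []
    for j in order:
        # Skip candidates too correlated with an already chosen descriptor.
        if all(abs(np.corrcoef(X[:, j], X[:, c])[0, 1]) < corr_cap for c in chosen):
            chosen.append(j)
        if len(chosen) == k:
            break
    return chosen

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 20))                   # stand-in descriptor matrix
y = X[:, 3] + 0.5 * X[:, 7] + rng.normal(0, 0.1, 100)
print(select_descriptors(X, y))                  # should lead with 3 and 7
```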

  7. A New Variable Selection Method Based on Mutual Information Maximization by Replacing Collinear Variables for Nonlinear Quantitative Structure-Property Relationship Models

    Energy Technology Data Exchange (ETDEWEB)

    Ghasemi, Jahan B.; Zolfonoun, Ehsan [Toosi University of Technology, Tehran (Iran, Islamic Republic of)]

    2012-05-15

    Selection of the most informative molecular descriptors from the original data set is a key step for the development of quantitative structure activity/property relationship models. Recently, mutual information (MI) has gained increasing attention in feature selection problems. This paper presents an effective mutual information-based feature selection approach, named mutual information maximization by replacing collinear variables (MIMRCV), for nonlinear quantitative structure-property relationship models. The proposed variable selection method was applied to three different QSPR datasets: soil degradation half-lives of 47 organophosphorus pesticides, GC-MS retention times of 85 volatile organic compounds, and water-to-micellar cetyltrimethylammonium bromide partition coefficients of 62 organic compounds. The obtained results revealed that using MIMRCV as the feature selection method improves the predictive quality of the developed models compared to conventional MI-based variable selection algorithms.

  8. A hybrid multi-criteria decision modeling approach for the best biodiesel blend selection based on ANP-TOPSIS analysis

    Directory of Open Access Journals (Sweden)

    G. Sakthivel

    2015-03-01

    The ever-increasing demand for and depletion of fossil fuels have had an adverse impact on environmental pollution. The selection of an appropriate source of biodiesel and proper blending of the biodiesel play a major role in alternative energy production. This paper describes an application of a hybrid Multi Criteria Decision Making (MCDM) technique for the selection of the optimum fish oil biodiesel blend for an IC engine. In the proposed model, the Analytical Network Process (ANP) is integrated with the Technique for Order Performance by Similarity to Ideal Solution (TOPSIS) and VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR, in Serbian) to evaluate the optimum blend. Evaluation of the suitable blend is based on an exploratory analysis of the performance, emission, and combustion parameters of a single-cylinder, constant-speed, direct injection diesel engine at different load conditions. Here, ANP is used to determine the relative weights of the criteria, whereas TOPSIS and VIKOR are used for obtaining the final ranking of the alternative blends. An efficient pair-wise comparison process and ranking of alternatives can be achieved for optimum blend selection through the integration of ANP with TOPSIS and VIKOR. The obtained preference orders of the blends for ANP-VIKOR and ANP-TOPSIS are B20 > Diesel > B40 > B60 > B80 > B100 and B20 > B40 > Diesel > B60 > B80 > B100, respectively. Hence, by comparing both of these methods, B20 is selected as the best blend to operate internal combustion engines. This paper highlights a new insight into MCDM techniques to evaluate the best fuel blend for decision makers such as engine manufacturers and R&D engineers, helping to meet fuel economy and emission norms and to empower the green revolution.
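
    For reference, the TOPSIS stage has a compact generic implementation. The sketch below ranks a few blends with invented performance/emission numbers and illustrative weights (which would come from ANP in the article):

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """matrix: alternatives x criteria; benefit[i]=True if higher is better."""
    v = (matrix / np.linalg.norm(matrix, axis=0)) * weights   # weighted, normalized
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                # closeness; higher is better

# Rows: Diesel, B20, B40, B60; columns: brake thermal efficiency (benefit),
# NOx (cost), smoke (cost) -- made-up numbers for illustration only.
m = np.array([[30.1, 950, 60], [30.8, 900, 48], [29.9, 870, 45], [28.7, 860, 44.0]])
scores = topsis(m, np.array([0.5, 0.3, 0.2]), np.array([True, False, False]))
print(np.argsort(scores)[::-1])                   # ranking of the alternatives
```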

  9. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them: IEEE 802.11 (as an example of wireless local area networks), IEEE 802.16 (as an example of wireless metropolitan networks), and IEEE 802.15 (as an example of body area networks). Each of the sections on these three systems also concludes with a discussion of the model implementations that are available today.

  10. Physics-based simulation modeling and optimization of microstructural changes induced by machining and selective laser melting processes in titanium and nickel based alloys

    Science.gov (United States)

    Arisoy, Yigit Muzaffer

    Manufacturing processes may significantly affect the quality of resultant surfaces and the structural integrity of metal end products. Controlling manufacturing-process-induced changes to the product's surface integrity may improve the fatigue life and overall reliability of the end product. The goal of this study is to model the phenomena that result in microstructural alterations and to improve the surface integrity of manufactured parts by utilizing physics-based process simulations and other computational methods. Two different (one conventional and one advanced) manufacturing processes are studied: machining of titanium and nickel-based alloys, and selective laser melting of nickel-based powder alloys. 3D Finite Element (FE) process simulations are developed, and experimental data that validate these process simulation models are generated to compare against predictions. Computational process modeling and optimization have been performed for machining-induced microstructure, including: i) predicting recrystallization and grain size using FE simulations and the Johnson-Mehl-Avrami-Kolmogorov (JMAK) model, ii) predicting microhardness using non-linear regression models and the Random Forests method, and iii) multi-objective machining optimization for minimizing microstructural changes. Experimental analysis and computational process modeling of selective laser melting have also been conducted, including: i) microstructural analysis of grain sizes and growth directions using SEM imaging and machine learning algorithms, ii) analysis of thermal imaging for spattering, heating/cooling rates and meltpool size, iii) predicting the thermal field, meltpool size, and growth directions via thermal gradients using 3D FE simulations, and iv) predicting localized solidification using the Phase Field method. These computational process models and predictive models, once utilized by industry to optimize process parameters, have the ultimate potential to improve performance of

  11. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number).

  12. Statistical Analysis of Wind Power Density Based on the Weibull and Rayleigh Models of Selected Site in Malaysia

    Directory of Open Access Journals (Sweden)

    Aliashim Albani

    2014-02-01

    The demand for electricity in Malaysia is growing in tandem with its Gross Domestic Product (GDP) growth. Malaysia is going to need even more energy as it strives to grow towards a high-income economy. Malaysia has taken steps toward exploring renewable energy (RE), including wind energy, as an alternative source for generating electricity. In the present study, the wind energy potential of each site is statistically analyzed based on one year of measured hourly time-series wind speed data. Wind data were obtained from Malaysian Meteorological Department (MMD) weather stations at nine selected sites in Malaysia. The data were processed using MATLAB programming to determine and generate the Weibull and Rayleigh distribution functions. Both the Weibull and Rayleigh models were fitted to and compared against the empirical distribution of the 2011 field data. The analysis showed that the Weibull distribution fits the field data better than the Rayleigh distribution for the whole of 2011. The wind power density of every site was then studied based on the Weibull and Rayleigh functions. The Weibull distribution provides a good approximation for estimating wind power density in Malaysia.
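
    The wind power density computation named above has a closed form under each model: P = 0.5 ρ E[v³], with E[v³] = c³Γ(1 + 3/k) for a Weibull(k, c) fit; the Rayleigh model is the k = 2 special case. A minimal sketch on simulated hourly speeds standing in for the MMD data:

```python
import numpy as np
from scipy.special import gamma
from scipy.stats import weibull_min, rayleigh

rho = 1.225                                           # air density, kg/m^3
v = weibull_min.rvs(2.0, scale=6.0, size=8760, random_state=0)  # stand-in data

k, _, c = weibull_min.fit(v, floc=0)                  # Weibull shape and scale
_, sigma = rayleigh.fit(v, floc=0)                    # Rayleigh scale

p_weibull = 0.5 * rho * c**3 * gamma(1 + 3 / k)       # E[v^3] under Weibull
p_rayleigh = 0.5 * rho * (2 * sigma**2) ** 1.5 * gamma(2.5)  # Weibull with k=2
p_empirical = 0.5 * rho * np.mean(v**3)
print(f"W/m^2 -> Weibull: {p_weibull:.1f}, Rayleigh: {p_rayleigh:.1f}, "
      f"data: {p_empirical:.1f}")
```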

  13. How to avoid mismodelling in GLM-based fMRI data analysis: cross-validated Bayesian model selection.

    Science.gov (United States)

    Soch, Joram; Haynes, John-Dylan; Allefeld, Carsten

    2016-11-01

    Voxel-wise general linear models (GLMs) are a standard approach for analyzing functional magnetic resonance imaging (fMRI) data. An advantage of GLMs is that they are flexible and can be adapted to the requirements of many different data sets. However, the specification of first-level GLMs leaves the researcher with many degrees of freedom, which is problematic given recent efforts to ensure robust and reproducible fMRI data analysis. Formal model comparisons that allow a systematic assessment of GLMs are only rarely performed. On the one hand, overly simple models may underfit data and leave real effects undiscovered. On the other hand, overly complex models might overfit data and also reduce statistical power. Here we present a systematic approach termed cross-validated Bayesian model selection (cvBMS) that allows one to decide which GLM best describes a given fMRI data set. Importantly, our approach allows for non-nested model comparison, i.e. comparing more than two models that do not just differ by adding one or more regressors. It also allows for spatially heterogeneous modelling, i.e. using different models for different parts of the brain. We validate our method using simulated data and demonstrate potential applications to empirical data. The increased use of model comparison and model selection should increase the reliability of GLM results and the reproducibility of fMRI studies.

  14. Exploring Several Methods of Groundwater Model Selection

    Science.gov (United States)

    Samani, Saeideh; Ye, Ming; Asghari Moghaddam, Asghar

    2017-04-01

    Selecting reliable models for simulating groundwater flow and solute transport is essential to groundwater resources management and protection. This work explores several model selection methods for avoiding over-complex and/or over-parameterized groundwater models. We consider six groundwater flow models with different numbers (6, 10, 10, 13, 13, and 15) of model parameters. These models represent alternative geological interpretations, recharge estimates, and boundary conditions at a study site in Iran. The models were developed with ModelMuse and calibrated against observations of hydraulic head using UCODE. Model selection was conducted using the following four approaches: (1) ranking the models by their root mean square error (RMSE) obtained after UCODE-based model calibration, (2) calculating model probability using the GLUE method, (3) evaluating model probability using model selection criteria (AIC, AICc, BIC, and KIC), and (4) evaluating model weights using the fuzzy Multi-Criteria-Decision-Making (MCDM) approach. MCDM is based on the fuzzy analytical hierarchy process (AHP) and the fuzzy technique for order performance, which identifies the ideal solution by a gradual expansion from the local to the global scale of model parameters. The KIC and MCDM methods are superior to the other methods, as they consider not only the fit between observed and simulated data and the number of parameters, but also the uncertainty in model parameters. Considering these factors can prevent over-complexity and over-parameterization when selecting the appropriate groundwater flow models. These methods selected, as the best model, one with average complexity (10 parameters) and the best parameter estimation (model 3).
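
    For least-squares calibrations, the first three criteria named above are simple functions of the residual sum of squares (RSS), the number of observations n, and the number of parameters k; KIC additionally needs the posterior parameter covariance and is omitted from this sketch. The RSS values below are invented for illustration.

```python
import numpy as np

def aic(n, k, rss):
    return n * np.log(rss / n) + 2 * k

def aicc(n, k, rss):
    return aic(n, k, rss) + 2 * k * (k + 1) / (n - k - 1)   # small-sample term

def bic(n, k, rss):
    return n * np.log(rss / n) + k * np.log(n)

n = 60                                   # hypothetical number of head observations
models = {"m1": (6, 4.1), "m2": (10, 3.2), "m3": (10, 2.4),
          "m4": (13, 2.3), "m5": (13, 2.5), "m6": (15, 2.2)}
for name, (k, rss) in models.items():    # (parameter count, residual sum of squares)
    print(name, round(aic(n, k, rss), 1), round(aicc(n, k, rss), 1),
          round(bic(n, k, rss), 1))
```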

  15. Model Selection and Evaluation Based on Emerging Infectious Disease Data Sets including A/H1N1 and Ebola

    Directory of Open Access Journals (Sweden)

    Wendi Liu

    2015-01-01

    The aim of the present study is to apply simple ODE models to modeling the spread of emerging infectious diseases and to show the importance of model selection in estimating the parameters, the basic reproduction number, the turning point, and the final size. To quantify the plausibility of each model, given the data and the set of four models (Logistic, Gompertz, Rosenzweig, and Richards), Bayes factors are calculated, and precise estimates of the best-fitted model parameters and key epidemic characteristics have been obtained. In particular, for Ebola the basic reproduction numbers are 1.3522 (95% CI: 1.3506, 1.3537), 1.2101 (95% CI: 1.2084, 1.2119), 3.0234 (95% CI: 2.6063, 3.4881), and 1.9018 (95% CI: 1.8565, 1.9478); the turning points are November 7, November 17, October 2, and November 3, 2014; and the final sizes until December 2015 are 25794 (95% CI: 25630, 25958), 3916 (95% CI: 3865, 3967), 9886 (95% CI: 9740, 10031), and 12633 (95% CI: 12515, 12750) for West Africa, Guinea, Liberia, and Sierra Leone, respectively. The main results confirm that model selection is crucial in evaluating and predicting the important quantities describing emerging infectious diseases, and that arbitrarily picking a model without any consideration of alternatives is problematic.
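
    As a small illustration of the kind of phenomenological model compared above, the sketch below fits logistic growth to noisy cumulative case counts and reads off the final size and turning point. The data are simulated, and the Bayes-factor comparison across the four candidate models is not reproduced.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    """Cumulative cases: final size K, growth rate r, turning point t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(120.0)
rng = np.random.default_rng(5)
cases = logistic(t, 25000, 0.08, 60) + rng.normal(0, 200, t.size)  # stand-in data

(K, r, t0), _ = curve_fit(logistic, t, cases, p0=[cases.max(), 0.1, 50.0])
print(f"final size ~ {K:.0f}, turning point ~ day {t0:.1f}")
```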

  16. Assessment of dual selection in grid based selectivity systems

    DEFF Research Database (Denmark)

    Sistiaga, Manu; Herrmann, Bent; Grimaldo, Eduardo

    2010-01-01

    Herein we propose a method to assess dual selection in grid based selectivity systems. This method takes into account the parameter “grid contact likelihood” (Cgrid), which can be interpreted as the proportion of fish that actually makes an attempt to escape through the grid. In a case study...... of the Barents Sea cod and haddock trawl fishery, we demonstrate that our model describes the experimental data better than the models previously used to fit similar data. For both cod and haddock, Cgrid was significantly smaller than 1.0, which demonstrated the relevance of the proposed model. Cgrid was higher......-compartment setup to avoid imprecise estimates of Cgrid, L50grid, SRgrid, L50codend, and SRcodend. In general, only the combined selectivity of the grid and the codend could be estimated with acceptable precision using a standard two-compartment sampling approach....

  17. A Hybrid Multi-Step Model for Forecasting Day-Ahead Electricity Price Based on Optimization, Fuzzy Logic and Model Selection

    Directory of Open Access Journals (Sweden)

    Ping Jiang

    2016-08-01

    The day-ahead electricity market is closely related to other commodity markets such as the fuel and emission markets and is increasingly playing a significant role in human life. Thus, in electricity markets, accurate electricity price forecasting plays a significant role for power producers and consumers. Although many studies developing and proposing highly accurate forecasting models exist in the literature, there have been few investigations into improving the forecasting effectiveness of electricity price from the perspective of reducing the volatility of the data with satisfactory accuracy. Based on reducing the volatility of the electricity price and the forecasting nature of the radial basis function network (RBFN), this paper develops a two-stage model to forecast the day-ahead electricity price, of which the first stage is particle swarm optimization (PSO)-core mapping (CM) with self-organizing map and fuzzy set (PCMwSF), and the second stage is a selection rule (SR). The PCMwSF stage applies CM, fuzzy sets, and optimized weights to obtain the future price, and the SR stage is inspired by the forecasting nature of the RBFN and effectively selects the best forecast during the test period. The proposed model, i.e., CM-PCMwSF-SR, not only overcomes the difficulty of reducing the high volatility of the electricity price but also achieves superior forecasting effectiveness relative to the benchmarks.

  18. Model-Based Hookload Monitoring and Prediction at Drilling Rigs using Neural Networks and Forward-Selection Algorithm

    Science.gov (United States)

    Arnaout, A.; Fruhwirth, R.; Winter, M.; Esmael, B.; Thonhauser, G.

    2012-04-01

    The use of neural networks and advanced machine learning techniques in the oil & gas industry is a growing trend in the market. Especially in drilling oil & gas wells, predicting and monitoring different drilling parameters is an essential task to prevent serious problems like "Kick", "Lost Circulation" or "Stuck Pipe", among others. The hookload represents the weight of the drill string at the crane hook. It is one of the most important parameters. During drilling, the parameter "Weight on Bit" is controlled by the driller, whereby the hookload is the only measure to monitor how much weight on bit is applied to the bit to generate the hole. Any changes in weight on bit will be directly reflected in the hookload. Furthermore, any unwanted contact between the drill string and the wellbore - potentially leading to a stuck pipe problem - will appear directly in the measurements of the hookload. Therefore, comparison of the measured to the predicted hookload will not only give a clear idea of what is happening downhole, it also enables the prediction of a number of important events that may cause problems in the borehole and yield, in some - fortunately rare - cases, in catastrophes like blow-outs. Heuristic models using highly sophisticated neural networks were designed for the hookload prediction; the training data sets were prepared in cooperation with drilling experts. Sensor measurements as well as a set of derived feature channels were used as input to the models. The contents of the final data set can be separated into (1) features based on rig operation states, (2) real-time sensor features and (3) features based on physics. A combination of a novel neural network architecture - the Completely Connected Perceptron - and parallel learning techniques which avoid trapping into local error minima was used for building the models. In addition, automatic network growing algorithms and highly sophisticated stopping criteria offer robust and efficient estimation of the

  19. Fusion strategies for selecting multiple tuning parameters for multivariate calibration and other penalty based processes: A model updating application for pharmaceutical analysis

    International Nuclear Information System (INIS)

    Tencate, Alister J.; Kalivas, John H.; White, Alexander J.

    2016-01-01

    New multivariate calibration methods and other processes are being developed that require the selection of multiple tuning parameter (penalty) values to form the final model. With one or more tuning parameters, using only one measure of model quality to select final tuning parameter values is not sufficient, and optimization of several model quality measures is challenging. Thus, three fusion ranking methods are investigated for simultaneous assessment of multiple measures of model quality for selecting tuning parameter values. One is a supervised learning fusion rule named sum of ranking differences (SRD). The other two are non-supervised learning processes based on the sum and median operations. The effect of the number of models evaluated on the three fusion rules is also evaluated, using three procedures. One procedure uses all models from all possible combinations of the tuning parameters. To reduce the number of models evaluated, an iterative process (only applicable to SRD) is applied, and thresholding a model quality measure before applying the fusion rules is also used. A near infrared pharmaceutical data set requiring model updating is used to evaluate the three fusion rules. In this case, calibration for the primary conditions is for the active pharmaceutical ingredient (API) of tablets produced in a laboratory, and the secondary conditions for calibration updating are for tablets produced in the full batch setting. Two model updating processes requiring selection of two unique tuning parameter values are studied: one based on Tikhonov regularization (TR) and the other a variation of partial least squares (PLS). The three fusion methods are shown to provide equivalent and acceptable results, allowing automatic selection of the tuning parameter values. The best tuning parameter values are selected when the model quality measures used with the fusion rules are for the small secondary sample set used to form the updated models. In this model updating situation, evaluation of

  20. Do pseudo-absence selection strategies influence species distribution models and their predictions? An information-theoretic approach based on simulated data

    Directory of Open Access Journals (Sweden)

    Guisan Antoine

    2009-04-01

    Background Multiple logistic regression is precluded from many practical applications in ecology that aim to predict the geographic distributions of species because it requires absence data, which are rarely available or are unreliable. In order to use multiple logistic regression, many studies have simulated "pseudo-absences" through a number of strategies, but it is unknown how the choice of strategy influences models and their geographic predictions of species. In this paper we evaluate the effect of several prevailing pseudo-absence strategies on predictions of the geographic distribution of a virtual species whose "true" distribution and relationship to three environmental predictors was predefined. We evaluated the effect of using (a) real absences, (b) pseudo-absences selected randomly from the background, and (c) two-step approaches: pseudo-absences selected from low-suitability areas predicted by either Ecological Niche Factor Analysis (ENFA) or BIOCLIM. We compared how the choice of pseudo-absence strategy affected model fit, predictive power, and information-theoretic model selection results. Results Models built with true absences had the best predictive power and the best discriminatory power, and the "true" model (the one that contained the correct predictors) was supported by the data according to AIC, as expected. Models based on random pseudo-absences had among the lowest fit, but yielded the second highest AUC value (0.97), and the "true" model was also supported by the data. Models based on two-step approaches had intermediate fit, the lowest predictive power, and the "true" model was not supported by the data. Conclusion If ecologists wish to build parsimonious GLM models that will allow them to make robust predictions, a reasonable approach is to use a large number of randomly selected pseudo-absences and to perform model selection based on an information-theoretic approach. However, the resulting models can be expected to have
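
    Strategy (b) is easy to sketch: sample background points as pseudo-absences and fit a GLM (logistic regression). The environment and the virtual species below are simulated stand-ins, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
env = rng.normal(size=(5000, 3))                      # three environmental predictors
p_true = 1.0 / (1.0 + np.exp(-(1.5 * env[:, 0] - 1.0 * env[:, 1])))
present = rng.random(5000) < p_true                   # virtual species occurrences

presences = env[present]
# Random background points serve as pseudo-absences (may overlap presences).
pseudo_absences = env[rng.choice(5000, size=len(presences), replace=False)]

X = np.vstack([presences, pseudo_absences])
y = np.concatenate([np.ones(len(presences)), np.zeros(len(pseudo_absences))])
glm = LogisticRegression().fit(X, y)
print("coefficients:", glm.coef_.round(2))            # third one should be near 0
```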

  1. APPLICATION OF THE CERNE MODEL FOR THE ESTABLISHMENT OF INCUBATION SELECTION CRITERIA IN TECHNOLOGY-BASED BUSINESSES: A STUDY IN TECHNOLOGY-BASED INCUBATORS OF THE COUNTRY

    Directory of Open Access Journals (Sweden)

    Clobert Jefferson Passoni

    2017-03-01

    Full Text Available Business incubators are a great source of encouragement for innovative projects, enabling the development of new technologies and providing infrastructure, advice and support, which are key elements for the success of new businesses. There are 154 technology-based firm incubators (TBFs) in Brazil, and each has its own mechanism for selecting companies for incubation. Because of these different forms of incubator management, the business model CERNE - Reference Center for Support for New Projects - was created by Anprotec and Sebrae in order to standardize procedures and increase the chances of successful incubation. The objective of this study is to propose selection criteria for incubation, considering CERNE's five dimensions and aiming to support decision-making in the assessment of candidate companies at a TBF incubator. The research was conducted on the public notices of 20 TBF incubators, from which 38 selection criteria were identified and classified. Managers of TBF incubators rated the importance of 26 of these criteria via online questionnaires. As a result, favorable ratings were obtained for 25 of them; only one criterion differed from the others, with an unfavorable rating.

  2. Emotional bookkeeping and high partner selectivity are necessary for the emergence of partner-specific reciprocal affiliation in an agent-based model of primate groups.

    Directory of Open Access Journals (Sweden)

    Ellen Evers

    Full Text Available Primate affiliative relationships are differentiated, individual-specific and often reciprocal. However, the required cognitive abilities are still under debate. Recently, we introduced the EMO-model, in which two emotional dimensions regulate social behaviour: anxiety-FEAR and satisfaction-LIKE. Emotional bookkeeping is modelled by providing each individual with partner-specific LIKE attitudes in which the emotional experiences of earlier affiliations with others are accumulated. Individuals also possess fixed partner-specific FEAR attitudes, reflecting the stable dominance hierarchy. In this paper, we focus on one key parameter of the model, namely the degree of partner selectivity, i.e. the extent to which individuals rely on their LIKE attitudes when choosing affiliation partners. Studying the effect of partner selectivity on the emergent affiliative relationships, we found that at high selectivity, individuals restricted their affiliative behaviours more to similar-ranking individuals and that reciprocity of affiliation was enhanced. We compared the emotional bookkeeping model with a control model, in which individuals had fixed LIKE attitudes simply based on the (fixed) rank-distance, instead of dynamic LIKE attitudes based on earlier events. Results from the control model were very similar to the emotional bookkeeping model: high selectivity resulted in preference of similar-ranking partners and enhanced reciprocity. However, only in the emotional bookkeeping model did high selectivity result in the emergence of reciprocal affiliative relationships that were highly partner-specific. Moreover, in the emotional bookkeeping model, LIKE attitude predicted affiliative behaviour better than rank-distance, especially at high selectivity. Our model suggests that emotional bookkeeping is a likely candidate mechanism to underlie partner-specific reciprocal affiliation.
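
    How a selectivity parameter can sharpen partner choice is easy to illustrate with a weighting of partner-specific LIKE values by a selectivity exponent, so that higher selectivity concentrates choices on the most-liked partners. This is a minimal sketch of the general idea, not the EMO-model's actual update rules:

        import numpy as np

        rng = np.random.default_rng(0)

        def choose_partner(like_values, selectivity):
            """Pick an affiliation partner with probability proportional to
            LIKE**selectivity; higher selectivity concentrates choice on
            the most-liked partners."""
            weights = np.asarray(like_values, dtype=float) ** selectivity
            probs = weights / weights.sum()
            return rng.choice(len(like_values), p=probs)

        like = [0.1, 0.2, 0.6, 0.1]                  # illustrative LIKE attitudes
        print(choose_partner(like, selectivity=1))   # mildly biased choice
        print(choose_partner(like, selectivity=8))   # almost always partner 2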

  3. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR).
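
    Parallel Analysis selects the model order by keeping only components whose eigenvalues exceed those obtained after destroying the feature-wise structure by permutation. A minimal linear-PCA sketch of that permutation test (the kernel-PCA extension additionally sweeps the RBF scale; everything here is illustrative):

        import numpy as np

        def parallel_analysis(X, n_perm=100, quantile=95, seed=0):
            """Number of PCA components whose eigenvalues exceed the
            permutation-based null distribution."""
            rng = np.random.default_rng(seed)
            Xc = X - X.mean(axis=0)
            eig = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]

            null = np.empty((n_perm, X.shape[1]))
            for b in range(n_perm):
                # Permute each column independently to break correlations.
                Xp = np.column_stack([rng.permutation(col) for col in Xc.T])
                null[b] = np.linalg.eigvalsh(np.cov(Xp, rowvar=False))[::-1]

            threshold = np.percentile(null, quantile, axis=0)
            return int(np.sum(eig > threshold))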

  4. Genetic search feature selection for affective modeling

    DEFF Research Database (Denmark)

    Martínez, Héctor P.; Yannakakis, Georgios N.

    2010-01-01

    Automatic feature selection is a critical step towards the generation of successful computational models of affect. This paper presents a genetic search-based feature selection method which is developed as a global-search algorithm for improving the accuracy of the affective models built. The method is tested and compared against sequential forward feature selection and random search in a dataset derived from a game survey experiment which contains bimodal input features (physiological and gameplay) and expressed pairwise preferences of affect. Results suggest that the proposed method...
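
    A genetic search over feature subsets encodes each candidate subset as a bit mask and evolves a population by selection, crossover, and mutation against a model-accuracy fitness. A compact, generic sketch (the fitness function is a stand-in; the paper's models and data are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(1)

        def evolve_masks(n_features, fitness, pop=20, gens=50, p_mut=0.05):
            """Simple genetic search over binary feature masks."""
            population = rng.integers(0, 2, size=(pop, n_features))
            for _ in range(gens):
                scores = np.array([fitness(m) for m in population])
                # Tournament selection of parents.
                idx = [max(rng.choice(pop, 2), key=lambda i: scores[i])
                       for _ in range(pop)]
                parents = population[idx]
                # One-point crossover between consecutive parents.
                cut = rng.integers(1, n_features, size=pop)
                children = np.array(
                    [np.concatenate([parents[i][:c], parents[(i + 1) % pop][c:]])
                     for i, c in enumerate(cut)])
                # Bit-flip mutation.
                flips = rng.random(children.shape) < p_mut
                population = np.where(flips, 1 - children, children)
            scores = np.array([fitness(m) for m in population])
            return population[scores.argmax()]

        # Toy fitness: reward picking the first three features, penalize size.
        best = evolve_masks(10, lambda m: 3 * m[:3].sum() - 0.5 * m.sum())
        print(best)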

  5. Genomic selection models double the accuracy of predicted breeding values for bacterial cold water disease resistance compared to a traditional pedigree-based model in rainbow trout aquaculture

    Science.gov (United States)

    Previously we have shown that bacterial cold water disease (BCWD) resistance in rainbow trout can be improved using traditional family-based selection, but progress has been limited to exploiting only between-family genetic variation. Genomic selection (GS) is a new alternative enabling exploitation...

  6. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2002-05-01

    Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log-normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found; otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits, and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general, the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model) or a generalised version of heritability plays a central role in these formulas.
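
    The central quantity referred to is the narrow-sense heritability, and the classical Gaussian result that the paper generalises links it to the expected response to selection via the breeder's equation; in standard textbook notation (not the paper's generalised forms):

        h^2 = \frac{\sigma_A^2}{\sigma_A^2 + \sigma_E^2}, \qquad R = h^2 S

    where \sigma_A^2 is the additive genetic variance, \sigma_E^2 the residual variance, S the selection differential, and R the expected response to selection.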

  7. Melody Track Selection Using Discriminative Language Model

    Science.gov (United States)

    Wu, Xiao; Li, Ming; Suo, Hongbin; Yan, Yonghong

    In this letter we focus on the task of selecting the melody track from a polyphonic MIDI file. Based on the intuition that music and language are similar in many aspects, we solve the selection problem by introducing an n-gram language model to learn the melody co-occurrence patterns in a statistical manner and determine the melodic degree of a given MIDI track. Furthermore, we propose the idea of using a background model and posterior probability criteria to make the modeling more discriminative. In the evaluation, the achieved 81.6% correct rate indicates the feasibility of our approach.
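
    Scoring a track under an n-gram model reduces to summing log-probabilities of each note given its history, and a background model turns the score into a discriminative, posterior-style ratio. A bigram sketch with add-one smoothing (illustrative; the letter's exact features and smoothing are not specified here):

        import math
        from collections import Counter

        def train_bigram(sequences, vocab_size):
            """Add-one-smoothed bigram model over integer note sequences."""
            pairs, unigrams = Counter(), Counter()
            for seq in sequences:
                unigrams.update(seq[:-1])
                pairs.update(zip(seq[:-1], seq[1:]))
            def logprob(seq):
                return sum(math.log((pairs[(a, b)] + 1) /
                                    (unigrams[a] + vocab_size))
                           for a, b in zip(seq[:-1], seq[1:]))
            return logprob

        melody_lp = train_bigram([[60, 62, 64, 65, 67]], vocab_size=128)
        background_lp = train_bigram([[36, 36, 43, 36]], vocab_size=128)

        track = [60, 62, 64]
        # Discriminative "melodic degree": melody model vs. background model.
        print(melody_lp(track) - background_lp(track))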

  8. A quick method based on SIMPLISMA-KPLS for simultaneously selecting outlier samples and informative samples for model standardization in near infrared spectroscopy

    Science.gov (United States)

    Li, Li-Na; Ma, Chang-Ming; Chang, Ming; Zhang, Ren-Cheng

    2017-12-01

    A novel method based on SIMPLe-to-use Interactive Self-modeling Mixture Analysis (SIMPLISMA) and Kernel Partial Least Squares (KPLS), named SIMPLISMA-KPLS, is proposed in this paper for the simultaneous selection of outlier samples and informative samples. It is a quick algorithm for model standardization (also called model transfer) in near-infrared (NIR) spectroscopy. NIR data of corn samples, analyzed for protein content, are used to evaluate the proposed method. Piecewise direct standardization (PDS) is employed for model transfer, and SIMPLISMA-PDS-KPLS and KS-PDS-KPLS are compared in terms of the prediction accuracy of protein content and the calculation speed of each algorithm. The conclusions include that SIMPLISMA-KPLS can be utilized as an alternative sample selection method for model transfer. Although it has similar accuracy to Kennard-Stone (KS), it differs from KS in that it employs concentration information in its selection program. This ensures that analyte information is involved in the analysis, and that the spectra (X) of the selected samples are interrelated with the concentration (y). It can also be used for simultaneous outlier elimination through validation of the calibration. The running-time statistics make clear that the sample selection process is more rapid when using KPLS. The quick SIMPLISMA-KPLS algorithm is beneficial for improving the speed of online measurement using NIR spectroscopy.
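
    The Kennard-Stone baseline against which the method is compared selects samples purely in spectral space, iteratively adding the sample whose minimum distance to the already-selected set is largest. A minimal sketch of that baseline (data shapes are illustrative):

        import numpy as np

        def kennard_stone(X, k):
            """Select k representative rows of X by the Kennard-Stone algorithm."""
            d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
            selected = list(np.unravel_index(d.argmax(), d.shape))  # two farthest points
            while len(selected) < k:
                remaining = [i for i in range(len(X)) if i not in selected]
                # Pick the point whose nearest selected sample is farthest away.
                nxt = max(remaining, key=lambda i: d[i, selected].min())
                selected.append(nxt)
            return selected

        X = np.random.default_rng(0).normal(size=(50, 3))
        print(kennard_stone(X, 5))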

  9. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    Reeves, M.; Baker, N.A.; Duguid, J.O. [INTERA, Inc., Las Vegas, NV (United States)

    1994-04-04

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  10. Review and selection of unsaturated flow models

    International Nuclear Information System (INIS)

    Reeves, M.; Baker, N.A.; Duguid, J.O.

    1994-01-01

    Since the 1960s, ground-water flow models have been used for analysis of water resources problems. In the 1970s, emphasis began to shift to analysis of waste management problems. This shift in emphasis was largely brought about by site selection activities for geologic repositories for disposal of high-level radioactive wastes. Model development during the 1970s and well into the 1980s focused primarily on saturated ground-water flow because geologic repositories in salt, basalt, granite, shale, and tuff were envisioned to be below the water table. Selection of the unsaturated zone at Yucca Mountain, Nevada, for potential disposal of waste began to shift model development toward unsaturated flow models. Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer models; to conduct performance assessments; and to develop performance assessment models, where necessary. This document describes the CRWMS M&O approach to model review and evaluation (Chapter 2), and the requirements for unsaturated flow models which are the bases for selection from among the current models (Chapter 3). Chapter 4 identifies existing models, and their characteristics. Through a detailed examination of characteristics, Chapter 5 presents the selection of models for testing. Chapter 6 discusses the testing and verification of selected models. Chapters 7 and 8 give conclusions and make recommendations, respectively. Chapter 9 records the major references for each of the models reviewed. Appendix A, a collection of technical reviews for each model, contains a more complete list of references. Finally, Appendix B characterizes the problems used for model testing.

  11. A practical procedure for the selection of time-to-failure models based on the assessment of trends in maintenance data

    Energy Technology Data Exchange (ETDEWEB)

    Louit, D.M. [Komatsu Chile, Av. Americo Vespucio 0631, Quilicura, Santiago (Chile)], E-mail: rpascual@ing.puc.cl; Pascual, R. [Centro de Mineria, Pontificia Universidad Catolica de Chile, Av. Vicuna Mackenna 4860, Santiago (Chile); Jardine, A.K.S. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King's College Road, Toronto, Ont., M5S 3G8 (Canada)

    2009-10-15

    Reliability studies often rely on false premises, such as the assumption of independent and identically distributed times between failures (a renewal process). This can lead to erroneous model selection for the time to failure of a particular component or system, which can in turn lead to wrong conclusions and decisions. A strong statistical focus, the lack of a systematic approach and sometimes inadequate theoretical background seem to have made it difficult for maintenance analysts to adopt the necessary stage of data testing before the selection of a suitable model. In this paper, a framework for model selection to represent the failure process for a component or system is presented, based on a review of available trend tests. The paper focuses only on single-time-variable models and is primarily directed to analysts responsible for reliability analyses in an industrial maintenance environment. The model selection framework is directed towards the discrimination between the use of statistical distributions to represent the time to failure (the 'renewal approach') and the use of stochastic point processes (the 'repairable systems approach'), appropriate when system ageing or reliability growth may be present. An illustrative example based on failure data from a fleet of backhoes is included.
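
    A common trend test in this setting is the Laplace test: under a trend-free (homogeneous Poisson) process, the normalized mean of the failure times is approximately standard normal, so a large |U| points toward a repairable-systems (point-process) model rather than a renewal model. A sketch of the time-truncated variant (an assumed standard formulation, not necessarily the one used in the paper's review):

        import math

        def laplace_test(failure_times, T):
            """Laplace trend statistic for failure times observed over (0, T].
            U ~ N(0, 1) under no trend; U >> 0 suggests deterioration,
            U << 0 suggests reliability growth."""
            n = len(failure_times)
            return (sum(failure_times) / n - T / 2) / (T * math.sqrt(1 / (12 * n)))

        times = [50, 210, 380, 460, 510, 540, 560]  # illustrative, clustering late
        print(laplace_test(times, T=600))  # positive U: failures concentrate late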

  12. MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS

    International Nuclear Information System (INIS)

    Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.

    2012-01-01

    Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
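
    The evidence ratio (Bayes factor) between two candidate model atmospheres is the ratio of their marginal likelihoods, each obtained by integrating the likelihood over the model's prior; in generic textbook notation (not the paper's):

        K_{12} = \frac{p(D \mid M_1)}{p(D \mid M_2)}
               = \frac{\int p(D \mid \theta_1, M_1)\, p(\theta_1 \mid M_1)\, \mathrm{d}\theta_1}
                      {\int p(D \mid \theta_2, M_2)\, p(\theta_2 \mid M_2)\, \mathrm{d}\theta_2}

    with K_{12} > 1 favouring model M_1. Because the integral penalises unused parameter space, more complex atmospheres are preferred only when the data (e.g., high-S/N Stokes profiles) warrant them.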

  13. Voter models with heterozygosity selection

    Czech Academy of Sciences Publication Activity Database

    Sturm, A.; Swart, Jan M.

    2008-01-01

    Roč. 18, č. 1 (2008), s. 59-99 ISSN 1050-5164 R&D Projects: GA ČR GA201/06/1323; GA ČR GA201/07/0237 Institutional research plan: CEZ:AV0Z10750506 Keywords : Heterozygosity selection * rebellious voter model * branching * annihilation * survival * coexistence Subject RIV: BA - General Mathematics Impact factor: 1.285, year: 2008

  14. Environmental Sustainability and Effects on Urban Micro Region using Agent-Based Modeling of Urbanisation in Select Major Indian Cities

    Science.gov (United States)

    Aithal, B. H.

    2015-12-01

    Abstract: Urbanisation has gained momentum with globalisation in India. Policy decisions to set up commercial and industrial hubs have fuelled large-scale migration, which, together with the population upsurge, has contributed to fast-growing urban regions that need to be monitored in order to design sustainable urban cities. Unplanned urbanisation has resulted in the growth of peri-urban regions, referred to as urban sprawl, that are often devoid of basic amenities and infrastructure, leading to the large-scale environmental problems that are already evident. Remote sensing data acquired through space-borne sensors at regular intervals help in understanding urban dynamics, aided by Geoinformatics, which has proved very effective in mapping and monitoring for sustainable urban planning. Cellular automata (CA) is a robust approach for the spatially explicit simulation of land-use land-cover dynamics. CA uses rules, states and conditions that are vital factors in modelling urbanisation. This communication combines the simulation capabilities of CA with agent-based modelling, supported by fuzzy characteristics and weightings derived through the analytical hierarchy process (AHP). This has been done considering perceived agents such as industry, natural resources, etc. Each agent's role in the development of a particular region into an urban area has been examined through weights reflecting the influence of that agent based on its characteristic functions. Validation yielded a high kappa coefficient, indicating the quality and allocation performance of the model and its validity for predicting future projections. Prediction with the proposed model was performed for 2030. Further, the environmental sustainability of each of these cities is explored in terms of water features, environment, greenhouse gas emissions, effects on human health, etc. Modelling suggests trends of transformation of the various land-use classes with the spurt in urban expansion based on specific regions and

  15. A DECISION MAKING MODEL FOR SELECTION OF WIND ENERGY PRODUCTION FARMS BASED ON FUZZY ANALYTIC HIERARCHY PROCESS

    OpenAIRE

    SAGBAS, Aysun; MAZMANOGLU, Adnan; ALP, Reyhan

    2013-01-01

    The purpose of this paper is to present an evaluation model for the prioritization of wind energy production sites, namely, Mersin, Silifke and Anamur, located in Mediterranean Sea region of Turkey. For this purpose, a fuzzy analytical hierarchy decision making approach based on multi-criteria decision making framework including economic, technical, and environmental criteria was performed. It is found that the results obtained from fuzzy analytical hierarchy process (FAHP) approach, Anamur d...

  16. On spatial mutation-selection models

    Energy Technology Data Exchange (ETDEWEB)

    Kondratiev, Yuri, E-mail: kondrat@math.uni-bielefeld.de [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Kutoviy, Oleksandr, E-mail: kutoviy@math.uni-bielefeld.de, E-mail: kutovyi@mit.edu [Fakultät für Mathematik, Universität Bielefeld, Postfach 100131, 33501 Bielefeld (Germany); Department of Mathematics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Minlos, Robert, E-mail: minl@iitp.ru; Pirogov, Sergey, E-mail: pirogov@proc.ru [IITP, RAS, Bolshoi Karetnyi 19, Moscow (Russian Federation)

    2013-11-15

    We discuss the selection procedure in the framework of mutation models. We study the regulation of stochastically developing systems based on a transformation of the initial Markov process which includes a cost functional. The transformation of the initial Markov process by the cost functional has an analytic realization in terms of a Kimura-Maruyama type equation for the time evolution of states or in terms of the corresponding Feynman-Kac formula on the path space. The state evolution of the system, including the limiting behavior, is studied for two types of mutation-selection models.

  17. Supplier selection an MCDA-based approach

    CERN Document Server

    Mukherjee, Krishnendu

    2017-01-01

    The purpose of this book is to present a comprehensive review of the latest research and development trends at the international level for modeling and optimization of the supplier selection process for different industrial sectors. It is targeted to serve two audiences: the MBA and PhD student interested in procurement, and the practitioner who wishes to gain a deeper understanding of procurement analysis with multi-criteria based decision tools to avoid upstream risks to get better supply chain visibility. The book is expected to serve as a ready reference for supplier selection criteria and various multi-criteria based supplier’s evaluation methods for forward, reverse and mass customized supply chain. This book encompasses several criteria, methods for supplier selection in a systematic way based on extensive literature review from 1998 to 2012. It provides several case studies and some useful links which can serve as a starting point for interested researchers. In the appendix several computer code wri...

  18. Multi-dimensional model order selection

    Directory of Open Access Journals (Sweden)

    Roemer Florian

    2011-01-01

    Full Text Available Abstract Multi-dimensional model order selection (MOS) techniques achieve an improved accuracy, reliability, and robustness, since they consider all dimensions jointly during the estimation of parameters. Additionally, from fundamental identifiability results of multi-dimensional decompositions, it is known that the number of main components can be larger when compared to matrix-based decompositions. In this article, we show how to use tensor calculus to extend matrix-based MOS schemes and we also present our proposed multi-dimensional model order selection scheme based on the closed-form PARAFAC algorithm, which is only applicable to multi-dimensional data. In general, as shown by means of simulations, the Probability of correct Detection (PoD) of our proposed multi-dimensional MOS schemes is much better than the PoD of matrix-based schemes.

  19. An Integrated MCDM Model for Conveyor Equipment Evaluation and Selection in an FMC Based on a Fuzzy AHP and Fuzzy ARAS in the Presence of Vagueness.

    Science.gov (United States)

    Nguyen, Huu-Tho; Dawal, Siti Zawiah Md; Nukman, Yusoff; Rifai, Achmad P; Aoyama, Hideki

    2016-01-01

    The conveyor system plays a vital role in improving the performance of flexible manufacturing cells (FMCs). The conveyor selection problem involves the evaluation of a set of potential alternatives based on qualitative and quantitative criteria. This paper presents an integrated multi-criteria decision making (MCDM) model of a fuzzy AHP (analytic hierarchy process) and fuzzy ARAS (additive ratio assessment) for conveyor evaluation and selection. In this model, linguistic terms represented as triangular fuzzy numbers are used to quantify experts' uncertain assessments of alternatives with respect to the criteria. The fuzzy set is then integrated into the AHP to determine the weights of the criteria. Finally, a fuzzy ARAS is used to calculate the weights of the alternatives. To demonstrate the effectiveness of the proposed model, a case study is performed of a practical example, and the results obtained demonstrate practical potential for the implementation of FMCs.

  20. Ab initio and DFT derived potential energy functions in simulations of selected polyesters based on atomistic models

    Science.gov (United States)

    Blomqvist, Johanna Marjaana

    This study focuses on atomistic simulations of polyesters, the main interest being in the performance of classical models. The Polymer Consistent Force Field (PCFF), developed for synthetic polymers, forms the basis for the simulations. The calculated properties of synthetic polymers depend strongly on the conformational statistics of the polymer chains, and the force field is, therefore, of crucial importance for the reliability of the simulations. Thus, the PCFF has been tested by comparing its results for model molecules of the polyesters studied with those of quantum mechanical ab initio and density functional theory (DFT) calculations regarding the rotational behaviour of typical bonds in these polyesters. The calculations showed that there were severe disagreements between the quantum mechanical and the PCFF studies, leading thus to re-optimisation of the particular torsion potentials of the PCFF. The quantum mechanical methods used were also compared, and though they gave mostly similar results, the DFT methods were found to underestimate some of the torsional barriers. The modified PCFF was shown to yield results in good agreement with experimental data for single chain properties of the selected polyesters. The dependence of the RIS Metropolis Monte Carlo (RMMC) method, used for these property calculations, on different run parameters, was discussed in more detail. The RMMC method, using the original and modified PCFFs, was also used in studies on the flexibility of some polyesters, which are known to be biodegradable, i.e. of polylactic (PLA) and polyglycolic (PGA) acids and some of their copolymers. The original PCFF was found to reproduce the flexibilities of these polyesters in contradiction with the results obtained with the modified PCFF. Finally, the modified PCFF was applied to molecular dynamics simulations on the constructed amorphous models for PLA and PGA and some of their copolymers to study the probability for hydrolysis as the first stage of

  1. Adverse selection model regarding tobacco consumption

    Directory of Open Access Journals (Sweden)

    Dumitru MARIN

    2006-01-01

    Full Text Available The impact of introducing a tax on tobacco consumption can be studied through an adverse selection model. The objective of the model presented in the following is to characterize the optimal contractual relationship between the governmental authorities and the two types of employees, smokers and non-smokers, taking into account that the consumers' decision to smoke or not represents an element of risk and uncertainty. Two scenarios are run using the General Algebraic Modeling System (GAMS) software: one without taxes on tobacco consumption and another with taxes on tobacco consumption, based on the adverse selection model described previously. The results of the two scenarios are compared at the end of the paper: the wage earnings levels and the social welfare in the case of a smoking agent and in the case of a non-smoking agent.

  2. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature in our algorithm is the fact that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.

  3. Partial imputation to improve predictive modelling in insurance risk classification using a hybrid positive selection algorithm and correlation-based feature selection

    CSIR Research Space (South Africa)

    Duma, M

    2013-09-01

    Full Text Available The model is built using libSVM 2.91, a library tool for SVMs designed by Chih-Chung Chang and Chih-Jen Lin. The model also uses the radial basis function (RBF) as the kernel function (KF), with the gamma parameter set to 0.005 (derived by trial and error).

  4. A CERCLA-Based Decision Model to Support Remedy Selection for an Uncertain Volume of Contaminants at a DOE Facility

    National Research Council Canada - National Science Library

    Kerschus, Christine

    1997-01-01

    The Paducah Gaseous Diffusion Plant (PGDP) operated by the Department of Energy is challenged with selecting the appropriate remediation technology to cleanup contaminants at Waste Area Group (WAG) 6...

  5. Novel rat Alzheimer's disease models based on AAV-mediated gene transfer to selectively increase hippocampal Aβ levels

    Directory of Open Access Journals (Sweden)

    Dicker Bridget L

    2007-06-01

    Full Text Available Abstract Background Alzheimer's disease (AD) is characterized by a decline in cognitive function and accumulation of amyloid-β peptide (Aβ) in extracellular plaques. Mutations in amyloid precursor protein (APP) and presenilins alter APP metabolism resulting in accumulation of Aβ42, a peptide essential for the formation of amyloid deposits and proposed to initiate the cascade leading to AD. However, the role of Aβ40, the more prevalent Aβ peptide secreted by cells and a major component of cerebral Aβ deposits, is less clear. In this study, virally-mediated gene transfer was used to selectively increase hippocampal levels of human Aβ42 and Aβ40 in adult Wistar rats, allowing examination of the contribution of each to the cognitive deficits and pathology seen in AD. Results Adeno-associated viral (AAV) vectors encoding BRI-Aβ cDNAs were generated, resulting in high-level hippocampal expression and secretion of the specific encoded Aβ peptide. As a comparison, the effect of AAV-mediated overexpression of APPsw was also examined. Animals were tested for development of learning and memory deficits (open field, Morris water maze, passive avoidance, novel object recognition) three months after infusion of AAV. A range of impairments was found, with the most pronounced deficits observed in animals co-injected with both AAV-BRI-Aβ40 and AAV-BRI-Aβ42. Brain tissue was analyzed by ELISA and immunohistochemistry to quantify levels of detergent-soluble and detergent-insoluble Aβ peptides. BRI-Aβ42 and the combination of BRI-Aβ40+42 overexpression resulted in elevated levels of detergent-insoluble Aβ. No significant increase in detergent-insoluble Aβ was seen in the rats expressing APPsw or BRI-Aβ40. No pathological features were noted in any rats, except the AAV-BRI-Aβ42 rats, which showed focal, amorphous, Thioflavin-negative Aβ42 deposits. Conclusion The results show that AAV-mediated gene transfer is a valuable tool to model aspects of AD pathology in

  6. Scenario analysis and path selection of low-carbon transformation in China based on a modified IPAT model.

    Directory of Open Access Journals (Sweden)

    Liang Chen

    Full Text Available This paper presents a forecast and analysis of population, economic development, energy consumption and CO2 emissions variation in China in short- and long-term steps before 2020, with 2007 as the base year. The widely applied IPAT model, which is the basis for calculations, projections, and scenarios of greenhouse gases (GHGs) and is reformulated as the Kaya equation, is extended to analyze and predict the relations between human activities and the environment. Four scenarios of CO2 emissions are used: business as usual (BAU), an energy efficiency improvement scenario (EEI), a low carbon scenario (LC) and an enhanced low carbon scenario (ELC). The results show that carbon intensity will be reduced by 40-45% as scheduled and that the economic growth rate will be 6% in China under the LC scenario by 2020. The LC scenario, as the most appropriate and most feasible scheme for China's low-carbon development in the future, can maximize the harmonious development of the economic, social, energy and environmental systems. Assuming China's development follows the LC scenario, the paper further gives four paths of low-carbon transformation in China: technological innovation, industrial structure optimization, energy structure optimization and policy guidance.
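
    The Kaya equation referred to decomposes national CO2 emissions into population, affluence, energy-intensity and carbon-intensity factors; in its standard form:

        \mathrm{CO_2} = P \times \frac{\mathrm{GDP}}{P} \times \frac{E}{\mathrm{GDP}} \times \frac{\mathrm{CO_2}}{E}

    where P is population, GDP/P per-capita output, E/GDP the energy intensity of the economy, and CO2/E the carbon intensity of energy supply; scenarios such as BAU, EEI, LC and ELC differ mainly in their assumptions about the last three factors.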

  7. Model selection for univariable fractional polynomials.

    Science.gov (United States)

    Royston, Patrick

    2017-07-01

    Since Royston and Altman's 1994 publication ( Journal of the Royal Statistical Society, Series C 43: 429-467), fractional polynomials have steadily gained popularity as a tool for flexible parametric modeling of regression relationships. In this article, I present fp_select, a postestimation tool for fp that allows the user to select a parsimonious fractional polynomial model according to a closed test procedure called the fractional polynomial selection procedure or function selection procedure. I also give a brief introduction to fractional polynomial models and provide examples of using fp and fp_select to select such models with real data.

  8. A new kinetic model based on the remote control mechanism to fit experimental data in the selective oxidation of propene into acrolein on biphasic catalysts

    Energy Technology Data Exchange (ETDEWEB)

    Abdeldayem, H.M.; Ruiz, P.; Delmon, B. [Unite de Catalyse et Chimie des Materiaux Divises, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium); Thyrion, F.C. [Unite des Procedes Faculte des Sciences Appliquees, Universite Catholique de Louvain, Louvain-La-Neuve (Belgium)

    1998-12-31

    A new kinetic model for a more accurate and detailed fitting of the experimental data is proposed. The model is based on the remote control mechanism (RCM). The RCM assumes that some oxides (called 'donors') are able to activate molecular oxygen, transforming it into very active mobile species (spillover oxygen, O_so). O_so migrates onto the surface of the other oxide (called the 'acceptor'), where it creates and/or regenerates the active sites during the reaction. The model contains two terms, one considering the creation of selective sites and the other the catalytic reaction at each site. The model has been tested in the selective oxidation of propene into acrolein (T = 380, 400, 420 °C; oxygen and propene partial pressures between 38 and 152 Torr). Catalysts were prepared as pure MoO₃ (acceptor) and its mechanical mixtures with α-Sb₂O₄ (donor) in different proportions. The presence of α-Sb₂O₄ changes the reaction order, the activation energy of the reaction and the number of active sites of MoO₃ produced by oxygen spillover. These changes are consistent with a modification in the degree of irrigation of the surface by oxygen spillover. The fitting of the model to experimental results shows that the number of sites created by O_so increases with the amount of α-Sb₂O₄. (orig.)

  9. Skewed factor models using selection mechanisms

    KAUST Repository

    Kim, Hyoung-Moon

    2015-12-21

    Traditional factor models explicitly or implicitly assume that the factors follow a multivariate normal distribution; that is, only moments up to order two are involved. However, it may happen in real data problems that the first two moments cannot explain the factors. Based on this motivation, here we devise three new skewed factor models, the skew-normal, the skew-t, and the generalized skew-normal factor models, depending on a selection mechanism on the factors. ECME algorithms are adopted to estimate the related parameters for statistical inference. Monte Carlo simulations validate our new models and we demonstrate the need for skewed factor models using the classic open/closed book exam scores dataset.

  10. A Modified Feature Selection and Artificial Neural Network-Based Day-Ahead Load Forecasting Model for a Smart Grid

    OpenAIRE

    Ahmad, Ashfaq; Javaid, Nadeem; Alrajeh, Nabil; Khan, Zahoor; Qasim, Umar; Khan, Abid

    2015-01-01

    In the operation of a smart grid (SG), day-ahead load forecasting (DLF) is an important task. The SG can enhance the management of its conventional and renewable resources with a more accurate DLF model. However, DLF model development is highly challenging due to the non-linear characteristics of load time series in SGs. In the literature, DLF models do exist; however, these models trade off between execution time and forecast accuracy. The newly-proposed DLF model will be able to accurately ...

  11. Development of a thermodynamic data base for selected heavy metals

    International Nuclear Information System (INIS)

    Hageman, Sven; Scharge, Tina; Willms, Thomas

    2015-07-01

    The report on the development of a thermodynamic data base for selected heavy metals covers the description of the experimental methods and the thermodynamic models for chromate, dichromate, manganese(II), cobalt, nickel, copper(I), copper(II), mercury(0) and mercury(I), mercury(II), and arsenate.

  12. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas, including telecommunications. The increasing number of mobile base stations (BTS), as well as their proximity to residential areas, has been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare, and then modify, the ITU-T predictive model for radiofrequency radiation emission for BTS against measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessments of radiofrequency exposure due to mobile base station antennas have been carried out. The maximum and minimum average power densities measured from individual base stations in the town were 1.86 µW/m² and 0.00961 µW/m², respectively. The ITU-T predictive model power density ranged between 6.40 mW/m² and 0.344 W/m². The results showed a variation between measured power density levels and the ITU-T predictive model: the ITU-T model power density levels decrease with increasing radial distance, while real-time measurements do not, owing to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor of 10⁵ compared to real-time measurements, and it was therefore modified to reduce the level of overestimation. The results also show that radiation intensity varies from one base station to another, even at the same distance. The occupational exposure quotient ranged between 5.43E-10 and 1.89E-08, whilst the general public exposure quotient ranged between 2.72E-09 and 9.44E-08. These results indicate that the RF exposure levels in Accra from these mobile phone base station antennas are below the permitted RF exposure limit for the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)
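
    Prediction models of this kind rest on the far-field power density of an antenna, which falls off with the square of the radial distance; in the standard textbook form (not necessarily the exact ITU-T formulation):

        S = \frac{P_t\, G}{4\pi d^2}

    where S is the power density (W/m²), P_t the transmitter power, G the antenna gain in the direction of interest, and d the radial distance from the antenna; the comparison above is between this kind of predicted S and the measured values.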

  13. Behavioral optimization models for multicriteria portfolio selection

    Directory of Open Access Journals (Sweden)

    Mehlawat Mukesh Kumar

    2013-01-01

    Full Text Available In this paper, the behavioral construct of suitability is used to develop a multicriteria decision making framework for portfolio selection. To achieve this purpose, we rely on multiple methodologies. The analytical hierarchy process technique is used to model the suitability considerations with a view to obtaining a suitability performance score for each asset. A fuzzy multiple criteria decision making method is used to obtain a financial quality score for each asset based upon the investor's ratings on the financial criteria. Two optimization models are developed for optimal asset allocation, considering financial and suitability criteria simultaneously. An empirical study is conducted on randomly selected assets from the National Stock Exchange, Mumbai, India, to demonstrate the effectiveness of the proposed methodology.

  14. Ant colony optimization as a descriptor selection in QSPR modeling: Estimation of the λmax of anthraquinones-based dyes

    Directory of Open Access Journals (Sweden)

    Morteza Atabati

    2016-09-01

    Full Text Available Quantitative structure–property relationship (QSPR) studies based on ant colony optimization (ACO) were carried out for the prediction of λmax of 9,10-anthraquinone derivatives. ACO is a meta-heuristic algorithm derived from the observation of real ants and proposed for feature selection. After optimization of the 3D geometry of the structures by semi-empirical quantum-chemical calculation at the AM1 level, different descriptors were calculated with the HyperChem and Dragon software packages (1514 descriptors). A major problem in QSPR is the high dimensionality of the descriptor space; therefore, descriptor selection is the most important step. In this paper, an ACO algorithm was used to select the best descriptors. The selected descriptors were then applied to model development using multiple linear regression. The average absolute relative deviation and correlation coefficient for the calibration set were 3.3% and 0.9591, respectively, while those for the prediction set were 5.0% and 0.9526, respectively. The results showed that the applied procedure is suitable for the prediction of λmax of 9,10-anthraquinone derivatives.

  15. A Selection Model to Logistic Centers Based on TOPSIS and MCGP Methods: The Case of Airline Industry

    Directory of Open Access Journals (Sweden)

    Kou-Huang Chen

    2014-01-01

    Full Text Available The location selection of a logistics center is a crucial decision involving cost and benefit analysis in the airline industry. However, it is difficult to solve because there are many conflicting and multiple objectives in location problems. To solve the problem, this paper integrates the fuzzy technique for order preference by similarity to an ideal solution (TOPSIS) and multichoice goal programming (MCGP) to obtain an appropriate logistics center from many alternative locations for the airline industry. The proposed method allows the decision makers (DMs) to set multiple aspiration levels for the decision criteria. A numerical example of its application is also presented.
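
    The TOPSIS half of such a model ranks alternatives by their closeness to an ideal solution: normalize the decision matrix, weight it, find the ideal and anti-ideal points, and score each alternative by relative distance. A crisp (non-fuzzy) sketch with illustrative criteria and numbers:

        import numpy as np

        def topsis(matrix, weights, benefit):
            """Rank alternatives (rows) against criteria (columns).
            benefit[j] is True if larger values of criterion j are better."""
            M = matrix / np.linalg.norm(matrix, axis=0)   # vector normalization
            V = M * weights                                # weighted matrix
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            return d_neg / (d_pos + d_neg)                 # closeness in [0, 1]

        # Illustrative: 3 candidate locations, criteria = [cost, capacity, access].
        scores = topsis(np.array([[200., 80., 7.],
                                  [180., 60., 9.],
                                  [240., 90., 6.]]),
                        weights=np.array([0.5, 0.3, 0.2]),
                        benefit=np.array([False, True, True]))
        print(scores.argsort()[::-1])  # best-to-worst ranking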

  16. SU-G-BRC-13: Model Based Classification for Optimal Position Selection for Left-Sided Breast Radiotherapy: Free Breathing, DIBH, Or Prone

    Energy Technology Data Exchange (ETDEWEB)

    Lin, H; Liu, T; Xu, X [Rensselaer Polytechnic Institute, Troy, NY (United States); Shi, C [Saint Vincent Medical Center, Bridgeport, CT (United States); Petillion, S; Kindts, I [University Hospitals Leuven, Leuven, Vlaams-Brabant (Belgium); Tang, X [Memorial Sloan Kettering Cancer Center, West Harrison, NY (United States)

    2016-06-15

    Purpose: There are clinical decision challenges in selecting the optimal treatment position for left-sided breast cancer patients: supine free breathing (FB), supine deep inspiration breath hold (DIBH) and prone free breathing (prone). Physicians often make the decision based on experience and trials, which might not always result in optimal OAR doses. We herein propose a mathematical model to predict which of these three positions yields the lowest OAR doses, providing a quantitative tool for the corresponding clinical decision. Methods: Patients were scanned in FB, DIBH, and prone positions under an IRB-approved protocol. Tangential beam plans were generated for each position, and OAR doses were calculated. The position with the least OAR doses is defined as the optimal position. The following features were extracted from each scan to build the model: heart, ipsilateral lung and breast volume; in-field heart and ipsilateral lung volume; distance between heart and target; laterality of heart; and dose to heart and ipsilateral lung. Principal Component Analysis (PCA) was applied to remove the collinearity of the input data and also to lower the data dimensionality. Feature selection, another method to reduce dimensionality, was applied as a comparison. A Support Vector Machine (SVM) was then used for classification. Thirty-seven patient datasets were acquired; up to now, five patient plans were available. K-fold cross-validation was used to validate the accuracy of the classifier model given the small training size. Results: The classification results and K-fold cross-validation demonstrated that the model is capable of predicting the optimal position for patients. The K-fold cross-validation accuracy reached 80%. Compared to PCA, feature selection allows causal features of dose to be determined, which provides more clinical insight. Conclusion: The proposed classification system appeared to be feasible. We are generating plans for the rest of the 37 patient images, and more statistically significant
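
    The described pipeline (PCA for decorrelation and dimensionality reduction, then an SVM classifier validated by K-fold cross-validation) maps directly onto standard tooling; a minimal scikit-learn sketch with synthetic stand-in features (the clinical features and labels are not reproduced here):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score, KFold

        rng = np.random.default_rng(0)
        X = rng.normal(size=(37, 9))     # 37 patients, 9 anatomical/dose features
        y = rng.integers(0, 3, size=37)  # optimal position: FB / DIBH / prone

        model = make_pipeline(StandardScaler(), PCA(n_components=5),
                              SVC(kernel="rbf"))
        scores = cross_val_score(model, X, y,
                                 cv=KFold(n_splits=5, shuffle=True, random_state=0))
        print(scores.mean())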

  17. Reserve selection using nonlinear species distribution models.

    Science.gov (United States)

    Moilanen, Atte

    2005-06-01

    Reserve design is concerned with optimal selection of sites for new conservation areas. Spatial reserve design explicitly considers the spatial pattern of the proposed reserve network and the effects of that pattern on reserve cost and/or ability to maintain species there. The vast majority of reserve selection formulations have assumed a linear problem structure, which effectively means that the biological value of a potential reserve site does not depend on the pattern of selected cells. However, spatial population dynamics and autocorrelation cause the biological values of neighboring sites to be interdependent. Habitat degradation may have indirect negative effects on biodiversity in areas neighboring the degraded site as a result of, for example, negative edge effects or lower permeability for animal movement. In this study, I present a formulation and a spatial optimization algorithm for nonlinear reserve selection problems in grid-based landscapes that accounts for interdependent site values. The method is demonstrated using habitat maps and nonlinear habitat models for threatened birds in the Netherlands, and it is shown that near-optimal solutions are found for regions consisting of up to hundreds of thousands grid cells, a landscape size much larger than those commonly attempted even with linear reserve selection formulations.

  18. Selection and impedance based model of a lithium ion battery technology for integration with virtual power plant

    DEFF Research Database (Denmark)

    Swierczynski, Maciej Jozef; Stroe, Daniel Ioan; Stan, Ana-Irina

    2013-01-01

    The penetration of wind power into the power system has been increasing in recent years. Therefore, a lot of concerns related to the reliable operation of the power system have been addressed. An attractive solution to minimize the limitations faced by wind power grid integration is to integrate lithium-ion batteries into virtual power plants; thus, the power system stability and the energy quality can be increased. The selection of the best lithium-ion battery candidate for integration with wind power plants is a key aspect for the economic feasibility of the virtual power plant...

  19. An Extension to the Constructivist Coding Hypothesis as a Learning Model for Selective Feedback when the Base Rate Is High

    Science.gov (United States)

    Ghaffarzadegan, Navid; Stewart, Thomas R.

    2011-01-01

    Elwin, Juslin, Olsson, and Enkvist (2007) and Henriksson, Elwin, and Juslin (2010) offered the constructivist coding hypothesis to describe how people code the outcomes of their decisions when availability of feedback is conditional on the decision. They provided empirical evidence only for the 0.5 base rate condition. This commentary argues that…

  20. Discriminating model for diagnosis of basal cell carcinoma and melanoma in vitro based on the Raman spectra of selected biochemicals

    Science.gov (United States)

    Silveira, Landulfo; Silveira, Fabrício Luiz; Bodanese, Benito; Zângaro, Renato Amaro; Pacheco, Marcos Tadeu T.

    2012-07-01

    Raman spectroscopy has been employed to identify differences in the biochemical constitution of malignant [basal cell carcinoma (BCC) and melanoma (MEL)] cells compared to normal skin tissues, with the goal of skin cancer diagnosis. We collected Raman spectra from compounds such as proteins, lipids, and nucleic acids, which are expected to be represented in human skin spectra, and developed a linear least-squares fitting model to estimate the contributions of these compounds to the tissue spectra. We used a set of 145 spectra from biopsy fragments of normal (30 spectra), BCC (96 spectra), and MEL (19 spectra) skin tissues, collected using a near-infrared Raman spectrometer (830 nm, 50 to 200 mW, and 20 s exposure time) coupled to a Raman probe. We applied the best-fitting model to the spectra of biochemicals and tissues, hypothesizing that the relative spectral contribution of each compound to the tissue Raman spectrum changes according to the disease. We verified that actin, collagen, elastin, and triolein were the most important biochemicals representing the spectral features of skin tissues. A classification model applied to the relative contribution of collagen III, elastin, and melanin using Euclidean distance as a discriminator could differentiate normal from BCC and MEL.
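
    The linear least-squares fitting model described amounts to expressing each tissue spectrum as a non-negative mixture of reference biochemical spectra and reading the fitted coefficients as relative contributions. A minimal sketch with stand-in spectra (SciPy's non-negative solver is one reasonable choice; the paper's exact constraints are not specified here):

        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(0)

        # Stand-in reference spectra (columns): actin, collagen, elastin, triolein.
        wavenumbers = 500
        B = np.abs(rng.normal(size=(wavenumbers, 4)))

        # Synthetic "tissue" spectrum: a known mixture plus noise.
        true_c = np.array([0.5, 1.2, 0.3, 0.8])
        tissue = B @ true_c + 0.01 * rng.normal(size=wavenumbers)

        # Fit non-negative contributions of each biochemical to the spectrum.
        coeffs, residual = nnls(B, tissue)
        print(coeffs)  # relative spectral contributions per compound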

  1. On Data Space Selection and Data Processing for Parameter Identification in a Reaction-Diffusion Model Based on FRAP Experiments

    Directory of Open Access Journals (Sweden)

    Stefan Kindermann

    2015-01-01

    Full Text Available Fluorescence recovery after photobleaching (FRAP) is a widely used measurement technique to determine the mobility of fluorescent molecules within living cells. While the experimental setup and protocol for FRAP experiments are usually fixed, data (pre)processing represents an important issue. The aim of this paper is twofold. First, we formulate and solve the problem of relevant FRAP data selection. The theoretical findings are illustrated by comparing the results of parameter identification when the full data set was used against the case when the irrelevant data set (data with negligible impact on the confidence interval of the estimated parameters) was removed from the data space. Second, we analyze and compare two approaches to FRAP data processing. Our proposition, perhaps surprising for the FRAP community, is that the data set represented by the FRAP recovery curves in the form of a time series (the integrated-data approach commonly used by the FRAP community) leads to a larger confidence interval compared to the full (spatiotemporal) data approach.

  2. Novel metrics for growth model selection.

    Science.gov (United States)

    Grigsby, Matthew R; Di, Junrui; Leroux, Andrew; Zipunnikov, Vadim; Xiao, Luo; Crainiceanu, Ciprian; Checkley, William

    2018-01-01

    Literature surrounding the statistical modeling of childhood growth data involves a diverse set of potential models from which investigators can choose. However, the lack of a comprehensive framework for comparing non-nested models leads to difficulty in assessing model performance. This paper proposes a framework for comparing non-nested growth models using novel metrics of predictive accuracy based on modifications of the mean squared error criterion. Three metrics were created: normalized, age-adjusted, and weighted mean squared error (MSE). Predictive performance metrics were used to compare linear mixed effects models and functional regression models. Prediction accuracy was assessed by partitioning the observed data into training and test datasets. This partitioning was constructed to assess prediction accuracy for backward (i.e., early growth), forward (i.e., late growth), in-range, and new-individual prediction. Analyses were done with height measurements from 215 Peruvian children with data spanning from near birth to 2 years of age. Functional models outperformed linear mixed effects models in all scenarios tested. In particular, prediction errors for functional concurrent regression (FCR) and functional principal component analysis models were approximately 6% lower when compared to linear mixed effects models. When we weighted subject-specific MSEs according to subject-specific growth rates during infancy, we found that FCR was the best performer in all scenarios. With this novel approach, we can quantitatively compare non-nested models and weight subgroups of interest to select the best performing growth model for a particular application or problem at hand.
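
    The proposed metrics are modifications of subject-level MSE; for instance, a weighted MSE that up-weights subjects by a quantity such as infancy growth rate can be sketched as follows (the weighting scheme here is illustrative, not the paper's exact definition):

        import numpy as np

        def weighted_mse(y_true, y_pred, subject_ids, subject_weights):
            """Average per-subject MSEs, weighted by a subject-level quantity
            (e.g., growth rate during infancy), normalized to sum to one."""
            subjects = np.unique(subject_ids)
            w = np.array([subject_weights[s] for s in subjects], dtype=float)
            w /= w.sum()
            mses = np.array([np.mean((y_true[subject_ids == s] -
                                      y_pred[subject_ids == s]) ** 2)
                             for s in subjects])
            return float(np.sum(w * mses))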

  3. Selected mice models based on APP, MAPT and presenilin gene mutations in research on the pathogenesis of Alzheimer’s disease

    Directory of Open Access Journals (Sweden)

    Magdalena Więdłocha

    2012-06-01

    Full Text Available Research conducted on animal models of Alzheimer's disease (AD) has provided valuable information about the pathogenesis of this disease and the associated behavioral and cognitive deficits, as well as the disease-associated anatomical and histopathological lesions of the brain. Transgenic technologies have enabled the creation of animal models based on mutations in the APP, MAPT and presenilin genes, tau protein and apoE. For economic reasons, studies are mainly conducted on mice. Their brain tissue, depending on the mutation, is characterized by histopathological changes such as the presence of amyloid plaques, tau protein deposits and dystrophic neurites, gliosis, hippocampal atrophy and amyloid accumulation in vessels. The animals' cognitive impairment and behavior, which can be demonstrated in behavioral tests, primarily relate to working and reference memory, alternation and anxiety. Unfortunately, despite the various AD-specific modifications of the animal genomes, scientists have not yet created an animal model characterized by all the pathological changes that can occur in Alzheimer's disease. Nevertheless, the role of transgenic animals is undeniable, both in research on AD neuropathology and for testing new therapies, such as immunotherapy. Although many Alzheimer's disease mouse models exist, this article is dedicated to selected models with mutations in the APP, MAPT and presenilin genes and their application in behavioral studies.

  4. A flowchart for selecting an ointment base.

    Science.gov (United States)

    Conway, Jeannine M; Brown, Michael C

    2014-02-12

    Objectives. To improve students' skills in selecting appropriate ointment bases through the development and implementation of a flowchart. A flowchart was designed to help students select the appropriate base for an ointment. Students used the flowchart throughout the semester in both dry and wet laboratory activities. At the end of the semester, students completed a dry laboratory practical that required them to select an appropriate ointment base and levigating agent. Student performance data from the year prior to implementation were compared to data for 2 years after implementation. Calculation, procedure, and labeling errors also were compared. Prior to implementation of the flowchart, 51 of 101 students selected the correct base. After implementation, 169 of 212 students selected the correct base, a statistically significant improvement. Use of the flowchart to select an ointment base improved student performance when used in the context of a dry laboratory assignment.

  5. Selected sports talent development models

    OpenAIRE

    Michal Vičar

    2017-01-01

    Background: Sports talent in the Czech Republic is generally viewed as a static, stable phenomenon. This stands in contrast with the widespread practice in Anglo-Saxon countries, which emphasises its fluctuating nature, as reflected in the current models describing its development. Objectives: The aim is to introduce current models of talent development in sport. Methods: Comparison and analysis of the following models: Balyi - Long term athlete development model, Côté - Developmen...

  6. VEMAP 1: Selected Model Results

    Data.gov (United States)

    National Aeronautics and Space Administration — The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...

  7. Feature Selection Based on Confidence Machine

    OpenAIRE

    Liu, Chang; Xu, Yi

    2014-01-01

    In machine learning and pattern recognition, feature selection has been a hot topic in the literature. Unsupervised feature selection is challenging due to the loss of labels which would supply the related information. How to define an appropriate metric is the key for feature selection. We propose a filter method for unsupervised feature selection which is based on the Confidence Machine. The Confidence Machine offers an estimation of confidence in a feature's reliability. In this paper, we provide...

  8. The linear utility model for optimal selection

    NARCIS (Netherlands)

    Mellenbergh, Gideon J.; van der Linden, Willem J.

    A linear utility model is introduced for optimal selection when several subpopulations of applicants are to be distinguished. Using this model, procedures are described for obtaining optimal cutting scores in subpopulations in quota-free as well as quota-restricted selection situations. The cutting

  9. VEMAP 1: Selected Model Results

    Data.gov (United States)

    National Aeronautics and Space Administration — The Vegetation/Ecosystem Modeling and Analysis Project (VEMAP) was a multi-institutional, international effort addressing the response of biogeography and...

  10. The Informed Guide to Climate Data Sets, a web-based community resource to facilitate the discussion and selection of appropriate datasets for Earth System Model Evaluation

    Science.gov (United States)

    Schneider, D. P.; Deser, C.; Shea, D.

    2011-12-01

    When comparing CMIP5 model output to observations, researchers will be faced with a bewildering array of choices. Considering just a few of the different products available for commonly analyzed climate variables, for reanalysis there are at least half a dozen different products, for sea ice concentrations there are NASA Team or Bootstrap versions, for sea surface temperatures there are HadISST or NOAA ERSST data, and for precipitation there are CMAP and GPCP data sets. While many data centers exist to host data, there is little centralized guidance on discovering and choosing appropriate climate data sets for the task at hand. Common strategies like googling "sea ice data" yield results that at best are substantially incomplete. Anecdotal evidence suggests that individual researchers often base their selections on non-scientific criteria: either the data are in a convenient format that the user is comfortable with, a co-worker has the data handy on her local server, or a mentor discourages or recommends the use of particular products for legacy or other non-objective reasons. Sometimes these casual recommendations are sound, but they are not accessible to the broader community or adequately captured in the peer-reviewed literature. These issues are addressed by the establishment of a web-based Informed Guide with the specific goals to (1) Evaluate and assess selected climate datasets and (2) Provide expert user guidance on the strengths and limitations of selected climate datasets. The Informed Guide is based at NCAR's Climate and Global Dynamics Division, Climate Analysis Section and is funded by NSF. The Informed Guide is an interactive website that welcomes participation from the broad scientific community and is scalable to grow as participation increases. In this presentation, we will present the website, discuss how you can participate, and address the broader issues about its role in the evaluation of CMIP5 and other climate model simulations. A link to the

  11. Polystyrene Based Silver Selective Electrodes

    Directory of Open Access Journals (Sweden)

    Shiva Agarwal

    2002-06-01

    Full Text Available Silver(I)-selective sensors have been fabricated from polystyrene matrix membranes containing the macrocycle Me6(14)diene·2HClO4 as ionophore. Best performance was exhibited by the membrane having a macrocycle : polystyrene composition in the ratio 15:1. This membrane worked well over a wide concentration range, 5.0×10⁻⁶–1.0×10⁻¹ M of Ag+, with a near-Nernstian slope of 53.0 ± 1.0 mV per decade of Ag+ activity. The response time of the sensor is <15 s and the membrane can be used over a period of four months with good reproducibility. The proposed electrode works well in a wide pH range, 2.5-9.0, and demonstrates good discriminating power over a number of mono-, di-, and trivalent cations. The sensor has also been used as an indicator electrode in the potentiometric titration of silver(I) ions against NaCl solution. The sensor can also be used in non-aqueous media with no significant change in the value of the slope or working concentration range for the estimation of Ag+ in solutions having up to 25% (v/v) nonaqueous fraction.

  12. Selection of classification models from repository of model for water ...

    African Journals Online (AJOL)

    This paper proposes a new technique, the Model Selection Technique (MST), for the selection and ranking of models from a repository of models by combining three performance measures (Acc, TPR and TNR). The technique assigns a weightage to each performance measure to find the most suitable model from the repository of ...
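
    A minimal sketch of the weighted-combination idea follows; the scores and weights are invented, since the record does not specify the paper's actual weighting scheme.

    ```python
    import numpy as np

    # Hypothetical (Acc, TPR, TNR) scores for three candidate models
    models = {"M1": (0.91, 0.88, 0.93),
              "M2": (0.89, 0.95, 0.84),
              "M3": (0.90, 0.90, 0.90)}
    weights = np.array([0.4, 0.3, 0.3])  # assumed weightage for Acc, TPR, TNR

    # Rank models by the weighted combination of the three measures
    ranked = sorted(models.items(),
                    key=lambda kv: float(np.dot(weights, kv[1])),
                    reverse=True)
    for name, scores in ranked:
        print(name, round(float(np.dot(weights, scores)), 3))
    ```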

  13. Model selection and comparison for independent sinusoids

    DEFF Research Database (Denmark)

    Nielsen, Jesper Kjær; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2014-01-01

    In the signal processing literature, many methods have been proposed for estimating the number of sinusoidal basis functions from a noisy data set. The most popular method is the asymptotic MAP criterion, which is sometimes also referred to as the BIC. In this paper, we extend and improve this method by considering the problem in a full Bayesian framework instead of the approximate formulation on which the asymptotic MAP criterion is based. This leads to a new model selection and comparison method, the lp-BIC, whose computational complexity is of the same order as the asymptotic MAP criterion. Through simulations, we demonstrate that the lp-BIC outperforms the asymptotic MAP criterion and other state-of-the-art methods in terms of model selection, de-noising and prediction performance. The simulation code is available online.

  14. Quantile hydrologic model selection and model structure deficiency assessment : 1. Theory

    NARCIS (Netherlands)

    Pande, S.

    2013-01-01

    A theory for quantile based hydrologic model selection and model structure deficiency assessment is presented. The paper demonstrates that the degree to which a model selection problem is constrained by the model structure (measured by the Lagrange multipliers of the constraints) quantifies

  15. A Bayesian random effects discrete-choice model for resource selection: Population-level selection inference

    Science.gov (United States)

    Thomas, D.L.; Johnson, D.; Griffith, B.

    2006-01-01

    A Bayesian hierarchical discrete-choice model for resource selection can provide managers with 2 components of population-level inference: average population selection and variability of selection. Both components are necessary to make sound management decisions based on animal selection.

  16. A risk assessment model for selecting cloud service providers

    OpenAIRE

    Cayirci, Erdal; Garaga, Alexandr; Santana de Oliveira, Anderson; Roudier, Yves

    2016-01-01

    The Cloud Adoption Risk Assessment Model is designed to help cloud customers in assessing the risks that they face by selecting a specific cloud service provider. It evaluates background information obtained from cloud customers and cloud service providers to analyze various risk scenarios. This facilitates decision making in selecting the cloud service provider with the most preferable risk profile based on aggregated risks to security, privacy, and service delivery. Based on this model we ...

  17. Classification of archaeological pieces into their respective stratum by a chemometric model based on the soil concentration of 25 selected elements

    International Nuclear Information System (INIS)

    Carrero, J.A.; Goienaga, N.; Fdez-Ortiz de Vallejuelo, S.; Arana, G.; Madariaga, J.M.

    2010-01-01

    The aim of this work was to demonstrate that an archaeological ceramic piece has remained buried underground in the same stratum for centuries without being removed. For this purpose, a chemometric model based on Principal Component Analysis, Soft Independent Modelling of Class Analogy and Linear Discriminant Analysis classification techniques was created with the concentrations of selected elements of both the soil of the stratum and the soil adhered to the ceramic piece. Some ceramic pieces from four different stratigraphic units, coming from a Roman archaeological site in Alava (North of Spain), and their respective stratum soils were collected. The soil adhered to the ceramic pieces was removed and treated in the same way as the soil from its respective stratum. The digestion was carried out following the US Environmental Protection Agency EPA 3051A method. A total of 54 elements were determined in the extracts by a rapid screening inductively coupled plasma mass spectrometry method. After rejecting the major elements and those which could have changed from the original composition of the soils (migration or retention from/to the buried objects), the following 25 elements were finally taken into account to construct the model: Li, V, Co, As, Y, Nb, Sn, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, Tb, Dy, Ho, Er, Tm, Yb, Lu, Au, Th and U. A total of 33 subsamples were treated from 10 soils belonging to 4 different stratigraphic units. The final model grouped and discriminated the samples into four groups according to stratigraphic unit, with both the stratum soils and the soils adhered to the pieces falling into the same group.
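
    The PCA-plus-LDA classification step can be sketched compactly; the element-concentration matrix and labels below are randomly generated stand-ins, and the SIMCA stage of the paper's model is omitted.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    # Stand-in data: 33 subsamples x 25 element concentrations, 4 strata
    X = rng.lognormal(size=(33, 25))
    y = rng.integers(0, 4, size=33)

    # Standardize, compress to a few principal components, then classify
    clf = make_pipeline(StandardScaler(), PCA(n_components=5),
                        LinearDiscriminantAnalysis())
    clf.fit(X, y)
    print(clf.predict(X[:5]))   # predicted stratigraphic unit per subsample
    ```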

  18. Detection of motor imagery of swallow EEG signals based on the dual-tree complex wavelet transform and adaptive model selection

    Science.gov (United States)

    Yang, Huijuan; Guan, Cuntai; Sui Geok Chua, Karen; San Chok, See; Wang, Chuan Chu; Kok Soon, Phua; Tang, Christina Ka Yin; Keng Ang, Kai

    2014-06-01

    Objective. Detection of motor imagery of hand/arm has been extensively studied for stroke rehabilitation. This paper firstly investigates the detection of motor imagery of swallow (MI-SW) and motor imagery of tongue protrusion (MI-Ton) in an attempt to find a novel solution for post-stroke dysphagia rehabilitation. Detection of MI-SW from a simple yet relevant modality such as MI-Ton is then investigated, motivated by the similarity in activation patterns between tongue movements and swallowing and there being fewer movement artifacts in performing tongue movements compared to swallowing. Approach. Novel features were extracted based on the coefficients of the dual-tree complex wavelet transform to build multiple training models for detecting MI-SW. The session-to-session classification accuracy was boosted by adaptively selecting the training model to maximize the ratio of between-classes distances versus within-class distances, using features of training and evaluation data. Main results. Our proposed method yielded averaged cross-validation (CV) classification accuracies of 70.89% and 73.79% for MI-SW and MI-Ton for ten healthy subjects, which are significantly better than the results from existing methods. In addition, averaged CV accuracies of 66.40% and 70.24% for MI-SW and MI-Ton were obtained for one stroke patient, demonstrating the detectability of MI-SW and MI-Ton from the idle state. Furthermore, averaged session-to-session classification accuracies of 72.08% and 70% were achieved for ten healthy subjects and one stroke patient using the MI-Ton model. Significance. These results and the subjectwise strong correlations in classification accuracies between MI-SW and MI-Ton demonstrated the feasibility of detecting MI-SW from MI-Ton models.
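
    One plausible reading of the adaptive selection criterion, maximizing between-class versus within-class distances, is sketched below; the exact distance definitions in the paper may differ.

    ```python
    import numpy as np

    def class_distance_ratio(features, labels):
        # Score a candidate training model by the ratio of between-class
        # distances (class centroids to grand mean) to within-class
        # distances (samples to their own class centroid).
        classes = np.unique(labels)
        centroids = {c: features[labels == c].mean(axis=0) for c in classes}
        grand = features.mean(axis=0)
        between = sum(np.linalg.norm(centroids[c] - grand) for c in classes)
        within = sum(np.linalg.norm(x - centroids[l])
                     for x, l in zip(features, labels))
        return between / within

    # Pick the training model whose features score highest, e.g.:
    # best = max(candidate_models, key=lambda m: class_distance_ratio(m.X, m.y))
    ```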

  19. GEOGRAPHIC INFORMATION SYSTEM-BASED MODELING AND ANALYSIS FOR SITE SELECTION OF GREEN MUSSEL, Perna viridis, MARICULTURE IN LADA BAY, PANDEGLANG, BANTEN PROVINCE

    Directory of Open Access Journals (Sweden)

    I Nyoman Radiarta

    2011-06-01

    Full Text Available Green mussel is one of the important species cultured in Lada Bay, Pandeglang. To provide necessary guidance for green mussel mariculture development, finding a suitable site is an important step. This study was conducted to identify suitable sites for green mussel mariculture development using geographic information system (GIS) based models. Seven important parameters were grouped into two submodels, namely environmental (water temperature, salinity, suspended solids, dissolved oxygen, and bathymetry) and infrastructural (distance to settlement and pond aquaculture). Constraint data were used to exclude from the suitability maps areas where green mussel mariculture cannot be developed, including areas of floating net fishing activity and areas near the electricity station. Analyses of factors and constraints indicated that about 31% of the potential area, with bottom depths of less than 25 m, was most suitable. This area was shown to have ideal conditions for green mussel mariculture in the study region. This study shows that the GIS model is a powerful tool for site-selection decision making. The tool can be valuable in solving problems at local, regional, and/or continental scales.

  20. Research and Application of Hybrid Forecasting Model Based on an Optimal Feature Selection System—A Case Study on Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Yunxuan Dong

    2017-04-01

    Full Text Available The process of modernizing the smart grid prominently increases the complexity and uncertainty in the scheduling and operation of power systems; in order to develop a more reliable, flexible, efficient and resilient grid, electrical load forecasting is not only an important key but also remains a difficult and challenging task. In this paper, a short-term electrical load forecasting model, with a unit for feature learning named the Pyramid System and recurrent neural networks, has been developed, and it can effectively promote the stability and security of the power grid. Nine types of methods for feature learning are compared in this work to select the best one for the learning target, and two criteria have been employed to evaluate the accuracy of the prediction intervals. Furthermore, an electrical load forecasting method based on recurrent neural networks has been formed to achieve the relational diagram of historical data; to be specific, the proposed techniques are applied to electrical load forecasting using the data collected from New South Wales, Australia. The simulation results show that the proposed hybrid models can not only satisfactorily approximate the actual value but can also serve as effective tools in the planning of smart grids.

  1. A Dynamic Model for Limb Selection

    NARCIS (Netherlands)

    Cox, R.F.A; Smitsman, A.W.

    2008-01-01

    Two experiments and a model on limb selection are reported. In Experiment 1 left-handed and right-handed participants (N = 36) repeatedly used one hand for grasping a small cube. After a clear switch in the cube’s location, perseverative limb selection was revealed in both handedness groups. In

  2. Ground-water transport model selection and evaluation guidelines

    International Nuclear Information System (INIS)

    Simmons, C.S.; Cole, C.R.

    1983-01-01

    Guidelines are being developed to assist potential users with selecting appropriate computer codes for ground-water contaminant transport modeling. The guidelines are meant to assist managers with selecting appropriate predictive models for evaluating either arid or humid low-level radioactive waste burial sites. Evaluation test cases in the form of analytical solutions to fundamental equations and experimental data sets have been identified and recommended to ensure adequate code selection, based on accurate simulation of relevant physical processes. The recommended evaluation procedures will consider certain technical issues related to the present limitations in transport modeling capabilities. A code-selection plan will depend on identifying problem objectives, determining the extent of collectible site-specific data, and developing a site-specific conceptual model for the involved hydrology. Code selection will be predicated on steps for developing an appropriate systems model. This paper will review the progress in developing those guidelines. 12 references

  3. Graphical tools for model selection in generalized linear models.

    Science.gov (United States)

    Murray, K; Heritier, S; Müller, S

    2013-11-10

    Model selection techniques have existed for many years; however, to date, simple, clear and effective methods of visualising the model building process are sparse. This article describes graphical methods that assist in the selection of models and comparison of many different selection criteria. Specifically, we describe for logistic regression, how to visualize measures of description loss and of model complexity to facilitate the model selection dilemma. We advocate the use of the bootstrap to assess the stability of selected models and to enhance our graphical tools. We demonstrate which variables are important using variable inclusion plots and show that these can be invaluable plots for the model building process. We show with two case studies how these proposed tools are useful to learn more about important variables in the data and how these tools can assist the understanding of the model building process. Copyright © 2013 John Wiley & Sons, Ltd.
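
    A rough sketch of the bootstrap-based variable inclusion idea for logistic regression is given below; the dataset, the L1-penalized refit used as the selection step, and all settings are illustrative assumptions rather than the authors' procedure.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegressionCV

    X, y = make_classification(n_samples=300, n_features=8, n_informative=3,
                               random_state=0)
    rng = np.random.default_rng(0)
    inclusion = np.zeros(X.shape[1])
    B = 50
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))           # bootstrap resample
        model = LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=5)
        model.fit(X[idx], y[idx])
        inclusion += (np.abs(model.coef_[0]) > 1e-8)    # variable retained?
    # Inclusion proportions over bootstrap fits, as in a variable inclusion plot
    print(inclusion / B)
    ```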

  4. Hidden Markov Model for Stock Selection

    Directory of Open Access Journals (Sweden)

    Nguyet Nguyen

    2015-10-01

    Full Text Available The hidden Markov model (HMM) is typically used to predict the hidden regimes of observation data. Therefore, this model finds applications in many different areas, such as speech recognition systems, computational molecular biology and financial market predictions. In this paper, we use the HMM for stock selection. We first use the HMM to make monthly regime predictions for four macroeconomic variables: inflation (consumer price index, CPI), the industrial production index (INDPRO), the stock market index (S&P 500) and market volatility (VIX). At the end of each month, we calibrate the HMM's parameters for each of these economic variables and predict its regimes for the next month. We then look back into historical data to find the time periods for which the four variables had regimes similar to the forecasted regimes. Within those similar periods, we analyze all of the S&P 500 stocks to identify which stock characteristics have been well rewarded during the time periods and assign scores and corresponding weights for each of the stock characteristics. A composite score of each stock is calculated based on the scores and weights of its features. Based on this algorithm, we choose the 50 top ranking stocks to buy. We compare the performance of the portfolio with the benchmark index, S&P 500. With an initial investment of $100 in December 1999, over 15 years, in December 2014, our portfolio had an average gain per annum of 14.9% versus 2.3% for the S&P 500.
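
    The monthly regime-prediction step can be sketched with hmmlearn; the series below is synthetic, and the paper calibrates a separate HMM per macroeconomic variable, so this is only a schematic of one such fit.

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    # Stand-in for 20 years of a monthly macro series such as CPI inflation
    series = rng.normal(size=(240, 1)).cumsum(axis=0)

    hmm = GaussianHMM(n_components=2, covariance_type="full", n_iter=200,
                      random_state=0)
    hmm.fit(series)
    regimes = hmm.predict(series)              # hidden regime per month
    next_probs = hmm.transmat_[regimes[-1]]    # regime distribution next month
    print(regimes[-6:], next_probs)
    ```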

  5. A Data-Driven Method for Selecting Optimal Models Based on Graphical Visualisation of Differences in Sequentially Fitted ROC Model Parameters

    Directory of Open Access Journals (Sweden)

    K S Mwitondi

    2013-05-01

    Full Text Available Differences in modelling techniques and model performance assessments typically impinge on the quality of knowledge extraction from data. We propose an algorithm for determining optimal patterns in data by separately training and testing three decision tree models in the Pima Indians Diabetes and the Bupa Liver Disorders datasets. Model performance is assessed using ROC curves and the Youden Index. Moving differences between sequential fitted parameters are then extracted, and their respective probability density estimations are used to track their variability using an iterative graphical data visualisation technique developed for this purpose. Our results show that the proposed strategy separates the groups more robustly than the plain ROC/Youden approach, eliminates obscurity, and minimizes over-fitting. Further, the algorithm can easily be understood by non-specialists and demonstrates multi-disciplinary compliance.
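
    For reference, the Youden Index used alongside the ROC curves here is the standard

    $$ J = \text{sensitivity} + \text{specificity} - 1, $$

    i.e., when maximized over thresholds, the maximal vertical distance between the ROC curve and the chance diagonal.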

  6. Selecting model complexity in learning problems

    Energy Technology Data Exchange (ETDEWEB)

    Buescher, K.L. [Los Alamos National Lab., NM (United States); Kumar, P.R. [Illinois Univ., Urbana, IL (United States). Coordinated Science Lab.

    1993-10-01

    To learn (or generalize) from noisy data, one must resist the temptation to pick a model for the underlying process that overfits the data. Many existing techniques solve this problem at the expense of requiring the evaluation of an absolute, a priori measure of each model's complexity. We present a method that does not. Instead, it uses a natural, relative measure of each model's complexity. This method first creates a pool of "simple" candidate models using part of the data and then selects from among these by using the rest of the data.

  7. Ensembling Variable Selectors by Stability Selection for the Cox Model

    Directory of Open Access Journals (Sweden)

    Qing-Yan Yin

    2017-01-01

    Full Text Available As a pivotal tool to build interpretive models, variable selection plays an increasingly important role in high-dimensional data analysis. In recent years, variable selection ensembles (VSEs) have gained much interest due to their many advantages. Stability selection (Meinshausen and Bühlmann, 2010), a VSE technique based on subsampling in combination with a base algorithm like the lasso, is an effective method to control the false discovery rate (FDR) and to improve selection accuracy in linear regression models. By adopting the lasso as a base learner, we attempt to extend stability selection to handle variable selection problems in a Cox model. According to our experience, it is crucial to set the regularization region Λ in the lasso and the parameter λmin properly so that stability selection can work well. To the best of our knowledge, however, there is no literature addressing this problem in an explicit way. Therefore, we first provide a detailed procedure to specify Λ and λmin. Then, some simulated and real-world data with various censoring rates are used to examine how well stability selection performs. It is also compared with several other variable selection approaches. Experimental results demonstrate that it achieves better or competitive performance in comparison with several other popular techniques.
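
    A bare-bones sketch of stability selection with the lasso as base learner follows. It is shown for a linear model for brevity; the Cox version would swap in a penalized partial likelihood, and the region Λ, subsample count and threshold are illustrative choices.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    X, y = make_regression(n_samples=200, n_features=20, n_informative=4,
                           noise=5.0, random_state=0)
    rng = np.random.default_rng(0)
    lambdas = np.logspace(-1, 1, 10)        # regularization region Lambda
    freq = np.zeros(X.shape[1])
    B = 100
    for _ in range(B):
        idx = rng.choice(len(y), size=len(y) // 2, replace=False)  # subsample
        sel = np.zeros(X.shape[1], dtype=bool)
        for lam in lambdas:
            coef = Lasso(alpha=lam).fit(X[idx], y[idx]).coef_
            sel |= np.abs(coef) > 1e-8      # selected anywhere on the path
        freq += sel
    stable = np.where(freq / B >= 0.6)[0]   # selection-frequency threshold
    print(stable)
    ```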

  8. Variable selection for mixture and promotion time cure rate models.

    Science.gov (United States)

    Masud, Abdullah; Tu, Wanzhu; Yu, Zhangsheng

    2016-11-16

    Failure-time data with cured patients are common in clinical studies. Data from these studies are typically analyzed with cure rate models. Variable selection methods have not been well developed for cure rate models. In this research, we propose two least absolute shrinkage and selection operator (LASSO) based methods for variable selection in mixture and promotion time cure models with parametric or nonparametric baseline hazards. We conduct an extensive simulation study to assess the operating characteristics of the proposed methods. We illustrate the use of the methods using data from a study of childhood wheezing. © The Author(s) 2016.

  9. Selection-based Approach to Cooperative Interval Games

    OpenAIRE

    Bok, Jan; Hladík, Milan

    2014-01-01

    Cooperative interval games are a generalized model of cooperative games in which the worth of every coalition corresponds to a closed interval representing the possible outcomes of its cooperation. Selections are all possible outcomes of the interval game with no additional uncertainty. We introduce new selection-based classes of interval games and prove their characterization theorems and relations to existing classes based on the interval weakly better operator. We show new results regardin...

  10. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, Scott; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  11. Research and Application of a Novel Hybrid Model Based on Data Selection and Artificial Intelligence Algorithm for Short Term Load Forecasting

    Directory of Open Access Journals (Sweden)

    Wendong Yang

    2017-01-01

    Full Text Available Machine learning plays a vital role in several modern economic and industrial fields, and selecting an optimized machine learning method to improve time series' forecasting accuracy is challenging. Advanced machine learning methods, e.g., the support vector regression (SVR) model, are widely employed in forecasting fields, but an individual SVR pays no attention to the significance of data selection, signal processing and optimization, which cannot always satisfy the requirements of time series forecasting. By preprocessing and analyzing the original time series, in this paper, a hybrid SVR model is developed, considering periodicity, trend and randomness, and combined with data selection, signal processing and an optimization algorithm for short-term load forecasting. Case studies using electric power data from New South Wales and Singapore are used to evaluate the performance of the developed model. The experimental results demonstrate that the proposed hybrid method is not only robust but also capable of achieving significant improvement compared with traditional single models, and can be an effective and efficient tool for power load forecasting.
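
    The individual-SVR baseline that the hybrid builds on can be sketched as follows. The synthetic half-hourly load, the lag count, kernel and hyperparameters are arbitrary assumptions, and the paper's data-selection, signal-processing and optimization stages are omitted.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(0)
    t = np.arange(1000)
    # Synthetic half-hourly load with a daily cycle plus noise
    load = 100 + 10 * np.sin(2 * np.pi * t / 48) + rng.normal(0, 1, t.size)

    lags = 48                                  # one day of half-hourly lags
    X = np.array([load[i - lags:i] for i in range(lags, len(load))])
    y = load[lags:]

    split = 800                                # train on the past, test ahead
    model = make_pipeline(StandardScaler(),
                          SVR(kernel="rbf", C=10.0, epsilon=0.1))
    model.fit(X[:split], y[:split])
    print(np.mean(np.abs(model.predict(X[split:]) - y[split:])))  # test MAE
    ```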

  12. Digital curation: a proposal of a semi-automatic digital object selection-based model for digital curation in Big Data environments

    Directory of Open Access Journals (Sweden)

    Moisés Lima Dutra

    2016-08-01

    Full Text Available Introduction: This work presents a new approach to digital curation from a Big Data perspective. Objective: The objective is to propose techniques for digital curation for selecting and evaluating digital objects that take into account the volume, velocity, variety, veracity, and value of the data collected from multiple knowledge domains. Methodology: This is exploratory research of an applied nature, which addresses the research problem in a qualitative way. Heuristics allow this semi-automatic process to be done either by human curators or by software agents. Results: As a result, a model was proposed for searching, processing, evaluating and selecting digital objects to be processed by digital curation. Conclusions: It is possible to use Big Data environments as a source of information resources for digital curation; besides, Big Data techniques and tools can support the search and selection of information resources by digital curation.

  13. Norm based Threshold Selection for Fault Detectors

    DEFF Research Database (Denmark)

    Rank, Mike Lind; Niemann, Henrik

    1998-01-01

    The design of fault detectors for fault detection and isolation (FDI) in dynamic systems is considered from a norm based point of view. An analysis of norm based threshold selection is given based on different formulations of FDI problems. Both the nominal FDI problem as well as the uncertain FDI problem are considered. Based on this analysis, a performance index based on norms of the involved transfer functions is given. The performance index also allows us to optimize the structure of the fault detection filter directly.

  14. Personnel Selection Based on Fuzzy Methods

    Directory of Open Access Journals (Sweden)

    Lourdes Cañós

    2011-03-01

    Full Text Available The decisions of managers regarding the selection of staff strongly determine the success of the company. A correct choice of employees is a source of competitive advantage. We propose a fuzzy method for staff selection, based on competence management and the comparison with the valuation that the company considers the best in each competence (the ideal candidate). Our method is based on the Hamming distance and a Matching Level Index. The algorithms, implemented in the software StaffDesigner, allow us to rank the candidates, even when the competences of the ideal candidate have been evaluated only in part. Our approach is applied in a numerical example.
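
    A toy sketch of ranking candidates by Hamming distance to the ideal competence profile is shown below; the profiles are invented and the paper's Matching Level Index is not reproduced.

    ```python
    import numpy as np

    # Hypothetical competence valuations on [0, 1]
    ideal = np.array([0.9, 0.8, 1.0, 0.7])
    candidates = {"A": np.array([0.8, 0.9, 0.7, 0.6]),
                  "B": np.array([0.9, 0.6, 1.0, 0.8])}

    def hamming(a, b):
        # Normalized Hamming distance between fuzzy competence vectors
        return float(np.mean(np.abs(a - b)))

    ranking = sorted(candidates, key=lambda c: hamming(ideal, candidates[c]))
    print(ranking)  # candidate closest to the ideal profile first
    ```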

  15. Supplier Selection Study from the Perspective of the Low-Carbon Supply Chain: A Hybrid Evaluation Model Based on FA-DEA-AHP

    Directory of Open Access Journals (Sweden)

    Xiangshuo He

    2018-02-01

    Full Text Available With the development of the global environment and social economy, it is an indispensable choice for enterprises to achieve sustainable growth through developing a low-carbon economy and constructing a low-carbon supply chain. The supplier is the source of the chain, thus selecting excellent low-carbon suppliers is the foundation of establishing an efficient low-carbon supply chain. This paper presents a novel hybrid model for supplier selection that integrates factor analysis (FA) and data envelopment analysis (DEA) with the analytic hierarchy process (AHP), namely FA-DEA-AHP. First, an evaluation index system is built, incorporating product level, qualification, cooperation ability, and environmental competitiveness. FA is utilized to extract common factors from the 18 pre-selected indicators. Then, DEA is applied to establish the pairwise comparison matrix and AHP is employed to rank these low-carbon suppliers comprehensively and calculate the validity of the decision-making units. Finally, an experimental study with seven cement suppliers in a large industrial enterprise is carried out. The results reveal that the proposed technique can not only select effective suppliers but also realize a comprehensive ranking. This research has enriched the methodology of low-carbon supplier evaluation and selection, and has theoretical value in exploring the coordinated development of the low-carbon supply chain.
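
    The AHP ranking step, deriving a priority vector from a pairwise comparison matrix via its principal eigenvector, can be sketched as follows. The matrix is a made-up Saaty-scale example, and the FA and DEA stages are omitted.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix for three suppliers
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                     # AHP priority vector
    CI = (eigvals.real[k] - len(A)) / (len(A) - 1)   # consistency index
    print(w, CI)
    ```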

  16. Source-Based Modeling Of Urban Stormwater Quality Response to the Selected Scenarios Combining Future Changes in Climate and Socio-Economic Factors.

    Science.gov (United States)

    Borris, Matthias; Leonhardt, Günther; Marsalek, Jiri; Österlund, Heléne; Viklander, Maria

    2016-08-01

    The assessment of future trends in urban stormwater quality should be most helpful for ensuring the effectiveness of the existing stormwater quality infrastructure in the future and mitigating the associated impacts on receiving waters. Combined effects of expected changes in climate and socio-economic factors on stormwater quality were examined in two urban test catchments by applying a source-based computer model (WinSLAMM) for TSS and three heavy metals (copper, lead, and zinc) for various future scenarios. Generally, both catchments showed similar responses to the future scenarios and pollutant loads were generally more sensitive to changes in socio-economic factors (i.e., increasing traffic intensities, growth and intensification of the individual land-uses) than in the climate. Specifically, for the selected Intermediate socio-economic scenario and two climate change scenarios (RCP = 2.6 and 8.5), the TSS loads from both catchments increased by about 10 % on average, but when applying the Intermediate climate change scenario (RCP = 4.5) for two SSPs, the Sustainability and Security scenarios (SSP1 and SSP3), the TSS loads increased on average by 70 %. Furthermore, it was observed that well-designed and maintained stormwater treatment facilities targeting local pollution hotspots exhibited the potential to significantly improve stormwater quality, however, at potentially high costs. In fact, it was possible to reduce pollutant loads from both catchments under the future Sustainability scenario (on average, e.g., TSS were reduced by 20 %), compared to the current conditions. The methodology developed in this study was found useful for planning climate change adaptation strategies in the context of local conditions.

  17. Role of melt behavior in modifying oxidation distribution using an interface incorporated model in selective laser melting of aluminum-based material

    Energy Technology Data Exchange (ETDEWEB)

    Gu, Dongdong, E-mail: dongdonggu@nuaa.edu.cn; Dai, Donghua [College of Materials Science and Technology, Nanjing University of Aeronautics and Astronautics, Yudao Street 29, Nanjing 210016 (China); Institute of Additive Manufacturing (3D Printing), Nanjing University of Aeronautics and Astronautics, Yudao Street 29, Nanjing 210016 (China)

    2016-08-28

    A transient three-dimensional model for describing the molten pool dynamics and the response of oxidation film evolution in the selective laser melting of aluminum-based material is proposed. The physical difference on both sides of the scan track, the powder-solid transformation and temperature-dependent physical properties are taken into account. The model shows that heat energy tends to accumulate in the powder material rather than in the as-fabricated part, leading to the formation of asymmetrical patterns of the temperature contour and the attendant larger dimensions of the molten pool in the powder phase. When a higher volumetric energy density is applied (≥1300 J/mm³), severe evaporation is produced with an upward direction of the velocity vector in the irradiated powder region, while a restricted operating temperature is obtained in the as-fabricated part. The velocity vector continuously changes from the upward direction to the downward one as the scan speed increases from 100 mm/s to 300 mm/s, promoting the generation of debris of the oxidation films and the resultant homogeneous distribution state in the matrix. For the applied hatch spacing of 50 μm, a restricted remelting phenomenon of the as-fabricated part is produced with an upward direction of the convection flow, significantly reducing the turbulence of the thermal-capillary convection on the breaking of the oxidation films; therefore, connected oxidation films through the neighboring layers are typically formed. The morphology and distribution of the oxidation are experimentally acquired and are in good agreement with the results predicted by simulation.

  18. Resorcinol-Based Grp94-Selective Inhibitors.

    Science.gov (United States)

    Khandelwal, Anuj; Crowley, Vincent M; Blagg, Brian S J

    2017-10-12

    Glucose regulated protein 94 (Grp94) is the endoplasmic reticulum resident of the 90 kDa heat shock protein (Hsp90) family and represents a promising therapeutic target for the treatment of several diseases. Grp94 is the most unique member of the 90 kDa heat shock protein family due to a five amino acid insertion into its primary sequence, which creates hydrophobic subpockets exclusive to Grp94 that can be utilized for selective inhibition. The first resorcinol-based Grp94-selective inhibitor to take advantage of the hydrophobic S2 subpocket has been developed and shown to manifest low nanomolar affinity and ∼10-fold selectivity for Grp94. Furthermore, these Grp94-selective inhibitors manifest low micromolar GI50 values against multiple myeloma cells, supporting Grp94 as an emerging target for the treatment of this disease.

  19. Sparse model selection via integral terms

    Science.gov (United States)

    Schaeffer, Hayden; McCalla, Scott G.

    2017-08-01

    Model selection and parameter estimation are important for the effective integration of experimental data, scientific theory, and precise simulations. In this work, we develop a learning approach for the selection and identification of a dynamical system directly from noisy data. The learning is performed by extracting a small subset of important features from an overdetermined set of possible features using a nonconvex sparse regression model. The sparse regression model is constructed to fit the noisy data to the trajectory of the dynamical system while using the smallest number of active terms. Computational experiments detail the model's stability, robustness to noise, and recovery accuracy. Examples include nonlinear equations, population dynamics, chaotic systems, and fast-slow systems.
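
    A common concrete instance of this kind of sparse regression is sequentially thresholded least squares, used in closely related sparse system-identification work; whether it matches the paper's nonconvex model exactly is an assumption.

    ```python
    import numpy as np

    def stls(Theta, dXdt, lam=0.1, iters=10):
        # Sequentially thresholded least squares: fit, zero out small
        # coefficients, refit on the surviving library features.
        Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
        for _ in range(iters):
            small = np.abs(Xi) < lam
            Xi[small] = 0.0
            for j in range(dXdt.shape[1]):
                big = ~small[:, j]
                if big.any():
                    Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j],
                                                 rcond=None)[0]
        return Xi

    # Example: recover dx/dt = -2x with library [x, x^2, x^3]
    x = np.linspace(-1, 1, 100)
    Theta = np.column_stack([x, x**2, x**3])
    dxdt = (-2 * x + np.random.default_rng(0).normal(0, 0.01, x.size))
    print(stls(Theta, dxdt.reshape(-1, 1)))
    ```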

  1. Modeling and Selection of Software Service Variants

    OpenAIRE

    Wittern, John Erik

    2015-01-01

    Providers and consumers have to deal with variants, meaning alternative instances of a service's design, implementation, deployment, or operation, when developing or delivering software services. This work presents service feature modeling to deal with associated challenges, comprising a language to represent software service variants and a set of methods for modeling and subsequent variant selection. This work's evaluation includes a POC implementation and two real-life use cases.

  2. Case Based Heuristic Selection for Timetabling Problems

    OpenAIRE

    Burke, Edmund; Petrovic, Sanja; Qu, Rong

    2006-01-01

    This paper presents a case-based heuristic selection approach for automated university course and exam timetabling. The method described in this paper is motivated by the goal of developing timetabling systems that are fundamentally more general than the current state of the art. Heuristics that worked well in previous similar situations are memorized in a case base and are retrieved for solving the problem in hand. Knowledge discovery techniques are employed in two distinct scenarios. Firstl...

  3. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    2016-08-26

    Feature selection based classifier combination approach for handwritten Devanagari numeral recognition (Pratibha Singh, Ajay Verma) ... ensemble of classifiers. The main contribution of the proposed method is that it gives quite efficient results utilizing only 10% of the patterns of the available dataset.

  4. Model Selection in Data Analysis Competitions

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Winther, Ole

    2014-01-01

    The use of data analysis competitions for selecting the most appropriate model for a problem is a recent innovation in the field of predictive machine learning. Two of the most well-known examples of this trend were the Netflix Competition and, more recently, the competitions hosted on the online platfor...

  5. Efficiently adapting graphical models for selectivity estimation

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2013-01-01

    in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate...

  6. Feature Selection Based on Mutual Correlation

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Somol, Petr; Ververidis, D.; Kotropoulos, C.

    2006-01-01

    Vol. 19, No. 4225 (2006), pp. 569-577. ISSN 0302-9743. [Iberoamerican Congress on Pattern Recognition. CIARP 2006 /11./. Cancun, 14.11.2006-17.11.2006] R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302. EU Projects: European Commission(XE) 507752 - MUSCLE. Institutional research plan: CEZ:AV0Z10750506. Keywords: feature selection. Subject RIV: BD - Theory of Information. Impact factor: 0.402, year: 2005. http://library.utia.cas.cz/separaty/historie/haindl-feature selection based on mutual correlation.pdf
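
    The record's metadata is sparse, but the title suggests a filter of the following flavor: greedily discard the feature most correlated, on average, with the remaining features. This sketch is an assumption based on the title, not the paper's algorithm.

    ```python
    import numpy as np

    def select_by_mutual_correlation(X, k):
        # Repeatedly drop the feature with the largest average absolute
        # correlation to the other remaining features, until k are left.
        keep = list(range(X.shape[1]))
        corr = np.abs(np.corrcoef(X, rowvar=False))
        while len(keep) > k:
            sub = corr[np.ix_(keep, keep)]
            avg = (sub.sum(axis=1) - 1.0) / (len(keep) - 1)  # drop self-corr
            keep.pop(int(np.argmax(avg)))
        return keep

    X = np.random.default_rng(0).normal(size=(100, 10))
    print(select_by_mutual_correlation(X, 5))
    ```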

  7. Personnel Selection Method Based on Personnel-Job Matching

    OpenAIRE

    Li Wang; Xilin Hou; Lili Zhang

    2013-01-01

    The existing personnel selection decisions in practice are based on the evaluation of a job seeker's human capital, and it may be difficult to achieve personnel-job matching that satisfies each party. Therefore, this paper puts forward a new personnel selection method based on bilateral matching. Starting from the employment notion of "satisfy", the satisfaction evaluation indicator systems of each party are constructed. The multi-objective optimization model is given according to ...

  8. Model selection criterion in survival analysis

    Science.gov (United States)

    Karabey, Uǧur; Tutkun, Nihal Ata

    2017-07-01

    Survival analysis deals with time until the occurrence of an event of interest, such as death, recurrence of an illness, the failure of equipment or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural or social sciences. The decision on the most appropriate model for the data is an important part of the analysis. In the literature, the Akaike information criterion or the Bayesian information criterion is used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
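
    For a model with maximized likelihood $\hat{L}$, $k$ free parameters and $n$ observations, the two criteria mentioned take their standard forms

    $$ \mathrm{AIC} = -2\ln\hat{L} + 2k, \qquad \mathrm{BIC} = -2\ln\hat{L} + k\ln n, $$

    with the lower value indicating the preferred model.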

  9. Akaike information criterion to select well-fit resist models

    Science.gov (United States)

    Burbine, Andrew; Fryer, David; Sturtevant, John

    2015-03-01

    In the field of model design and selection, there is always a risk that a model is over-fit to the data used to train the model. A model is well suited when it describes the physical system and not the stochastic behavior of the particular data collected. K-fold cross validation is a method to check this potential over-fitting to the data by calibrating with k-number of folds in the data, typically between 4 and 10. Model training is a computationally expensive operation, however, and given a wide choice of candidate models, calibrating each one repeatedly becomes prohibitively time consuming. Akaike information criterion (AIC) is an information-theoretic approach to model selection based on the maximized log-likelihood for a given model that only needs a single calibration per model. It is used in this study to demonstrate model ranking and selection among compact resist modelforms that have various numbers and types of terms to describe photoresist behavior. It is shown that there is a good correspondence of AIC to K-fold cross validation in selecting the best modelform, and it is further shown that over-fitting is, in most cases, not indicated. In modelforms with more than 40 fitting parameters, the size of the calibration data set benefits from additional parameters, statistically validating the model complexity.
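
    Under a Gaussian residual assumption, AIC reduces to a function of the residual sum of squares, so ranking candidate modelforms takes one line per calibration. The modelforms, RSS values and gauge count below are invented placeholders.

    ```python
    import numpy as np

    def aic_gaussian(rss, n, k):
        # AIC for a least-squares fit with Gaussian residuals:
        # n*ln(RSS/n) + 2k, with constant terms dropped.
        return n * np.log(rss / n) + 2 * k

    # Hypothetical candidate resist modelforms: (RSS, parameter count)
    candidates = {"form_A": (12.4, 20), "form_B": (11.9, 35),
                  "form_C": (11.8, 52)}
    n = 500  # calibration gauges
    ranked = sorted(candidates.items(),
                    key=lambda kv: aic_gaussian(kv[1][0], n, kv[1][1]))
    print(ranked[0][0], "has the lowest AIC")
    ```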

  10. On Using Selection Procedures with Binomial Models.

    Science.gov (United States)

    1983-10-01

  11. Review and selection of unsaturated flow models

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1993-09-10

    Under the US Department of Energy (DOE), the Civilian Radioactive Waste Management System Management and Operating Contractor (CRWMS M&O) has the responsibility to review, evaluate, and document existing computer ground-water flow models; to conduct performance assessments; and to develop performance assessment models, where necessary. In the area of scientific modeling, the CRWMS M&O has the following responsibilities: to provide overall management and integration of modeling activities; to provide a framework for focusing modeling and model development; to identify areas that require increased or decreased emphasis; and to ensure that the tools necessary to conduct performance assessment are available. These responsibilities are being initiated through a three-step process. It consists of a thorough review of existing models, testing of the models which best fit the established requirements, and making recommendations for future development that should be conducted. Future model enhancement will then focus on the models selected during this activity. Furthermore, in order to manage future model development, particularly in those areas requiring substantial enhancement, the three-step process will be updated and reported periodically in the future.

  12. Post-model selection inference and model averaging

    Directory of Open Access Journals (Sweden)

    Georges Nguefack-Tsague

    2011-07-01

    Full Text Available Although model selection is routinely used in practice nowadays, little is known about its precise effects on any subsequent inference that is carried out. The same goes for the effects induced by the closely related technique of model averaging. This paper is concerned with the use of the same data first to select a model and then to carry out inference, in particular point estimation and point prediction. The properties of the resulting estimator, called a post-model-selection estimator (PMSE), are hard to derive. Using selection criteria such as hypothesis testing, AIC, BIC, HQ and Cp, we illustrate that, in terms of risk function, no single PMSE dominates the others. The same conclusion holds more generally for any penalised likelihood information criterion. We also compare various model averaging schemes and show that no single one dominates the others in terms of risk function. Since PMSEs can be regarded as a special case of model averaging, with 0-1 random weights, we propose a connection between the two theories, in the frequentist approach, by taking account of the selection procedure when performing model averaging. We illustrate the point by simulating a simple linear regression model.

  13. Optimal experiment design for model selection in biochemical networks.

    Science.gov (United States)

    Vanlier, Joep; Tiemann, Christian A; Hilbers, Peter A J; van Riel, Natal A W

    2014-02-20

    Mathematical modeling is often used to formalize hypotheses on how a biochemical network operates by discriminating between competing models. Bayesian model selection offers a way to determine the amount of evidence that data provides to support one model over the other while favoring simple models. In practice, the amount of experimental data is often insufficient to make a clear distinction between competing models. Often one would like to perform a new experiment which would discriminate between competing hypotheses. We developed a novel method to perform Optimal Experiment Design to predict which experiments would most effectively allow model selection. A Bayesian approach is applied to infer model parameter distributions. These distributions are sampled and used to simulate from multivariate predictive densities. The method is based on a k-Nearest Neighbor estimate of the Jensen Shannon divergence between the multivariate predictive densities of competing models. We show that the method successfully uses predictive differences to enable model selection by applying it to several test cases. Because the design criterion is based on predictive distributions, which can be computed for a wide range of model quantities, the approach is very flexible. The method reveals specific combinations of experiments which improve discriminability even in cases where data is scarce. The proposed approach can be used in conjunction with existing Bayesian methodologies where (approximate) posteriors have been determined, making use of relations that exist within the inferred posteriors.
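
    The design criterion rests on the Jensen-Shannon divergence between the multivariate predictive densities, which for distributions $P$ and $Q$ is

    $$ \mathrm{JSD}(P\,\|\,Q) = \tfrac{1}{2} D_{\mathrm{KL}}(P\,\|\,M) + \tfrac{1}{2} D_{\mathrm{KL}}(Q\,\|\,M), \qquad M = \tfrac{1}{2}(P+Q), $$

    estimated in the paper with a k-nearest-neighbor estimator from samples of the predictive distributions.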

  14. Statistical model selection with “Big Data”

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2015-12-01

    Full Text Available Big Data offer potential benefits for statistical modelling, but confront problems including an excess of false positives, mistaking correlations for causes, ignoring sampling biases and selecting by inappropriate methods. We consider the many important requirements when searching for a data-based relationship using Big Data, and the possible role of Autometrics in that context. Paramount considerations include embedding relationships in general initial models, possibly restricting the number of variables to be selected over by non-statistical criteria (the formulation problem), using good quality data on all variables, analyzed with tight significance levels by a powerful selection procedure, retaining available theory insights (the selection problem), while testing for relationships being well specified and invariant to shifts in explanatory variables (the evaluation problem), using a viable approach that resolves the computational problem of immense numbers of possible models.

  15. Model structure selection in convolutive mixtures

    DEFF Research Database (Denmark)

    Dyrholm, Mads; Makeig, S.; Hansen, Lars Kai

    2006-01-01

    The CICAAR algorithm (convolutive independent component analysis with an auto-regressive inverse model) allows separation of white (i.i.d.) source signals from convolutive mixtures. We introduce a source color model as a simple extension to the CICAAR which allows for a more parsimonious representation in many practical mixtures. The new filter-CICAAR allows Bayesian model selection and can help answer questions like: 'Are we actually dealing with a convolutive mixture?'. We try to answer this question for EEG data.

  16. Chemical identification using Bayesian model selection

    Energy Technology Data Exchange (ETDEWEB)

    Burr, Tom; Fry, H. A. (Herbert A.); McVey, B. D. (Brian D.); Sander, E. (Eric)

    2002-01-01

    Remote detection and identification of chemicals in a scene is a challenging problem. We introduce an approach that uses some of the image's pixels to establish the background characteristics while other pixels represent the target for which we seek to identify all chemical species present. This leads to a generalized least squares problem in which we focus on 'subset selection' to identify the chemicals thought to be present. Bayesian model selection allows us to approximate the posterior probability that each chemical in the library is present by adding the posterior probabilities of all the subsets which include the chemical. We present results using realistic simulated data for the case with 1 to 5 chemicals present in each target and compare performance to a hybrid forward-backward stepwise selection procedure using the F statistic.
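
    The "sum posterior probabilities over subsets containing a chemical" step can be sketched with a BIC approximation to the subset posteriors. This is a simplification of the paper's Bayesian treatment, and the library X and measurement y below are placeholders.

    ```python
    import numpy as np
    from itertools import combinations

    def marginal_inclusion(X, y, max_size=3):
        # Weight each subset model by exp(-BIC/2) under a flat prior, then
        # sum normalized weights over the subsets containing each column.
        n, p = X.shape
        logw, members = [], []
        for size in range(1, max_size + 1):
            for S in combinations(range(p), size):
                beta, res = np.linalg.lstsq(X[:, S], y, rcond=None)[:2]
                rss = res[0] if res.size else float(np.sum((y - X[:, S] @ beta) ** 2))
                bic = n * np.log(rss / n) + size * np.log(n)
                logw.append(-0.5 * bic)
                members.append(S)
        w = np.exp(np.array(logw) - max(logw))
        w /= w.sum()
        incl = np.zeros(p)
        for wi, S in zip(w, members):
            incl[list(S)] += wi
        return incl

    rng = np.random.default_rng(0)
    X = rng.normal(size=(50, 6))                 # library of 6 candidate spectra
    y = X[:, 1] + 0.5 * X[:, 4] + rng.normal(0, 0.1, 50)
    print(np.round(marginal_inclusion(X, y), 3))
    ```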

  17. Expatriates Selection: An Essay of Model Analysis

    Directory of Open Access Journals (Sweden)

    Rui Bártolo-Ribeiro

    2015-03-01

    Full Text Available Business expansion into geographical areas whose cultures differ from those in which organizations were created and developed leads to the expatriation of employees to these destinations. Recruitment and selection procedures for expatriates do not always have the intended success, leading to an early return of these professionals and consequent organizational disruption. In this study, several articles published in the last five years were analyzed in order to identify the dimensions most frequently mentioned in the selection of expatriates in terms of success and failure. The characteristics in the selection process that may improve prediction of expatriates' adaptation to the new cultural contexts of the same organization were studied according to the KSAOs model. Few references to the Knowledge, Skills and Abilities dimensions were found in the analyzed papers. The evaluation of Other Characteristics strongly predominated, with more importance given to dispositional factors than to situational factors in promoting the integration of expatriates.

  18. Artificial neural network modelling of biological oxygen demand in rivers at the national level with input selection based on Monte Carlo simulations.

    Science.gov (United States)

    Šiljić, Aleksandra; Antanasijević, Davor; Perić-Grujić, Aleksandra; Ristić, Mirjana; Pocajt, Viktor

    2015-03-01

    Biological oxygen demand (BOD) is the most significant water quality parameter and indicates water pollution with respect to the present biodegradable organic matter content. European countries are therefore obliged to report annual BOD values to Eurostat; however, BOD data at the national level is only available for 28 of 35 listed European countries for the period prior to 2008, among which 46% of data is missing. This paper describes the development of an artificial neural network model for the forecasting of annual BOD values at the national level, using widely available sustainability and economic/industrial parameters as inputs. The initial general regression neural network (GRNN) model was trained, validated and tested utilizing 20 inputs. The number of inputs was reduced to 15 using the Monte Carlo simulation technique as the input selection method. The best results were achieved with the GRNN model utilizing 25% fewer inputs than the initial model, and a comparison with a multiple linear regression model, trained and tested on the same input variables and assessed with multiple statistical performance indicators, confirmed the advantage of the GRNN model. Sensitivity analysis has shown that the inputs with the greatest effect on the GRNN model were (in descending order) precipitation, rural population with access to improved water sources, treatment capacity of wastewater treatment plants (urban) and treatment of municipal waste, with the last two having an equal effect. Finally, it was concluded that the developed GRNN model can be useful as a tool to support the decision-making process on sustainable development at a regional, national and international level.
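
    The Monte Carlo input-selection step can be sketched as follows. This is a generic scheme, not the authors' implementation: a nearest-neighbour regressor stands in for the GRNN (both are kernel-style smoothers), and all settings are illustrative.

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor  # stand-in for a GRNN
        from sklearn.model_selection import cross_val_score

        def monte_carlo_input_selection(X, y, n_trials=500, subset_size=15, seed=0):
            """Score random input subsets by cross-validated R^2 and count
            how often each input appears in the top decile of trials."""
            rng = np.random.default_rng(seed)
            p = X.shape[1]
            trials = []
            for _ in range(n_trials):
                subset = rng.choice(p, size=subset_size, replace=False)
                score = cross_val_score(KNeighborsRegressor(),
                                        X[:, subset], y, cv=5).mean()
                trials.append((score, subset))
            trials.sort(key=lambda t: t[0], reverse=True)
            counts = np.zeros(p)
            for _, subset in trials[: n_trials // 10]:
                counts[subset] += 1
            return counts  # higher count -> input retained more often in good models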

  19. Distribution-Based Cluster Structure Selection.

    Science.gov (United States)

    Yu, Zhiwen; Zhu, Xianjun; Wong, Hau-San; You, Jane; Zhang, Jun; Han, Guoqiang

    2017-11-01

    The objective of cluster structure ensemble is to find a unified cluster structure from multiple cluster structures obtained from different datasets. Unfortunately, not all the cluster structures contribute to the unified cluster structure. This paper investigates the problem of how to select the suitable cluster structures in the ensemble which will be summarized to a more representative cluster structure. Specifically, the cluster structure is first represented by a mixture of Gaussian distributions, the parameters of which are estimated using the expectation-maximization algorithm. Then, several distribution-based distance functions are designed to evaluate the similarity between two cluster structures. Based on the similarity comparison results, we propose a new approach, which is referred to as the distribution-based cluster structure ensemble (DCSE) framework, to find the most representative unified cluster structure. We then design a new technique, the distribution-based cluster structure selection strategy (DCSSS), to select a subset of cluster structures. Finally, we propose using a distribution-based normalized hypergraph cut algorithm to generate the final result. In our experiments, a nonparametric test is adopted to evaluate the difference between DCSE and its competitors. We adopt 20 real-world datasets obtained from the University of California, Irvine and knowledge extraction based on evolutionary learning repositories, and a number of cancer gene expression profiles to evaluate the performance of the proposed methods. The experimental results show that: 1) DCSE works well on the real-world datasets and 2) DCSE based on DCSSS can further improve the performance of the algorithm.
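
    Two of the building blocks named above can be sketched compactly: fitting each cluster structure with a Gaussian mixture via EM, and comparing two structures with a distribution-based distance. The specific distance below (a mean log-density gap over reference points) is one plausible choice, not necessarily the one used in the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def fit_structure(X, n_components=3, seed=0):
            """Represent a cluster structure by a Gaussian mixture fitted with EM."""
            return GaussianMixture(n_components=n_components, random_state=seed).fit(X)

        def structure_distance(gmm_a, gmm_b, X_ref):
            """Distribution-based distance between two cluster structures:
            average absolute log-density gap over reference points."""
            return float(np.mean(np.abs(gmm_a.score_samples(X_ref)
                                        - gmm_b.score_samples(X_ref))))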

  20. Selecting an appropriate genetic evaluation model for selection in a developing dairy sector

    NARCIS (Netherlands)

    McGill, D.M.; Mulder, H.A.; Thomson, P.C.; Lievaart, J.J.

    2014-01-01

    This study aimed to identify genetic evaluation models (GEM) to accurately select cattle for milk production when only limited data are available. It is based on a data set from the Pakistani Sahiwal progeny testing programme which includes records from five government herds, each consisting of 100

  1. Sample selection and taste correlation in discrete choice transport modelling

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2008-01-01

    This thesis investigates how sample selection can affect estimation of discrete choice models and how taste correlation should be incorporated into applied mixed logit estimation. Sampling in transport modelling is often based on an observed trip. This may cause a sample to be choice-based or governed by a self-selection mechanism. In both cases, there is a possibility that sampling affects the estimation of a population model. It was established in the seventies how choice-based sampling affects the estimation of multinomial logit models. The thesis examines... Several applications of taste correlation in willingness-to-pay estimation are presented. The first contribution addresses how to incorporate taste correlation in the estimation of the value of travel time for public transport. Given a limited dataset, the approach taken is to use theory on the value of travel time as guidance...

  2. Ant colony optimization as a descriptor selection in QSPR modeling: Estimation of the λmax of anthraquinones-based dyes

    OpenAIRE

    Morteza Atabati; Kobra Zarei; Azam Borhani

    2016-01-01

    Quantitative structure–property relationship (QSPR) studies based on ant colony optimization (ACO) were carried out for the prediction of λmax of 9,10-anthraquinone derivatives. ACO is a meta-heuristic algorithm, which is derived from the observation of real ants and has been applied to feature selection. After optimization of the 3D geometry of the structures by semi-empirical quantum-chemical calculation at the AM1 level, different descriptors were calculated with the HyperChem and Dragon software packages (1514 descriptors...

  3. A simple parametric model selection test

    OpenAIRE

    Susanne M. Schennach; Daniel Wilhelm

    2014-01-01

    We propose a simple model selection test for choosing among two parametric likelihoods which can be applied in the most general setting without any assumptions on the relation between the candidate models and the true distribution. That is, both, one or neither is allowed to be correctly specified or misspecified; they may be nested, non-nested, strictly non-nested or overlapping. Unlike in previous testing approaches, no pre-testing is needed, since in each case, the same test statistic to...

  4. Robust inference in sample selection models

    KAUST Repository

    Zhelonkin, Mikhail

    2015-11-20

    The problem of non-random sample selectivity often occurs in practice in many fields. The classical estimators introduced by Heckman are the backbone of the standard statistical analysis of these models. However, these estimators are very sensitive to small deviations from the distributional assumptions which are often not satisfied in practice. We develop a general framework to study the robustness properties of estimators and tests in sample selection models. We derive the influence function and the change-of-variance function of Heckman's two-stage estimator, and we demonstrate the non-robustness of this estimator and its estimated variance to small deviations from the model assumed. We propose a procedure for robustifying the estimator, prove its asymptotic normality and give its asymptotic variance. Both cases with and without an exclusion restriction are covered. This allows us to construct a simple robust alternative to the sample selection bias test. We illustrate the use of our new methodology in an analysis of ambulatory expenditures and we compare the performance of the classical and robust methods in a Monte Carlo simulation study.
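
    The classical two-step estimator that the paper robustifies can be sketched as follows (a minimal, non-robust version; variable names are illustrative):

        import numpy as np
        import statsmodels.api as sm
        from scipy.stats import norm

        def heckman_two_step(y, X, Z, selected):
            """Classical Heckman two-step estimator.

            selected : boolean array, True where the outcome y is observed.
            Z : covariates of the selection equation (ideally including an
                exclusion restriction, i.e. a variable absent from X).
            """
            # Step 1: probit selection equation on the full sample.
            Zc = sm.add_constant(Z)
            probit = sm.Probit(selected.astype(float), Zc).fit(disp=0)
            xb = Zc @ probit.params
            mills = norm.pdf(xb) / norm.cdf(xb)  # inverse Mills ratio

            # Step 2: OLS on the selected sample, augmented with the Mills ratio.
            Xs = np.column_stack([sm.add_constant(X[selected]), mills[selected]])
            ols = sm.OLS(y[selected], Xs).fit()
            return probit, ols

    The paper's point is precisely that this estimator's distributional assumptions are fragile; the sketch shows the object being robustified, not the robust procedure itself.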

  5. Multicriteria framework for selecting a process modelling language

    Science.gov (United States)

    Scanavachi Moreira Campos, Ana Carolina; Teixeira de Almeida, Adiel

    2016-01-01

    The choice of process modelling language can affect business process management (BPM) since each modelling language shows different features of a given process and may limit the ways in which a process can be described and analysed. However, choosing the appropriate modelling language for process modelling has become a difficult task because of the availability of a large number of modelling languages and also due to the lack of guidelines for evaluating and comparing languages so as to assist in selecting the most appropriate one. This paper proposes a framework for selecting a modelling language in accordance with the purposes of modelling. This framework is based on the semiotic quality framework (SEQUAL) for evaluating process modelling languages and a multicriteria decision aid (MCDA) approach in order to select the most appropriate language for BPM. This study does not attempt to set out new forms of assessment and evaluation criteria, but does attempt to demonstrate how two existing approaches can be combined so as to solve the problem of selection of modelling language. The framework is described in this paper and then demonstrated by means of an example. Finally, the advantages and disadvantages of using SEQUAL and MCDA in an integrated manner are discussed.
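
    At its simplest, the MCDA aggregation at the core of such a framework is a weighted additive score over the SEQUAL criteria. The sketch below is deliberately minimal; real MCDA methods in this literature are richer, and the names are illustrative.

        import numpy as np

        def rank_languages(scores, weights):
            """scores[i, j] is language i's rating on criterion j (e.g., a
            SEQUAL quality dimension); weights sum to 1. Returns language
            indices ordered best-first by weighted additive score."""
            totals = scores @ weights
            return np.argsort(-totals)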

  6. Transmit antenna selection based on shadowing side information

    KAUST Repository

    Yilmaz, Ferkan

    2011-05-01

    In this paper, we propose a new transmit antenna selection scheme based on shadowing side information. In the proposed scheme, the single transmit antenna with the highest shadowing coefficient is selected. With the proposed technique, usage of the feedback channel and channel estimation complexity at the receiver can be reduced. We consider an independent but not identically distributed Generalized-K composite fading model, which is a general composite fading and shadowing channel model for wireless environments. Exact closed-form outage probability, moment generating function and symbol error probability expressions are derived. In addition, theoretical performance results are validated by Monte Carlo simulations. © 2011 IEEE.
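
    A toy Monte Carlo sketch of the scheme: selection uses only the shadowing coefficients, and the gamma-gamma product below mimics a Generalized-K composite channel. All parameters and the threshold are illustrative, not the paper's.

        import numpy as np

        rng = np.random.default_rng(1)

        def select_antenna(shadowing_coeffs):
            """Pick the transmit antenna with the largest shadowing coefficient."""
            return int(np.argmax(shadowing_coeffs))

        def outage_probability(n_antennas=4, n_trials=100_000, threshold=0.5):
            """Estimate outage when selection sees only the shadowing term."""
            shadow = rng.gamma(shape=2.0, scale=1.0, size=(n_trials, n_antennas))
            fading = rng.gamma(shape=1.5, scale=1.0, size=(n_trials, n_antennas))
            best = np.argmax(shadow, axis=1)      # selection on shadowing only
            idx = np.arange(n_trials)
            power = shadow[idx, best] * fading[idx, best]
            return float(np.mean(power < threshold))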

  7. Computationally efficient thermal-mechanical modelling of selective laser melting

    NARCIS (Netherlands)

    Yang, Y.; Ayas, C.; Brabazon, Dermot; Naher, Sumsun; Ul Ahad, Inam

    2017-01-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method to produce high-density metal parts with complex topology. However, part distortions and accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is...

  8. Uncertainty associated with selected environmental transport models

    International Nuclear Information System (INIS)

    Little, C.A.; Miller, C.W.

    1979-11-01

    A description is given of the capabilities of several models to predict accurately either pollutant concentrations in environmental media or radiological dose to human organs. The models are discussed in three sections: aquatic or surface water transport models, atmospheric transport models, and terrestrial and aquatic food chain models. Using data published primarily by model users, model predictions are compared to observations. This procedure is infeasible for food chain models and, therefore, the uncertainty embodied in the models' input parameters, rather than the model output, is estimated. Aquatic transport models are divided into one-dimensional, longitudinal-vertical, and longitudinal-horizontal models. Several conclusions were made about the ability of the Gaussian plume atmospheric dispersion model to predict accurately downwind air concentrations from releases under several sets of conditions. It is concluded that no validation study has been conducted to test the predictions of either aquatic or terrestrial food chain models. Using the aquatic pathway from water to fish to an adult for 137Cs as an example, a 95% one-tailed confidence limit interval for the predicted exposure is calculated by examining the distributions of the input parameters. Such an interval is found to be 16 times the value of the median exposure. A similar one-tailed limit for the air-grass-cow-milk-thyroid pathway for 131I and infants was 5.6 times the median dose. Of the three model types discussed in this report, the aquatic transport models appear to do the best job of predicting observed concentrations. However, this conclusion is based on many fewer aquatic validation data than were available for atmospheric model validation.

  9. Testing exclusion restrictions and additive separability in sample selection models

    DEFF Research Database (Denmark)

    Huber, Martin; Mellace, Giovanni

    2014-01-01

    Standard sample selection models with non-randomly censored outcomes assume (i) an exclusion restriction (i.e., a variable affecting selection, but not the outcome) and (ii) additive separability of the errors in the selection process. This paper proposes tests for the joint satisfaction of these assumptions by applying the approach of Huber and Mellace (Testing instrument validity for LATE identification based on inequality moment constraints, 2011) (for testing instrument validity under treatment endogeneity) to the sample selection framework. We show that the exclusion restriction and additive separability imply two testable inequality constraints that come from both point identifying and bounding the outcome distribution of the subpopulation that is always selected/observed. We apply the tests to two variables for which the exclusion restriction is frequently invoked in female wage regressions: non...

  10. De Novo Assembly of Complete Chloroplast Genomes from Non-model Species Based on a K-mer Frequency-Based Selection of Chloroplast Reads from Total DNA Sequences

    Directory of Open Access Journals (Sweden)

    Shairul Izan

    2017-08-01

    Full Text Available Whole Genome Shotgun (WGS) sequences of plant species often contain an abundance of reads that are derived from the chloroplast genome. Up to now these reads have generally been identified and assembled into chloroplast genomes based on homology to chloroplasts from related species. This re-sequencing approach may select against structural differences between the genomes, especially in non-model species for which no close relatives have been sequenced before. The alternative approach is to de novo assemble the chloroplast genome from total genomic DNA sequences. In this study, we used k-mer frequency tables to identify and extract the chloroplast reads from the WGS reads and assemble these using a highly integrated and automated custom pipeline. Our strategy includes steps aimed at optimizing assemblies and filling gaps which are left due to coverage variation in the WGS dataset. We have successfully de novo assembled three complete chloroplast genomes from plant species with a range of nuclear genome sizes to demonstrate the universality of our approach: Solanum lycopersicum (0.9 Gb), Aegilops tauschii (4 Gb) and Paphiopedilum henryanum (25 Gb). We also highlight the need to optimize the choice of k and the amount of data used. This new and cost-effective method for de novo short read assembly will facilitate the study of complete chloroplast genomes with more accurate analyses and inferences, especially in non-model plant genomes.
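
    The k-mer selection idea rests on copy number: chloroplast genomes are present in many copies per cell, so their k-mers recur far more often in total-DNA reads than nuclear k-mers. A minimal sketch of such a filter follows; the threshold and the median-frequency rule are illustrative, and the paper's pipeline is considerably more elaborate.

        from collections import Counter

        def kmer_table(reads, k=31):
            """Count every k-mer across all reads."""
            counts = Counter()
            for read in reads:
                for i in range(len(read) - k + 1):
                    counts[read[i:i + k]] += 1
            return counts

        def chloroplast_reads(reads, counts, k=31, min_freq=50):
            """Keep reads whose median k-mer frequency is high, i.e. reads
            likely drawn from the high-copy chloroplast genome."""
            kept = []
            for read in reads:
                freqs = sorted(counts[read[i:i + k]]
                               for i in range(len(read) - k + 1))
                if freqs and freqs[len(freqs) // 2] >= min_freq:
                    kept.append(read)
            return kept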

  11. An Integrated Structure for Supplier Selection and Configuration of Knowledge-Based Networks Using QFD, ANP, and Mixed-Integer Programming Model

    Directory of Open Access Journals (Sweden)

    M. Abbasi

    2013-01-01

    Full Text Available Today’s competitive world conditions and shortened product life cycles have led to the rise of attention towards the new product development issue, which can guarantee both growth and survival of organizations. The agility of new product development is directed by the efficiency and efficacy of an organization's knowledge management skills. A key issue in the thorough success of such networks is the preservation of the developed knowledge amongst the members. Thus, it is important that reliable relations can be established between the members in order to promote further interactions. To do so, an integrated framework is developed in this paper to configure the new product development network so that sustainable collaborations can be maintained amongst the entities. The proposed framework consists of the network configuration in addition to the supplier selection phase. They are taken into consideration using a bi-objective mathematical model in which incurred costs and suppliers' superiority determine the final configuration of the network. Finally, different numerical instances are solved to address the applicability of the proposed model.

  12. Live Imaging-Based Model Selection Reveals Periodic Regulation of the Stochastic G1/S Phase Transition in Vertebrate Axial Development

    Science.gov (United States)

    Kurokawa, Hiroshi; Sakaue-Sawano, Asako; Imamura, Takeshi; Miyawaki, Atsushi; Iimura, Tadahiro

    2014-01-01

    In multicellular organism development, a stochastic cellular response is observed, even when a population of cells is exposed to the same environmental conditions. Retrieving the spatiotemporal regulatory mode hidden in the heterogeneous cellular behavior is a challenging task. The G1/S transition observed in cell cycle progression is a highly stochastic process. By taking advantage of a fluorescence cell cycle indicator, Fucci technology, we aimed to unveil a hidden regulatory mode of cell cycle progression in developing zebrafish. Fluorescence live imaging of Cecyil, a zebrafish line genetically expressing Fucci, demonstrated that newly formed notochordal cells from the posterior tip of the embryonic mesoderm exhibited the red (G1) fluorescence signal in the developing notochord. Prior to their initial vacuolation, these cells showed a fluorescence color switch from red to green, indicating G1/S transitions. This G1/S transition did not occur in a synchronous manner, but rather exhibited a stochastic process, since a mixed population of red and green cells was always inserted between newly formed red (G1) notochordal cells and vacuolating green cells. We termed this mixed population of notochordal cells the G1/S transition window. We first performed quantitative analyses of live imaging data and a numerical estimation of the probability of the G1/S transition, which demonstrated the existence of a posteriorly traveling regulatory wave of the G1/S transition window. To obtain a better understanding of this regulatory mode, we constructed a mathematical model and performed a model selection by comparing the results obtained from the models with those from the experimental data. Our analyses demonstrated that the stochastic G1/S transition window in the notochord travels posteriorly in a periodic fashion, with double the periodicity of the neighboring paraxial mesoderm segmentation. This approach may have implications for the characterization of the...

  13. Genetic signatures of natural selection in a model invasive ascidian

    Science.gov (United States)

    Lin, Yaping; Chen, Yiyong; Yi, Changho; Fong, Jonathan J.; Kim, Won; Rius, Marc; Zhan, Aibin

    2017-03-01

    Invasive species represent promising models to study species’ responses to rapidly changing environments. Although local adaptation frequently occurs during contemporary range expansion, the associated genetic signatures at both population and genomic levels remain largely unknown. Here, we use genome-wide gene-associated microsatellites to investigate genetic signatures of natural selection in a model invasive ascidian, Ciona robusta. Population genetic analyses of 150 individuals sampled in Korea, New Zealand, South Africa and Spain showed significant genetic differentiation among populations. Based on outlier tests, we found a high incidence of signatures of directional selection at 19 loci. Hitchhiking mapping analyses identified 12 directional selective sweep regions, and all selective sweep windows on chromosomes were narrow (~8.9 kb). Further analyses identified 132 candidate genes under selection. When we compared our genetic data and six crucial environmental variables, 16 putatively selected loci showed significant correlation with these environmental variables. This suggests that the local environmental conditions have left significant signatures of selection at both population and genomic levels. Finally, we identified “plastic” genomic regions and genes that are promising regions to investigate evolutionary responses to rapid environmental change in C. robusta.

  14. On selection of optimal stochastic model for accelerated life testing

    International Nuclear Information System (INIS)

    Volf, P.; Timková, J.

    2014-01-01

    This paper deals with the problem of proper lifetime model selection in the context of statistical reliability analysis. Namely, we consider regression models describing the dependence of failure intensities on a covariate, for instance, a stressor. Testing the model fit is standardly based on the so-called martingale residuals. Their analysis has already been studied by many authors. Nevertheless, the Bayes approach to the problem, in spite of its advantages, is still developing. We present the Bayes procedure of estimation in several semi-parametric regression models of failure intensity. Then, our main concern is the Bayes construction of residual processes and goodness-of-fit tests based on them. The method is illustrated with both artificial and real-data examples. - Highlights: • Statistical survival and reliability analysis and Bayes approach. • Bayes semi-parametric regression modeling in Cox's and AFT models. • Bayes version of martingale residuals and goodness-of-fit test

  15. A model for the sustainable selection of building envelope assemblies

    International Nuclear Information System (INIS)

    Huedo, Patricia; Mulet, Elena; López-Mesa, Belinda

    2016-01-01

    The aim of this article is to define an evaluation model for the environmental impacts of building envelopes to support planners in the early phases of materials selection. The model is intended to estimate environmental impacts for different combinations of building envelope assemblies based on scientifically recognised sustainability indicators. These indicators will increase the amount of information that existing catalogues show to support planners in the selection of building assemblies. To define the model, first the environmental indicators were selected based on the specific aims of the intended sustainability assessment. Then, a simplified LCA methodology was developed to estimate the impacts applicable to three types of dwellings considering different envelope assemblies, building orientations and climate zones. This methodology takes into account the manufacturing, installation, maintenance and use phases of the building. Finally, the model was validated and a matrix in Excel was created as an implementation of the model. - Highlights: • Method to assess the envelope impacts based on a simplified LCA • To be used at an earlier phase than the existing methods in a simple way. • It assigns a score by means of known sustainability indicators. • It estimates data about the embodied and operating environmental impacts. • It compares the investment costs with the costs of the consumed energy.

  16. Quantitative modeling of selective lysosomal targeting for drug design

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rosania, G.; Horobin, R.W.

    2008-01-01

    Lysosomes are acidic organelles and are involved in various diseases, the most prominent being malaria. Accumulation of molecules in the cell by diffusion from the external solution into cytosol, lysosome and mitochondrion was calculated with the Fick–Nernst–Planck equation. The cell model considers the diffusion of neutral and ionic molecules across biomembranes, protonation to mono- or bivalent ions, adsorption to lipids, and electrical attraction or repulsion. Based on simulation results, high and selective accumulation in lysosomes was found for weak mono- and bivalent bases with intermediate to high... predicted by the model and three were close. Five of the antimalarial drugs were lipophilic weak dibasic compounds. The predicted optimum properties for a selective accumulation of weak bivalent bases in lysosomes are consistent with experimental values and are more accurate than any prior calculation...

  17. Bayesian Model Selection in Geophysics: The evidence

    Science.gov (United States)

    Vrugt, J. A.

    2016-12-01

    Bayesian inference has found widespread application and use in science and engineering to reconcile Earth system models with data, including prediction in space (interpolation), prediction in time (forecasting), assimilation of observations and deterministic/stochastic model output, and inference of the model parameters. Per Bayes' theorem, the posterior probability, P(H|D), of a hypothesis, H, given the data D, is equivalent to the product of its prior probability, P(H), and likelihood, L(H|D), divided by a normalization constant, P(D). In geophysics, the hypothesis, H, often constitutes a description (parameterization) of the subsurface for some entity of interest (e.g. porosity, moisture content). The normalization constant, P(D), is not required for inference of the subsurface structure, yet of great value for model selection. Unfortunately, it is not particularly easy to estimate P(D) in practice. Here, I will introduce the various building blocks of a general purpose method which provides robust and unbiased estimates of the evidence, P(D). This method uses multi-dimensional numerical integration of the posterior (parameter) distribution. I will then illustrate this new estimator by application to three competing subsurface models (hypotheses) using GPR travel time data from the South Oyster Bacterial Transport Site, in Virginia, USA. The three subsurface models differ in their treatment of the porosity distribution and use (a) horizontal layering with fixed layer thicknesses, (b) vertical layering with fixed layer thicknesses and (c) a multi-Gaussian field. The results of the new estimator are compared against the brute force Monte Carlo method, and the Laplace-Metropolis method.
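
    In symbols, with θ the model parameters (a standard statement of the theorem and of the evidence integral, added here for reference):

        P(H \mid D) = \frac{P(H)\, L(H \mid D)}{P(D)},
        \qquad
        P(D) = \int L(\theta \mid D)\, p(\theta)\, \mathrm{d}\theta

    The second expression is the evidence the abstract refers to; the multi-dimensional numerical integration mentioned above targets exactly this quantity.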

  18. Feature Selection with Neighborhood Entropy-Based Cooperative Game Theory

    Directory of Open Access Journals (Sweden)

    Kai Zeng

    2014-01-01

    Full Text Available Feature selection plays an important role in machine learning and data mining. In recent years, various feature measurements have been proposed to select significant features from high-dimensional datasets. However, most traditional feature selection methods will ignore some features which have strong classification ability as a group but are weak as individuals. To deal with this problem, we redefine the redundancy, interdependence, and independence of features by using neighborhood entropy. Then the neighborhood entropy-based feature contribution is proposed under the framework of cooperative game. The evaluative criteria of features can be formalized as the product of contribution and other classical feature measures. Finally, the proposed method is tested on several UCI datasets. The results show that the neighborhood entropy-based cooperative game theory model (NECGT) yields better performance than classical ones.
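
    The cooperative-game view assigns each feature its average marginal contribution over coalitions, i.e. a Shapley value. The exact computation below is standard game theory, added for illustration (it is feasible only for small feature sets, and the coalition payoff function is the user's choice, e.g. cross-validated accuracy on that feature subset):

        from itertools import combinations
        from math import factorial

        def shapley_values(features, value):
            """Exact Shapley contribution of each feature.

            value : callable mapping a frozenset of features to a coalition
                    payoff (e.g., CV accuracy of a classifier on that subset).
            """
            n = len(features)
            phi = {f: 0.0 for f in features}
            for f in features:
                others = [g for g in features if g != f]
                for k in range(n):
                    for S in combinations(others, k):
                        S = frozenset(S)
                        w = factorial(k) * factorial(n - k - 1) / factorial(n)
                        phi[f] += w * (value(S | {f}) - value(S))
            return phi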

  19. Fuzzy axiomatic design approach based green supplier selection

    DEFF Research Database (Denmark)

    Kannan, Devika; Govindan, Kannan; Rajendran, Sivakumar

    2015-01-01

    This paper proposes a multi-criteria decision-making (MCDM) approach called Fuzzy Axiomatic Design (FAD) to select the best green supplier for a Singapore-based plastic manufacturing company. At first, the environmental criteria were developed along with the traditional criteria based on the literature review and company requirements. Next, the FAD methodology evaluates the requirements of both the manufacturer (design needs) and the supplier (functional needs), and because multiple criteria must be considered, a multi-objective optimization model of a fuzzy nature must be developed. The application of the proposed approach in the case company is illustrated, and the result of this study helps the firm establish a systematic approach to selecting the best green supplier within the set of criteria. When the proposed methodology is applied, it allows not only selection of the most appropriate green supplier...

  20. Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a computational model.

    Science.gov (United States)

    Masquelier, Timothée

    2012-06-01

    We have built a phenomenological spiking model of the cat early visual system comprising the retina, the Lateral Geniculate Nucleus (LGN) and V1's layer 4, and established four main results: (1) When exposed to videos that reproduce with high fidelity what a cat experiences under natural conditions, adjacent Retinal Ganglion Cells (RGCs) have spike-time correlations at a short timescale (~30 ms), despite neuronal noise and possible jitter accumulation. (2) In accordance with recent experimental findings, the LGN filters out some noise. It thus increases the spike reliability and temporal precision, the sparsity, and, importantly, further decreases down to ~15 ms adjacent cells' correlation timescale. (3) Downstream simple cells in V1's layer 4, if equipped with Spike Timing-Dependent Plasticity (STDP), may detect these fine-scale cross-correlations, and thus connect principally to ON- and OFF-centre cells with Receptive Fields (RF) aligned in the visual space, and thereby become orientation selective, in accordance with Hubel and Wiesel's classic model (Journal of Physiology 160:106-154, 1962). Up to this point we dealt with continuous vision, and there was no absolute time reference such as a stimulus onset, yet information was encoded and decoded in the relative spike times. (4) We then simulated saccades to a static image and benchmarked relative spike time coding and time-to-first-spike coding w.r.t. saccade landing in the context of orientation representation. In both the retina and the LGN, relative spike times are more precise, less affected by pre-landing history and global contrast than absolute ones, and lead to robust contrast invariant orientation representations in V1.

  1. Selecting an optimal mixed products using grey relationship model

    Directory of Open Access Journals (Sweden)

    Farshad Faezy Razi

    2013-06-01

    Full Text Available This paper presents an integrated supplier selection and inventory management using grey relationship model (GRM) as well as multi-objective decision making process. The proposed model of this paper first ranks different suppliers based on GRM technique and then determines the optimum level of inventory by considering different objectives. To show the implementation of the proposed model, we use some benchmark data presented by Talluri and Baker [Talluri, S., & Baker, R. C. (2002). A multi-phase mathematical programming approach for effective supply chain design. European Journal of Operational Research, 141(3), 544-558.]. The preliminary results indicate that the proposed model of this paper is capable of handling different criteria for supplier selection.
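
    The grey relational ranking step can be sketched as follows: standard grey relational analysis with distinguishing coefficient rho, assuming the criteria matrix is already oriented so that larger is better. This is a textbook form of the technique, not the paper's exact formulation.

        import numpy as np

        def grey_relational_grades(X, rho=0.5):
            """Grey relational analysis for supplier ranking.

            X : criteria matrix, rows = suppliers, columns = benefit criteria.
            Returns one grade per supplier; higher means closer to the ideal.
            """
            # Normalise each criterion to [0, 1].
            Xn = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
            ref = Xn.max(axis=0)              # ideal reference series
            delta = np.abs(Xn - ref)
            coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
            return coeff.mean(axis=1)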

  2. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features to the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact using an underlying interaction network structure and reach consensus in their opinions, while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion formation based method. Our experiments on a number of high dimensional datasets reveal outperformance of the proposed algorithm over others.
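
    A loose sketch of the idea follows: agents hold bit-string "opinions" (feature subsets), drift toward the current best opinion, and mutate slightly to preserve diversity. This is not the paper's calibrated social-influence model; the interaction rule, rates and fitness callable are all illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def opinion_search(fitness, n_bits, n_agents=30, n_iters=200):
            """Consensus-style binary optimizer over feature subsets."""
            pop = rng.integers(0, 2, size=(n_agents, n_bits))
            for _ in range(n_iters):
                scores = np.array([fitness(ind) for ind in pop])
                best = pop[np.argmax(scores)].copy()
                for i in range(n_agents):
                    # adopt each bit of the leading opinion with probability 0.5
                    mask = rng.random(n_bits) < 0.5
                    pop[i, mask] = best[mask]
                    # small random flips keep diversity
                    flip = rng.random(n_bits) < 0.01
                    pop[i, flip] ^= 1
                pop[np.argmax(scores)] = best  # keep the leader intact
            scores = np.array([fitness(ind) for ind in pop])
            return pop[np.argmax(scores)], float(scores.max())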

  3. Spatial Fleming-Viot models with selection and mutation

    CERN Document Server

    Dawson, Donald A

    2014-01-01

    This book constructs a rigorous framework for analysing selected phenomena in evolutionary theory of populations arising due to the combined effects of migration, selection and mutation in a spatial stochastic population model, namely the evolution towards fitter and fitter types through punctuated equilibria. The discussion is based on a number of new methods, in particular multiple scale analysis, nonlinear Markov processes and their entrance laws, atomic measure-valued evolutions and new forms of duality (for state-dependent mutation and multitype selection) which are used to prove ergodic theorems in this context and are applicable for many other questions and renormalization analysis for a variety of phenomena (stasis, punctuated equilibrium, failure of naive branching approximations, biodiversity) which occur due to the combination of rare mutation, mutation, resampling, migration and selection and make it necessary to mathematically bridge the gap (in the limit) between time and space scales.

  4. Selection of personalized patient therapy through the use of knowledge-based computational models that identify tumor-driving signal transduction pathways.

    Science.gov (United States)

    Verhaegh, Wim; van Ooijen, Henk; Inda, Márcia A; Hatzis, Pantelis; Versteeg, Rogier; Smid, Marcel; Martens, John; Foekens, John; van de Wiel, Paul; Clevers, Hans; van de Stolpe, Anja

    2014-06-01

    Increasing knowledge about signal transduction pathways as drivers of cancer growth has elicited the development of "targeted drugs," which inhibit aberrant signaling pathways. They require a companion diagnostic test that identifies the tumor-driving pathway; however, currently available tests like estrogen receptor (ER) protein expression for hormonal treatment of breast cancer do not reliably predict therapy response, at least in part because they do not adequately assess functional pathway activity. We describe a novel approach to predict signaling pathway activity based on knowledge-based Bayesian computational models, which interpret quantitative transcriptome data as the functional output of an active signaling pathway, by using expression levels of transcriptional target genes. Following calibration on only a small number of cell lines or cohorts of patient data, they provide a reliable assessment of signaling pathway activity in tumors of different tissue origin. As proof of principle, models for the canonical Wnt and ER pathways are presented, including initial clinical validation on independent datasets from various cancer types. ©2014 American Association for Cancer Research.

  5. Astrophysical Model Selection in Gravitational Wave Astronomy

    Science.gov (United States)

    Adams, Matthew R.; Cornish, Neil J.; Littenberg, Tyson B.

    2012-01-01

    Theoretical studies in gravitational wave astronomy have mostly focused on the information that can be extracted from individual detections, such as the mass of a binary system and its location in space. Here we consider how the information from multiple detections can be used to constrain astrophysical population models. This seemingly simple problem is made challenging by the high dimensionality and high degree of correlation in the parameter spaces that describe the signals, and by the complexity of the astrophysical models, which can also depend on a large number of parameters, some of which might not be directly constrained by the observations. We present a method for constraining population models using a hierarchical Bayesian modeling approach which simultaneously infers the source parameters and population model and provides the joint probability distributions for both. We illustrate this approach by considering the constraints that can be placed on population models for galactic white dwarf binaries using a future space-based gravitational wave detector. We find that a mission that is able to resolve approximately 5000 of the shortest period binaries will be able to constrain the population model parameters, including the chirp mass distribution and a characteristic galaxy disk radius to within a few percent. This compares favorably to existing bounds, where electromagnetic observations of stars in the galaxy constrain disk radii to within 20%.

  6. Feature selection for fMRI-based deception detection

    Science.gov (United States)

    Jin, Bo; Strasburger, Alvin; Laken, Steven J; Kozel, F Andrew; Johnson, Kevin A; George, Mark S; Lu, Xinghua

    2009-01-01

    Background Functional magnetic resonance imaging (fMRI) is a technology used to detect brain activity. Patterns of brain activation have been utilized as biomarkers for various neuropsychiatric applications. Detecting deception based on the pattern of brain activation characterized with fMRI is getting attention – with machine learning algorithms being applied to this field in recent years. The high dimensionality of fMRI data makes it a difficult task to directly utilize the original data as input for classification algorithms in detecting deception. In this paper, we investigated the procedures of feature selection to enhance fMRI-based deception detection. Results We used the t-statistic map derived from the statistical parametric mapping analysis of fMRI signals to construct features that reflect brain activation patterns. We subsequently investigated various feature selection methods including an ensemble method to identify discriminative features to detect deception. Using 124 features selected from a set of 65,166 original features as inputs for a support vector machine classifier, our results indicate that feature selection significantly enhanced the classification accuracy of the support vector machine in comparison to the models trained using all features and dimension reduction based models. Furthermore, the selected features are shown to form anatomic clusters within brain regions, which supports the hypothesis that specific brain regions may play a role during deception processes. Conclusion Feature selection not only enhances classification accuracy in fMRI-based deception detection but also provides support for the biological hypothesis that brain activities in certain regions of the brain are important for discrimination of deception. PMID:19761569
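
    A compact stand-in for the described pipeline: univariate F-scores (for two classes, the F statistic is the squared t statistic) select a fixed number of voxels, followed by a linear SVM. The data below are random placeholders, and the simple univariate filter stands in for the paper's ensemble selection; k=124 echoes the number of features reported above.

        import numpy as np
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVC

        # X: n_scans x n_voxels feature matrix; y: deceptive vs. truthful labels.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 5000))   # toy stand-in for t-map features
        y = rng.integers(0, 2, size=60)

        pipe = make_pipeline(SelectKBest(f_classif, k=124), SVC(kernel="linear"))
        print(cross_val_score(pipe, X, y, cv=5).mean())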

  7. High-dimensional model estimation and model selection

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    I will review concepts and algorithms from high-dimensional statistics for linear model estimation and model selection. I will particularly focus on the so-called p>>n setting where the number of variables p is much larger than the number of samples n. I will focus mostly on regularized statistical estimators that produce sparse models. Important examples include the LASSO and its matrix extension, the Graphical LASSO, and more recent non-convex methods such as the TREX. I will show the applicability of these estimators in a diverse range of scientific applications, such as sparse interaction graph recovery and high-dimensional classification and regression problems in genomics.
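
    A minimal p >> n example with the LASSO, the first estimator named above (toy data; LassoCV picks the regularization strength by cross-validation):

        import numpy as np
        from sklearn.linear_model import LassoCV

        # 50 samples, 1000 variables, 5 true predictors.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(50, 1000))
        beta = np.zeros(1000)
        beta[:5] = 2.0
        y = X @ beta + rng.normal(scale=0.5, size=50)

        model = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(model.coef_)  # sparse support recovered by L1
        print(selected[:10], model.coef_[selected][:10])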

  8. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function and in turn how they may be optimized are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow for an assessment of how the addition of different types of samples affects the model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process...
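
    The tile-clustering step can be sketched as follows. The tile-level spectral summaries are random stand-ins; affinity propagation chooses exemplars automatically, so the number of clusters need not be fixed in advance, which matches the use described above.

        import numpy as np
        from sklearn.cluster import AffinityPropagation

        # Rows summarise spectral statistics of image tiles with commission
        # errors (e.g., band means/variances); AP picks exemplar tiles.
        rng = np.random.default_rng(0)
        tile_stats = rng.normal(size=(200, 8))

        ap = AffinityPropagation(random_state=0).fit(tile_stats)
        exemplars = ap.cluster_centers_indices_  # representative tiles to sample from
        print(len(exemplars), "clusters; exemplar tile indices:", exemplars[:5])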

  9. Selecting a model of supersymmetry breaking mediation

    International Nuclear Information System (INIS)

    AbdusSalam, S. S.; Allanach, B. C.; Dolan, M. J.; Feroz, F.; Hobson, M. P.

    2009-01-01

    We study the problem of selecting between different mechanisms of supersymmetry breaking in the minimal supersymmetric standard model using current data. We evaluate the Bayesian evidence of four supersymmetry breaking scenarios: mSUGRA, mGMSB, mAMSB, and moduli mediation. The results show a strong dependence on the dark matter assumption. Using the inferred cosmological relic density as an upper bound, minimal anomaly mediation is at least moderately favored over the CMSSM. Our fits also indicate that evidence for a positive sign of the μ parameter is moderate at best. We present constraints on the anomaly and gauge mediated parameter spaces and some previously unexplored aspects of the dark matter phenomenology of the moduli mediation scenario. We use sparticle searches, indirect observables and dark matter observables in the global fit and quantify robustness with respect to prior choice. We quantify how much information is contained within each constraint.

  10. An Investigation of the Linkage between Technology-Based Activities and STEM Major Selection in 4-Year Postsecondary Institutions in the United States: Multilevel Structural Equation Modelling

    Science.gov (United States)

    Lee, Ahlam

    2015-01-01

    Among the disciplines of science, technology, engineering, and math (STEM), much attention has been paid to the influences of math- and science-related learning contexts on students' STEM major selection. However, the technology and engineering learning contexts that are linked to STEM major selection have been overlooked. In response, a…

  11. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  12. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Banissi, E.; Khosrowshahi, F.; Sarfraz, M.; Ursyn, A.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  13. Physiologically based pharmacokinetic rat model for methyl tertiary-butyl ether; comparison of selected dose metrics following various MTBE exposure scenarios used for toxicity and carcinogenicity evaluation

    International Nuclear Information System (INIS)

    Borghoff, Susan J.; Parkinson, Horace; Leavens, Teresa L.

    2010-01-01

    There are a number of cancer and toxicity studies that have been carried out to assess hazard from methyl tertiary-butyl ether (MTBE) exposure via inhalation and oral administration. MTBE has been detected in surface as well as ground water supplies, which emphasizes the need to assess the risk from exposure via drinking water contamination. Recently an updated rat physiologically based pharmacokinetic (PBPK) model was published that relied on a description of MTBE and its metabolite tertiary-butyl alcohol (TBA) binding to α2u-globulin, a male rat-specific protein. This model was used to predict concentrations of MTBE and TBA in the kidney, a target tissue in the male rat. The model can now be used to evaluate route-to-route extrapolation issues concerning MTBE exposures, and also as a means of comparing potential dose metrics that may provide insight into differences in biological responses observed in rats following different routes of MTBE exposure. The objective of this study was to use this model to evaluate the dosimetry of MTBE and TBA in rats following the different exposure scenarios used to evaluate the toxicity and carcinogenicity of MTBE, and to compare various dose metrics under these different conditions. Model simulations suggested that although inhalation and drinking water exposures show a similar pattern of MTBE and TBA exposure in the blood and kidney (i.e. concentration-time profiles), the total blood and kidney levels following exposure to 7.5 mg/ml MTBE in the drinking water for 90 days are in the same range as administration of an oral dose of 1000 mg/kg MTBE. Evaluation of the dose metrics also supports that a high oral bolus dose (i.e. 1000 mg/kg MTBE) results in a greater percentage of the dose exhaled as MTBE, with a lower percent metabolized to TBA, as compared to a dose of MTBE delivered over a longer period of time, as in the case of drinking water.
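
    A deliberately minimal compartment sketch of the kind of dose-metric comparison described, with first-order metabolism of MTBE to TBA. The rate constants are illustrative placeholders, not the published rat PBPK parameterisation.

        import numpy as np
        from scipy.integrate import solve_ivp

        K_ELIM_MTBE = 0.5   # 1/h, exhalation and other loss (illustrative)
        K_MET = 0.3         # 1/h, MTBE -> TBA (illustrative)
        K_ELIM_TBA = 0.1    # 1/h (illustrative)

        def rates(t, y):
            mtbe, tba = y
            return [-(K_ELIM_MTBE + K_MET) * mtbe,
                    K_MET * mtbe - K_ELIM_TBA * tba]

        # Bolus dose vs. the same amount spread over time would be compared
        # by integrating concentration-time profiles (AUC, a dose metric).
        sol = solve_ivp(rates, (0.0, 24.0), [1000.0, 0.0], dense_output=True)
        t = np.linspace(0.0, 24.0, 200)
        mtbe, tba = sol.sol(t)
        auc_mtbe = float(((mtbe[1:] + mtbe[:-1]) / 2 * np.diff(t)).sum())
        print("AUC of MTBE over 24 h:", auc_mtbe)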

  14. Genomic Selection in Plant Breeding: Methods, Models, and Perspectives.

    Science.gov (United States)

    Crossa, José; Pérez-Rodríguez, Paulino; Cuevas, Jaime; Montesinos-López, Osval; Jarquín, Diego; de Los Campos, Gustavo; Burgueño, Juan; González-Camacho, Juan M; Pérez-Elizalde, Sergio; Beyene, Yoseph; Dreisigacker, Susanne; Singh, Ravi; Zhang, Xuecai; Gowda, Manje; Roorkiwal, Manish; Rutkoski, Jessica; Varshney, Rajeev K

    2017-11-01

    Genomic selection (GS) facilitates the rapid selection of superior genotypes and accelerates the breeding cycle. In this review, we discuss the history, principles, and basis of GS and genomic-enabled prediction (GP) as well as the genetics and statistical complexities of GP models, including genomic genotype×environment (G×E) interactions. We also examine the accuracy of GP models and methods for two cereal crops and two legume crops based on random cross-validation. GS applied to maize breeding has shown tangible genetic gains. Based on GP results, we speculate how GS in germplasm enhancement (i.e., prebreeding) programs could accelerate the flow of genes from gene bank accessions to elite lines. Recent advances in hyperspectral image technology could be combined with GS and pedigree-assisted breeding. Copyright © 2017 Elsevier Ltd. All rights reserved.
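
    At the core of many GP models is ridge-type shrinkage of marker effects (the RR-BLUP formulation). A minimal sketch, with the shrinkage parameter lam standing in for the usual variance-component ratio; coding and names are illustrative:

        import numpy as np

        def rrblup_effects(M, y, lam=1.0):
            """Ridge-regression form of genomic prediction.

            M : n x p marker matrix (e.g., coded -1/0/1), y : phenotypes.
            Returns shrunken marker effects; candidates are then ranked by
            their predicted breeding values M_new @ effects.
            """
            p = M.shape[1]
            return np.linalg.solve(M.T @ M + lam * np.eye(p), M.T @ y)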

  15. Selective electromembrane extraction based on isoelectric point

    DEFF Research Database (Denmark)

    Huang, Chuixiu; Gjelstad, Astrid; Pedersen-Bjergaard, Stig

    2015-01-01

    For the first time, selective isolation of a target peptide based on the isoelectric point (pI) was achieved using a two-step electromembrane extraction (EME) approach with a thin flat membrane-based EME device. In this approach, step #1 was an extraction process, where both the target peptide angiotensin II antipeptide (AT2 AP, pI=5.13) and the matrix peptides (pI>5.13) angiotensin II (AT2), neurotensin (NT), angiotensin I (AT1) and leu-enkephalin (L-Enke) were all extracted as net positive species from the sample (pH 3.50), through a supported liquid membrane (SLM) of 1-nonanol diluted with 2... In step #2, a pH above the pI value (pH 5.13) was found to be optimal. Under the optimal conditions, 73% of AT2 AP (RSD 13%) and 48% of L-Enke (RSD 5%) were found in the solution after this two-step EME process, whereas the other three positively charged peptides were not detected. The observations above indicated...

  16. MIS-based sensors with hydrogen selectivity

    Science.gov (United States)

    Li, Dongmei [Boulder, CO]; Medlin, J. William [Boulder, CO]; McDaniel, Anthony H. [Livermore, CA]; Bastasz, Robert J. [Livermore, CA]

    2008-03-11

    The invention provides hydrogen-selective metal-insulator-semiconductor sensors which include a layer of hydrogen-selective material. The hydrogen-selective material can be a polyimide layer having a thickness between 200 and 800 nm. Suitable polyimide materials include reaction products of benzophenone tetracarboxylic dianhydride, 4,4-oxydianiline, m-phenylene diamine, and other structurally similar materials.

  17. Psyche Mission: Scientific Models and Instrument Selection

    Science.gov (United States)

    Polanskey, C. A.; Elkins-Tanton, L. T.; Bell, J. F., III; Lawrence, D. J.; Marchi, S.; Park, R. S.; Russell, C. T.; Weiss, B. P.

    2017-12-01

    NASA has chosen to explore (16) Psyche with their 14th Discovery-class mission. Psyche is a 226-km diameter metallic asteroid hypothesized to be the exposed core of a planetesimal that was stripped of its rocky mantle by multiple hit and run collisions in the early solar system. The spacecraft launch is planned for 2022 with arrival at the asteroid in 2026 for 21 months of operations. The Psyche investigation has five primary scientific objectives: A. Determine whether Psyche is a core, or if it is unmelted material. B. Determine the relative ages of regions of Psyche's surface. C. Determine whether small metal bodies incorporate the same light elements as are expected in the Earth's high-pressure core. D. Determine whether Psyche was formed under conditions more oxidizing or more reducing than Earth's core. E. Characterize Psyche's topography. The mission's task was to select the appropriate instruments to meet these objectives. However, exploring a metal world, rather than one made of ice, rock, or gas, requires development of new scientific models for Psyche to support the selection of the appropriate instruments for the payload. If Psyche is indeed a planetary core, we expect that it should have a detectable magnetic field. However, the strength of the magnetic field can vary by orders of magnitude depending on the formational history of Psyche. The implications of both the extreme low-end and the high-end predictions impact the magnetometer and mission design. For the imaging experiment, what can the team expect for the morphology of a heavily impacted metal body? Efforts are underway to further investigate the differences in crater morphology between high velocity impacts into metal and rock to be prepared to interpret the images of Psyche when they are returned. Finally, elemental composition measurements at Psyche using nuclear spectroscopy encompass a new and unexplored phase space of gamma-ray and neutron measurements. We will present some end

  18. Model selection for the extraction of movement primitives

    Directory of Open Access Journals (Sweden)

    Dominik M Endres

    2013-12-01

    Full Text Available A wide range of blind source separation methods have been used in motor control research for the extraction of movement primitives from EMG and kinematic data. Popular examples are principal component analysis (PCA), independent component analysis (ICA), anechoic demixing, and the time-varying synergy model. However, choosing the parameters of these models, or indeed choosing the type of model, is often done in a heuristic fashion, driven by result expectations as much as by the data. We propose an objective criterion which allows selection of the model type, the number of primitives and the temporal smoothness prior. Our approach is based on a Laplace approximation to the posterior distribution of the parameters of a given blind source separation model, re-formulated as a Bayesian generative model. We first validate our criterion on ground truth data, showing that it performs at least as well as traditional model selection criteria, the Bayesian information criterion (BIC) and the Akaike information criterion (AIC). Then, we analyze human gait data, finding that an anechoic mixture model with a temporal smoothness constraint on the sources can best account for the data.
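
    For reference, the two classical criteria mentioned, in their standard form (with \hat{L} the maximized likelihood, k the number of free parameters and n the number of observations):

        \mathrm{AIC} = 2k - 2\ln \hat{L},
        \qquad
        \mathrm{BIC} = k \ln n - 2\ln \hat{L}

    Lower values are better for both; the Laplace-approximation criterion proposed above plays the same role but approximates the full posterior rather than penalising the likelihood with a fixed term.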

  19. Modeling the selective catalytic reduction of NOx by ammonia over a Vanadia-based catalyst from heavy duty diesel exhaust gases

    International Nuclear Information System (INIS)

    Yun, Byoung Kyu; Kim, Man Young

    2013-01-01

    A numerical simulation for prediction of NOx conversion over a commercial V2O5 catalyst with NH3 as a reductant was performed for heavy duty diesel engine applications. The chemical behaviors of the SCR reactor are described by using global NOx kinetics including standard, fast, and NH3 oxidation reactions, with the Langmuir–Hinshelwood (LH) mechanism incorporated into the commercial Boost code. After introducing mathematical models for the SCR reaction with specific reaction parameters, the effects of various parameters such as space velocity and the O2, H2O, NO2, and NH3 concentrations on the NOx conversion are thoroughly studied and validated by comparing with the experimental data available in the literature. It is found that NOx conversion increases with decreasing space velocity, H2O concentration, and NH3/NOx ratio, and with increasing O2 concentration and NO2/NOx ratio. The study shows that the present approach is not only flexible in treating the performance of the commercial V2O5-based SCR catalyst, but also accurate and efficient for the prediction of NOx conversion in diesel exhaust environments. - Highlights: ► To find the reaction parameters for the LH mechanism over a commercial V2O5 catalyst. ► To investigate the effects of various parameters on the SCR NOx conversion. ► To present benchmark solutions on SCR behavior in diesel exhaust environments.

  20. Variable selection based cotton bollworm odor spectroscopic detection

    Science.gov (United States)

    Lü, Chengxu; Gai, Shasha; Luo, Min; Zhao, Bo

    2016-10-01

    Aiming at rapid automatic pest detection for efficient, targeted pesticide application, and to avoid the problem of the reflectance spectral signal being covered and attenuated by the solid plant, the possibility of near infrared spectroscopy (NIRS) detection of cotton bollworm odor is studied. Three cotton bollworm odor samples and 3 blank air gas samples were prepared. Different concentrations of cotton bollworm odor were prepared by mixing the above gas samples, resulting in a calibration group of 62 samples and a validation group of 31 samples. The spectral collection system includes a light source, optical fiber, sample chamber, and spectrometer. Spectra were pretreated by baseline correction, modeled with partial least squares (PLS), and optimized by a genetic algorithm (GA) and competitive adaptive reweighted sampling (CARS). Minor count differences are found among spectra of different cotton bollworm odor concentrations. A PLS model on all variables was built, giving an RMSEV of 14 and an RV2 of 0.89; its theoretical basis is that insects volatilize specific odors, including pheromones and allelochemicals, which are used for intra-specific and inter-specific communication and can be detected by NIR spectroscopy. 28 sensitive variables are selected by GA, giving a model performance of RMSEV of 14 and RV2 of 0.90. Comparably, 8 sensitive variables are selected by CARS, giving a model performance of RMSEV of 13 and RV2 of 0.92. The CARS model employs only 1.5% of the variables while presenting a smaller error than the all-variable model. The odor gas based NIR technique shows potential for cotton bollworm detection.
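
    A simplified stand-in for the variable selection step: ranking wavelengths by the magnitude of their PLS regression coefficients and keeping the strongest few. The real CARS procedure reweights and resamples competitively, and GA evolves subsets; this sketch only illustrates the shared goal of isolating sensitive variables.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def select_sensitive_variables(X, y, n_keep=8, n_components=5):
            """Rank spectral variables by |PLS coefficient| and keep the
            n_keep strongest (illustrative surrogate for GA/CARS)."""
            pls = PLSRegression(n_components=n_components).fit(X, y)
            coefs = np.abs(pls.coef_).ravel()
            return np.argsort(-coefs)[:n_keep]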

  1. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this: if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  2. Effect of Item Response Theory (IRT) Model Selection on Testlet-Based Test Equating. Research Report. ETS RR-14-19

    Science.gov (United States)

    Cao, Yi; Lu, Ru; Tao, Wei

    2014-01-01

    The local item independence assumption underlying traditional item response theory (IRT) models is often not met for tests composed of testlets. There are 3 major approaches to addressing this issue: (a) ignore the violation and use a dichotomous IRT model (e.g., the 2-parameter logistic [2PL] model), (b) combine the interdependent items to form a…

  3. Linear feature selection in texture analysis - A PLS based method

    DEFF Research Database (Denmark)

    Marques, Joselene; Igel, Christian; Lillholm, Martin

    2013-01-01

    We present a texture analysis methodology that combined uncommitted machine-learning techniques and partial least squares (PLS) in a fully automatic framework. Our approach introduces a robust PLS-based dimensionality reduction (DR) step to specifically address outliers and high-dimensional feature......, which first applied a PLS regression to rank the features and then defined the best number of features to retain in the model by an iterative learning phase. The outliers in the dataset, which could inflate the number of selected features, were eliminated by a pre-processing step. To cope...... and considering all CV groups, the methods selected 36% of the original features available. The diagnosis evaluation reached a generalization area-under-the-ROC curve of 0.92, which was higher than established cartilage-based markers known to relate to OA diagnosis.

  4. Individual-based simulation of sexual selection: A quantitative genetic approach

    NARCIS (Netherlands)

    van Dijk, D.; Sloot, P.M.A.; Tay, J.C.; Schut, M.C.

    2010-01-01

    Sexual selection has been mathematically modeled using quantitative genetics as well as population genetics. Two-locus simulation models have been used to study the evolution of male display and female preference. We present an individual-based simulation model of sexual selection in a quantitative

  5. Estimation and variable selection for generalized additive partial linear models

    KAUST Repository

    Wang, Li

    2011-08-01

    We study generalized additive partial linear models, proposing the use of polynomial spline smoothing for estimation of nonparametric functions, and deriving quasi-likelihood based estimators for the linear parameters. We establish asymptotic normality for the estimators of the parametric components. The procedure avoids solving large systems of equations as in kernel-based procedures and thus results in gains in computational simplicity. We further develop a class of variable selection procedures for the linear parameters by employing a nonconcave penalized quasi-likelihood, which is shown to have an asymptotic oracle property. Monte Carlo simulations and an empirical example are presented for illustration. © Institute of Mathematical Statistics, 2011.

  6. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...... among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though...

  7. A methodology to design heuristics for model selection based on the characteristics of data: Application to investigate when the Negative Binomial Lindley (NB-L) is preferred over the Negative Binomial (NB).

    Science.gov (United States)

    Shirazi, Mohammadali; Dhavala, Soma Sekhar; Lord, Dominique; Geedipally, Srinivas Reddy

    2017-10-01

    Safety analysts usually use post-modeling methods, such as Goodness-of-Fit statistics or the Likelihood Ratio Test, to decide between two or more competing distributions or models. Such metrics require all competing distributions to be fitted to the data before any comparison can be made. Given the continuous growth in newly introduced statistical distributions, choosing the best one using such post-modeling methods is not a trivial task, in addition to all the theoretical or numerical issues the analyst may face during the analysis. Furthermore, and most importantly, these measures or tests do not provide any intuition about why a specific distribution (or model) is preferred over another (Goodness-of-Logic). This paper addresses these issues by proposing a methodology to design heuristics for model selection based on the characteristics of the data, in terms of descriptive summary statistics, before fitting the models. The proposed methodology employs two analytic tools: (1) Monte-Carlo simulations and (2) machine learning classifiers, to design easy heuristics to predict the label of the 'most-likely-true' distribution for analyzing data. The proposed methodology was applied to investigate when the recently introduced Negative Binomial Lindley (NB-L) distribution is preferred over the Negative Binomial (NB) distribution. Heuristics were designed to select the 'most-likely-true' distribution between these two distributions, given a set of prescribed summary statistics of the data. The proposed heuristics were successfully compared against classical tests for several real or observed datasets. Not only are they easy to use and free of any post-modeling inputs, but with these heuristics the analyst can also attain useful information about why the NB-L is preferred over the NB - or vice versa - when modeling data. Copyright © 2017 Elsevier Ltd. All rights reserved.
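
    A minimal sketch of the Monte-Carlo-plus-classifier pipeline: simulate labeled samples from two candidate count distributions, reduce each sample to summary statistics, and train a shallow decision tree whose splits become the heuristic. Because simulating the NB-Lindley is more involved, a lognormal-mixed NB stands in for the heavier-tailed alternative; all settings are illustrative.

```python
# Designing a model-selection heuristic from summary statistics,
# in the spirit of the paper's Monte-Carlo + classifier pipeline.
import numpy as np
from scipy import stats
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)

def summary(x):
    # descriptive statistics available before any model fitting
    return [x.mean(), x.var(), stats.skew(x), np.mean(x == 0)]

X, y = [], []
for _ in range(2000):
    n, p = rng.integers(1, 10), rng.uniform(0.2, 0.8)
    nb = rng.negative_binomial(n, p, size=200)
    # lognormal-mixed NB as a stand-in for a heavier-tailed NB-L-like model
    heavy = np.round(rng.negative_binomial(n, p, 200) * rng.lognormal(0, 0.8, 200))
    X += [summary(nb), summary(heavy)]
    y += [0, 1]                                 # 0 = NB, 1 = heavy-tailed

tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["mean", "var", "skew", "p_zero"]))
# the printed splits read directly as an "if skew > ... then ..." heuristic
```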

  8. A new Russell model for selecting suppliers

    NARCIS (Netherlands)

    Azadi, Majid; Shabani, Amir; Farzipoor Saen, Reza

    2014-01-01

    Recently, supply chain management (SCM) has been considered by many researchers. Supplier evaluation and selection plays a significant role in establishing an effective SCM. One of the techniques that can be used for selecting suppliers is data envelopment analysis (DEA). In some situations, to

  9. Selective experimental review of the Standard Model

    International Nuclear Information System (INIS)

    Bloom, E.D.

    1985-02-01

    Before discussing experimental comparisons with the Standard Model (S-M), it is probably wise to define more completely what is commonly meant by this popular term. This model is a gauge theory of SU(3)_f x SU(2)_L x U(1) with 18 parameters. The parameters are α_s, α_qed, θ_W, M_W (M_Z = M_W/cos θ_W, and thus is not an independent parameter), M_Higgs; the lepton masses M_e, M_μ, M_τ; the quark masses M_d, M_s, M_b, and M_u, M_c, M_t; and finally, the quark mixing angles θ_1, θ_2, θ_3, and the CP violating phase δ. The latter four parameters appear in the quark mixing matrix for the Kobayashi-Maskawa and Maiani forms. Clearly, the present S-M covers an enormous range of physics topics, and the author can only lightly cover a few such topics in this report. The measurement of R_hadron is fundamental as a test of the running coupling constant α_s in QCD. The author will discuss a selection of recent precision measurements of R_hadron, as well as some other techniques for measuring α_s. QCD also requires the self interaction of gluons. The search for the three gluon vertex may be practically realized in the clear identification of gluonic mesons. The author will present a limited review of recent progress in the attempt to untangle such mesons from the plethora of q anti-q states of the same quantum numbers which exist in the same mass range. The electroweak interactions provide some of the strongest evidence supporting the S-M that exists. Given the recent progress in this subfield, and particularly with the discovery of the W and Z bosons at CERN, many recent reviews obviate the need for further discussion in this report. In attempting to validate a theory, one frequently searches for new phenomena which would clearly invalidate it. 49 references, 28 figures

  10. Graph-based unsupervised feature selection and multiview ...

    Indian Academy of Sciences (India)

    Journal of Biosciences, Volume 40, Issue 4. Graph-based unsupervised feature selection and multiview clustering for microarray data. Tripti Swarnkar, Pabitra Mitra ... Keywords: biological functional enrichment; clustering; explorative data analysis; feature selection; gene selection; graph-based learning.

  11. Graph-based unsupervised feature selection and multiview ...

    Indian Academy of Sciences (India)

    2015-09-28

    Sep 28, 2015 ... Biological functional enrichment; clustering; explorative data analysis; feature selection; gene selection; graph-based learning. Published online: 28 September ...... RFGS: random forest gene selection; SVST: Support vector sampling technique; SOM: Self-organizing map; GUFS: proposed graph-based.

  12. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them, e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometric accuracy, we used a laser scanner as well as DSLR and smart phone photography to derive reference values for verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models can be derived by applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  13. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  14. Assessing the environmental fate of selected polybrominated diphenyl ethers in the region surrounding the Zhuoshui River of Taiwan based on an Equilibrium Constant fugacity model

    Science.gov (United States)

    O'Driscoll, Kieran; Doherty, Rory; Robinson, Jill; Chiang, Wen-Son; Kao Kao, Ruey-Chy

    2015-04-01

    Polybrominated diphenyl ethers (PBDEs) are a group of flame retardants that have been in use since the 1970s. They are included in the list of hazardous substances known as persistent organic pollutants (POPs) because they are extremely hazardous to the environment and human health. PBDEs have been extensively used in industry and manufacturing in Taiwan, thus its citizens are at high risk of exposure to these chemicals. An assessment of the environmental fate of these compounds in the Zhuoshui river and Changhua County regions of western Taiwan, and also including the adjacent area of the Taiwan Strait, was conducted for three high risk congeners, BDE-47, -99 and -209, to obtain information regarding the partitioning, advection, transfer and long range transport potential of the PBDEs in order to identify the level of risk posed by the pollutants in this region. The results indicate that large amounts of PBDEs presently reside in all model compartments - air, soil, water, and sediment - with particularly high levels found in air and especially in sediment. The high levels found in sediment, particularly for BDE-209, are significant, since there is the threat of these pollutants entering the food chain, either directly through benthic feeding, or through resuspension and subsequent feeding in the pelagic region of the water column which is a distinct possibility in the strong currents found within the Taiwan Strait. Another important result is that a substantial portion of emissions leave the model domain directly through advection, particularly for BDE-47 (58%) and BDE-209 (75%), thus posing a risk to adjacent communities. Model results were generally in reasonable agreement with available measured concentrations. In air, model concentrations are in reasonably good agreement with available measured values. For both BDE-47 and -99, model concentrations are a factor of 2-3 higher and BDE-209 within the range of measured values. In soil, model results are somewhat

  15. Selecting, weeding, and weighting biased climate model ensembles

    Science.gov (United States)

    Jackson, C. S.; Picton, J.; Huerta, G.; Nosedal Sanchez, A.

    2012-12-01

    In the Bayesian formulation, the "log-likelihood" is a test statistic for selecting, weeding, or weighting climate model ensembles with observational data. This statistic has the potential to synthesize the physical and data constraints on quantities of interest. One of the thorny issues in formulating the log-likelihood is how one should account for biases. While in the past we have included a generic discrepancy term, not all biases affect predictions of quantities of interest. We make use of a 165-member ensemble of CAM3.1/slab ocean climate models with different parameter settings to think through the issues involved in predicting each model's sensitivity to greenhouse gas forcing given what can be observed from the base state. In particular we use multivariate empirical orthogonal functions to decompose the differences that exist among this ensemble to discover which fields and regions matter to the model's sensitivity. We find that the differences that matter are a small fraction of the total discrepancy. Moreover, weighting members of the ensemble using this knowledge does a relatively poor job of adjusting the ensemble mean toward the known answer. This points out the shortcomings of using weights to correct for biases in climate model ensembles created by a selection process that does not emphasize the priorities of the log-likelihood.
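
    A schematic of log-likelihood-based ensemble weighting, assuming each member is scored by a Gaussian misfit to pseudo-observations; a real application would add the discrepancy (bias) term discussed above. The observational error and the ensemble construction are invented for illustration.

```python
# Bayesian-style weighting of ensemble members from a misfit log-likelihood.
# Minimal sketch with pseudo-observations and a hypothetical 165-member ensemble.
import numpy as np

rng = np.random.default_rng(3)
obs = rng.standard_normal(50)                              # pseudo-observations
members = obs + rng.normal(0, rng.uniform(0.2, 2.0, (165, 1)), (165, 50))

sigma = 1.0                                                # assumed observational error
loglik = -0.5 * np.sum((members - obs) ** 2, axis=1) / sigma**2
w = np.exp(loglik - loglik.max())
w /= w.sum()                                               # normalized member weights
weighted_mean = w @ members                                # weighted ensemble mean
print("effective ensemble size:", round(1.0 / np.sum(w**2), 1))
# a tiny effective size signals that the weights concentrate on a few members
```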

  16. Parameter estimation and model selection in computational biology.

    Directory of Open Access Journals (Sweden)

    Gabriele Lillacci

    2010-03-01

    Full Text Available A central challenge in computational modeling of biological systems is the determination of the model parameters. Typically, only a fraction of the parameters (such as kinetic rate constants) are experimentally measured, while the rest are often fitted. The fitting process is usually based on experimental time course measurements of observables, which are used to assign parameter values that minimize some measure of the error between these measurements and the corresponding model prediction. The measurements, which can come from immunoblotting assays, fluorescent markers, etc., tend to be very noisy and taken at a limited number of time points. In this work we present a new approach to the problem of parameter selection of biological models. We show how one can use a dynamic recursive estimator, known as the extended Kalman filter, to arrive at estimates of the model parameters. The proposed method proceeds as follows. First, we use a variation of the Kalman filter that is particularly well suited to biological applications to obtain a first guess for the unknown parameters. Second, we employ an a posteriori identifiability test to check the reliability of the estimates. Finally, we solve an optimization problem to refine the first guess in case it should not be accurate enough. The final estimates are guaranteed to be statistically consistent with the measurements. Furthermore, we show how the same tools can be used to discriminate among alternate models of the same biological process. We demonstrate these ideas by applying our methods to two examples, namely a model of the heat shock response in E. coli, and a model of a synthetic gene regulation system. The methods presented are quite general and may be applied to a wide class of biological systems where noisy measurements are used for parameter estimation or model selection.
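
    The state-augmentation trick at the heart of this approach can be sketched in a few lines: append the unknown parameter to the state vector and run an extended Kalman filter on the augmented system. The example below estimates the growth rate of a logistic model from noisy observations; it is a minimal stand-in for the paper's heat-shock and gene-regulation examples, and all noise settings are assumed.

```python
# Joint state/parameter estimation with an extended Kalman filter:
# the unknown rate r is appended to the state and estimated from data.
import numpy as np

dt, K, r_true = 0.1, 10.0, 0.8
rng = np.random.default_rng(4)

# simulate noisy measurements of logistic growth
x, data = 0.5, []
for _ in range(100):
    x += dt * r_true * x * (1 - x / K)
    data.append(x + 0.1 * rng.standard_normal())

# EKF on the augmented state z = [x, r]
z = np.array([0.5, 0.3])                  # initial guess; r starts badly wrong
P = np.diag([0.1, 1.0])
Q = np.diag([1e-4, 1e-6])                 # process noise keeps r adaptable
R = np.array([[0.01]])
H = np.array([[1.0, 0.0]])                # only x is observed

for yk in data:
    xk, rk = z
    z = np.array([xk + dt * rk * xk * (1 - xk / K), rk])         # predict
    F = np.array([[1 + dt * rk * (1 - 2 * xk / K), dt * xk * (1 - xk / K)],
                  [0.0, 1.0]])                                    # Jacobian
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                                           # update
    Kg = P @ H.T @ np.linalg.inv(S)
    z = z + (Kg @ (yk - H @ z)).ravel()
    P = (np.eye(2) - Kg @ H) @ P

print(f"estimated r = {z[1]:.3f} (true {r_true})")
```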

  17. Selecting global climate models for regional climate change studies.

    Science.gov (United States)

    Pierce, David W; Barnett, Tim P; Santer, Benjamin D; Gleckler, Peter J

    2009-05-26

    Regional or local climate change modeling studies currently require starting with a global climate model, then downscaling to the region of interest. How should global models be chosen for such studies, and what effect do such choices have? This question is addressed in the context of a regional climate detection and attribution (D&A) study of January-February-March (JFM) temperature over the western U.S. Models are often selected for a regional D&A analysis based on the quality of the simulated regional climate. Accordingly, 42 performance metrics based on seasonal temperature and precipitation, the El Nino/Southern Oscillation (ENSO), and the Pacific Decadal Oscillation are constructed and applied to 21 global models. However, no strong relationship is found between the score of the models on the metrics and results of the D&A analysis. Instead, the importance of having ensembles of runs with enough realizations to reduce the effects of natural internal climate variability is emphasized. Also, the superiority of the multimodel ensemble average (MM) to any 1 individual model, already found in global studies examining the mean climate, is true in this regional study that includes measures of variability as well. Evidence is shown that this superiority is largely caused by the cancellation of offsetting errors in the individual global models. Results with both the MM and models picked randomly confirm the original D&A results of anthropogenically forced JFM temperature changes in the western U.S. Future projections of temperature do not depend on model performance until the 2080s, after which the better performing models show warmer temperatures.

  18. Pareto-Optimal Model Selection via SPRINT-Race.

    Science.gov (United States)

    Zhang, Tiantian; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2018-02-01

    In machine learning, the notion of multi-objective model selection (MOMS) refers to the problem of identifying the set of Pareto-optimal models that optimize by compromising more than one predefined objectives simultaneously. This paper introduces SPRINT-Race, the first multi-objective racing algorithm in a fixed-confidence setting, which is based on the sequential probability ratio with indifference zone test. SPRINT-Race addresses the problem of MOMS with multiple stochastic optimization objectives in the proper Pareto-optimality sense. In SPRINT-Race, a pairwise dominance or non-dominance relationship is statistically inferred via a non-parametric, ternary-decision, dual-sequential probability ratio test. The overall probability of falsely eliminating any Pareto-optimal models or mistakenly returning any clearly dominated models is strictly controlled by a sequential Holm's step-down family-wise error rate control method. As a fixed-confidence model selection algorithm, the objective of SPRINT-Race is to minimize the computational effort required to achieve a prescribed confidence level about the quality of the returned models. The performance of SPRINT-Race is first examined via an artificially constructed MOMS problem with known ground truth. Subsequently, SPRINT-Race is applied on two real-world applications: 1) hybrid recommender system design and 2) multi-criteria stock selection. The experimental results verify that SPRINT-Race is an effective and efficient tool for such MOMS problems. The code of SPRINT-Race is available at https://github.com/watera427/SPRINT-Race.
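
    The building block of SPRINT-Race, a sequential probability ratio test with an indifference zone, can be sketched as below: accumulate the log-likelihood ratio of pairwise wins until a decision boundary is crossed. The win probability, indifference-zone width, and error rates are illustrative, and the full ternary-decision, dual-SPRT machinery is not reproduced.

```python
# A sequential probability ratio test with an indifference zone: does
# model A beat model B with probability above 0.5+delta, or below 0.5-delta?
import numpy as np

rng = np.random.default_rng(5)
p_true, delta, alpha, beta = 0.62, 0.05, 0.05, 0.05   # all illustrative
p0, p1 = 0.5 - delta, 0.5 + delta
upper, lower = np.log((1 - beta) / alpha), np.log(beta / (1 - alpha))

llr, n = 0.0, 0
while lower < llr < upper:
    win = rng.random() < p_true            # A beats B on one stochastic trial
    llr += np.log(p1 / p0) if win else np.log((1 - p1) / (1 - p0))
    n += 1

verdict = "A dominates" if llr >= upper else "A dominated or non-dominance"
print(f"decision after {n} trials: {verdict}")
```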

  19. Structuring AHP-based maintenance policy selection

    NARCIS (Netherlands)

    Goossens, Adriaan; Basten, Robertus Johannes Ida; Hummel, J. Marjan; van der Wegen, Leonardus L.M.

    2015-01-01

    We aim to structure the maintenance policy selection process for ships, using the Analytic Hierarchy Process (AHP). Maintenance is an important contributor to reach the intended life-time of capital technical assets, and it is gaining increasing interest and relevance. A maintenance policy is a

  20. Gender based disruptive selection maintains body size ...

    Indian Academy of Sciences (India)

    2014-07-04

    Jul 4, 2014 ... GL1-3 populations by selecting for faster pre-adult development and reproduction at late age. The GS fly ... low food resulted in emergence of phenotypically small flies. The emerging flies were sorted according ...

  1. Using PSO-Based Hierarchical Feature Selection Algorithm

    Directory of Open Access Journals (Sweden)

    Zhiwei Ji

    2014-01-01

    Full Text Available Hepatocellular carcinoma (HCC) is one of the most common malignant tumors. Clinical symptoms attributable to HCC are usually absent, so the best therapeutic opportunities are often missed. Traditional Chinese Medicine (TCM) plays an active role in the diagnosis and treatment of HCC. In this paper, we propose a particle swarm optimization-based hierarchical feature selection (PSOHFS) model to infer potential syndromes for the diagnosis of HCC. First, a hierarchical feature representation is developed as a three-layer tree: the clinical symptoms and the patient's positive score are the leaf nodes and the root of the tree, respectively, while each syndrome feature on the middle layer is extracted from a group of symptoms. Second, an improved PSO-based algorithm is applied in the reduced feature space to search for an optimal syndrome subset. Based on the result of feature selection, the causal relationships of symptoms and syndromes are inferred via Bayesian networks. In our experiment, 147 symptoms were aggregated into 27 groups and 27 syndrome features were extracted. The proposed approach discovered 24 syndromes which obviously improved the diagnosis accuracy. Finally, the Bayesian approach was applied to represent the causal relationships at both the symptom and syndrome levels. The results show that our computational model can facilitate the clinical diagnosis of HCC.

  2. Covariance-Based Measurement Selection Criterion for Gaussian-Based Algorithms

    Directory of Open Access Journals (Sweden)

    Fernando A. Auat Cheein

    2013-01-01

    Full Text Available Process modeling by means of Gaussian-based algorithms often suffers from redundant information which usually increases the estimation computational complexity without significantly improving the estimation performance. In this article, a non-arbitrary measurement selection criterion for Gaussian-based algorithms is proposed. The measurement selection criterion is based on the determination of the most significant measurement from both an estimation convergence perspective and the covariance matrix associated with the measurement. The selection criterion is independent of the nature of the measured variable. This criterion is used in conjunction with three Gaussian-based algorithms: the EIF (Extended Information Filter), the EKF (Extended Kalman Filter) and the UKF (Unscented Kalman Filter). Nevertheless, the measurement selection criterion shown herein can also be applied to other Gaussian-based algorithms. Although this work is focused on environment modeling, the results shown herein can be applied to other Gaussian-based algorithm implementations. Mathematical descriptions and implementation results that validate the proposal are also included in this work.

  3. Quality Quandaries- Time Series Model Selection and Parsimony

    DEFF Research Database (Denmark)

    Bisgaard, Søren; Kulahci, Murat

    2009-01-01

    Some of the issues involved in selecting adequate models for time series data are discussed using an example concerning the number of users of an Internet server. The process of selecting an appropriate model is subjective and requires experience and judgment. The authors believe an important...... consideration in model selection should be parameter parsimony. They favor the use of parsimonious mixed ARMA models, noting that research has shown that a model building strategy that considers only autoregressive representations will lead to non-parsimonious models and to loss of forecasting accuracy....
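
    In practice this strategy amounts to fitting a small grid of mixed ARMA(p, q) models and comparing penalized-likelihood criteria, which already encode parsimony. Below is a minimal sketch with statsmodels on a simulated ARMA(1,1) series; the order grid and simulation settings are assumptions.

```python
# Grid search over parsimonious ARMA(p, q) orders, scored by AIC and BIC.
import numpy as np
from statsmodels.tsa.arima_process import arma_generate_sample
from statsmodels.tsa.arima.model import ARIMA

np.random.seed(6)
# ar/ma polynomials include the leading 1; this simulates ARMA(1,1)
y = arma_generate_sample(ar=[1, -0.7], ma=[1, 0.4], nsample=300)

best = None
for p in range(3):
    for q in range(3):
        fit = ARIMA(y, order=(p, 0, q)).fit()
        print(f"ARMA({p},{q}): AIC={fit.aic:.1f} BIC={fit.bic:.1f}")
        if best is None or fit.bic < best[0]:
            best = (fit.bic, p, q)
print("BIC-selected order:", best[1:])   # BIC's penalty favors parsimony
```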

  4. METHODS OF SELECTING THE EFFECTIVE MODELS OF BUILDINGS REPROFILING PROJECTS

    Directory of Open Access Journals (Sweden)

    Александр Иванович МЕНЕЙЛЮК

    2016-02-01

    Full Text Available The article highlights the important task of project management in the reprofiling of buildings. In construction project management it is expedient to pay attention to selecting effective engineering solutions that reduce duration and cost. This article presents a methodology for selecting efficient organizational and technical solutions for the reconstruction and reprofiling of buildings. The method is based on compiling project variants in the program Microsoft Project and on experimental statistical analysis using the program COMPEX. Introducing this technique into the reprofiling of buildings allows efficient project models to be chosen, depending on the given constraints. This technique can also be used for various other construction projects.

  5. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects, reducing performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our model not only supports selection of the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels, improving the gaps in each dimension and criterion based on the influential network relation map.

  6. Proposition of a multicriteria model to select logistics services providers

    Directory of Open Access Journals (Sweden)

    Miriam Catarina Soares Aharonovitz

    2014-06-01

    Full Text Available This study aims to propose a multicriteria model for selecting logistics service providers through the development of a decision tree. The methodology consists of a survey, which resulted in a sample of 181 responses. The sample was analyzed using statistical methods, among them descriptive statistics, multivariate analysis, analysis of variance, and parametric tests for comparing means. Based on these results, it was possible to obtain the decision tree and information to support the multicriteria analysis. The AHP (Analytic Hierarchy Process) was applied to determine the influence of the data and thus ensure better consistency in the analysis. The decision tree categorizes the criteria according to the decision levels (strategic, tactical and operational). Furthermore, it allows a generic evaluation of the importance of each criterion in the supplier selection process from the point of view of logistics services contractors.
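
    For reference, AHP weights are commonly computed from the principal eigenvector of the pairwise-comparison matrix, with Saaty's consistency ratio as a sanity check. The sketch below uses a hypothetical 3x3 matrix, not the judgments collected in the paper's survey.

```python
# Criterion weights from an AHP pairwise-comparison matrix via the
# principal eigenvector, plus Saaty's consistency ratio.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])          # hypothetical pairwise comparisons

vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)
w = np.abs(vecs[:, k].real)
w /= w.sum()                              # priority weights

n = A.shape[0]
ci = (vals.real[k] - n) / (n - 1)         # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]       # Saaty's random index
print("weights:", w.round(3), "CR:", round(ci / ri, 3))  # CR < 0.1 is acceptable
```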

  7. Hierarchical models in ecology: confidence intervals, hypothesis testing, and model selection using data cloning.

    Science.gov (United States)

    Ponciano, José Miguel; Taper, Mark L; Dennis, Brian; Lele, Subhash R

    2009-02-01

    Hierarchical statistical models are increasingly being used to describe complex ecological processes. The data cloning (DC) method is a new general technique that uses Markov chain Monte Carlo (MCMC) algorithms to compute maximum likelihood (ML) estimates along with their asymptotic variance estimates for hierarchical models. Despite its generality, the method has two inferential limitations. First, it only provides Wald-type confidence intervals, known to be inaccurate in small samples. Second, it only yields ML parameter estimates, but not the maximized likelihood values used for profile likelihood intervals, likelihood ratio hypothesis tests, and information-theoretic model selection. Here we describe how to overcome these inferential limitations with a computationally efficient method for calculating likelihood ratios via data cloning. The ability to calculate likelihood ratios allows one to do hypothesis tests, construct accurate confidence intervals and undertake information-based model selection with hierarchical models in a frequentist context. To demonstrate the use of these tools with complex ecological models, we reanalyze part of Gause's classic Paramecium data with state-space population models containing both environmental noise and sampling error. The analysis results include improved confidence intervals for parameters, a hypothesis test of laboratory replication, and a comparison of the Beverton-Holt and the Ricker growth forms based on a model selection index.

  8. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  9. Probabilistic wind power forecasting with online model selection and warped gaussian process

    International Nuclear Information System (INIS)

    Kou, Peng; Liang, Deliang; Gao, Feng; Gao, Lin

    2014-01-01

    Highlights: • A new online ensemble model for the probabilistic wind power forecasting. • Quantifying the non-Gaussian uncertainties in wind power. • Online model selection that tracks the time-varying characteristic of wind generation. • Dynamically altering the input features. • Recursive update of base models. - Abstract: Based on the online model selection and the warped Gaussian process (WGP), this paper presents an ensemble model for the probabilistic wind power forecasting. This model provides the non-Gaussian predictive distributions, which quantify the non-Gaussian uncertainties associated with wind power. In order to follow the time-varying characteristics of wind generation, multiple time dependent base forecasting models and an online model selection strategy are established, thus adaptively selecting the most probable base model for each prediction. WGP is employed as the base model, which handles the non-Gaussian uncertainties in wind power series. Furthermore, a regime switch strategy is designed to modify the input feature set dynamically, thereby enhancing the adaptiveness of the model. In an online learning framework, the base models should also be time adaptive. To achieve this, a recursive algorithm is introduced, thus permitting the online updating of WGP base models. The proposed model has been tested on the actual data collected from both single and aggregated wind farms

  10. Bioeconomic model and selection indices in Aberdeen Angus cattle.

    Science.gov (United States)

    Campos, G S; Braccini Neto, J; Oaigen, R P; Cardoso, F F; Cobuci, J A; Kern, E L; Campos, L T; Bertoli, C D; McManus, C M

    2014-08-01

    A bioeconomic model was developed to calculate economic values for biological traits in full-cycle production systems and propose selection indices based on selection criteria used in the Brazilian Aberdeen Angus genetic breeding programme (PROMEBO). To assess the impact of changes in the performance of the traits on the profit of the production system, the initial values of the traits were increased by 1%. The economic values for number of calves weaned (NCW) and slaughter weight (SW) were, respectively, R$ 6.65 and R$ 1.43/cow/year. The selection index at weaning showed a 44.77% emphasis on body weight, 14.24% for conformation, 30.36% for early maturing and 10.63% for muscle development. The eighteen-month index showed emphasis of 77.61% for body weight, 4.99% for conformation, 11.09% for early maturing, 6.10% for muscle development and 0.22% for scrotal circumference. NCW showed the highest economic impact, and SW had an important positive effect on the economics of the production system. The selection index proposed can be used by breeders and should contribute to greater profitability. © 2014 Blackwell Verlag GmbH.

  11. Dose selection based on physiologically based pharmacokinetic (PBPK) approaches.

    Science.gov (United States)

    Jones, Hannah M; Mayawala, Kapil; Poulin, Patrick

    2013-04-01

    Physiologically based pharmacokinetic (PBPK) models are built using differential equations to describe the physiology/anatomy of different biological systems. Readily available in vitro and in vivo preclinical data can be incorporated into these models to not only estimate pharmacokinetic (PK) parameters and plasma concentration-time profiles, but also to gain mechanistic insight into compound properties. They provide a mechanistic framework to understand and extrapolate PK and dose across in vitro and in vivo systems and across different species, populations and disease states. Using small molecule and large molecule examples from the literature and our own company, we have shown how PBPK techniques can be utilised for human PK and dose prediction. Such approaches have the potential to increase efficiency, reduce the need for animal studies, replace clinical trials and increase PK understanding. Given the mechanistic nature of these models, the future use of PBPK modelling in drug discovery and development is promising, however some limitations need to be addressed to realise its application and utility more broadly.
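
    The flavor of a PBPK model can be conveyed with a deliberately tiny flow-limited system: a plasma compartment exchanging with one lumped tissue, plus hepatic clearance, written as ODEs. Real PBPK models span many organs and processes; every parameter below (flows, volumes, partition coefficient, clearance, dose) is hypothetical.

```python
# A tiny flow-limited "PBPK-style" model: plasma + one tissue compartment
# with clearance from plasma, solved as a pair of ODEs.
from scipy.integrate import solve_ivp
import numpy as np

Q, V_p, V_t, Kp, CL = 90.0, 3.0, 30.0, 4.0, 10.0   # L/h, L, L, -, L/h (made up)

def pbpk(t, y):
    C_p, C_t = y
    dC_p = (Q * (C_t / Kp - C_p) - CL * C_p) / V_p   # plasma balance
    dC_t = Q * (C_p - C_t / Kp) / V_t                # flow-limited tissue
    return [dC_p, dC_t]

sol = solve_ivp(pbpk, (0, 24), [100.0 / V_p, 0.0],  # 100 mg IV bolus into plasma
                t_eval=np.linspace(0, 24, 7))
print(np.round(sol.y[0], 2))   # plasma concentration-time profile, mg/L
```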

  12. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    based classifier combination is the simplest method, in which the final decision is the class for which a majority (more than N/2) of the participating classifiers vote, where N is the number of classifiers. 3.2b Decision templates: The decision-template method (Kuncheva et al 2001) first creates a DT for each class using ...

  13. Feature selection based classifier combination approach for ...

    Indian Academy of Sciences (India)

    3.2c Dempster-Shafer rule based classifier combination: The Dempster–Shafer (DS) method is based on evidence theory, proposed by Glenn Shafer as a way to represent cognitive knowledge. Here the probability is obtained using a belief function instead of the Bayesian distribution. Probability values are assigned to a ...

  14. Application of Bayesian Model Selection for Metal Yield Models using ALEGRA and Dakota.

    Energy Technology Data Exchange (ETDEWEB)

    Portone, Teresa; Niederhaus, John Henry; Sanchez, Jason James; Swiler, Laura Painton

    2018-02-01

    This report introduces the concepts of Bayesian model selection, which provides a systematic means of calibrating and selecting an optimal model to represent a phenomenon. This has many potential applications, including for comparing constitutive models. The ideas described herein are applied to a model selection problem between different yield models for hardened steel under extreme loading conditions.

  15. Neutron shielding calculations in a proton therapy facility based on Monte Carlo simulations and analytical models: Criterion for selecting the method of choice

    International Nuclear Information System (INIS)

    Titt, U.; Newhauser, W. D.

    2005-01-01

    Proton therapy facilities are shielded to limit the amount of secondary radiation to which patients, occupational workers and members of the general public are exposed. The most commonly applied shielding design methods for proton therapy facilities comprise semi-empirical and analytical methods to estimate the neutron dose equivalent. This study compares the results of these methods with a detailed simulation of a proton therapy facility by using the Monte Carlo technique. A comparison of neutron dose equivalent values predicted by the various methods reveals the superior accuracy of the Monte Carlo predictions in locations where the calculations converge. However, the reliability of the overall shielding design increases if simulation results for which solutions have not converged, e.g. owing to too few particle histories, can be excluded, and deterministic models are used at these locations. Criteria to accept or reject Monte Carlo calculations in such complex structures are not well understood. An optimum rejection criterion would allow all converging solutions of the Monte Carlo simulation to be taken into account and reject all solutions with uncertainties larger than the design safety margins. In this study, an optimum rejection criterion of 10% was found. The mean ratio was 26; 62% of all receptor locations showed a ratio between 0.9 and 10, and 92% were between 1 and 100. (authors)

  16. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    OpenAIRE

    Wu, Chung-Min; Hsieh, Ching-Lin; Chang, Kuei-Lun

    2013-01-01

    The sustainable supplier selection would be the vital part in the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify criteria. Considering the interdependence among the selection criteria, analytic network process (ANP) is then used to obtain their weights. To avoid calculation and additional pairwise compa...

  17. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    Science.gov (United States)

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

    Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R2) were applied to evaluate the accuracy of the different methods. The results show that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Since land use has changed, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation reduce the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
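
    The cross-validation comparison can be sketched generically: hold out each well in turn, predict it with each interpolator, and compare RMSEs. Below, inverse-distance weighting is compared with a Gaussian process regressor (a close cousin of simple kriging) on synthetic well data; this is not the ArcGIS workflow, and all coordinates and values are invented.

```python
# Leave-one-out cross-validation for choosing between spatial interpolators.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(7)
pts = rng.uniform(0, 10, (30, 2))                      # 30 synthetic wells
z = np.sin(pts[:, 0]) + 0.1 * pts[:, 1] + 0.05 * rng.standard_normal(30)

def idw(train_p, train_z, q, power=2):
    # inverse-distance weighted prediction at query point q
    d = np.linalg.norm(train_p - q, axis=1) + 1e-9
    w = d ** -power
    return w @ train_z / w.sum()

err_idw, err_gp = [], []
for i in range(len(z)):                                # leave-one-out
    m = np.arange(len(z)) != i
    err_idw.append(idw(pts[m], z[m], pts[i]) - z[i])
    gp = GaussianProcessRegressor(kernel=RBF(2.0), alpha=1e-3).fit(pts[m], z[m])
    err_gp.append(gp.predict(pts[i:i + 1])[0] - z[i])

print("RMSE IDW:", np.sqrt(np.mean(np.square(err_idw))).round(3),
      " RMSE GP:", np.sqrt(np.mean(np.square(err_gp))).round(3))
```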

  18. Modeling and Analysis of Supplier Selection Method Using ...

    African Journals Online (AJOL)

    However, in these parts of the world the application of tools and models to the supplier selection problem is yet to surface, and the banking and finance industry here in Ethiopia is no exception. Thus, the purpose of this research was to address the supplier selection problem through modeling and application of analytical hierarchy ...

  19. Dealing with selection bias in educational transition models

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads Meier

    2011-01-01

    This paper proposes the bivariate probit selection model (BPSM) as an alternative to the traditional Mare model for analyzing educational transitions. The BPSM accounts for selection on unobserved variables by allowing for unobserved variables which affect the probability of making educational tr...

  20. EEG feature selection method based on decision tree.

    Science.gov (United States)

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During the electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are non-linear, a generalized linear classifier, the support vector machine (SVM), was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
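
    A compact sketch of the described pipeline: PCA for feature extraction, decision-tree importances for automatic feature selection, and an SVM for classification. Random stand-in data replace the BCI Competition II set, and the component and feature counts are assumptions.

```python
# PCA feature extraction -> decision-tree feature ranking -> SVM classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
X = rng.standard_normal((200, 64))            # 200 trials x 64 "EEG" channels
y = (X[:, :5].sum(axis=1) > 0).astype(int)    # toy labels

feats = PCA(n_components=20).fit_transform(X)          # feature extraction
tree = DecisionTreeClassifier(random_state=0).fit(feats, y)
keep = np.argsort(-tree.feature_importances_)[:8]      # tree-ranked selection

score = cross_val_score(SVC(kernel="rbf"), feats[:, keep], y, cv=5).mean()
print(f"CV accuracy with {keep.size} selected features: {score:.3f}")
```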

  1. Development of Solar Drying Model for Selected Cambodian Fish Species

    Directory of Open Access Journals (Sweden)

    Anna Hubackova

    2014-01-01

    Full Text Available Solar drying was investigated as one of the perspective techniques for fish processing in Cambodia. The solar drying was compared to conventional drying in an electric oven. Five typical Cambodian fish species were selected for this study. Mean solar drying temperature and drying air relative humidity were 55.6°C and 19.9%, respectively. The overall solar dryer efficiency was 12.37%, which is typical for natural convection solar dryers. The average evaporative capacity of the solar dryer was 0.049 kg·h−1. Based on the coefficient of determination (R2), chi-square (χ2) test, and root-mean-square error (RMSE), the most suitable models describing natural convection solar drying kinetics were the Logarithmic model, the Diffusion approximate model, and the Two-term model for climbing perch and Nile tilapia, swamp eel and walking catfish, and Channa fish, respectively. In the case of electric oven drying, the Modified Page 1 model shows the best results for all investigated fish species except Channa fish, where the Two-term model is the best one. Sensory evaluation shows that the most preferable fish is climbing perch, followed by Nile tilapia and walking catfish. This study brings new knowledge about the drying kinetics of fresh water fish species in Cambodia and confirms solar drying as an acceptable technology for fish processing.
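
    Fitting such thin-layer models reduces to nonlinear least squares plus the three reported criteria. The sketch below fits the Logarithmic model MR = a*exp(-k*t) + c with scipy and computes R2, reduced chi-square, and RMSE; the moisture-ratio data points are made up for illustration.

```python
# Fit the Logarithmic thin-layer drying model and score it with the
# paper's criteria: R^2, reduced chi-square, RMSE.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([0, 1, 2, 3, 4, 5, 6], float)             # drying time, h (made up)
mr = np.array([1.0, 0.62, 0.40, 0.27, 0.19, 0.15, 0.13])  # moisture ratio (made up)

def logarithmic(t, a, k, c):
    return a * np.exp(-k * t) + c

popt, _ = curve_fit(logarithmic, t, mr, p0=(1.0, 0.5, 0.1))
pred = logarithmic(t, *popt)

ss_res = np.sum((mr - pred) ** 2)
r2 = 1 - ss_res / np.sum((mr - mr.mean()) ** 2)
chi2 = ss_res / (len(t) - len(popt))                    # reduced chi-square
rmse = np.sqrt(ss_res / len(t))
print(f"a,k,c = {np.round(popt, 3)}  R2={r2:.4f}  chi2={chi2:.5f}  RMSE={rmse:.4f}")
```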

  2. On Optimal Input Design and Model Selection for Communication Channels

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yanyan [ORNL]; Djouadi, Seddik M [ORNL]; Olama, Mohammed M [ORNL]

    2013-01-01

    In this paper, the optimal model (structure) selection and input design which minimize the worst case identification error for communication systems are provided. The problem is formulated using metric complexity theory in a Hilbert space setting. It is pointed out that model selection and input design can be handled independently. The Kolmogorov n-width is used to characterize the representation error introduced by model selection, while the Gel'fand and time n-widths are used to represent the inherent error introduced by input design. After the model is selected, an optimal input which minimizes the worst case identification error is shown to exist. In particular, it is proven that the optimal model for reducing the representation error is a Finite Impulse Response (FIR) model, and the optimal input is an impulse at the start of the observation interval. FIR models are widely popular in communication systems, for example in Orthogonal Frequency Division Multiplexing (OFDM) systems.

  3. Python Program to Select HII Region Models

    Science.gov (United States)

    Miller, Clare; Lamarche, Cody; Vishwas, Amit; Stacey, Gordon J.

    2016-01-01

    HII regions are areas of singly ionized hydrogen formed by the ionizing radiation of upper main sequence stars. The infrared fine-structure line emissions, particularly those of oxygen, nitrogen, and neon, can give important information about HII regions, including gas temperature and density, elemental abundances, and the effective temperature of the stars that form them. The processes involved in calculating this information from observational data are complex. Models, such as those provided in Rubin (1984) and those produced by Cloudy (Ferland et al., 2013), enable one to extract physical parameters from observational data. However, the multitude of search parameters can make sifting through models tedious. I digitized Rubin's models and wrote a Python program that is able to take observed line ratios and their uncertainties and find the Rubin or Cloudy model that best matches the observational data. By creating a Python script that is user friendly and able to quickly sort through models with a high level of accuracy, this work increases efficiency and reduces human error in matching HII region models to observational data.
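
    The core matching step can be sketched as a minimum-chi-square search over a model grid. The grid values, parameter labels, and observed line ratios below are invented placeholders, not digitized Rubin (1984) or Cloudy outputs.

```python
# Match observed line ratios to a grid of HII-region models by minimum chi-square.
import numpy as np

# columns might be e.g. [OIII]52/88 and [NIII]57/[NII]122 ratios (hypothetical)
model_grid = np.array([[1.2, 0.8], [2.5, 1.1], [4.0, 1.9], [6.3, 2.4]])
model_params = ["n_e=100", "n_e=300", "n_e=1000", "n_e=3000"]  # placeholder labels

observed = np.array([3.8, 2.0])               # observed line ratios
sigma = np.array([0.4, 0.3])                  # observational uncertainties

chi2 = np.sum(((model_grid - observed) / sigma) ** 2, axis=1)
best = int(np.argmin(chi2))
print(f"best model: {model_params[best]} (chi2={chi2[best]:.2f})")
```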

  4. Model and Variable Selection Procedures for Semiparametric Time Series Regression

    Directory of Open Access Journals (Sweden)

    Risa Kato

    2009-01-01

    Full Text Available Semiparametric regression models are very useful for time series analysis. They facilitate the detection of features resulting from external interventions. The complexity of semiparametric models poses new challenges for issues of nonparametric and parametric inference and model selection that frequently arise from time series data analysis. In this paper, we propose penalized least squares estimators which can simultaneously select significant variables and estimate unknown parameters. An innovative class of variable selection procedures is proposed to select significant variables and basis functions in a semiparametric model. The asymptotic normality of the resulting estimators is established. Information criteria for model selection are also proposed. We illustrate the effectiveness of the proposed procedures with numerical simulations.

  5. Selection Strategies for Social Influence in the Threshold Model

    Science.gov (United States)

    Karampourniotis, Panagiotis; Szymanski, Boleslaw; Korniss, Gyorgy

    The ubiquity of online social networks makes the study of social influence extremely significant for its applications to marketing, politics and security. Maximizing the spread of influence by strategically selecting nodes as initiators of a new opinion or trend is a challenging problem. We study the performance of various strategies for selection of large fractions of initiators on a classical social influence model, the Threshold model (TM). Under the TM, a node adopts a new opinion only when the fraction of its first neighbors possessing that opinion exceeds a pre-assigned threshold. The strategies we study are of two kinds: strategies based solely on the initial network structure (Degree-rank, Dominating Sets, PageRank etc.) and strategies that take into account the change of the states of the nodes during the evolution of the cascade, e.g. the greedy algorithm. We find that the performance of these strategies depends largely on both the network structure properties, e.g. the assortativity, and the distribution of the thresholds assigned to the nodes. We conclude that the optimal strategy needs to combine the network specifics and the model specific parameters to identify the most influential spreaders. Supported in part by ARL NS-CTA, ARO, and ONR.
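
    A minimal simulation of the threshold model with the simplest structure-based strategy, degree-rank initiator selection. The graph model, threshold, and initiator fraction are illustrative; the greedy and other adaptive strategies from the study are not implemented here.

```python
# Linear threshold cascade with degree-ranked initiator selection.
import networkx as nx
import numpy as np

G = nx.erdos_renyi_graph(500, 0.02, seed=9)   # illustrative network
theta = 0.4                                   # uniform adoption threshold

by_degree = sorted(G, key=G.degree, reverse=True)
active = set(by_degree[: int(0.15 * len(G))]) # top 15% of nodes as initiators

changed = True
while changed:                                # iterate updates to a fixed point
    changed = False
    for v in G:
        if v in active or G.degree(v) == 0:
            continue
        frac = sum(u in active for u in G[v]) / G.degree(v)
        if frac > theta:                      # TM rule: adopt above threshold
            active.add(v)
            changed = True

print(f"final cascade size: {len(active) / len(G):.2%}")
```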

  6. Continuum model for chiral induced spin selectivity in helical molecules

    Energy Technology Data Exchange (ETDEWEB)

    Medina, Ernesto [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France); Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); González-Arraga, Luis A. [IMDEA Nanoscience, Cantoblanco, 28049 Madrid (Spain); Finkelstein-Shapiro, Daniel; Mujica, Vladimiro [Department of Chemistry and Biochemistry, Arizona State University, Tempe, Arizona 85287 (United States); Berche, Bertrand [Centro de Física, Instituto Venezolano de Investigaciones Científicas, 21827, Caracas 1020 A (Venezuela, Bolivarian Republic of); Groupe de Physique Statistique, Institut Jean Lamour, Université de Lorraine, 54506 Vandoeuvre-les-Nancy Cedex (France)

    2015-05-21

    A minimal model is exactly solved for electron spin transport on a helix. Electron transport is assumed to be supported by well oriented p_z type orbitals on base molecules forming a staircase of definite chirality. In a tight binding interpretation, the spin-orbit coupling (SOC) opens up an effective π_z − π_z coupling via interbase p_{x,y} − p_z hopping, introducing spin coupled transport. The resulting continuum model spectrum shows two Kramers doublet transport channels with a gap proportional to the SOC. Each doubly degenerate channel satisfies time reversal symmetry; nevertheless, a bias chooses a transport direction and thus selects for spin orientation. The model predicts (i) which spin orientation is selected depending on chirality and bias, (ii) changes in spin preference as a function of input Fermi level and (iii) back-scattering suppression protected by the SO gap. We compute the spin current with a definite helicity and find it to be proportional to the torsion of the chiral structure and the non-adiabatic Aharonov-Anandan phase. To describe room temperature transport, we assume that the total transmission is the result of a product of coherent steps.

  7. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  8. Test-Based Admission to Selective Universities:

    DEFF Research Database (Denmark)

    Thomsen, Jens-Peter

    2016-01-01

    in the social gradient in the primary admission system, admitting students on the basis of their high school grade point average, and in the secondary admission system, admitting university students based on more qualitative assessments. I find that the secondary higher education admission system does...

  9. Model Selection and Risk Estimation with Applications to Nonlinear Ordinary Differential Equation Systems

    DEFF Research Database (Denmark)

    Mikkelsen, Frederik Vissing

    Broadly speaking, this thesis is devoted to model selection applied to ordinary differential equations and risk estimation under model selection. A model selection framework was developed for modelling time course data by ordinary differential equations. The framework is accompanied by the R software...... effective computational tools for estimating unknown structures in dynamical systems, such as gene regulatory networks, which may be used to predict downstream effects of interventions in the system. A recommended algorithm based on the computational tools is presented and thoroughly tested in various...... simulation studies and applications. The second part of the thesis also concerns model selection, but focuses on risk estimation, i.e., estimating the error of mean estimators involving model selection. An extension of Stein's unbiased risk estimate (SURE), which applies to a class of estimators with model...
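
    The abstract invokes Stein's unbiased risk estimate (SURE) for estimators involving selection. As a minimal illustration of the underlying idea (not the thesis's extension), the sketch below computes the classical SURE for a soft-thresholding estimator of a Gaussian mean vector; the signal, noise level and threshold grid are assumptions for the example.

```python
import numpy as np

def sure_soft_threshold(y, t, sigma=1.0):
    """Stein's unbiased risk estimate for soft thresholding of y ~ N(mu, sigma^2 I)."""
    n = y.size
    # SURE = n*sigma^2 - 2*sigma^2 * #{|y_i| <= t} + sum(min(|y_i|, t)^2)
    return (n * sigma**2
            - 2 * sigma**2 * np.sum(np.abs(y) <= t)
            + np.sum(np.minimum(np.abs(y), t) ** 2))

rng = np.random.default_rng(0)
mu = np.concatenate([np.full(10, 3.0), np.zeros(90)])  # sparse true signal (assumption)
y = mu + rng.standard_normal(mu.size)

thresholds = np.linspace(0.0, 4.0, 81)
risks = [sure_soft_threshold(y, t) for t in thresholds]
t_best = thresholds[int(np.argmin(risks))]
print(f"SURE-optimal threshold: {t_best:.2f}")
```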

  10. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  11. Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence.

    Science.gov (United States)

    Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang

    2014-12-01

    Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. The computation of this integral is highly challenging because it is as high-dimensional as the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) Exact and fast analytical solutions are limited by strong assumptions. (2) Numerical evaluation quickly becomes unfeasible for expensive models. (3) Approximations known as information criteria (ICs) such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively) yield contradicting results with regard to model ranking. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simplistic synthetic example where for some scenarios an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as reference. We continue this analysis with a real-world application of hydrological model selection. This is a first-time benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
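
    To make the contrast the study draws between brute-force Monte Carlo integration and information criteria concrete, the following toy sketch estimates the Bayesian model evidence of a one-parameter linear model by sampling the prior and compares it with the BIC approximation; the synthetic data, prior, and known noise level are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 1.5 * x + rng.normal(0, 0.2, x.size)       # synthetic data (assumption)
sigma = 0.2                                    # known noise std (assumption)

# Model: y = a*x with prior a ~ N(0, 1)
def log_like(a):
    return stats.norm.logpdf(y, loc=np.outer(a, x), scale=sigma).sum(axis=1)

# Brute-force Monte Carlo estimate of BME = E_prior[likelihood]
a_samples = rng.standard_normal(100_000)
log_bme_mc = np.logaddexp.reduce(log_like(a_samples)) - np.log(a_samples.size)

# BIC approximation: BIC = -2*max log-likelihood + k*log(n); log BME ~ -BIC/2
a_hat = np.sum(x * y) / np.sum(x * x)          # maximum likelihood slope
bic = -2 * log_like(np.array([a_hat]))[0] + 1 * np.log(y.size)
print(f"log BME (Monte Carlo): {log_bme_mc:.2f}, -BIC/2: {-bic / 2:.2f}")
```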

  12. Random effect selection in generalised linear models

    DEFF Research Database (Denmark)

    Denwood, Matt; Houe, Hans; Forkman, Björn

    We analysed abattoir recordings of meat inspection codes with possible relevance to on-farm animal welfare in cattle. Random effects logistic regression models were used to describe individual-level data obtained from 461,406 cattle slaughtered in Denmark. Our results demonstrate that the largest...

  13. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    Science.gov (United States)

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  14. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through direct numerical evaluation of Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes' formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ the energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input-to-output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide
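
    The record refers to the DRAM and DREAM samplers. As a much simpler stand-in illustrating the adaptive-Metropolis family those algorithms extend, the sketch below adapts the proposal covariance from the chain history on a toy 2-D Gaussian target; the target, scaling factor, and adaptation schedule are assumptions, not the cited algorithms themselves.

```python
import numpy as np

def log_target(theta):
    # Toy posterior: correlated 2-D Gaussian (assumption for illustration)
    cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -0.5 * theta @ cov_inv @ theta

def adaptive_metropolis(n_iter=20000, adapt_start=1000, seed=2):
    rng = np.random.default_rng(seed)
    d = 2
    sd = 2.4**2 / d                      # classic adaptive-Metropolis scaling
    chain = np.zeros((n_iter, d))
    theta, lp = np.zeros(d), log_target(np.zeros(d))
    cov = np.eye(d) * 0.1                # initial proposal covariance
    for i in range(1, n_iter):
        if i > adapt_start:              # adapt covariance from chain history
            cov = sd * np.cov(chain[:i].T) + sd * 1e-6 * np.eye(d)
        prop = rng.multivariate_normal(theta, cov)
        lp_prop = log_target(prop)
        if np.log(rng.uniform()) < lp_prop - lp:   # Metropolis accept/reject
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = adaptive_metropolis()
print("posterior mean estimate:", chain[5000:].mean(axis=0))
```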

  15. The genealogy of samples in models with selection.

    Science.gov (United States)

    Neuhauser, C; Krone, S M

    1997-02-01

    We introduce the genealogy of a random sample of genes taken from a large haploid population that evolves according to random reproduction with selection and mutation. Without selection, the genealogy is described by Kingman's well-known coalescent process. In the selective case, the genealogy of the sample is embedded in a graph with a coalescing and branching structure. We describe this graph, called the ancestral selection graph, and point out differences and similarities with Kingman's coalescent. We present simulations for a two-allele model with symmetric mutation in which one of the alleles has a selective advantage over the other. We find that when the allele frequencies in the population are already in equilibrium, then the genealogy does not differ much from the neutral case. This is supported by rigorous results. Furthermore, we describe the ancestral selection graph for other selective models with finitely many selection classes, such as the K-allele models, infinitely-many-alleles models, DNA sequence models, and infinitely-many-sites models, and briefly discuss the diploid case.

  16. Evaluating experimental design for soil-plant model selection with Bayesian model averaging

    Science.gov (United States)

    Wöhling, Thomas; Geiges, Andreas; Nowak, Wolfgang; Gayler, Sebastian

    2013-04-01

    mean and all individual posterior models, how informative the data types were for reducing prediction uncertainty of selected state variables, and how well the model structure can be identified based on the different data types and subsets.

  17. Modeling shape selection of buckled dielectric elastomers

    Science.gov (United States)

    Langham, Jacob; Bense, Hadrien; Barkley, Dwight

    2018-02-01

    A dielectric elastomer whose edges are held fixed will buckle, given a sufficiently large applied voltage, resulting in a nontrivial out-of-plane deformation. We study this situation numerically using a nonlinear elastic model which decouples two of the principal electrostatic stresses acting on an elastomer: normal pressure due to the mutual attraction of oppositely charged electrodes and tangential shear ("fringing") due to repulsion of like charges at the electrode edges. These enter via physically simplified boundary conditions that are applied in a fixed reference domain using a nondimensional approach. The method is valid for small to moderate strains and is straightforward to implement in a generic nonlinear elasticity code. We validate the model by directly comparing the simulated equilibrium shapes with the experiment. For circular electrodes which buckle axisymmetrically, the shape of the deflection profile is captured. Annular electrodes of different widths produce azimuthal ripples with wavelengths that match our simulations. In this case, it is essential to compute multiple equilibria because the first model solution obtained by the nonlinear solver (Newton's method) is often not the energetically favored state. We address this using a numerical technique known as "deflation." Finally, we observe the large number of different solutions that may be obtained for the case of a long rectangular strip.

  18. Efficient nonparametric and asymptotic Bayesian model selection methods for attributed graph clustering

    KAUST Repository

    Xu, Zhiqiang

    2017-02-16

    Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second approach is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.

  19. Hierarchical Model of Assessing and Selecting Experts

    Science.gov (United States)

    Chernysheva, T. Y.; Korchuganova, M. A.; Borisov, V. V.; Min'kov, S. L.

    2016-04-01

    Revealing experts' competences is a multi-objective issue. The authors deal with methods for assessing the competence of experts, treated as objects, against quality criteria. An analytic hierarchy process for assessing and ranking experts is offered, based on paired comparison matrices and scores; quality parameters are taken into account as well. A calculation and assessment of experts is given as an example.
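
    As a small sketch of the analytic-hierarchy-process step the abstract describes, the code below derives priority weights for three hypothetical experts from a pairwise comparison matrix via the principal eigenvector and reports the consistency ratio; the matrix entries are invented, and the random index value is the standard one for a 3x3 matrix.

```python
import numpy as np

# Hypothetical pairwise comparisons of three experts (Saaty 1-9 scale)
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                  # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                     # normalized priority weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)         # consistency index
ri = 0.58                                    # random index for n = 3
print("weights:", weights.round(3), "consistency ratio:", round(ci / ri, 3))
```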

  20. Hierarchical Model of Assessing and Selecting Experts

    OpenAIRE

    Chernysheva, Tatiana Yurievna; Korchuganova, Mariya Anatolievna; Borisov, V. V.; Minkov, S. L.

    2016-01-01

    Revealing experts' competences is a multi-objective issue. The authors deal with methods for assessing the competence of experts, treated as objects, against quality criteria. An analytic hierarchy process for assessing and ranking experts is offered, based on paired comparison matrices and scores; quality parameters are taken into account as well. A calculation and assessment of experts is given as an example.

  1. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    , including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... applicable, and we recommend their use instead of the popular polynomial kernels in general settings, in which no information on the data-generating process is available....
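
    As a minimal sketch of kernel ridge regression with the Gaussian kernel discussed in the record, the code below exposes the two tuning parameters the thesis relates to smoothness and the signal-to-noise ratio; the bandwidth and ridge penalty values, and the synthetic data, are arbitrary assumptions.

```python
import numpy as np

def gaussian_kernel(X1, X2, bandwidth=1.0):
    # Squared Euclidean distances between all row pairs
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * bandwidth**2))

def kernel_ridge_fit_predict(X, y, X_new, bandwidth=1.0, ridge=0.1):
    K = gaussian_kernel(X, X, bandwidth)
    alpha = np.linalg.solve(K + ridge * np.eye(len(y)), y)   # dual coefficients
    return gaussian_kernel(X_new, X, bandwidth) @ alpha

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, (60, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.2, 60)
X_new = np.linspace(-3, 3, 5)[:, None]
print(kernel_ridge_fit_predict(X, y, X_new).round(2))
```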

  2. Modeling HIV-1 drug resistance as episodic directional selection.

    Directory of Open Access Journals (Sweden)

    Ben Murrell

    Full Text Available The evolution of substitutions conferring drug resistance to HIV-1 is both episodic, occurring when patients are on antiretroviral therapy, and strongly directional, with site-specific resistant residues increasing in frequency over time. While methods exist to detect episodic diversifying selection and continuous directional selection, no evolutionary model combining these two properties has been proposed. We present two models of episodic directional selection (MEDS and EDEPS) which allow the a priori specification of lineages expected to have undergone directional selection. The models infer the sites and target residues that were likely subject to directional selection, using either codon or protein sequences. Compared to its null model of episodic diversifying selection, MEDS provides a superior fit to most sites known to be involved in drug resistance, and neither a test for episodic diversifying selection nor one for constant directional selection is able to detect as many true positives as MEDS and EDEPS while maintaining acceptable levels of false positives. This suggests that episodic directional selection is a better description of the process driving the evolution of drug resistance.

  3. Cliff-edge model of obstetric selection in humans.

    Science.gov (United States)

    Mitteroecker, Philipp; Huttegger, Simon M; Fischer, Barbara; Pavlicev, Mihaela

    2016-12-20

    The strikingly high incidence of obstructed labor due to the disproportion of fetal size and the mother's pelvic dimensions has puzzled evolutionary scientists for decades. Here we propose that these high rates are a direct consequence of the distinct characteristics of human obstetric selection. Neonatal size relative to the birth-relevant maternal dimensions is highly variable and positively associated with reproductive success until it reaches a critical value, beyond which natural delivery becomes impossible. As a consequence, the symmetric phenotype distribution cannot match the highly asymmetric, cliff-edged fitness distribution well: The optimal phenotype distribution that maximizes population mean fitness entails a fraction of individuals falling beyond the "fitness edge" (i.e., those with fetopelvic disproportion). Using a simple mathematical model, we show that weak directional selection for a large neonate, a narrow pelvic canal, or both is sufficient to account for the considerable incidence of fetopelvic disproportion. Based on this model, we predict that the regular use of Caesarean sections throughout the last decades has led to an evolutionary increase of fetopelvic disproportion rates by 10 to 20%.
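
    To make the paper's core argument concrete, here is a small numerical sketch (with made-up parameter values) of the cliff-edge logic: with a symmetric Gaussian phenotype distribution and a fitness function that drops to zero beyond a critical value, the population mean that maximizes mean fitness sits close to the cliff, leaving a nonzero fraction of individuals beyond the edge.

```python
import numpy as np
from scipy import stats

cliff = 1.0                      # critical relative neonatal size (assumption)
sd = 0.3                         # phenotypic standard deviation (assumption)

def mean_fitness(mu):
    # Fitness rises linearly up to the cliff, then drops to zero (stylized)
    z = np.linspace(mu - 5 * sd, cliff, 2000)
    w = 1.0 + 0.5 * z            # increasing fitness below the cliff (assumption)
    return np.trapz(w * stats.norm.pdf(z, mu, sd), z)

mus = np.linspace(0.0, 1.2, 241)
mu_opt = mus[np.argmax([mean_fitness(m) for m in mus])]
frac_beyond = 1 - stats.norm.cdf(cliff, mu_opt, sd)
print(f"optimal mean {mu_opt:.2f}, fraction beyond the edge {frac_beyond:.1%}")
```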

  4. Developing a conceptual model for selecting and evaluating online markets

    Directory of Open Access Journals (Sweden)

    Sadegh Feizollahi

    2013-04-01

    Full Text Available There is ample evidence emphasizing the benefits of using new information and communication technologies in international business, and many believe that e-commerce can help satisfy customers' explicit and implicit requirements. Internet shopping is a concept developed after the introduction of electronic commerce. Information technology (IT) and its applications, specifically in the realm of the internet and e-mail, promoted the development of e-commerce in terms of advertising, motivation and information. However, with the development of new technologies, credit and financial exchange facilities were built into internet websites so as to facilitate e-commerce. The proposed study sent a total of 200 questionnaires to the target group (teachers, students, professionals, managers of commercial web sites) and managed to collect 130 questionnaires for final evaluation. Cronbach's alpha is used to measure the reliability of the measurement instruments (questionnaires), and confirmatory factor analysis is employed to assure construct validity. In addition, path analysis is used to analyze the research questions and to determine market selection models. In the present study, after examining different aspects of e-commerce, we provide a conceptual model for selecting and evaluating online marketing in Iran. These findings provide a consistent, targeted and holistic framework for the development of the Internet market in the country.

  5. Ensemble Prediction Model with Expert Selection for Electricity Price Forecasting

    Directory of Open Access Journals (Sweden)

    Bijay Neupane

    2017-01-01

    Full Text Available Forecasting of electricity prices is important in deregulated electricity markets for all of the stakeholders: energy wholesalers, traders, retailers and consumers. Electricity price forecasting is an inherently difficult problem due to its special characteristics of dynamicity and non-stationarity. In this paper, we present a robust price forecasting mechanism that shows resilience towards the aggregate demand response effect and provides highly accurate forecasted electricity prices to the stakeholders in a dynamic environment. We employ an ensemble prediction model in which a group of different algorithms participates in forecasting the price 1 h ahead for each hour of a day. We propose two different strategies, namely, the Fixed Weight Method (FWM) and the Varying Weight Method (VWM), for selecting each hour's expert algorithm from the set of participating algorithms. In addition, we utilize a carefully engineered set of features selected from a pool of features extracted from the past electricity price data, weather data and calendar data. The proposed ensemble model offers better results than the Autoregressive Integrated Moving Average (ARIMA) method, the Pattern Sequence-based Forecasting (PSF) method and our previous work using Artificial Neural Networks (ANN) alone on the datasets for the New York, Australian and Spanish electricity markets.
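
    As a rough sketch of per-hour expert selection in the spirit of the Fixed Weight Method (the weighting rule, error model and forecaster pool here are simplified assumptions, not the paper's exact algorithms), each hour of the day is assigned the forecaster that performed best on past data for that hour.

```python
import numpy as np

rng = np.random.default_rng(4)
n_days, n_hours, n_experts = 60, 24, 3

# Hypothetical past forecast errors: errors[day, hour, expert]
errors = rng.gamma(shape=2.0, scale=[1.0, 1.5, 1.2], size=(n_days, n_hours, n_experts))

# Fixed-weight style selection: pick, per hour, the expert with the lowest mean error
mean_err = errors.mean(axis=0)                  # shape (n_hours, n_experts)
expert_per_hour = mean_err.argmin(axis=1)       # chosen expert index for each hour

for h in range(0, 24, 6):
    print(f"hour {h:02d}: expert {expert_per_hour[h]}")
```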

  6. A Network Analysis Model for Selecting Sustainable Technology

    Directory of Open Access Journals (Sweden)

    Sangsung Park

    2015-09-01

    Full Text Available Most companies develop technologies to improve their competitiveness in the marketplace. Typically, they then patent these technologies around the world in order to protect their intellectual property. Other companies may use patented technologies to develop new products, but must pay royalties to the patent holders or owners. Should they fail to do so, this can result in legal disputes in the form of patent infringement actions between companies. To avoid such situations, companies attempt to research and develop necessary technologies before their competitors do so. An important part of this process is analyzing existing patent documents in order to identify emerging technologies. In such analyses, extracting sustainable technology from patent data is important, because sustainable technology drives technological competition among companies and, thus, the development of new technologies. In addition, selecting sustainable technologies makes it possible to plan their R&D (research and development) efficiently. In this study, we propose a network model that can be used to select sustainable technologies from patent documents, based on the centrality and degree measures of social network analysis. To verify the performance of the proposed model, we carry out a case study using actual patent data from patent databases.
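
    A minimal sketch of the centrality-and-degree computation on a hypothetical patent keyword network, using networkx; the edges and the combination of degree and betweenness centrality as the selection signal are assumptions for illustration, not the paper's exact scoring.

```python
import networkx as nx

# Hypothetical co-occurrence network of technology keywords from patent documents
edges = [("laser", "welding"), ("laser", "sensor"), ("sensor", "battery"),
         ("battery", "charging"), ("laser", "battery"), ("sensor", "welding")]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Rank candidate technologies by a simple combined centrality score
score = {n: degree[n] + betweenness[n] for n in G}
for node, s in sorted(score.items(), key=lambda kv: -kv[1]):
    print(f"{node:10s} {s:.3f}")
```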

  7. Selection of components based on their importance

    International Nuclear Information System (INIS)

    Stvan, F.

    2004-12-01

    A proposal is presented for sorting components of the Dukovany nuclear power plant with respect to their importance. The classification scheme includes property priority, property criticality and property structure. Each area has its criteria with weight coefficients to calculate the importance of each component by the Risk Priority Number method. The aim of the process is to generate a list of components in order of operating and safety importance, which will help allocate funds for operation and safety in an optimal manner. This proposal is linked to a proposal for a simple database which should serve to enter information and perform assessments. The present stage focused on a safety assessment of components categorized in safety classes BT1, BT2 and BT3 pursuant to Decree No. 76. The assessment was performed based on a PSE study for Level 1. The database includes inputs for entering financial data, which are represented by the potential damage resulting from the given failure and by the loss of MWh in financial terms. In a next input, the failure incidence intensity and time of correction can be entered. Information regarding the property structure, represented by the degree of backup and reparability of the component, is the last input available
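
    The Risk Priority Number method mentioned above is conventionally the product of scores for severity, occurrence, and detectability; the sketch below ranks a few components this way. The component names and 1-10 scores are invented for illustration, not taken from the Dukovany assessment.

```python
# Hypothetical components scored on 1-10 scales (higher = worse)
components = {
    # name: (severity, occurrence, detectability)
    "feedwater pump":    (8, 4, 3),
    "isolation valve":   (6, 3, 5),
    "level transmitter": (4, 6, 2),
}

# Risk Priority Number = severity * occurrence * detectability
rpn = {name: s * o * d for name, (s, o, d) in components.items()}
for name, value in sorted(rpn.items(), key=lambda kv: -kv[1]):
    print(f"{name:17s} RPN = {value}")
```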

  8. Improving permafrost distribution modelling using feature selection algorithms

    Science.gov (United States)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training permafrost data. The FS algorithms used indicate which variables appear less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its

  9. GIS-based hydrological model upstream

    African Journals Online (AJOL)

    eobe

    Owing to its effectiveness in terms of data representation and the quality of modelling results, hydrological models are usually embedded in a Geographical Information System (GIS) environment to simulate various parameters attributed to a selected catchment. GIS is a complex technology highly suitable for spatial and temporal data analyses and information extraction...

  10. Information Gain Based Dimensionality Selection for Classifying Text Documents

    Energy Technology Data Exchange (ETDEWEB)

    Dumidu Wijayasekara; Milos Manic; Miles McQueen

    2013-06-01

    Selecting the optimal dimensions for various knowledge extraction applications is an essential component of data mining. Dimensionality selection techniques are utilized in classification applications to increase the classification accuracy and reduce the computational complexity. In text classification, where the dimensionality of the dataset is extremely high, dimensionality selection is even more important. This paper presents a novel genetic-algorithm-based methodology for dimensionality selection in text mining applications that utilizes information gain. The presented methodology uses the information gain of each dimension to change the mutation probability of chromosomes dynamically. Since the information gain is calculated a priori, the computational complexity is not affected. The presented method was tested on a specific text classification problem and compared with conventional genetic-algorithm-based dimensionality selection. The results show an improvement of 3% in the true positives and 1.6% in the true negatives over conventional dimensionality selection methods.

  11. Multispectral iris recognition based on group selection and game theory

    Science.gov (United States)

    Ahmad, Foysal; Roy, Kaushik

    2017-05-01

    A commercially available iris recognition system uses only a narrow band of the near infrared spectrum (700-900 nm) while iris images captured in the wide range of 405 nm to 1550 nm offer potential benefits to enhance recognition performance of an iris biometric system. The novelty of this research is that a group selection algorithm based on coalition game theory is explored to select the best patch subsets. In this algorithm, patches are divided into several groups based on their maximum contribution in different groups. Shapley values are used to evaluate the contribution of patches in different groups. Results show that this group selection based iris recognition
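
    As a toy illustration of the Shapley-value computation the abstract invokes (the characteristic function below is invented; the paper's game is played over iris patch subsets), the exact Shapley value of each player is the average of its marginal contributions over all orderings.

```python
from itertools import permutations

players = ["patch_A", "patch_B", "patch_C"]

def value(coalition):
    # Hypothetical recognition accuracy contributed by a patch subset
    v = {frozenset(): 0.0,
         frozenset({"patch_A"}): 0.5, frozenset({"patch_B"}): 0.4,
         frozenset({"patch_C"}): 0.2,
         frozenset({"patch_A", "patch_B"}): 0.7,
         frozenset({"patch_A", "patch_C"}): 0.6,
         frozenset({"patch_B", "patch_C"}): 0.5,
         frozenset(players): 0.8}
    return v[frozenset(coalition)]

shapley = {p: 0.0 for p in players}
orderings = list(permutations(players))
for order in orderings:
    seen = set()
    for p in order:
        # Marginal contribution of p given the players already present
        shapley[p] += value(seen | {p}) - value(seen)
        seen.add(p)
for p in players:
    print(f"{p}: {shapley[p] / len(orderings):.3f}")
```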

  12. Partner Selection Optimization Model of Agricultural Enterprises in Supply Chain

    OpenAIRE

    Feipeng Guo; Qibei Lu

    2013-01-01

    As the correct selection of partners in the supply chains of agricultural enterprises becomes increasingly important, a large number of partner evaluation techniques are widely used in agricultural science research. This study established a partner selection model to optimize the issue of agricultural supply chain partner selection. Firstly, it constructed a comprehensive evaluation index system after analyzing the real characteristics of agricultural supply chains. Secondly, a heuristic met...

  13. Agent-based modeling of sustainable behaviors

    CERN Document Server

    Sánchez-Maroño, Noelia; Fontenla-Romero, Oscar; Polhill, J; Craig, Tony; Bajo, Javier; Corchado, Juan

    2017-01-01

    Using the O.D.D. (Overview, Design concepts, Detail) protocol, this title explores the role of agent-based modeling in predicting the feasibility of various approaches to sustainability. The chapters incorporated in this volume consist of real case studies to illustrate the utility of agent-based modeling and complexity theory in discovering a path to more efficient and sustainable lifestyles. The topics covered within include: households' attitudes toward recycling, designing decision trees for representing sustainable behaviors, negotiation-based parking allocation, auction-based traffic signal control, and others. This selection of papers will be of interest to social scientists who wish to learn more about agent-based modeling as well as experts in the field of agent-based modeling.

  14. Effect of Model Selection on Computed Water Balance Components

    NARCIS (Netherlands)

    Jhorar, R.K.; Smit, A.A.M.F.R.; Roest, C.W.J.

    2009-01-01

    Soil water flow modelling approaches as used in four selected on-farm water management models, namely CROPWAT, FAIDS, CERES and SWAP, are compared through numerical experiments. The soil water simulation approaches used in the first three models are reformulated to incorporate all evapotranspiration

  15. A Rule-Based Industrial Boiler Selection System

    Science.gov (United States)

    Tan, C. F.; Khalil, S. N.; Karjanto, J.; Tee, B. T.; Wahidin, L. S.; Chen, W.; Rauterberg, G. W. M.; Sivarao, S.; Lim, T. L.

    2015-09-01

    A boiler is a device used for generating steam for power generation, process use or heating, and hot water for heating purposes. A steam boiler consists of the containing vessel and convection heating surfaces only, whereas a steam generator covers the whole unit, encompassing water wall tubes, superheaters, air heaters and economizers. Selecting the right boiler is very important for the industry to conduct its operations successfully. The selection criteria are based on a rule-based expert system and a multi-criteria weighted average method. The developed system consists of a Knowledge Acquisition Module, a Boiler Selection Module, a User Interface Module and a Help Module. The system is capable of selecting a suitable boiler based on weighted criteria. The main benefit of using the system is the reduced complexity of the decision making for selecting the most appropriate boiler for a palm oil process plant.
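
    A compact sketch of the two stages the abstract describes, with invented rules, criteria, and weights: candidate boilers are first screened by hard rules, then the survivors are ranked by a weighted average of criterion scores.

```python
# Hypothetical boiler candidates: capacity (t/h), pressure (bar), criterion scores 0-10
boilers = {
    "fire-tube A":  {"capacity": 12, "pressure": 18,
                     "scores": {"efficiency": 7, "cost": 9, "maintenance": 8}},
    "water-tube B": {"capacity": 40, "pressure": 45,
                     "scores": {"efficiency": 9, "cost": 5, "maintenance": 6}},
    "water-tube C": {"capacity": 25, "pressure": 30,
                     "scores": {"efficiency": 8, "cost": 7, "maintenance": 7}},
}
weights = {"efficiency": 0.5, "cost": 0.3, "maintenance": 0.2}   # assumed weights

# Stage 1: rule-based screening (e.g., the plant needs >= 20 t/h at >= 25 bar)
feasible = {n: b for n, b in boilers.items()
            if b["capacity"] >= 20 and b["pressure"] >= 25}

# Stage 2: multi-criteria weighted average over the surviving candidates
ranked = sorted(feasible, key=lambda n: -sum(w * feasible[n]["scores"][c]
                                             for c, w in weights.items()))
print("recommended boiler:", ranked[0])
```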

  16. Elementary Teachers' Selection and Use of Visual Models

    Science.gov (United States)

    Lee, Tammy D.; Gail Jones, M.

    2018-02-01

    As science grows in complexity, science teachers face an increasing challenge of helping students interpret models that represent complex science systems. Little is known about how teachers select and use models when planning lessons. This mixed methods study investigated the pedagogical approaches and visual models used by elementary in-service and preservice teachers in the development of a science lesson about a complex system (e.g., water cycle). Sixty-seven elementary in-service and 69 elementary preservice teachers completed a card sort task designed to document the types of visual models (e.g., images) that teachers choose when planning science instruction. Quantitative and qualitative analyses were conducted to analyze the card sort task. Semistructured interviews were conducted with a subsample of teachers to elicit the rationale for image selection. Results from this study showed that both experienced in-service teachers and novice preservice teachers tended to select similar models and use similar rationales for images to be used in lessons. Teachers tended to select models that were aesthetically pleasing and simple in design and illustrated specific elements of the water cycle. The results also showed that teachers were not likely to select images that represented the less obvious dimensions of the water cycle. Furthermore, teachers selected visual models more as a pedagogical tool to illustrate specific elements of the water cycle and less often as a tool to promote student learning related to complex systems.

  17. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  18. A Working Model of Natural Selection Illustrated by Table Tennis

    Science.gov (United States)

    Dinc, Muhittin; Kilic, Selda; Aladag, Caner

    2013-01-01

    Natural selection is one of the most important topics in biology and it helps to clarify the variety and complexity of organisms. However, students in almost every stage of education find it difficult to understand the mechanism of natural selection and they can develop misconceptions about it. This article provides an active model of natural…

  19. Augmented Self-Modeling as an Intervention for Selective Mutism

    Science.gov (United States)

    Kehle, Thomas J.; Bray, Melissa A.; Byer-Alcorace, Gabriel F.; Theodore, Lea A.; Kovac, Lisa M.

    2012-01-01

    Selective mutism is a rare disorder that is difficult to treat. It is often associated with oppositional defiant behavior, particularly in the home setting, social phobia, and, at times, autism spectrum disorder characteristics. The augmented self-modeling treatment has been relatively successful in promoting rapid diminishment of selective mutism…

  20. Measurement-Based Transmission Line Parameter Estimation with Adaptive Data Selection Scheme

    DEFF Research Database (Denmark)

    Li, Changgang; Zhang, Yaping; Zhang, Hengxu

    2017-01-01

    Accurate parameters of transmission lines are critical for power system operation and control decision making. Transmission line parameter estimation based on measured data is an effective way to enhance the validity of the parameters. This paper proposes a multi-point transmission line parameter estimation model with an adaptive data selection scheme based on measured data. The data selection scheme, defined with time window and number of data points, is introduced in the estimation model as additional variables to optimize. The data selection scheme is adaptively adjusted to minimize the relative...... of the proposed model. Some 500 kV transmission lines from a provincial power system of China are estimated to demonstrate the applicability of the presented model. The superiority of the proposed model over fixed data selection schemes is also verified....

  1. Fluorescent naphthalene-based benzene tripod for selective ...

    Indian Academy of Sciences (India)

    An aluminium complex of a naphthalene-based benzene tripod ligand system has been reported for the selective recognition of fluoride in aqueous medium under physiological conditions. The ligand can selectively recognize Al3+ through enhancement in the fluorescence intensity, and this in situ formed aluminium complex ...

  2. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  3. Model Selection and Hypothesis Testing for Large-Scale Network Models with Overlapping Groups

    Directory of Open Access Journals (Sweden)

    Tiago P. Peixoto

    2015-03-01

    Full Text Available The effort to understand network systems in increasing detail has resulted in a diversity of methods designed to extract their large-scale structure from data. Unfortunately, many of these methods yield diverging descriptions of the same network, making both the comparison and understanding of their results a difficult challenge. A possible solution to this outstanding issue is to shift the focus away from ad hoc methods and move towards more principled approaches based on statistical inference of generative models. As a result, we face instead the more well-defined task of selecting between competing generative processes, which can be done under a unified probabilistic framework. Here, we consider the comparison between a variety of generative models including features such as degree correction, where nodes with arbitrary degrees can belong to the same group, and community overlap, where nodes are allowed to belong to more than one group. Because such model variants possess an increasing number of parameters, they become prone to overfitting. In this work, we present a method of model selection based on the minimum description length criterion and posterior odds ratios that is capable of fully accounting for the increased degrees of freedom of the larger models and selects the best one according to the statistical evidence available in the data. In applying this method to many empirical unweighted networks from different fields, we observe that community overlap is very often not supported by statistical evidence and is selected as a better model only for a minority of them. On the other hand, we find that degree correction tends to be almost universally favored by the available data, implying that intrinsic node properties (as opposed to group properties) are often an essential ingredient of network formation.

  4. Target Selection Models with Preference Variation Between Offenders

    NARCIS (Netherlands)

    Townsley, Michael; Birks, Daniel; Ruiter, Stijn; Bernasco, Wim; White, Gentry

    2016-01-01

    Objectives: This study explores preference variation in location choice strategies of residential burglars. Applying a model of offender target selection that is grounded in assertions of the routine activity approach, rational choice perspective, crime pattern and social disorganization theories,

  5. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...

  6. SELECTION MOMENTS AND GENERALIZED METHOD OF MOMENTS FOR HETEROSKEDASTIC MODELS

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2016-06-01

    Full Text Available In this paper, the authors describe methods for selecting moments and the application of the generalized method of moments (GMM) to heteroskedastic models. The utility of GMM estimators is found in the study of financial market models. The moment selection criteria are applied for the efficient GMM estimation of univariate time series with martingale difference errors, similar to those studied so far by Kuersteiner.

  7. 44 CFR 321.2 - Selection of the mobilization base.

    Science.gov (United States)

    2010-10-01

    ... base. 321.2 Section 321.2 Emergency Management and Assistance FEDERAL EMERGENCY MANAGEMENT AGENCY, DEPARTMENT OF HOMELAND SECURITY PREPAREDNESS MAINTENANCE OF THE MOBILIZATION BASE (DEPARTMENT OF DEFENSE, DEPARTMENT OF ENERGY, MARITIME ADMINISTRATION) § 321.2 Selection of the mobilization base. (a) The Department...

  8. Model Selection in Continuous Test Norming With GAMLSS.

    Science.gov (United States)

    Voncken, Lieke; Albers, Casper J; Timmerman, Marieke E

    2017-06-01

    To compute norms from reference group test scores, continuous norming is preferred over traditional norming. A suitable continuous norming approach for continuous data is the use of the Box-Cox Power Exponential model, which is found in the generalized additive models for location, scale, and shape. Applying the Box-Cox Power Exponential model for test norming requires model selection, but it is unknown how well this can be done with an automatic selection procedure. In a simulation study, we compared the performance of two stepwise model selection procedures combined with four model-fit criteria (Akaike information criterion, Bayesian information criterion, generalized Akaike information criterion (3), cross-validation), varying data complexity, sampling design, and sample size in a fully crossed design. The new procedure combined with one of the generalized Akaike information criteria was the most efficient model selection procedure (i.e., required the smallest sample size). The advocated model selection procedure is illustrated with norming data of an intelligence test.

  9. Computationally efficient thermal-mechanical modelling of selective laser melting

    Science.gov (United States)

    Yang, Yabin; Ayas, Can

    2017-10-01

    Selective laser melting (SLM) is a powder-based additive manufacturing (AM) method for producing high-density metal parts with complex topology. However, part distortions and the accompanying residual stresses deteriorate the mechanical reliability of SLM products. Modelling of the SLM process is anticipated to be instrumental for understanding and predicting the development of the residual stress field during the build process. However, SLM process modelling requires determination of the heat transients within the part being built, which is coupled to a mechanical boundary value problem to calculate displacement and residual stress fields. Thermal models associated with SLM are typically complex and computationally demanding. In this paper, we present a simple semi-analytical thermal-mechanical model, developed for SLM, that represents the effect of laser scanning vectors with line heat sources. The temperature field within the part being built is attained by superposition of the temperature field associated with line heat sources in a semi-infinite medium and a complementary temperature field which accounts for the actual boundary conditions. An analytical solution of a line heat source in a semi-infinite medium is first described, followed by the numerical procedure used for finding the complementary temperature field. This analytical description of the line heat sources is able to capture the steep temperature gradients in the vicinity of the laser spot, which is typically tens of micrometers across. In turn, the semi-analytical thermal model allows for a relatively coarse discretisation of the complementary temperature field. The temperature history determined is used to calculate the thermal strain induced in the SLM part. Finally, a mechanical model governed by an elastic-plastic constitutive rule with isotropic hardening is used to predict the residual stresses.

  10. Selection Criteria in Regime Switching Conditional Volatility Models

    Directory of Open Access Journals (Sweden)

    Thomas Chuffart

    2015-05-01

    Full Text Available A large number of nonlinear conditional heteroskedastic models have been proposed in the literature. Model selection is crucial to any statistical data analysis. In this article, we investigate whether the most commonly used selection criteria lead to the choice of the right specification in a regime switching framework. We focus on two types of models: the Logistic Smooth Transition GARCH and the Markov-Switching GARCH models. Simulation experiments reveal that information criteria and loss functions can lead to misspecification; BIC sometimes indicates the wrong regime switching framework. Depending on the Data Generating Process used in the experiments, great care is needed when choosing a criterion.

  11. On the benefits of location-based relay selection in mobile wireless networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Madsen, Tatiana Kozlova; Schwefel, Hans-Peter

    2016-01-01

    We consider infrastructure-based mobile networks that are assisted by a single relay transmission where both the downstream destination and relay nodes are mobile. Selecting the optimal transmission path for a destination node requires up-to-date link quality estimates of all relevant links. If the relay selection is based on link quality measurements, the number of links to update grows quadratically with the number of nodes, and measurements need to be updated frequently when nodes are mobile. In this paper, we consider a location-based relay selection scheme where link qualities are estimated...... with varying information update interval, node mobility, location inaccuracy, and inaccurate propagation model parameters. Our results show that location-based relay selection performs better than SNR-based relay selection at typical levels of location error when medium-scale fading can be neglected......

  12. A guide to Bayesian model selection for ecologists

    Science.gov (United States)

    Hooten, Mevin B.; Hobbs, N.T.

    2015-01-01

    The steady upward trend in the use of model selection and Bayesian methods in ecological research has made it clear that both approaches to inference are important for modern analysis of models and data. However, in teaching Bayesian methods and in working with our research colleagues, we have noticed a general dissatisfaction with the available literature on Bayesian model selection and multimodel inference. Students and researchers new to Bayesian methods quickly find that the published advice on model selection is often preferential in its treatment of options for analysis, frequently advocating one particular method above others. The recent appearance of many articles and textbooks on Bayesian modeling has provided welcome background on relevant approaches to model selection in the Bayesian framework, but most of these are either very narrowly focused in scope or inaccessible to ecologists. Moreover, the methodological details of Bayesian model selection approaches are spread thinly throughout the literature, appearing in journals from many different fields. Our aim with this guide is to condense the large body of literature on Bayesian approaches to model selection and multimodel inference and present it specifically for quantitative ecologists as neutrally as possible. We also bring to light a few important and fundamental concepts relating directly to model selection that seem to have gone unnoticed in the ecological literature. Throughout, we provide only a minimal discussion of philosophy, preferring instead to examine the breadth of approaches as well as their practical advantages and disadvantages. This guide serves as a reference for ecologists using Bayesian methods, so that they can better understand their options and can make an informed choice that is best aligned with their goals for inference.

  13. CBFS: high performance feature selection algorithm based on feature clearness.

    Directory of Open Access Journals (Sweden)

    Minseok Seo

    Full Text Available BACKGROUND: The goal of feature selection is to select useful features and simultaneously exclude garbage features from a given dataset for classification purposes. This is expected to reduce processing time and improve classification accuracy. METHODOLOGY: In this study, we devised a new feature selection algorithm (CBFS) based on the clearness of features. Feature clearness expresses separability among classes in a feature. Highly clear features contribute towards obtaining high classification accuracy. CScore is a measure of the clearness of each feature, based on how tightly samples cluster around their class centroids in that feature. We also suggest combining CBFS with other algorithms to improve classification accuracy. CONCLUSIONS/SIGNIFICANCE: From the experiments we confirm that CBFS outperforms state-of-the-art feature selection algorithms, including FeaLect. CBFS can be applied to microarray gene selection, text categorization, and image classification.

  14. The Use of Evolution in a Central Action Selection Model

    Directory of Open Access Journals (Sweden)

    F. Montes-Gonzalez

    2007-01-01

    Full Text Available The use of effective central selection provides flexibility in design by offering modularity and extensibility. In earlier papers we have focused on the development of a simple centralized selection mechanism. Our current goal is to integrate evolutionary methods in the design of non-sequential behaviours and the tuning of specific parameters of the selection model. The foraging behaviour of an animal robot (animat) has been modelled in order to integrate the sensory information from the robot to perform selection that is nearly optimized by the use of genetic algorithms. In this paper we present how selection through optimization finally arranges the pattern of presented behaviours for the foraging task. Hence, the execution of specific parts in a behavioural pattern may be ruled out by the tuning of these parameters. Furthermore, the intensive use of colour segmentation from a colour camera for locating a cylinder places a burden on the calculations carried out by the genetic algorithm.

  15. A Hybrid Multiple Criteria Decision Making Model for Supplier Selection

    Directory of Open Access Journals (Sweden)

    Chung-Min Wu

    2013-01-01

    Full Text Available Sustainable supplier selection is a vital part of the management of a sustainable supply chain. In this study, a hybrid multiple criteria decision making (MCDM) model is applied to select the optimal supplier. The fuzzy Delphi method, which can lead to better criteria selection, is used to modify the criteria. Considering the interdependence among the selection criteria, the analytic network process (ANP) is then used to obtain their weights. To avoid the calculation and additional pairwise comparisons of ANP, a technique for order preference by similarity to ideal solution (TOPSIS) is used to rank the alternatives. The use of a combination of the fuzzy Delphi method, ANP, and TOPSIS, proposing an MCDM model for supplier selection, and applying these to a real case are the unique features of this study.
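
    A minimal TOPSIS sketch for the ranking stage described above; the supplier names, decision matrix, and ANP-derived weights are invented, and all criteria are treated as benefit criteria for simplicity.

```python
import numpy as np

# Rows: suppliers; columns: criteria (all benefit-type here for simplicity)
X = np.array([[7.0, 9.0, 6.0],
              [8.0, 7.0, 8.0],
              [9.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])          # criterion weights, e.g. from ANP

V = w * X / np.linalg.norm(X, axis=0)  # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)

d_plus = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_minus = np.linalg.norm(V - anti, axis=1)   # distance to the anti-ideal solution
closeness = d_minus / (d_plus + d_minus)     # relative closeness coefficient

for i, c in enumerate(closeness):
    print(f"supplier {i + 1}: closeness = {c:.3f}")
```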

  16. Evaluation and comparison of alternative fleet-level selective maintenance models

    International Nuclear Information System (INIS)

    Schneider, Kellie; Richard Cassady, C.

    2015-01-01

    Fleet-level selective maintenance refers to the process of identifying the subset of maintenance actions to perform on a fleet of repairable systems when the maintenance resources allocated to the fleet are insufficient for performing all desirable maintenance actions. The original fleet-level selective maintenance model is designed to maximize the probability that all missions in a future set are completed successfully. We extend this model in several ways. First, we consider a cost-based optimization model and show that a special case of this model maximizes the expected value of the number of successful missions in the future set. We also consider the situation in which one or more of the future missions may be canceled. These models and the original fleet-level selective maintenance optimization models are nonlinear. Therefore, we also consider an alternative model in which the objective function can be linearized. We show that the alternative model is a good approximation to the other models. - Highlights: • Investigate nonlinear fleet-level selective maintenance optimization models. • A cost based model is used to maximize the expected number of successful missions. • Another model is allowed to cancel missions if reliability is sufficiently low. • An alternative model has an objective function that can be linearized. • We show that the alternative model is a good approximation to the other models

  17. Hyperopt: a Python library for model selection and hyperparameter optimization

    Science.gov (United States)

    Bergstra, James; Komer, Brent; Eliasmith, Chris; Yamins, Dan; Cox, David D.

    2015-01-01

    Sequential model-based optimization (also known as Bayesian optimization) is one of the most efficient methods (per function evaluation) of function minimization. This efficiency makes it appropriate for optimizing the hyperparameters of machine learning algorithms that are slow to train. The Hyperopt library provides algorithms and parallelization infrastructure for performing hyperparameter optimization (model selection) in Python. This paper presents an introductory tutorial on the usage of the Hyperopt library, including the description of search spaces, minimization (in serial and parallel), and the analysis of the results collected in the course of minimization. This paper also gives an overview of Hyperopt-Sklearn, a software project that provides automatic algorithm configuration of the Scikit-learn machine learning library. Following Auto-Weka, we take the view that the choice of classifier and even the choice of preprocessing module can be taken together to represent a single large hyperparameter optimization problem. We use Hyperopt to define a search space that encompasses many standard components (e.g. SVM, RF, KNN, PCA, TFIDF) and common patterns of composing them together. We demonstrate, using search algorithms in Hyperopt and standard benchmarking data sets (MNIST, 20-newsgroups, convex shapes), that searching this space is practical and effective. In particular, we improve on best-known scores for the model space for both MNIST and convex shapes. The paper closes with some discussion of ongoing and future work.
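
    A minimal usage sketch of the library's core `fmin` interface; the objective function and search space below are toy assumptions, and the Hyperopt-Sklearn layer described in the paper is not shown.

```python
from hyperopt import fmin, tpe, hp, STATUS_OK

def objective(params):
    # Toy loss: a quadratic bowl in one continuous and one categorical dimension
    x = params["x"]
    penalty = 0.0 if params["kernel"] == "rbf" else 0.5
    return {"loss": (x - 2.0) ** 2 + penalty, "status": STATUS_OK}

space = {
    "x": hp.uniform("x", -5.0, 5.0),
    "kernel": hp.choice("kernel", ["rbf", "linear"]),
}

# Tree-structured Parzen Estimator (TPE) search over the space
best = fmin(fn=objective, space=space, algo=tpe.suggest, max_evals=100)
print(best)   # e.g. {'kernel': 0, 'x': 2.01}
```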

  18. Variable selection in Logistic regression model with genetic algorithm.

    Science.gov (United States)

    Zhang, Zhongheng; Trevino, Victor; Hoseini, Sayed Shahabuddin; Belciug, Smaranda; Boopathi, Arumugam Manivanna; Zhang, Ping; Gorunescu, Florin; Subha, Velappan; Dai, Songshi

    2018-02-01

    Variable or feature selection is one of the most important steps in model specification. Especially in the case of medical decision making, the direct use of a medical database, without a previous analysis and preprocessing step, is often counterproductive. In this way, variable selection represents the method of choosing the most relevant attributes from the database in order to build robust learning models and, thus, to improve the performance of the models used in the decision process. In biomedical research, the purpose of variable selection is to select clinically important and statistically significant variables, while excluding unrelated or noise variables. A variety of methods exist for variable selection, but none of them is without limitations. For example, the stepwise approach, which is widely used, adds the best variable in each cycle, generally producing an acceptable set of variables. Nevertheless, it is limited by the fact that it is commonly trapped in local optima. The best subset approach can systematically search the entire covariate pattern space, but the solution pool can be extremely large with tens to hundreds of variables, which is the case in today's clinical data. Genetic algorithms (GA) are heuristic optimization approaches and can be used for variable selection in multivariable regression models. This tutorial paper aims to provide a step-by-step approach to the use of GA in variable selection. The R code provided in the text can be extended and adapted to other data analysis needs.
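
    The paper's R code is not reproduced here; as a hedged illustration of the general idea, the following compact genetic algorithm for variable selection is sketched in Python, assuming a feature matrix X and a binary outcome y:

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)

      def fitness(mask, X, y):
          # Fitness of a variable subset: cross-validated accuracy of a
          # logistic regression fitted on the selected columns only.
          if not mask.any():
              return 0.0
          clf = LogisticRegression(max_iter=1000)
          return cross_val_score(clf, X[:, mask], y, cv=3).mean()

      def ga_select(X, y, pop_size=20, generations=30, p_mut=0.05):
          n = X.shape[1]
          pop = rng.random((pop_size, n)) < 0.5          # random initial subsets
          for _ in range(generations):
              scores = np.array([fitness(ind, X, y) for ind in pop])
              order = np.argsort(scores)[::-1]
              parents = pop[order[: pop_size // 2]]       # truncation selection
              children = []
              for _ in range(pop_size - len(parents)):
                  a, b = parents[rng.integers(len(parents), size=2)]
                  cut = rng.integers(1, n)                # one-point crossover
                  child = np.concatenate([a[:cut], b[cut:]])
                  child ^= rng.random(n) < p_mut          # bit-flip mutation
                  children.append(child)
              pop = np.vstack([parents, children])
          scores = np.array([fitness(ind, X, y) for ind in pop])
          return pop[scores.argmax()]                     # best variable mask

      # usage: mask = ga_select(X, y); X_selected = X[:, mask]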

  19. Evaluating the influence of selected parameters on sensitivity of a numerical model of solidification

    OpenAIRE

    N. Sczygiol; R. Dyja

    2007-01-01

    The paper presents an evaluation of the influence of selected parameters on the sensitivity of a numerical model of solidification. The investigated model is based on the heat conduction equation with a heat source and is solved using the finite element method (FEM). The model is built with the use of an enthalpy formulation for solidification and an intermediate solid fraction growth model. The model sensitivity is studied with the use of the Morris method, which is one of the global sensitivity methods....
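
    The Morris method screens parameters via "elementary effects"; the following is a simplified one-at-a-time sketch (the full method uses structured trajectories over a grid) with a made-up stand-in for the model:

      import numpy as np

      rng = np.random.default_rng(1)

      def morris_screening(model, n_params, n_points=50, delta=0.1):
          effects = np.empty((n_points, n_params))
          for t in range(n_points):
              x = rng.uniform(0, 1 - delta, size=n_params)  # random base point
              fx = model(x)
              for i in range(n_params):
                  xp = x.copy()
                  xp[i] += delta
                  # Elementary effect: EE_i = (f(x + delta*e_i) - f(x)) / delta
                  effects[t, i] = (model(xp) - fx) / delta
          # mu* (mean |EE|) ranks importance; sigma flags nonlinearity/interaction
          return np.abs(effects).mean(axis=0), effects.std(axis=0)

      # Made-up stand-in for the solidification model's scalar output.
      f = lambda x: x[0] ** 2 + 3 * x[1] + 0.1 * x[1] * x[2]
      mu_star, sigma = morris_screening(f, n_params=3)
      print(mu_star, sigma)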

  20. A Robust Service Selection Method Based on Uncertain QoS

    Directory of Open Access Journals (Sweden)

    Yanping Chen

    2016-01-01

    Full Text Available Nowadays, the number of Web services on the Internet is quickly increasing. Meanwhile, different service providers offer numerous services with similar functions. Quality of Service (QoS) has become an important factor used to select the most appropriate service for users. The most prominent QoS-based service selection models treat the QoS attributes as certain, which is an idealized assumption. In the real world, there are a large number of uncertain factors. In particular, at runtime, QoS may become very poor or unacceptable. In order to solve this problem, a global service selection model based on uncertain QoS is proposed, including the corresponding normalization and aggregation functions; a robust optimization model is then adopted to transform the model. Experiment results show that the proposed method can effectively select services with high robustness and optimality.
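
    The paper's exact normalization and aggregation functions are not given here; a common min-max scheme over hypothetical QoS attributes illustrates the deterministic starting point that the robust model then hardens against uncertainty:

      import numpy as np

      # Rows: candidate services; columns: QoS attributes (illustrative values).
      qos = np.array([[120.0, 0.99, 5.0],     # response time, availability, cost
                      [ 80.0, 0.95, 9.0],
                      [100.0, 0.97, 7.0]])
      negative = np.array([True, False, True])  # True: smaller is better

      # Min-max normalization to [0, 1], flipping negative attributes.
      lo, hi = qos.min(axis=0), qos.max(axis=0)
      norm = (qos - lo) / (hi - lo)
      norm[:, negative] = 1.0 - norm[:, negative]

      weights = np.array([0.4, 0.4, 0.2])       # assumed user preferences
      utility = norm @ weights                   # weighted-sum aggregation
      print(utility.argmax())                    # index of the selected service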

  1. A BAYESIAN NONPARAMETRIC MIXTURE MODEL FOR SELECTING GENES AND GENE SUBNETWORKS.

    Science.gov (United States)

    Zhao, Yize; Kang, Jian; Yu, Tianwei

    2014-06-01

    It is very challenging to select informative features from tens of thousands of measured features in high-throughput data analysis. Recently, several parametric/regression models have been developed utilizing the gene network information to select genes or pathways strongly associated with a clinical/biological outcome. Alternatively, in this paper, we propose a nonparametric Bayesian model for gene selection incorporating network information. In addition to identifying genes that have a strong association with a clinical outcome, our model can select genes with particular expressional behavior, in which case the regression models are not directly applicable. We show that our proposed model is equivalent to an infinite mixture model for which we develop a posterior computation algorithm based on Markov chain Monte Carlo (MCMC) methods. We also propose two fast computing algorithms that approximate the posterior simulation with good accuracy but relatively low computational cost. We illustrate our methods on simulation studies and the analysis of Spellman yeast cell cycle microarray data.

  2. Multicriteria decision group model for the selection of suppliers

    Directory of Open Access Journals (Sweden)

    Luciana Hazin Alencar

    2008-08-01

    Full Text Available Several authors have been studying group decision making over the years, which indicates how relevant it is. This paper presents a multicriteria group decision model based on the ELECTRE IV and VIP Analysis methods, for those cases where there is great divergence among the decision makers. The model includes two stages. In the first, the ELECTRE IV method is applied and a collective criteria ranking is obtained. In the second, using the criteria ranking, VIP Analysis is applied and the alternatives are selected. To illustrate the model, a numerical application in the context of the selection of suppliers in project management is used. The suppliers that form part of the project team have a crucial role in project management. They are involved in a network of connected activities that can jeopardize the success of the project if they are not undertaken in an appropriate way. The question tackled is how to select service suppliers for a project, on behalf of an enterprise, in a way that meets the multiple objectives of the decision-makers.

  3. Efficient Multi-Label Feature Selection Using Entropy-Based Label Selection

    Directory of Open Access Journals (Sweden)

    Jaesung Lee

    2016-11-01

    Full Text Available Multi-label feature selection is designed to select a subset of features according to their importance to multiple labels. This task can be achieved by ranking the dependencies of features and selecting the features with the highest rankings. In a multi-label feature selection problem, the algorithm may be faced with a dataset containing a large number of labels. Because the computational cost of multi-label feature selection increases according to the number of labels, the algorithm may suffer from a degradation in performance when processing very large datasets. In this study, we propose an efficient multi-label feature selection method based on an information-theoretic label selection strategy. By identifying a subset of labels that significantly influence the importance of features, the proposed method efficiently outputs a feature subset. Experimental results demonstrate that the proposed method can identify a feature subset much faster than conventional multi-label feature selection methods for large multi-label datasets.
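
    A rough illustration of information-theoretic feature scoring with a label subset (the paper's actual label selection strategy is more involved); the data here are synthetic, and the "influential" labels are crudely approximated by the most frequent ones:

      import numpy as np
      from sklearn.datasets import make_multilabel_classification
      from sklearn.feature_selection import mutual_info_classif

      # Synthetic multi-label data: X (features) and Y (binary label matrix).
      X, Y = make_multilabel_classification(n_samples=500, n_features=30,
                                            n_labels=3, random_state=0)

      # Stand-in for the label selection step: score each feature by its
      # summed mutual information with a small subset of labels.
      label_subset = Y.sum(axis=0).argsort()[::-1][:3]   # most frequent labels
      scores = sum(mutual_info_classif(X, Y[:, j], random_state=0)
                   for j in label_subset)
      top_features = np.argsort(scores)[::-1][:10]       # ten best-ranked features
      print(top_features)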

  4. Development of an Environment for Software Reliability Model Selection

    Science.gov (United States)

    1992-09-01

    now is directed to other related problems such as tools for model selection, multiversion programming, and software fault tolerance modeling... Hardware can be repaired by spare modules, which is not the case for software. Preventive maintenance is very important

  5. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  6. Selection Bias in Educational Transition Models: Theory and Empirical Evidence

    DEFF Research Database (Denmark)

    Holm, Anders; Jæger, Mads

    Most studies using Mare’s (1980, 1981) seminal model of educational transitions find that the effect of family background decreases across transitions. Recently, Cameron and Heckman (1998, 2001) have argued that the “waning coefficients” in the Mare model are driven by selection on unobserved...... the United States, United Kingdom, Denmark, and the Netherlands shows that when we take selection into account the effect of family background variables on educational transitions is largely constant across transitions. We also discuss several difficulties in estimating educational transition models which...

  7. Unraveling the sub-processes of selective attention: insights from dynamic modeling and continuous behavior.

    Science.gov (United States)

    Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan

    2015-11-01

    Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.

  8. Information Theory based Feature Selection for Customer Classification

    OpenAIRE

    Barraza, Néstor Rubén; Moro, Sergio; Ferreyra, Marcelo; de la Peña, Adolfo

    2016-01-01

    The application of Information Theory techniques to customer feature selection is analyzed. This method, usually called information gain, has been demonstrated to be simple and fast for feature selection. The important concept of mutual information, originally introduced to analyze and model a noisy channel, is used in order to measure relations between characteristics of given customers. An application to a bank customer data set of telemarketing calls for selling bank long-term deposits is s...

  9. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  10. National HIV prevalence estimates for sub-Saharan Africa: controlling selection bias with Heckman-type selection models

    Science.gov (United States)

    Hogan, Daniel R; Salomon, Joshua A; Canning, David; Hammitt, James K; Zaslavsky, Alan M; Bärnighausen, Till

    2012-01-01

    Objectives Population-based HIV testing surveys have become central to deriving estimates of national HIV prevalence in sub-Saharan Africa. However, limited participation in these surveys can lead to selection bias. We control for selection bias in national HIV prevalence estimates using a novel approach, which unlike conventional imputation can account for selection on unobserved factors. Methods For 12 Demographic and Health Surveys conducted from 2001 to 2009 (N=138 300), we predict HIV status among those missing a valid HIV test with Heckman-type selection models, which allow for correlation between infection status and participation in survey HIV testing. We compare these estimates with conventional ones and introduce a simulation procedure that incorporates regression model parameter uncertainty into confidence intervals. Results Selection model point estimates of national HIV prevalence were greater than unadjusted estimates for 10 of 12 surveys for men and 11 of 12 surveys for women, and were also greater than the majority of estimates obtained from conventional imputation, with significantly higher HIV prevalence estimates for men in Cote d'Ivoire 2005, Mali 2006 and Zambia 2007. Accounting for selective non-participation yielded 95% confidence intervals around HIV prevalence estimates that are wider than those obtained with conventional imputation by an average factor of 4.5. Conclusions Our analysis indicates that national HIV prevalence estimates for many countries in sub-Saharan Africa are more uncertain than previously thought, and may be underestimated in several cases, underscoring the need for increasing participation in HIV surveys. Heckman-type selection models should be included in the set of tools used for routine estimation of HIV prevalence. PMID:23172342
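
    For readers unfamiliar with the mechanics, a bare-bones Heckman two-step sketch follows (probit selection equation; inverse Mills ratio carried into the outcome equation). The inputs are hypothetical numpy arrays, Z should include an exclusion restriction, and the paper's own models additionally handle a binary outcome and simulation-based intervals:

      import numpy as np
      import statsmodels.api as sm
      from scipy.stats import norm

      # Assumed inputs: Z (selection covariates incl. an exclusion restriction),
      # X (outcome covariates), s (1 if tested), y (outcome, observed when s == 1).
      def heckman_two_step(y, X, s, Z):
          Z, X = sm.add_constant(Z), sm.add_constant(X)
          # Step 1: probit for participation in HIV testing.
          probit = sm.Probit(s, Z).fit(disp=0)
          zb = Z @ probit.params
          imr = norm.pdf(zb) / norm.cdf(zb)         # inverse Mills ratio
          # Step 2: outcome equation on the tested subsample, augmented with IMR.
          sel = s == 1
          X_aug = np.column_stack([X[sel], imr[sel]])
          return sm.OLS(y[sel], X_aug).fit()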

  11. Selection of climate change scenario data for impact modelling

    DEFF Research Database (Denmark)

    Sloth Madsen, M; Fox Maule, C; MacKellar, N

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study...... illustrates how the projected climate change signal of important variables as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make...... the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented...

  12. Adverse Selection Models with Three States of Nature

    Directory of Open Access Journals (Sweden)

    Daniela MARINESCU

    2011-02-01

    Full Text Available In the paper we analyze an adverse selection model with three states of nature, where both the Principal and the Agent are risk neutral. When solving the model, we use the informational rents and the efforts as variables. We derive the optimal contract in the situation of asymmetric information. The paper ends with the characteristics of the optimal contract and the main conclusions of the model.

  13. A SUPPLIER SELECTION MODEL FOR SOFTWARE DEVELOPMENT OUTSOURCING

    Directory of Open Access Journals (Sweden)

    Hancu Lucian-Viorel

    2010-12-01

    Full Text Available This paper presents a multi-criteria decision making model used for supplier selection for software development outsourcing on e-marketplaces. This model can be used in auctions. The supplier selection process has become complex and difficult over the last twenty years, since the Internet plays an important role in business management. Companies have to concentrate their efforts on their core activities, and other activities should be realized by outsourcing. They can achieve significant cost reductions by using e-marketplaces in their purchase process and by using decision support systems for supplier selection. Many approaches for the supplier evaluation and selection process have been proposed in the literature. The performance of potential suppliers is evaluated using multi-criteria decision making methods rather than considering a single factor such as cost.

  14. Modeling quality attributes and metrics for web service selection

    Science.gov (United States)

    Oskooei, Meysam Ahmadi; Daud, Salwani binti Mohd; Chua, Fang-Fang

    2014-06-01

    Since the service-oriented architecture (SOA) has been designed to develop systems as distributed applications, service selection has become a vital aspect of service-oriented computing (SOC). Selecting the appropriate web service with respect to quality of service (QoS), by mathematically optimizing the problem, has turned service selection into a common concern for service users. Nowadays, the number of web services that provide the same functionality has increased, and the selection of services from a set of alternatives which differ in quality parameters can be difficult for service consumers. In this paper, a new model for QoS attributes and metrics is proposed to provide a suitable solution for optimizing web service selection and composition with low complexity.

  15. Supplier selection based on multi-criterial AHP method

    Directory of Open Access Journals (Sweden)

    Jana Pócsová

    2010-03-01

    Full Text Available This paper describes a case study of supplier selection based on the multi-criteria Analytic Hierarchy Process (AHP) method. It is demonstrated that using an adequate mathematical method can bring an “unprejudiced” conclusion, even if the alternatives (supplier companies) are very similar in the given selection criteria. The result is the best possible supplier company from the viewpoint of the chosen criteria and the price of the product.
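
    AHP derives criteria weights as the principal eigenvector of a reciprocal pairwise comparison matrix; a small numpy sketch with an illustrative matrix and Saaty's consistency check:

      import numpy as np

      # Illustrative pairwise comparison matrix (Saaty's 1-9 scale, reciprocal).
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = eigvals.real.argmax()                       # principal eigenvalue
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                                    # normalized priority weights

      # Consistency index: CI = (lambda_max - n) / (n - 1); CR = CI / RI.
      n = A.shape[0]
      ci = (eigvals.real.max() - n) / (n - 1)
      cr = ci / 0.58                                  # RI = 0.58 for n = 3
      print(w, cr)                                    # accept if CR < 0.1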

  16. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

    OpenAIRE

    ISLAM, M. R.; KIM, J.

    2009-01-01

    An energy efficient cooperative technique is proposed for the IEEE 1451 based Wireless Sensor Networks. Selected numbers of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected with a Network Capable Application Processor (NCAP). Energy efficiency and delay of the proposed architecture are derived for different combination of cluster size and selected number of WTIMs. Optimized constellation parameters are used for...

  17. [On selection criteria in spatially distributed models of competition].

    Science.gov (United States)

    Il'ichev, V G; Il'icheva, O A

    2014-01-01

    Discrete models of competitors (an initial population and mutants) are considered, in which reproduction is governed by an increasing and concave function, and migration in a space consisting of a set of areas is described by a Markov matrix. This allows the use of the theory of monotone operators to study problems of selection, coexistence and stability. It is shown that the higher the number of areas, the more severe the constraints on selective advantage required by the initial population.

  18. Comparing the staffing models of outsourcing in selected companies

    OpenAIRE

    Chaloupková, Věra

    2010-01-01

    This thesis deals with the problems of the takeover of employees in outsourcing. Its principal purpose is to compare the staffing models of outsourcing in selected companies. For the comparison of the selected companies, multi-criteria analysis was chosen. The thesis is divided into six chapters. The first chapter is devoted to the theoretical part and describes basic concepts such as outsourcing, personnel aspects, phases of outsourcing projects, communication and culture. The rest of the thesis is devoted...

  19. Construction Tender Subcontract Selection using Case-based Reasoning

    Directory of Open Access Journals (Sweden)

    Due Luu

    2012-11-01

    Full Text Available Obtaining competitive quotations from suitably qualified subcontractors at tender time can significantly increase the chance of winning a construction project. Amidst an increasingly growing trend towards subcontracting in Australia, selecting appropriate subcontractors for a construction project can be a daunting task requiring the analysis of complex and dynamic criteria such as past performance, suitable experience, track record of competitive pricing, financial stability and so on. Subcontractor selection is plagued with uncertainty and vagueness, and these conditions are difficult to represent in generalised sets of rules. Decisions pertaining to the selection of subcontractors at tender time are usually based on the intuition and past experience of construction estimators. Case-based reasoning (CBR) may be an appropriate method of addressing the challenges of selecting subcontractors because CBR is able to harness the experiential knowledge of practitioners. This paper reviews the practicality and suitability of a CBR approach for subcontractor tender selection through the development of a prototype CBR procurement advisory system. In this system, subcontractor selection cases are represented by a set of attributes elicited from experienced construction estimators. The results indicate that CBR can enhance the appropriateness of the selection of subcontractors for construction projects.

  20. Frequency selective bistable switching in metamaterial based photonic bandgap medium

    Science.gov (United States)

    Jose, Jolly

    2014-10-01

    We present frequency selective bistable response at the defect mode of the zero-nbar bandgap of a photonic bandgap (PBG) material made of negative and positive index media. The nonlinear (Kerr) layer acts as the defect layer in the periodic PBG material. Incorporating metamaterial based electromagnetically induced transparency (EIT) like resonance in the positive layer leads to unprecedented line narrowing of the defect mode which in turn facilitates narrow frequency selective bistable operation, wherein all the bistable characteristics can be effectively engineered. Thresholding the output intensity selects the narrow band of frequencies that exhibit bistability.

  1. Economic assessment model architecture for AGC/AVLIS selection

    International Nuclear Information System (INIS)

    Hoglund, R.L.

    1984-01-01

    The economic assessment model architecture described provides the flexibility and completeness in economic analysis that the selection between AGC and AVLIS demands. Process models which are technology-specific will provide the first-order responses of process performance and cost to variations in process parameters. The economics models can be used to test the impacts of alternative deployment scenarios for a technology. Enterprise models provide global figures of merit for evaluating the DOE perspective on the uranium enrichment enterprise, and business analysis models compute the financial parameters from the private investor's viewpoint

  2. Selective kainate receptor (GluK1) ligands structurally based upon 1H-Cyclopentapyrimidin-2,4(1H,3H)-dione: synthesis, molecular modeling, and pharmacological and biostructural characterization

    DEFF Research Database (Denmark)

    Venskutonyte, Raminta; Butini, Stefania; Coccone, Salvatore Sanna

    2011-01-01

    The physiological function of kainate receptors (GluK1- GluK5) in the central nervous system is not fully understood yet. With the aim of developing potent and selective GluK1 ligands, we have synthesized a series of new thiophene-based GluK1 agonists (6a-c) and antagonists (7a-d). Pharmacological...

  3. Lightweight Graphical Models for Selectivity Estimation Without Independence Assumptions

    DEFF Research Database (Denmark)

    Tzoumas, Kostas; Deshpande, Amol; Jensen, Christian S.

    2011-01-01

    , propagated exponentially, can lead to severely sub-optimal plans. Modern optimizers typically maintain one-dimensional statistical summaries and make the attribute value independence and join uniformity assumptions for efficiently estimating selectivities. Therefore, selectivity estimation errors in today’s optimizers are frequently caused by missed correlations between attributes. We present a selectivity estimation approach that does not make the independence assumptions. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution of all...

  4. Do Culture-based Segments Predict Selection of Market Strategy?

    OpenAIRE

    Veronika Jadczaková

    2015-01-01

    Academics and practitioners have already acknowledged the importance of unobservable segmentation bases (such as psychographics), yet they still focus on how well these bases are capable of describing relevant segments (the identifiability criterion) rather than on how precisely these segments can predict (the predictability criterion). Therefore, this paper intends to add to the debate on this topic by exploring whether culture-based segments do account for a selection of market strategy. To do so,...

  5. Development of a thermodynamic data base for selected heavy metals; Entwicklung einer thermodynamischen Datenbasis fuer ausgewaehlte Schwermetalle

    Energy Technology Data Exchange (ETDEWEB)

    Hageman, Sven; Scharge, Tina; Willms, Thomas

    2015-07-15

    The report on the development of a thermodynamic data base for selected heavy metals covers the description of experimental methods, the thermodynamic model for chromate, the thermodynamic model for dichromate, the thermodynamic model for manganese (II), the thermodynamic model for cobalt, the thermodynamic model for nickel, the thermodynamic model for copper (I), the thermodynamic model for copper (II), the thermodynamic model for mercury (0) and mercury (I), the thermodynamic model for mercury (II), and the thermodynamic model for arsenate.

  6. Ecohydrological model parameter selection for stream health evaluation.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Ross, Dennis M; Zhang, Zhen; Wang, Lizhu; Esfahanian, Abdol-Hossein

    2015-04-01

    Variable selection is a critical step in the development of empirical stream health prediction models. This study develops a framework for selecting important in-stream variables to predict four measures of biological integrity: total number of Ephemeroptera, Plecoptera, and Trichoptera (EPT) taxa, family index of biotic integrity (FIBI), Hilsenhoff biotic integrity (HBI), and fish index of biotic integrity (IBI). Over 200 flow regime and water quality variables were calculated using the Hydrologic Index Tool (HIT) and Soil and Water Assessment Tool (SWAT). Streams of the River Raisin watershed in Michigan were grouped using the Strahler stream classification system (orders 1-3 and orders 4-6), the k-means clustering technique (two clusters: C1 and C2), and all streams (one grouping). For each grouping, variable selection was performed using Bayesian variable selection, principal component analysis, and Spearman's rank correlation. Following selection of the best variable sets, models were developed to predict the measures of biological integrity using adaptive neuro-fuzzy inference systems (ANFIS), a technique well-suited to complex, nonlinear ecological problems. Multiple unique variable sets were identified, all of which differed by selection method and stream grouping. Final best models were mostly built using the Bayesian variable selection method. The most effective stream grouping method varied by health measure, although k-means clustering and grouping by stream order were always superior to models built without grouping. Commonly selected variables were related to streamflow magnitude, rate of change, and seasonal nitrate concentration. Each best model was effective in simulating stream health observations, with EPT taxa validation R2 ranging from 0.67 to 0.92, FIBI ranging from 0.49 to 0.85, HBI from 0.56 to 0.75, and fish IBI at 0.99 for all best models. The comprehensive variable selection and modeling process proposed here is a robust method that extends our

  7. Financial applications of a Tabu search variable selection model

    Directory of Open Access Journals (Sweden)

    Zvi Drezner

    2001-01-01

    Full Text Available We illustrate how a comparatively new technique, a Tabu search variable selection model [Drezner, Marcoulides and Salhi (1999)], can be applied efficiently within finance when the researcher must select a subset of variables from among the whole set of explanatory variables under consideration. Several types of problems in finance, including corporate and personal bankruptcy prediction, mortgage and credit scoring, and the selection of variables for the Arbitrage Pricing Model, require the researcher to select a subset of variables from a larger set. In order to demonstrate the usefulness of the Tabu search variable selection model, we: (1) illustrate its efficiency in comparison to the main alternative search procedures, such as stepwise regression and the Maximum R2 procedure, and (2) show how a version of the Tabu search procedure may be implemented when attempting to predict corporate bankruptcy. We accomplish (2) by indicating that a Tabu search procedure increases the predictability of corporate bankruptcy by up to 10 percentage points in comparison to Altman's (1968) Z-Score model.
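
    A compact sketch of tabu search for variable selection in a linear model (illustrative only, not the cited procedure), assuming a predictor matrix X and response y:

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      def tabu_select(X, y, iters=100, tenure=7):
          n = X.shape[1]
          rng = np.random.default_rng(0)
          cur = rng.random(n) < 0.5                 # random starting subset

          def score(mask):
              if not mask.any():
                  return -np.inf
              return cross_val_score(LinearRegression(), X[:, mask], y, cv=3).mean()

          best, best_s = cur.copy(), score(cur)
          tabu = {}                                  # variable -> release iteration
          for it in range(iters):
              # Best non-tabu single-variable flip (aspiration: allow if new best).
              cand_s, cand_j = -np.inf, None
              for j in range(n):
                  nb = cur.copy()
                  nb[j] = ~nb[j]
                  s = score(nb)
                  if (tabu.get(j, -1) < it or s > best_s) and s > cand_s:
                      cand_s, cand_j = s, j
              if cand_j is None:
                  break
              cur[cand_j] = ~cur[cand_j]
              tabu[cand_j] = it + tenure             # forbid re-flipping for a while
              if cand_s > best_s:
                  best, best_s = cur.copy(), cand_s
          return best                                # boolean mask of variables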

  8. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases......, the classifier is trained on each cluster having reduced dimensionality and less number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....

  9. The Properties of Model Selection when Retaining Theory Variables

    DEFF Research Database (Denmark)

    Hendry, David F.; Johansen, Søren

    Economic theories are often fitted directly to data to avoid possible model selection biases. We show that embedding a theory model that specifies the correct set of m relevant exogenous variables, x_t, within the larger set of m+k candidate variables, (x_t, w_t), then selection over the second...... set by their statistical significance can be undertaken without affecting the estimator distribution of the theory parameters. This strategy returns the theory-parameter estimates when the theory is correct, yet protects against the theory being under-specified because some w_t are relevant....

  10. BIM-Based Decision Support System for Material Selection Based on Supplier Rating

    Directory of Open Access Journals (Sweden)

    Abiola Akanmu

    2015-12-01

    Full Text Available Material selection is a delicate process, typically hinging on a number of factors which can be either cost- or environment-related. This process becomes more complicated when designers are faced with several material options for building elements, and each option can be supplied by different suppliers whose selection criteria may affect the budgetary and environmental requirements of the project. This paper presents the development of a decision support system based on the integration of building information models, a modified harmony search algorithm and supplier performance rating. The system is capable of producing the cost and environmental implications of different material combinations or building designs. A case study is presented to illustrate the functionality of the developed system.

  11. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...... infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...

  12. Parameter subset selection for the dynamic calibration of activated sludge models (ASMs): experience versus systems analysis

    DEFF Research Database (Denmark)

    Ruano, MV; Ribes, J; de Pauw, DJW

    2007-01-01

    In this work we address the issue of parameter subset selection within the scope of activated sludge model calibration. To this end, we evaluate two approaches: (i) systems analysis and (ii) experience-based approach. The evaluation has been carried out using a dynamic model (ASM2d) calibrated...

  13. Testing the normality assumption in the sample selection model with an application to travel demand

    NARCIS (Netherlands)

    van der Klaauw, B.; Koning, R.H.

    2003-01-01

    In this article we introduce a test for the normality assumption in the sample selection model. The test is based on a flexible parametric specification of the density function of the error terms in the model. This specification follows a Hermite series with bivariate normality as a special case.

  15. Salmon welfare index model 2.0: an extended model for overall welfare assessment of caged Atlantic salmon, based on a review of selected welfare indicators and intended for fish health professionals

    NARCIS (Netherlands)

    Pettersen, J.M.; Bracke, M.B.M.; Midtlyng, P.J.; Folkedal, O.; Stien, L.H.; Steffenak, H.; Kristiansen, T.S.

    2014-01-01

    Here, we present an extended version of a semantic model for overall welfare assessment of Atlantic salmon reared in sea cages. The model, called SWIM 2.0, is designed to enable fish health professionals to make a formal and standardized assessment of fish welfare using a set of reviewed welfare

  16. Game Theory-based Channel Selection for LTE-U

    OpenAIRE

    Ciccarelli, Enrico

    2016-01-01

    The project intends to analyse the performance of game theory-based channel selection in LTE-U. The main topic of this thesis project is the study of a channel selection strategy for LTE-U based on game theory. The method consists of a repeated game where each small cell is a player whose purpose is to find the best channel on which to set up the LTE-U carrier, using the ITEL-BA algorithm to make the system converge to a Nash equilibrium state. The aim is to evaluate...

  17. Performance-Based Technology Selection Filter description report

    International Nuclear Information System (INIS)

    O'Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL)

  19. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number...

  20. Risk prediction models for selection of lung cancer screening candidates: A retrospective validation study

    NARCIS (Netherlands)

    K. ten Haaf (Kevin); J. Jeon (Jihyoun); M.C. Tammemagi (Martin); S.S. Han (Summer); C.Y. Kong (Chung Yin); S.K. Plevritis (Sylvia); E. Feuer (Eric); H.J. de Koning (Harry); E.W. Steyerberg (Ewout W.); R. Meza (Rafael)

    2017-01-01

    Background: Selection of candidates for lung cancer screening based on individual risk has been proposed as an alternative to criteria based on age and cumulative smoking exposure (pack-years). Nine previously established risk models were assessed for their ability to identify those most

  1. Linear regression-based feature selection for microarray data classification.

    Science.gov (United States)

    Abid Hasan, Md; Hasan, Md Kamrul; Abdul Mottalib, M

    2015-01-01

    Predicting the class of gene expression profiles helps improve the diagnosis and treatment of diseases. Analysing huge gene expression data, otherwise known as microarray data, is complicated due to its high dimensionality. Hence traditional classifiers do not perform well when the number of features far exceeds the number of samples. A good set of features helps classifiers to classify the dataset efficiently. Moreover, a manageable set of features is also desirable for the biologist for further analysis. In this paper, we have proposed a linear regression-based feature selection method for selecting discriminative features. Our main focus is to classify the dataset more accurately using fewer features than other traditional feature selection methods. Our method has been compared with several other methods, and in almost every case the classification accuracy is higher while using fewer features than the other popular feature selection methods.

  2. QCM-based aptamer selection and detection of Salmonella typhimurium.

    Science.gov (United States)

    Wang, Lijun; Wang, Ronghui; Chen, Fang; Jiang, Tieshan; Wang, Hong; Slavik, Michael; Wei, Hua; Li, Yanbin

    2017-04-15

    In this study, quartz crystal microbalance (QCM) was used to select aptamers against Salmonella typhimurium. To increase the success rate of Systematic Evolution of Ligands by EXponential enrichment (SELEX), the affinity of the DNA pool in each round was simultaneously tracked using QCM in order to avoid the loss of high-quality aptamers. When the frequency change reached a maximum value after several rounds of selection and counter-selection, the candidate pool was cloned and sequenced. Out of three aptamer candidates, aptamer B5 showed high specificity and binding affinity, with a dissociation constant (Kd value) of 58.5 nM, and was chosen for further studies. Subsequently, a QCM-based aptasensor was developed to detect S. typhimurium. This aptasensor was able to detect 10^3 CFU/mL of S. typhimurium in less than 1 h. This study demonstrated that QCM-based selection can enable more effective selection of aptamers and that a QCM-based aptasensor can be more sensitive in detecting S. typhimurium. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. A robust multi-objective global supplier selection model under currency fluctuation and price discount

    Science.gov (United States)

    Zarindast, Atousa; Seyed Hosseini, Seyed Mohamad; Pishvaee, Mir Saman

    2017-11-01

    A robust supplier selection problem is proposed in a scenario-based approach, where the demand and exchange rates are subject to uncertainty. First, a deterministic multi-objective mixed integer linear program is developed; then, the robust counterpart of the proposed mixed integer linear program is presented using recent extensions in robust optimization theory. We discuss the decision variables, respectively, under a two-stage stochastic planning model, a robust stochastic optimization planning model that integrates the worst-case scenario into the modeling approach, and finally an equivalent deterministic planning model. An experimental study is carried out to compare the performances of the three models. The robust model resulted in remarkable cost savings, illustrating that to cope with such uncertainties, we should consider them in advance in our planning. In our case study, different suppliers were selected due to these uncertainties, and since supplier selection is a strategic decision, it is crucial to consider these uncertainties in the planning approach.

  4. Fuzzy decision-making: a new method in model selection via various validity criteria

    International Nuclear Information System (INIS)

    Shakouri Ganjavi, H.; Nikravesh, K.

    2001-01-01

    Modeling is considered the first step in scientific investigations. Several alternative models may be candidates for expressing a phenomenon. Scientists use various criteria to select one model from among the competing models. Based on the solution of a fuzzy decision-making problem, this paper proposes a new method of model selection. The method enables the scientist to apply all desired validity criteria systematically, by defining a proper possibility distribution function for each criterion. Finally, minimization of a utility function composed of the possibility distribution functions determines the best selection. The method is illustrated through a modeling example for the Average Daily Time Duration of Electrical Energy Consumption in Iran.

  5. Decision support model for selecting and evaluating suppliers in the construction industry

    Directory of Open Access Journals (Sweden)

    Fernando Schramm

    2012-12-01

    Full Text Available A structured evaluation of the construction industry's suppliers, considering aspects which make their quality and credibility evident, can be a strategic tool for managing this specific supply chain. This study proposes a multi-criteria decision model for supplier selection in the construction industry, as well as an efficient evaluation procedure for the selected suppliers. The model is based on the SMARTER (Simple Multi-Attribute Rating Technique Exploiting Ranking) method, and its main contribution is a new approach to structuring the process of supplier selection, establishing explicit strategic policies on which the company management system relies to make the supplier selection. The model was applied to a civil construction company in Brazil, and the main results demonstrate the efficiency of the proposed model. This study allowed the development of an approach for the construction industry which was able to provide a better relationship among its managers, suppliers and partners.

  6. Selection-based virtual keyboard prototypes and data collection application.

    Science.gov (United States)

    Millet, Barbara; Asfour, Shihab; Lewis, James R

    2009-08-01

    An emerging area of research in engineering psychology is the evaluation of text entry for mobile devices using a small number of keys for the control of cursor direction and character selection from a matrix of characters (i.e., selection-based data entry). The present article describes a software tool designed to reduce time and effort in the development of prototypes of alternative selection-based text-entry schemes and their empirical evaluation. The tool, available for distribution to researchers, educators, and students, uses Action Script code compiled into an executable file that has an embedded Adobe Flash Player and is compatible with most operating systems (including Microsoft Windows, Apple OSX, and Linux).

  7. Attribute based selection of thermoplastic resin for vacuum infusion process

    DEFF Research Database (Denmark)

    Prabhakaran, R.T. Durai; Lystrup, Aage; Løgstrup Andersen, Tom

    2011-01-01

    The composite industry looks toward new material systems (resins) based on thermoplastic polymers for the vacuum infusion process, similar to the infusion process using thermosetting polymers. A large number of thermoplastics are available on the market with a variety of properties suitable for different engineering applications, and a few of those are available in a not-yet-polymerised form suitable for resin infusion. The proper selection of a new resin system among these thermoplastic polymers is a concern for manufacturers in the current scenario, and a special mathematical tool would... be beneficial. In this paper, the authors introduce a new decision making tool for resin selection based on significant attributes. This article provides a broad overview of suitable thermoplastic material systems for the vacuum infusion process available in today's market. An illustrative example—resin selection...

  8. Evidence accumulation as a model for lexical selection.

    Science.gov (United States)

    Anders, R; Riès, S; van Maanen, L; Alario, F X

    2015-11-01

    We propose and demonstrate evidence accumulation as a plausible theoretical and/or empirical model for the lexical selection process of lexical retrieval. A number of current psycholinguistic theories consider lexical selection as a process related to selecting a lexical target from a number of alternatives, which each have varying activations (or signal supports), that are largely resultant of an initial stimulus recognition. We thoroughly present a case for how such a process may be theoretically explained by the evidence accumulation paradigm, and we demonstrate how this paradigm can be directly related or combined with conventional psycholinguistic theory and their simulatory instantiations (generally, neural network models). Then with a demonstrative application on a large new real data set, we establish how the empirical evidence accumulation approach is able to provide parameter results that are informative to leading psycholinguistic theory, and that motivate future theoretical development. Copyright © 2015 Elsevier Inc. All rights reserved.

  9. Integrated model for supplier selection and performance evaluation

    Directory of Open Access Journals (Sweden)

    Borges de Araújo, Maria Creuza

    2015-08-01

    Full Text Available This paper puts forward a model for selecting suppliers and evaluating the performance of those already working with a company. A simulation was conducted in a food industry. This sector has high significance in the economy of Brazil. The model enables the phases of selecting and evaluating suppliers to be integrated. This is important so that a company can have partnerships with suppliers who are able to meet their needs. Additionally, a group method is used to enable managers who will be affected by this decision to take part in the selection stage. Finally, the classes resulting from the performance evaluation are shown to support the contractor in choosing the most appropriate relationship with its suppliers.

  10. The Selection of ARIMA Models with or without Regressors

    DEFF Research Database (Denmark)

    Johansen, Søren; Riani, Marco; Atkinson, Anthony C.

    We develop a $C_{p}$ statistic for the selection of regression models with stationary and nonstationary ARIMA error term. We derive the asymptotic theory of the maximum likelihood estimators and show they are consistent and asymptotically Gaussian. We also prove that the distribution of the sum o...

  11. Selecting candidate predictor variables for the modelling of post ...

    African Journals Online (AJOL)

    Selecting candidate predictor variables for the modelling of post-discharge mortality from sepsis: a protocol development project. Afri. Health Sci. .... Initial list of candidate predictor variables (N=17), grouped as clinical (vital signs: HR, RR, BP, T; oxygen saturation), laboratory (hemoglobin, blood culture) and social/demographic (age, sex) variables.

  12. Multivariate time series modeling of selected childhood diseases in ...

    African Journals Online (AJOL)

    This paper is focused on modeling the five most prevalent childhood diseases in Akwa Ibom State using a multivariate approach to time series. An aggregate of 78,839 reported cases of malaria, upper respiratory tract infection (URTI), Pneumonia, anaemia and tetanus were extracted from five randomly selected hospitals in ...

  13. Automatic Peak Selection by a Benjamini-Hochberg-Based Algorithm

    KAUST Repository

    Abbas, Ahmed

    2013-01-07

    A common issue in bioinformatics is that computational methods often generate a large number of predictions sorted according to certain confidence scores. A key problem is then determining how many predictions must be selected to include most of the true predictions while maintaining reasonably high precision. In nuclear magnetic resonance (NMR)-based protein structure determination, for instance, computational peak picking methods are becoming more and more common, although expert-knowledge remains the method of choice to determine how many peaks among thousands of candidate peaks should be taken into consideration to capture the true peaks. Here, we propose a Benjamini-Hochberg (B-H)-based approach that automatically selects the number of peaks. We formulate the peak selection problem as a multiple testing problem. Given a candidate peak list sorted by either volumes or intensities, we first convert the peaks into p-values and then apply the B-H-based algorithm to automatically select the number of peaks. The proposed approach is tested on the state-of-the-art peak picking methods, including WaVPeak [1] and PICKY [2]. Compared with the traditional fixed number-based approach, our approach returns significantly more true peaks. For instance, by combining WaVPeak or PICKY with the proposed method, the missing peak rates are on average reduced by 20% and 26%, respectively, in a benchmark set of 32 spectra extracted from eight proteins. The consensus of the B-H-selected peaks from both WaVPeak and PICKY achieves 88% recall and 83% precision, which significantly outperforms each individual method and the consensus method without using the B-H algorithm. The proposed method can be used as a standard procedure for any peak picking method and straightforwardly applied to some other prediction selection problems in bioinformatics. The source code, documentation and example data of the proposed method is available at http://sfb.kaust.edu.sa/pages/software.aspx. © 2013
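
    The Benjamini-Hochberg step-up rule at the heart of the approach takes only a few lines; converting peak volumes or intensities into p-values is method-specific and omitted here:

      import numpy as np

      def bh_select(pvals, alpha=0.05):
          # Benjamini-Hochberg step-up: keep the k smallest p-values, where k
          # is the largest i with p_(i) <= (i / m) * alpha.
          p = np.asarray(pvals)
          m = p.size
          order = np.argsort(p)
          thresh = alpha * np.arange(1, m + 1) / m
          below = p[order] <= thresh
          if not below.any():
              return np.array([], dtype=int)
          k = np.max(np.nonzero(below)[0])           # last index meeting the bound
          return order[: k + 1]                       # indices of selected peaks

      pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.5, 0.9]
      print(bh_select(pvals, alpha=0.05))             # -> first two p-values kept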

  14. SU-F-R-10: Selecting the Optimal Solution for Multi-Objective Radiomics Model

    International Nuclear Information System (INIS)

    Zhou, Z; Folkert, M; Wang, J

    2016-01-01

    Purpose: To develop an evidential reasoning approach for selecting the optimal solution from a Pareto solution set obtained by a multi-objective radiomics model for predicting distant failure in lung SBRT. Methods: In the multi-objective radiomics model, both sensitivity and specificity are considered as objective functions simultaneously. A Pareto solution set with many feasible solutions results from the multi-objective optimization. In this work, an optimal solution Selection methodology for Multi-Objective radiomics Learning model using the Evidential Reasoning approach (SMOLER) was proposed to select the optimal solution from the Pareto solution set. The proposed SMOLER method uses the evidential reasoning approach to calculate the utility of each solution based on pre-set optimal solution selection rules. The solution with the highest utility is chosen as the optimal solution. In SMOLER, an optimal learning model coupled with a clonal selection algorithm is used to optimize the model parameters. In this study, PET and CT image features and clinical parameters were utilized for predicting distant failure in lung SBRT. Results: A total of 126 solution sets were generated by adjusting the predictive model parameters. Each Pareto set contains 100 feasible solutions. The solution selected by SMOLER within each Pareto set was compared to the manually selected optimal solution. Five-fold cross-validation was used to evaluate the optimal solution selection accuracy of SMOLER. The selection accuracies for the five folds were 80.00%, 69.23%, 84.00%, 84.00% and 80.00%, respectively. Conclusion: An optimal solution selection methodology for a multi-objective radiomics learning model using the evidential reasoning approach (SMOLER) was proposed. Experimental results show that the optimal solution can be found in approximately 80% of cases.

  15. Automatic learning-based beam angle selection for thoracic IMRT.

    Science.gov (United States)

    Amit, Guy; Purdie, Thomas G; Levinshtein, Alex; Hope, Andrew J; Lindsay, Patricia; Marshall, Andrea; Jaffray, David A; Pekar, Vladimir

    2015-04-01

    The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose-volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2). The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ at risk
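
    A toy sketch of the learning step only (random forest regression from anatomical features to a beam score) on synthetic stand-in data; the paper's feature definitions and interbeam-dependency optimization are not reproduced:

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(0)
      # Synthetic stand-in: 1000 (plan, angle) examples with 12 anatomical
      # features each, and a beam score in [0, 1]; real features and scores
      # would come from clinically approved plans, as in the paper.
      X = rng.random((1000, 12))
      y = rng.random(1000)

      rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

      # Score 72 candidate angles for a new case and keep the best seven,
      # ignoring the learned interbeam dependencies for brevity.
      candidates = rng.random((72, 12))
      angles = np.arange(0, 360, 5)
      best7 = angles[np.argsort(rf.predict(candidates))[::-1][:7]]
      print(best7)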

  16. Lycopene Content of Selected Tomato Based Products, Fruits and ...

    African Journals Online (AJOL)

    The lycopene content of selected tomato-based products, fruits and vegetables commonly consumed in South-Western Nigeria was determined using theoretical and experimental methods. The lycopene content in tomato pastes ranged from 50.97±1.08 mg/kg in vitali tomato paste to 68.12±1.44 mg/kg in Gino tomato paste ...

  17. An assessment of selected organisational-based factors on the ...

    African Journals Online (AJOL)

    The construct validity of the measuring instrument was assessed by means of a principal component exploratory factor analysis and by calculating Cronbach's alpha coefficients. The results show that the managers in the participating agribusinesses perceived the selected organisational-based factors of Strategic intent, ...

  18. Object-based attentional selection modulates anticipatory alpha oscillations

    Directory of Open Access Journals (Sweden)

    Balázs eKnakker

    2015-01-01

    Full Text Available Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory EEG alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection – similarly to spatial and feature-based attention – gating of visual information processing might involve visual cortical alpha oscillations.

  19. Validity of selected cardiovascular field-based test among Malaysian ...

    African Journals Online (AJOL)

    Motivated by the emerging obesity problem among Malaysians, this research was formulated to validate published tests among healthy female adults. The selected tests, namely the 20 metre multi-stage shuttle run, the 2.4 km run test, the 1 mile walk test and the Harvard step test, were correlated with a laboratory test (Bruce protocol) to find the criterion validity ...

  20. Classification and Target Group Selection Based Upon Frequent Patterns

    NARCIS (Netherlands)

    W.H.L.M. Pijls (Wim); R. Potharst (Rob)

    2000-01-01

    In this technical report, two new algorithms based upon frequent patterns are proposed. One algorithm is a classification method; the other is an algorithm for target group selection. In both algorithms, first of all, the collection of frequent patterns in the training set is

  1. Exposure level from selected base station tower around Kuala Nerus

    African Journals Online (AJOL)

    Health risk due to RF radiation exposure from base station towers (BSTs) has been debated for years, leading to public concern. Thus, this preliminary study aims to measure, evaluate and analyze the exposure level at three selected BSTs around Kuala Nerus. The measurement of exposure level in terms of voltage ...

  2. Design and selection of triazole-based compounds with high ...

    Indian Academy of Sciences (India)

    Vol. 128, No. 8, August 2016, pp. 1223–1236. © Indian Academy of Sciences. DOI 10.1007/s12039-016-1117-x. Design and selection of triazole-based compounds with high energetic properties and stabilities. Guozheng Zhao, Jianfeng Jia and Haishun Wu, School of Chemistry and Materials Science, Shanxi Normal ...

  3. Coupled variable selection for regression modeling of complex treatment patterns in a clinical cancer registry.

    Science.gov (United States)

    Schmidtmann, I; Elsäßer, A; Weinmann, A; Binder, H

    2014-12-30

    For determining a manageable set of covariates potentially influential with respect to a time-to-event endpoint, Cox proportional hazards models can be combined with variable selection techniques, such as stepwise forward selection or backward elimination based on p-values, or regularized regression techniques such as component-wise boosting. Cox regression models have also been adapted for dealing with more complex event patterns, for example, for competing risks settings with separate, cause-specific hazard models for each event type, or for determining the prognostic effect pattern of a variable over different landmark times, with one conditional survival model for each landmark. Motivated by a clinical cancer registry application, where complex event patterns have to be dealt with and variable selection is needed at the same time, we propose a general approach for linking variable selection between several Cox models. Specifically, we combine score statistics for each covariate across models by Fisher's method as a basis for variable selection. This principle is implemented for a stepwise forward selection approach as well as for a regularized regression technique. In an application to data from hepatocellular carcinoma patients, the coupled stepwise approach is seen to facilitate joint interpretation of the different cause-specific Cox models. In conditional survival models at landmark times, which address updates of prediction as time progresses and both treatment and other potential explanatory variables may change, the coupled regularized regression approach identifies potentially important, stably selected covariates together with their effect time pattern, despite having only a small number of events. These results highlight the promise of the proposed approach for coupling variable selection between Cox models, which is particularly relevant for modeling for clinical cancer registries with their complex event patterns. Copyright © 2014 John Wiley & Sons
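
    A minimal sketch of the coupling principle, combining per-model score-test p-values for one covariate by Fisher's method with SciPy; the p-values are hypothetical.

```python
from scipy import stats

# Hypothetical score-test p-values for one covariate across three
# cause-specific Cox models (one per competing event type).
p_values = [0.04, 0.20, 0.01]

# Fisher's method: X = -2 * sum(log p_i) ~ chi^2 with 2k df under H0.
stat, p_combined = stats.combine_pvalues(p_values, method='fisher')
print(f"combined statistic={stat:.2f}, p={p_combined:.4f}")

# In a coupled stepwise forward selection, the covariate with the
# smallest combined p-value would be added to all models simultaneously.
```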

  4. Orbital-selective Mott phase in multiorbital models for iron pnictides and chalcogenides

    Science.gov (United States)

    Yu, Rong; Si, Qimiao

    2017-09-01

    There is increasing recognition that the multiorbital nature of the 3d electrons is important to the proper description of the electronic states in the normal state of the iron-based superconductors. Earlier studies of the pertinent multiorbital Hubbard models identified an orbital-selective Mott phase, which anchors the orbital-selective behavior seen in the overall phase diagram. An important characteristic of the models is that the orbitals are kinetically coupled, i.e., hybridized, to each other, which makes the orbital-selective Mott phase especially nontrivial. A U(1) slave-spin method was used to analyze the model with nonzero orbital-level splittings. Here we develop a Landau free-energy functional to shed further light on this issue. We put the microscopic analysis from the U(1) slave-spin approach in this perspective, and show that the intersite spin correlations are crucial to the renormalization of the bare hybridization amplitude towards zero and the concomitant realization of the orbital-selective Mott transition. Based on this insight, we discuss additional ways to study the orbital-selective Mott physics from a dynamical competition between the interorbital hybridization and collective spin correlations. Our results demonstrate the robustness of the orbital-selective Mott phase in the multiorbital models appropriate for the iron-based superconductors.

  5. De Novo Assembly of Complete Chloroplast Genomes from Non-model Species Based on a K-mer Frequency-Based Selection of Chloroplast Reads from total DNA Sequences.

    NARCIS (Netherlands)

    Izan, Shairul; Esselink, G.; Visser, R.G.F.; Smulders, M.J.M.; Borm, T.J.A.

    2017-01-01

    Whole Genome Shotgun (WGS) sequences of plant species often contain an abundance of reads that are derived from the chloroplast genome. Up to now these reads have generally been identified and assembled into chloroplast genomes based on homology to chloroplasts from related species. This

  6. Model building strategy for logistic regression: purposeful selection.

    Science.gov (United States)

    Zhang, Zhongheng

    2016-03-01

    Logistic regression is one of the most commonly used models to account for confounders in the medical literature. This article introduces how to perform the purposeful selection model building strategy with R. I stress the use of the likelihood ratio test to assess whether deleting a variable has a significant impact on model fit. A deleted variable should also be checked for whether it is an important adjustment of the remaining covariates. Interactions should be checked to disentangle complex relationships between covariates and their synergistic effect on the response variable. The model should be checked for goodness-of-fit (GOF); in other words, how well the fitted model reflects the real data. The Hosmer-Lemeshow GOF test is the most widely used for logistic regression models.
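
    The record describes the workflow in R; below is a minimal Python analogue of the likelihood ratio test step, using statsmodels on simulated data.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(1)
n = 200
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = (rng.random(n) < 1 / (1 + np.exp(-0.8 * x1))).astype(int)

full = sm.Logit(y, sm.add_constant(np.column_stack([x1, x2]))).fit(disp=0)
reduced = sm.Logit(y, sm.add_constant(x1)).fit(disp=0)

# Likelihood ratio test: does deleting x2 significantly worsen the fit?
lr_stat = 2 * (full.llf - reduced.llf)
p = stats.chi2.sf(lr_stat, df=full.df_model - reduced.df_model)
print(f"LR statistic={lr_stat:.3f}, p={p:.3f}")
```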

  7. Statistical modelling in biostatistics and bioinformatics selected papers

    CERN Document Server

    Peng, Defen

    2014-01-01

    This book presents selected papers on statistical model development related mainly to the fields of Biostatistics and Bioinformatics. The coverage of the material falls squarely into the following categories: (a) Survival analysis and multivariate survival analysis, (b) Time series and longitudinal data analysis, (c) Statistical model development and (d) Applied statistical modelling. Innovations in statistical modelling are presented throughout each of the four areas, with some intriguing new ideas on hierarchical generalized non-linear models and on frailty models with structural dispersion, just to mention two examples. The contributors include distinguished international statisticians such as Philip Hougaard, John Hinde, Il Do Ha, Roger Payne and Alessandra Durio, among others, as well as promising newcomers. Some of the contributions have come from researchers working in the BIO-SI research programme on Biostatistics and Bioinformatics, centred on the Universities of Limerick and Galway in Ireland and fu...

  8. Bayesian Variable Selection on Model Spaces Constrained by Heredity Conditions.

    Science.gov (United States)

    Taylor-Rodriguez, Daniel; Womack, Andrew; Bliznyuk, Nikolay

    2016-01-01

    This paper investigates Bayesian variable selection when there is a hierarchical dependence structure on the inclusion of predictors in the model. In particular, we study the type of dependence found in polynomial response surfaces of orders two and higher, whose model spaces are required to satisfy weak or strong heredity conditions. These conditions restrict the inclusion of higher-order terms depending upon the inclusion of lower-order parent terms. We develop classes of priors on the model space, investigate their theoretical and finite sample properties, and provide a Metropolis-Hastings algorithm for searching the space of models. The tools proposed allow fast and thorough exploration of model spaces that account for hierarchical polynomial structure in the predictors and provide control of the inclusion of false positives in high posterior probability models.
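
    A small sketch of what a heredity-constrained model space looks like, enumerating the second-order models that satisfy strong heredity for three hypothetical predictors; the priors and the Metropolis-Hastings search of the paper are not reproduced.

```python
from itertools import chain, combinations

main = ['x1', 'x2', 'x3']
interactions = {('x1', 'x2'), ('x1', 'x3'), ('x2', 'x3')}

def powerset(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

# Strong heredity: an interaction may enter only if BOTH parents are in;
# weak heredity would instead require at least one parent.
models = []
for mains in powerset(main):
    allowed = [i for i in interactions if all(p in mains for p in i)]
    for inters in powerset(allowed):
        models.append((set(mains), set(inters)))

print(len(models), "models satisfy strong heredity")
```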

  9. A Feature Selection Method for Large-Scale Network Traffic Classification Based on Spark

    Directory of Open Access Journals (Sweden)

    Yong Wang

    2016-02-01

    Full Text Available Currently, with the rapid increase of data scales in network traffic classification, how to select traffic features efficiently is becoming a big challenge. Although a number of traditional feature selection methods using the Hadoop-MapReduce framework have been proposed, the execution time remains unsatisfactory because of the numerous iterative computations involved. To address this issue, an efficient feature selection method for network traffic based on a new parallel computing framework called Spark is proposed in this paper. In our approach, the complete feature set is first preprocessed based on the Fisher score, and a sequential forward search strategy is employed to build candidate subsets. The optimal feature subset is then selected using the iterative computations of the Spark framework. The implementation demonstrates that, while maintaining classification accuracy, our method reduces the time cost of modeling and classification and significantly improves the execution efficiency of feature selection.
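
    A single-machine sketch of the Fisher-score preprocessing step; the Spark parallelisation and the forward-search wrapper are omitted, and the traffic features and class labels are synthetic.

```python
import numpy as np

def fisher_score(X, y):
    """Fisher score per feature: between-class over within-class scatter."""
    overall_mean = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - overall_mean) ** 2
        den += len(Xc) * Xc.var(axis=0)
    return num / (den + 1e-12)

# Hypothetical traffic-flow feature matrix and protocol-class labels.
rng = np.random.default_rng(0)
X = rng.random((1000, 20))
y = rng.integers(0, 5, size=1000)

ranking = np.argsort(fisher_score(X, y))[::-1]
print("features ranked by Fisher score:", ranking[:10])
```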

  10. A Four-Step Model for Teaching Selection Interviewing Skills

    Science.gov (United States)

    Kleiman, Lawrence S.; Benek-Rivera, Joan

    2010-01-01

    The topic of selection interviewing lends itself well to experience-based teaching methods. Instructors often teach this topic by using a two-step process. The first step consists of lecturing students on the basic principles of effective interviewing. During the second step, students apply these principles by role-playing mock interviews with…

  11. PROPOSAL OF AN EMPIRICAL MODEL FOR SUPPLIERS SELECTION

    Directory of Open Access Journals (Sweden)

    Paulo Ávila

    2015-03-01

    Full Text Available The problem of selecting suppliers/partners is a crucial and important part of the decision-making process for companies that intend to perform competitively in their area of activity. The selection of a supplier/partner is a time- and resource-consuming task that involves data collection and a careful analysis of the factors that can positively or negatively influence the choice. Nevertheless, it is a critical process that significantly affects the operational performance of each company. In this work, through the literature review, five broad supplier selection criteria were identified: Quality, Financial, Synergies, Cost, and Production System. Within these criteria, five sub-criteria were also included. Thereafter, a survey was elaborated and companies were contacted in order to indicate which factors have more relevance in their decisions to choose suppliers. After interpreting the results and processing the data, a linear weighting model was adopted to reflect the importance of each factor. The model has a hierarchical structure and can be applied with the Analytic Hierarchy Process (AHP) method or the Simple Multi-Attribute Rating Technique (SMART). The result of the research undertaken by the authors is a reference model that provides decision-making support for the supplier/partner selection process.
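
    A minimal sketch of the linear weighting model on hypothetical criteria weights and supplier ratings; actual weights would come from the survey or from AHP/SMART elicitation.

```python
# Hypothetical criteria weights (e.g., elicited via AHP or SMART) and
# supplier ratings on a 0-10 scale; all values are illustrative only.
weights = {'quality': 0.30, 'financial': 0.15, 'synergies': 0.10,
           'cost': 0.25, 'production_system': 0.20}

suppliers = {
    'A': {'quality': 8, 'financial': 6, 'synergies': 5,
          'cost': 7, 'production_system': 6},
    'B': {'quality': 6, 'financial': 8, 'synergies': 7,
          'cost': 9, 'production_system': 5},
}

# Weighted-sum score per supplier; the highest score wins.
scores = {name: sum(weights[c] * r[c] for c in weights)
          for name, r in suppliers.items()}
best = max(scores, key=scores.get)
print(scores, "->", best)
```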

  12. Venture Capital Investment Selection Decision-making Base on Fuzzy Theory

    Science.gov (United States)

    Zhang, Xubo

    Venture capital investment decision-making is the most important issue in venture capital investment selection, and the decision-making process involves high uncertainty and complexity. Building on previous studies, this paper analyses the uncertain risks in venture capital investment decision-making, casts the selection of candidate venture firms as a fuzzy optimal decision-making problem, and builds a risk-weighted fuzzy optimal return model to avoid decision-making risk and obtain the optimal solution set.

  13. A multicriteria decision making model for assessment and selection of an ERP in a logistics context

    Science.gov (United States)

    Pereira, Teresa; Ferreira, Fernanda A.

    2017-07-01

    The aim of this work is to apply a decision-support methodology based on a multicriteria decision analysis (MCDA) model that allows the assessment and selection of an Enterprise Resource Planning (ERP) system in a Portuguese logistics company by a group decision maker (GDM). A decision support system (DSS) implementing the Multicriteria Methodology for the Assessment and Selection of Information Systems / Information Technologies (MMASSI/IT) is used, chosen for its features and its facility to change and adapt the model to a given scope. Using this DSS, the information system best suited to the decision context was obtained, and the result was evaluated through a sensitivity and robustness analysis.

  14. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static...... information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can...... be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  15. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and for estimating the Bayesian evidence for prior model selection, and it has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM requires only forward model runs; the simulator is used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied to Bayesian calibration and prior model selection for several nonlinear subsurface flow problems

  16. Bankruptcy prediction using SVM models with a new approach to combine features selection and parameter optimisation

    Science.gov (United States)

    Zhou, Ligang; Keung Lai, Kin; Yen, Jerome

    2014-03-01

    Due to the economic significance of bankruptcy prediction of companies for financial institutions, investors and governments, many quantitative methods have been used to develop effective prediction models. Support vector machine (SVM), a powerful classification method, has been used for this task; however, the performance of SVM is sensitive to model form, parameter setting and features selection. In this study, a new approach based on direct search and features ranking technology is proposed to optimise features selection and parameter setting for 1-norm and least-squares SVM models for bankruptcy prediction. This approach is also compared to the SVM models with parameter optimisation and features selection by the popular genetic algorithm technique. The experimental results on a data set with 2010 instances show that the proposed models are good alternatives for bankruptcy prediction.
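
    A hedged sketch in the spirit of the 1-norm SVM variant: tuning the penalty parameter by cross-validation, with the L1 penalty performing feature selection as a side effect. Scikit-learn's LinearSVC stands in for the paper's direct-search procedure, and the data are synthetic.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 25))           # hypothetical financial ratios
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # hypothetical distress label

# 1-norm SVM: the L1 penalty drives irrelevant coefficients to zero, so
# tuning C jointly sets the parameter and performs feature selection.
best_C, best_acc = None, -np.inf
for C in [0.01, 0.1, 1.0, 10.0]:
    clf = make_pipeline(StandardScaler(),
                        LinearSVC(penalty='l1', dual=False, C=C, max_iter=5000))
    acc = cross_val_score(clf, X, y, cv=5).mean()
    if acc > best_acc:
        best_C, best_acc = C, acc

final = make_pipeline(StandardScaler(),
                      LinearSVC(penalty='l1', dual=False, C=best_C,
                                max_iter=5000)).fit(X, y)
kept = np.flatnonzero(final[-1].coef_)
print(f"C={best_C}, cv accuracy={best_acc:.3f}, features kept={kept}")
```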

  17. Event-Based Activity Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...

  18. Natural Selection at Work: An Accelerated Evolutionary Computing Approach to Predictive Model Selection

    Science.gov (United States)

    Akman, Olcay; Hallam, Joshua W.

    2010-01-01

    We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP) as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency. PMID:20661297

  19. Natural selection at work: an accelerated evolutionary computing approach to predictive model selection

    Directory of Open Access Journals (Sweden)

    Olcay Akman

    2010-07-01

    Full Text Available We implement genetic algorithm based predictive model building as an alternative to the traditional stepwise regression. We then employ the Information Complexity Measure (ICOMP as a measure of model fitness instead of the commonly used measure of R-square. Furthermore, we propose some modifications to the genetic algorithm to increase the overall efficiency.
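
    A toy sketch of genetic-algorithm model building over feature subsets; AIC is used here as a stand-in fitness because ICOMP is not available in standard libraries, and the data are simulated.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, p = 200, 8
X = rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 2] + rng.normal(size=n)   # hypothetical data

def fitness(mask):
    # AIC as a simple stand-in for ICOMP; lower is better.
    if not mask.any():
        return np.inf
    return sm.OLS(y, sm.add_constant(X[:, mask])).fit().aic

pop = rng.random((30, p)) < 0.5                  # boolean feature masks
for gen in range(40):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[:10]]          # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, p)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        children.append(child ^ (rng.random(p) < 0.05))  # bit-flip mutation
    pop = np.array(children)

best = pop[np.argmin([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```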

  20. ASYMMETRIC PRICE TRANSMISSION MODELING: THE IMPORTANCE OF MODEL COMPLEXITY AND THE PERFORMANCE OF THE SELECTION CRITERIA

    Directory of Open Access Journals (Sweden)

    Henry de-Graft Acquah

    2013-01-01

    Full Text Available Information criteria provide an attractive basis for selecting the best model from a set of competing asymmetric price transmission models or theories. However, little is understood about the sensitivity of the model selection methods to model complexity. This study therefore fits competing asymmetric price transmission models that differ in complexity to simulated data and evaluates the ability of the model selection methods to recover the true model. The results of Monte Carlo experimentation suggest that in general BIC, CAIC and DIC were superior to AIC when the true data generating process was the standard error correction model, whereas AIC was more successful when the true model was the complex error correction model. It is also shown that the model selection methods performed better in large samples for a complex asymmetric data generating process than for a standard asymmetric data generating process. Except for complex models, AIC's recovery rates did not improve substantially as the sample size increased. The research findings demonstrate the influence of model complexity in asymmetric price transmission model comparison and selection.
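
    A minimal sketch of the criterion computations; two nested linear models on simulated data stand in for the standard and complex asymmetric error correction models.

```python
import numpy as np

def aic_bic(rss, n, k):
    """AIC and BIC from the Gaussian log-likelihood of a model with k parameters."""
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(0)
n = 150
x = rng.normal(size=n)
y = 1.5 * x + rng.normal(size=n)

designs = {"standard": np.column_stack([np.ones(n), x]),
           "complex": np.column_stack([np.ones(n), x, x**2, x**3])}
for name, design in designs.items():
    beta, rss, _rank, _sv = np.linalg.lstsq(design, y, rcond=None)
    aic, bic = aic_bic(rss.item(), n, design.shape[1])
    print(f"{name}: AIC={aic:.1f}, BIC={bic:.1f}")  # lower is better
```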

  1. Optimum heart sound signal selection based on the cyclostationary property.

    Science.gov (United States)

    Li, Ting; Qiu, Tianshuang; Tang, Hong

    2013-07-01

    Noise often appears in parts of heart sound recordings, which may be much longer than those necessary for subsequent automated analysis. Thus, human intervention is needed to select the heart sound signal with the best quality or the least noise. This paper presents an automatic scheme for optimum sequence selection to avoid such human intervention. A quality index, which is based on finding that sequences with less random noise contamination have a greater degree of periodicity, is defined on the basis of the cyclostationary property of heart beat events. The quality score indicates the overall quality of a sequence. No manual intervention is needed in the process of subsequence selection, thereby making this scheme useful in automatic analysis of heart sound signals. Copyright © 2013 Elsevier Ltd. All rights reserved.
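
    A crude sketch of a periodicity-based quality index, using the peak of the normalised autocorrelation of the signal envelope as a stand-in for the cyclostationarity measure defined in the paper; the signals are synthetic.

```python
import numpy as np

def quality_index(x, fs, min_bpm=40, max_bpm=180):
    """Periodicity score: peak of the normalised autocorrelation of the
    signal envelope within plausible cardiac-cycle lags."""
    env = np.abs(x - x.mean())       # simple envelope stand-in
    env = env - env.mean()
    ac = np.correlate(env, env, mode='full')[env.size - 1:]
    ac = ac / (ac[0] + 1e-12)
    lo, hi = int(fs * 60 / max_bpm), int(fs * 60 / min_bpm)
    return ac[lo:hi].max()

fs = 500
t = np.arange(0, 8, 1 / fs)
clean = np.sin(2 * np.pi * 1.2 * t) ** 8          # synthetic beat train
noisy = clean + 0.8 * np.random.default_rng(0).normal(size=t.size)

# The sequence with the higher score would be kept for automated analysis.
print(round(quality_index(clean, fs), 3), ">", round(quality_index(noisy, fs), 3))
```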

  2. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Science.gov (United States)

    Yabe, Shiori; Yamasaki, Masanori; Ebana, Kaworu; Hayashi, Takeshi; Iwata, Hiroyoshi

    2016-01-01

    Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS), which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the potential of genomic

  3. Island-Model Genomic Selection for Long-Term Genetic Improvement of Autogamous Crops.

    Directory of Open Access Journals (Sweden)

    Shiori Yabe

    Full Text Available Acceleration of genetic improvement of autogamous crops such as wheat and rice is necessary to increase cereal production in response to the global food crisis. Population and pedigree methods of breeding, which are based on inbred line selection, are used commonly in the genetic improvement of autogamous crops. These methods, however, produce a few novel combinations of genes in a breeding population. Recurrent selection promotes recombination among genes and produces novel combinations of genes in a breeding population, but it requires inaccurate single-plant evaluation for selection. Genomic selection (GS, which can predict genetic potential of individuals based on their marker genotype, might have high reliability of single-plant evaluation and might be effective in recurrent selection. To evaluate the efficiency of recurrent selection with GS, we conducted simulations using real marker genotype data of rice cultivars. Additionally, we introduced the concept of an "island model" inspired by evolutionary algorithms that might be useful to maintain genetic variation through the breeding process. We conducted GS simulations using real marker genotype data of rice cultivars to evaluate the efficiency of recurrent selection and the island model in an autogamous species. Results demonstrated the importance of producing novel combinations of genes through recurrent selection. An initial population derived from admixture of multiple bi-parental crosses showed larger genetic gains than a population derived from a single bi-parental cross in whole cycles, suggesting the importance of genetic variation in an initial population. The island-model GS better maintained genetic improvement in later generations than the other GS methods, suggesting that the island-model GS can utilize genetic variation in breeding and can retain alleles with small effects in the breeding population. The island-model GS will become a new breeding method that enhances the

  4. Theory of mind selectively predicts preschoolers' knowledge-based selective word learning.

    Science.gov (United States)

    Brosseau-Liard, Patricia; Penney, Danielle; Poulin-Dubois, Diane

    2015-11-01

    Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory-of-mind skills and selective learning. We hypothesized that high performance on theory-of-mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non-epistemic attribute). Three- and 4-year-olds (N = 65) completed two selective learning tasks, and their theory-of-mind abilities were assessed. As expected, performance on a theory-of-mind battery predicted children's preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children's developing social cognition and early learning. © 2015 The British Psychological Society.

  5. Theory of mind selectively predicts preschoolers’ knowledge-based selective word learning

    Science.gov (United States)

    Brosseau-Liard, Patricia; Penney, Danielle; Poulin-Dubois, Diane

    2015-01-01

    Children can selectively attend to various attributes of a model, such as past accuracy or physical strength, to guide their social learning. There is a debate regarding whether a relation exists between theory-of-mind skills and selective learning. We hypothesized that high performance on theory-of-mind tasks would predict preference for learning new words from accurate informants (an epistemic attribute), but not from physically strong informants (a non-epistemic attribute). Three- and 4-year-olds (N = 65) completed two selective learning tasks, and their theory of mind abilities were assessed. As expected, performance on a theory-of-mind battery predicted children’s preference to learn from more accurate informants but not from physically stronger informants. Results thus suggest that preschoolers with more advanced theory of mind have a better understanding of knowledge and apply that understanding to guide their selection of informants. This work has important implications for research on children’s developing social cognition and early learning. PMID:26211504

  6. Broken selection rule in the quantum Rabi model.

    Science.gov (United States)

    Forn-Díaz, P; Romero, G; Harmans, C J P M; Solano, E; Mooij, J E

    2016-06-07

    Understanding the interaction between light and matter is very relevant for fundamental studies of quantum electrodynamics and for the development of quantum technologies. The quantum Rabi model captures the physics of a single atom interacting with a single photon at all regimes of coupling strength. We report the spectroscopic observation of a resonant transition that breaks a selection rule in the quantum Rabi model, implemented using an LC resonator and an artificial atom, a superconducting qubit. The eigenstates of the system consist of a superposition of bare qubit-resonator states with a relative sign. When the qubit-resonator coupling strength is negligible compared to their own frequencies, the matrix element between excited eigenstates of different sign is very small in the presence of a resonator drive, establishing a sign-preserving selection rule. Here, our qubit-resonator system operates in the ultrastrong coupling regime, where the coupling strength is 10% of the resonator frequency, allowing sign-changing transitions to be activated and, therefore, detected. This work shows that sign-changing transitions are an unambiguous, distinctive signature of systems operating in the ultrastrong coupling regime of the quantum Rabi model. These results pave the way for further studies of sign-preserving selection rules in multiqubit and multiphoton models.

  7. Models of cultural niche construction with selection and assortative mating.

    Science.gov (United States)

    Creanza, Nicole; Fogarty, Laurel; Feldman, Marcus W

    2012-01-01

    Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  8. Models of cultural niche construction with selection and assortative mating.

    Directory of Open Access Journals (Sweden)

    Nicole Creanza

    Full Text Available Niche construction is a process through which organisms modify their environment and, as a result, alter the selection pressures on themselves and other species. In cultural niche construction, one or more cultural traits can influence the evolution of other cultural or biological traits by affecting the social environment in which the latter traits may evolve. Cultural niche construction may include either gene-culture or culture-culture interactions. Here we develop a model of this process and suggest some applications of this model. We examine the interactions between cultural transmission, selection, and assorting, paying particular attention to the complexities that arise when selection and assorting are both present, in which case stable polymorphisms of all cultural phenotypes are possible. We compare our model to a recent model for the joint evolution of religion and fertility and discuss other potential applications of cultural niche construction theory, including the evolution and maintenance of large-scale human conflict and the relationship between sex ratio bias and marriage customs. The evolutionary framework we introduce begins to address complexities that arise in the quantitative analysis of multiple interacting cultural traits.

  9. Web based application for the selection of cable trays

    OpenAIRE

    Hidalgo, A.; Manuel Lázaro, Antonio

    2010-01-01

    In this paper, we present an application that helps designers and engineers decide which cable tray they need for their projects. It considers both the section and the weight of the cables and, from those data, offers the trays that are able to support them. It can also select among several models of bulkheads. The application has been developed in a web environment, using the ASP language in combination with a MySQL database. Peer Reviewed

  10. Selection of Models for Ingestion Pathway and Relocation Radii Determination

    International Nuclear Information System (INIS)

    Blanchard, A.

    1998-01-01

    The distance at which intermediate phase protective actions (such as food interdiction and relocation) may be needed following postulated accidents at three Savannah River Site nonreactor nuclear facilities will be determined by modeling. The criteria used to select dispersion/deposition models are presented. Several models were considered, including ARAC, MACCS, HOTSPOT, WINDS (coupled with PUFF-PLUME), and UFOTRI. Although ARAC and WINDS are expected to provide more accurate modeling of atmospheric transport following an actual release, analyses consistent with regulatory guidance for planning purposes may be accomplished with comparatively simple dispersion models such as HOTSPOT and UFOTRI. A recommendation is made to use HOTSPOT for non-tritium facilities and UFOTRI for tritium facilities

  11. Modelling Technical and Economic Parameters in Selection of Manufacturing Devices

    Directory of Open Access Journals (Sweden)

    Naqib Daneshjo

    2017-11-01

    Full Text Available Sustainable development of science and technology is also conditioned by the continuous development of the means of production, which play a key role in the structure of every production system. The mechanical nature of the means of production is complemented by control and electronic devices in the context of intelligent industry. Until now, the selection of production machines for a technological process or project has in practice often been resolved only intuitively. With increasing intelligence, the number of variable parameters that have to be considered when choosing a production device is also increasing. It is necessary to use computing techniques and decision-making methods based on heuristic methods and more precise methodological procedures during the selection. The authors present an innovative model for the optimization of technical and economic parameters in the selection of manufacturing devices for Industry 4.0.

  12. Portfolio Selection Based on Distance between Fuzzy Variables

    Directory of Open Access Journals (Sweden)

    Weiyi Qian

    2014-01-01

    Full Text Available This paper studies the portfolio selection problem in a fuzzy environment. We introduce a new simple method in which the distance between fuzzy variables is used to measure the divergence of a fuzzy investment return from a prior one. Firstly, two new mathematical models are proposed by expressing divergence as distance, investment return as expected value, and risk as variance and semivariance, respectively. Secondly, the crisp forms of the new models are provided for different types of fuzzy variables. Finally, several numerical examples are given to illustrate the effectiveness of the proposed approach.

  13. Selection of Construction Methods: A Knowledge-Based Approach

    Directory of Open Access Journals (Sweden)

    Ximena Ferrada

    2013-01-01

    Full Text Available The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the methods' selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction methods' selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects.

  14. Selection of construction methods: a knowledge-based approach.

    Science.gov (United States)

    Ferrada, Ximena; Serpell, Alfredo; Skibniewski, Miroslaw

    2013-01-01

    The appropriate selection of construction methods to be used during the execution of a construction project is a major determinant of high productivity, but sometimes this selection process is performed without the care and the systematic approach that it deserves, bringing negative consequences. This paper proposes a knowledge management approach that will enable the intelligent use of corporate experience and information and help to improve the selection of construction methods for a project. Then a knowledge-based system to support this decision-making process is proposed and described. To define and design the system, semistructured interviews were conducted within three construction companies with the purpose of studying the way that the methods' selection process is carried out in practice and the knowledge associated with it. A prototype of a Construction Methods Knowledge System (CMKS) was developed and then validated with construction industry professionals. As a conclusion, the CMKS was perceived as a valuable tool for construction methods' selection, by helping companies to generate a corporate memory on this issue, reducing the reliance on individual knowledge and also the subjectivity of the decision-making process. The benefits provided by the system favor a better performance of construction projects.

  15. Improved targeted immunization strategies based on two rounds of selection

    Science.gov (United States)

    Xia, Ling-Ling; Song, Yu-Rong; Li, Chan-Chan; Jiang, Guo-Ping

    2018-04-01

    In high-degree targeted immunization, where the number of vaccines is limited, more than one node of the same degree may meet the requirement of high degree centrality; how can we then choose a subset of those nodes so that the number of immunized nodes does not exceed the limit? In this paper, we introduce a new idea derived from the selection process of a second-round exam to solve this problem and propose three improved targeted immunization strategies. In these strategies, the immunized nodes are selected through two rounds of selection: we enlarge the quota of the first-round selection according to the degree centrality criterion, and then consider another characteristic parameter of the node, such as its clustering coefficient, betweenness or closeness, to choose the target nodes in the second-round selection. To validate the effectiveness of the proposed strategies, we compare them with the degree-based immunizations, including high degree targeted and high degree adaptive immunization, using two metrics: the size of the largest connected component of the immunized network and the number of infected nodes. Simulation results demonstrate that the proposed strategies based on two rounds of sorting are effective for heterogeneous networks and that their immunization effects are better than those of the degree-based immunizations.
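
    A minimal sketch of the two-round idea with networkx, using degree for the first round and betweenness for the second; the doubled first-round quota is an assumption.

```python
import networkx as nx

def two_round_selection(G, budget):
    """Round 1: shortlist by degree with an enlarged quota; round 2: rank
    the shortlist by a second centrality (here betweenness) and keep
    exactly `budget` nodes."""
    quota = min(len(G), 2 * budget)              # assumed enlarged quota
    shortlist = sorted(G, key=G.degree, reverse=True)[:quota]
    bc = nx.betweenness_centrality(G)
    return sorted(shortlist, key=bc.get, reverse=True)[:budget]

G = nx.barabasi_albert_graph(500, 3, seed=1)     # heterogeneous test network
targets = two_round_selection(G, budget=20)
G.remove_nodes_from(targets)                     # immunization = node removal
largest = max(nx.connected_components(G), key=len)
print("largest connected component after immunization:", len(largest))
```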

  16. Forecasting house prices in the 50 states using Dynamic Model Averaging and Dynamic Model Selection

    DEFF Research Database (Denmark)

    Bork, Lasse; Møller, Stig Vinther

    2015-01-01

    We examine house price forecastability across the 50 states using Dynamic Model Averaging and Dynamic Model Selection, which allow for model change and parameter shifts. By allowing the entire forecasting model to change over time and across locations, the forecasting accuracy improves substantially ...

  17. Model-based biosignal interpretation.

    Science.gov (United States)

    Andreassen, S

    1994-03-01

    Two relatively new approaches to model-based biosignal interpretation, qualitative simulation and modelling by causal probabilistic networks, are compared to modelling by differential equations. A major problem in applying a model to an individual patient is the estimation of the parameters. The available observations are unlikely to allow a proper estimation of the parameters, and even if they do, the task appears to have exponential computational complexity if the model is non-linear. Causal probabilistic networks have both differential equation models and qualitative simulation as special cases, and they can provide both Bayesian and maximum-likelihood parameter estimates, in most cases in much less than exponential time. In addition, they can calculate the probabilities required for a decision-theoretical approach to medical decision support. The practical applicability of causal probabilistic networks to real medical problems is illustrated by a model of glucose metabolism which is used to adjust insulin therapy in type I diabetic patients.

  18. Parameter subset selection based damage detection of aluminium frame structure

    International Nuclear Information System (INIS)

    Titurus, B; Friswell, M I

    2011-01-01

    A three-storey aluminium frame structure was tested in multiple damage cases. All damage scenarios, simulated by localized stiffness changes, were associated with the joint areas of the structure. Further, between damage tests the structure was returned to its healthy reference condition and measured again. In this paper, a parameter subset selection methodology is applied to an updated finite element model of the structure, together with a previously demonstrated approach employing concepts of model sensitivity subspace angles, first-order model representation and mixed response residuals for damage detection. The objective of this paper is the evaluation of these methods on a real experimental structure of significant complexity, represented by an imprecise reference mathematical model and in an environment with an uncertain reference structural state. The questions of symmetry, mixed response residuals and semi-localized parameterization are also addressed in this work.

  19. Individual discriminative face recognition models based on subsets of features

    DEFF Research Database (Denmark)

    Clemmensen, Line Katrine Harder; Gomez, David Delgado; Ersbøll, Bjarne Kjær

    2007-01-01

    The accuracy of data classification methods depends considerably on the data representation and on the selected features. In this work, the elastic net model selection is used to identify meaningful and important features in face recognition. Modelling the characteristics which distinguish one...... selection techniques such as forward selection or lasso regression become inadequate. In the experimental section, the performance of the elastic net model is compared with geometrical and color based algorithms widely used in face recognition such as Procrustes nearest neighbor, Eigenfaces, or Fisher...
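
    A minimal sketch of elastic net feature selection in a p >> n setting, using scikit-learn's elastic-net-penalised logistic regression; the face-feature data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 120, 300                       # more features than samples, as with
X = rng.normal(size=(n, p))           # pixel/landmark features per face
y = (X[:, :5].sum(axis=1) > 0).astype(int)

# The elastic net (L1 + L2) keeps groups of correlated discriminative
# features while zeroing the rest, even when p >> n, which is where pure
# forward selection or the lasso become inadequate.
clf = LogisticRegression(penalty='elasticnet', solver='saga',
                         l1_ratio=0.5, C=0.5, max_iter=5000).fit(X, y)
selected = np.flatnonzero(clf.coef_)
print(f"{selected.size} features selected, e.g.:", selected[:10])
```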

  20. Photochemical selectivity in guanine–cytosine base-pair structures

    Science.gov (United States)

    Abo-Riziq, Ali; Grace, Louis; Nir, Eyal; Kabelac, Martin; Hobza, Pavel; de Vries, Mattanjah S.

    2005-01-01

    Prebiotic chemistry presumably took place before formation of an oxygen-rich atmosphere and thus under conditions of intense short wavelength UV irradiation. Therefore, the UV photochemical stability of the molecular building blocks of life may have been an important selective factor in determining the eventual chemical makeup of critical biomolecules. To investigate the role of UV irradiation in base-pairing we have studied guanine (G) and cytosine (C) base pairs in the absence of the RNA backbone. We distinguished base-pair structures by IR–UV hole-burning spectroscopy as well as by high-level correlated ab initio calculations. The Watson–Crick structure exhibits broad UV absorption, in stark contrast to other GC structures and other base-pair structures. This broad absorption may be explained by a rapid internal conversion that makes this specific base pair arrangement uniquely photochemically stable. PMID:15618394

  1. Sensitivity of landscape resistance estimates based on point selection functions to scale and behavioral state: Pumas as a case study

    Science.gov (United States)

    Katherine A. Zeller; Kevin McGarigal; Paul Beier; Samuel A. Cushman; T. Winston Vickers; Walter M. Boyce

    2014-01-01

    Estimating landscape resistance to animal movement is the foundation for connectivity modeling, and resource selection functions based on point data are commonly used to empirically estimate resistance. In this study, we used GPS data points acquired at 5-min intervals from radiocollared pumas in southern California to model context-dependent point selection...

  2. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  3. Generalized Degrees of Freedom and Adaptive Model Selection in Linear Mixed-Effects Models.

    Science.gov (United States)

    Zhang, Bo; Shen, Xiaotong; Mumford, Sunni L

    2012-03-01

    Linear mixed-effects models involve fixed effects, random effects and covariance structure, which require model selection to simplify a model and to enhance its interpretability and predictability. In this article, we develop, in the context of linear mixed-effects models, the generalized degrees of freedom and an adaptive model selection procedure defined by a data-driven model complexity penalty. Numerically, the procedure performs well against its competitors not only in selecting fixed effects but in selecting random effects and covariance structure as well. Theoretically, asymptotic optimality of the proposed methodology is established over a class of information criteria. The proposed methodology is applied to the BioCycle study, to determine predictors of hormone levels among premenopausal women and to assess variation in hormone levels both between and within women across the menstrual cycle.

  4. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    International Nuclear Information System (INIS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-01-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction. (paper)

  5. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    Science.gov (United States)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.
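
    A simplified sketch of forward input selection scored by mutual information, conditioning on already-selected inputs through linear residuals; the full PMI algorithm uses a more general residual construction, and the data here are synthetic.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))          # hypothetical flowmeter-derived inputs
y = X[:, 0] + 0.5 * X[:, 3] ** 2 + 0.1 * rng.normal(size=n)

# Greedy forward selection: score each remaining input by its mutual
# information with the residual of the output given the inputs chosen so
# far, a linear-residual simplification of partial mutual information.
selected, remaining = [], list(range(X.shape[1]))
residual = y.copy()
for _ in range(3):
    mi = mutual_info_regression(X[:, remaining], residual, random_state=0)
    best = remaining[int(np.argmax(mi))]
    selected.append(best)
    remaining.remove(best)
    fit = LinearRegression().fit(X[:, selected], y)
    residual = y - fit.predict(X[:, selected])
print("selected inputs:", selected)
```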

  6. Models of speciation by sexual selection on polygenic traits

    OpenAIRE

    Lande, Russell

    1981-01-01

    The joint evolution of female mating preferences and secondary sexual characters of males is modeled for polygamous species in which males provide only genetic material to the next generation and females have many potential mates to choose among. Despite stabilizing natural selection on males, various types of mating preferences may create a runaway process in which the outcome of phenotypic evolution depends critically on the genetic variation parameters and initial conditions of a populatio...

  7. A Model of Social Selection and Successful Altruism

    Science.gov (United States)

    1989-10-07

    ... D., The evolution of social behavior. Annual Review of Ecology and Systematics, 5:325-383 (1974). 2. Dawkins, R., The Selfish Gene. Oxford: Oxford ... alive and well, it will be important to re-examine this striking historical experience, not in terms of oversimplified models of the "selfish gene," but ... Darwinian Analysis: The acceptance by many modern geneticists of the axiom that the basic unit of selection is the "selfish gene" quickly led to the

  8. A Bayesian Technique for Selecting a Linear Forecasting Model

    OpenAIRE

    Ramona L. Trader

    1983-01-01

    The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...

  9. Mixed models for selection of Jatropha progenies with high adaptability and yield stability in Brazilian regions.

    Science.gov (United States)

    Teodoro, P E; Bhering, L L; Costa, R D; Rocha, R B; Laviola, B G

    2016-08-19

    The aim of this study was to estimate genetic parameters via mixed models and simultaneously to select Jatropha progenies grown in three regions of Brazil that combine high adaptability and stability. From a previous phenotypic selection, three progeny tests were installed in 2008 in the municipalities of Planaltina-DF (Midwest), Nova Porteirinha-MG (Southeast), and Pelotas-RS (South). We evaluated 18 half-sib families in a randomized block design with three replications. Genetic parameters were estimated using restricted maximum likelihood/best linear unbiased prediction. Selection was based on the harmonic mean of the relative performance of genetic values method under three strategies: 1) performance in each environment (with interaction effect); 2) performance in the mean environment (without interaction effect); and 3) simultaneous selection for grain yield, stability and adaptability. The accuracy obtained (91%) reveals excellent experimental quality and, consequently, safety and credibility in the selection of superior progenies for grain yield. The gain from selecting the best five progenies was more than 20%, regardless of the selection strategy. Thus, based on the three selection strategies used in this study, progenies 4, 11, and 3 (selected in all environments and in the mean environment, and by the adaptability and phenotypic stability methods) are the most suitable for growing in the three regions evaluated.

  10. A decision model for energy resource selection in China

    International Nuclear Information System (INIS)

    Wang Bing; Kocaoglu, Dundar F.; Daim, Tugrul U.; Yang Jiting

    2010-01-01

    This paper evaluates coal, petroleum, natural gas, nuclear energy and renewable energy resources as energy alternatives for China through a hierarchical decision model in which expert judgments are quantified. Criteria used for the evaluations are availability, current energy infrastructure, price, safety, environmental impacts and social impacts. The results indicate that although coal is still the most preferred energy alternative, it is followed closely by renewable energy. The sensitivity analysis indicates that the most critical criterion for energy selection is the current energy infrastructure.
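
    The quantification of expert judgments in such hierarchical models is typically done by deriving priority weights from a pairwise comparison matrix. A minimal sketch of that step, with hypothetical judgments on Saaty's 1-9 scale (the paper's actual judgment data are not reproduced):

```python
# Minimal sketch of one hierarchical-decision step: derive criterion weights
# from a pairwise comparison matrix via the principal eigenvector and check
# Saaty's consistency ratio. The judgments below are hypothetical.
import numpy as np

A = np.array([[1.0, 3.0, 5.0, 1.0],
              [1/3, 1.0, 3.0, 1/3],
              [1/5, 1/3, 1.0, 1/5],
              [1.0, 3.0, 5.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()                          # priority weights, summing to 1

n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)  # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index
print("weights:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))
```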

  11. Selection of key terrain attributes for SOC model

    DEFF Research Database (Denmark)

    Greve, Mogens Humlekrog; Adhikari, Kabindra; Chellasamy, Menaka

    As an important component of the global carbon pool, soil organic carbon (SOC) plays an important role in the global carbon cycle. The SOC pool is basic information for global warming research and for the sustainable use of land resources. Digital terrain attributes are often used...... was selected; in total 2,514,820 data mining models were constructed from 71 different grid sizes, ranging from 12 m to 2304 m, and 22 attributes (21 attributes derived from the DTM plus the original elevation). The relative importance and usage of each attribute in every model were calculated. Comprehensive impact rates of each attribute...

  12. Activity based costing model for inventory valuation

    Directory of Open Access Journals (Sweden)

    Vineet Chouhan

    2017-03-01

    Full Text Available Activity-Based Costing (ABC) is used to significantly improve overhead accounting systems by providing the best information required for managerial decisions. This paper discusses the applicability of the ABC technique to inventory valuation as a management accounting innovation. In order to prove the applicability of ABC for inventory control, a material-driven, medium-sized and privately owned company from the engineering (iron and steel) industry is selected, and through analysis of its production process, its material dependency and its use of indirect inventory, an ABC model is explored for better inventory control. The case revealed that the necessity of ABC in the area of inventory control is significant. The company is not only able to increase the quality of its decisions but can also analyze its direct material cost and the valuation of direct material, and use the implications for better decision making.

  13. Optimal foraging in marine ecosystem models: selectivity, profitability and switching

    DEFF Research Database (Denmark)

    Visser, Andre W.; Fiksen, Ø.

    2013-01-01

    ecological mechanics and evolutionary logic as a solution to diet selection in ecosystem models. When a predator can consume a range of prey items it has to choose which foraging mode to use, which prey to ignore and which ones to pursue, and animals are known to be particularly skilled in adapting...... to the preference functions commonly used in models today. Indeed, depending on prey class resolution, optimal foraging can yield feeding rates that are considerably different from the ‘switching functions’ often applied in marine ecosystem models. Dietary inclusion is dictated by two optimality choices: 1...... by letting predators maximize energy intake or more properly, some measure of fitness where predation risk and cost are also included. An optimal foraging or fitness maximizing approach will give marine ecosystem models a sound principle to determine trophic interactions...
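
    The optimality choice the abstract refers to can be illustrated with the classical diet model: prey types are ranked by profitability e/h and added to the diet only while doing so raises the long-term intake rate. A sketch with illustrative encounter rates, energy contents and handling times:

```python
# Sketch of the classical optimal diet rule: rank prey types by profitability
# e/h and include them while doing so raises the long-term intake rate R
# (Holling disc equation). All numbers are illustrative.
def optimal_diet(prey):
    # prey: list of (encounter_rate, energy, handling_time)
    prey = sorted(prey, key=lambda p: p[1] / p[2], reverse=True)
    diet, rate = [], 0.0
    for lam, e, h in prey:
        num = sum(l * en for l, en, _ in diet) + lam * e
        den = 1.0 + sum(l * ht for l, _, ht in diet) + lam * h
        if num / den > rate:            # include only if the rate improves
            diet.append((lam, e, h))
            rate = num / den
    return diet, rate

diet, rate = optimal_diet([(0.1, 10.0, 2.0), (0.2, 4.0, 1.0), (0.5, 1.0, 3.0)])
print(f"{len(diet)} prey types in the optimal diet, intake rate {rate:.2f}")
```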

  14. Covariate selection for the semiparametric additive risk model

    DEFF Research Database (Denmark)

    Martinussen, Torben; Scheike, Thomas

    2009-01-01

    This paper considers covariate selection for the additive hazards model. This model is particularly simple to study theoretically and its practical implementation has several major advantages to the similar methodology for the proportional hazards model. One complication compared...... and study their large sample properties for the situation where the number of covariates p is smaller than the number of observations. We also show that the adaptive Lasso has the oracle property. In many practical situations, it is more relevant to tackle the situation with large p compared with the number...... of observations. We do this by studying the properties of the so-called Dantzig selector in the setting of the additive risk model. Specifically, we establish a bound on how close the solution is to a true sparse signal in the case where the number of covariates is large. In a simulation study, we also compare...

  15. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can easily be applied to both manufacturing and service industries.

  16. An Introduction to Model Selection: Tools and Algorithms

    Directory of Open Access Journals (Sweden)

    Sébastien Hélie

    2006-03-01

    Full Text Available Model selection is a complicated matter in science, and psychology is no exception. In particular, the high variance in the object of study (i.e., humans) prevents the use of Popper’s falsification principle (which is the norm in other sciences). Therefore, the desirability of quantitative psychological models must be assessed by measuring the capacity of the model to fit empirical data. In the present paper, an error measure (likelihood), as well as five methods to compare model fits (the likelihood ratio test, Akaike’s information criterion, the Bayesian information criterion, bootstrapping and cross-validation), are presented. The use of each method is illustrated by an example, and the advantages and weaknesses of each method are also discussed.
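
    Two of the five comparison methods named above reduce to one-line formulas once a model's maximized log-likelihood is known: AIC = 2k - 2 ln L and BIC = k ln n - 2 ln L. A sketch comparing polynomial fits on synthetic data with Gaussian errors (all data and models here are illustrative):

```python
# Sketch of AIC and BIC model comparison for polynomial models of increasing
# order on synthetic data with Gaussian errors; the true order is 2.
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2, 2, 60)
y = 1.0 + 0.5 * x - 0.8 * x**2 + rng.normal(0, 0.3, x.size)

n = x.size
for degree in range(1, 6):
    coefs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coefs, x)
    k = degree + 2                    # polynomial coefficients + noise variance
    # maximized Gaussian log-likelihood with sigma^2 = mean squared residual
    loglik = -0.5 * n * (np.log(2 * np.pi * resid.var()) + 1)
    aic = 2 * k - 2 * loglik
    bic = k * np.log(n) - 2 * loglik
    print(f"degree {degree}: AIC={aic:7.2f}  BIC={bic:7.2f}")
```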

  17. Selective Dielectric Metasurfaces Based on Directional Conditions of Silicon Nanopillars.

    Science.gov (United States)

    Algorri, José Francisco; García-Cámara, Braulio; Cuadrado, Alexander; Sánchez-Pena, José Manuel; Vergaz, Ricardo

    2017-07-07

    Dielectric metasurfaces based on high refractive index materials have been proposed recently. This type of structure has several advantages over its metallic counterparts. In this work, we demonstrate that dielectric metasurfaces can be theoretically designed to satisfy Kerker's zero-forward-scattering condition. This is the first time that a dielectric metasurface based on this principle has been designed. A selective dielectric metasurface of silicon nanopillars is designed to work at 632.8 nm. This structure could work both as a dielectric mirror and as a band-reject filter. Furthermore, by scaling up the structure, it could be possible to manufacture a terahertz (THz) dielectric mirror.

  18. Mutation-selection dynamics and error threshold in an evolutionary model for Turing machines.

    Science.gov (United States)

    Musso, Fabio; Feverati, Giovanni

    2012-01-01

    We investigate the mutation-selection dynamics for an evolutionary computation model based on Turing machines. The use of Turing machines allows for very simple mechanisms of code growth and code activation/inactivation through point mutations. To any value of the point mutation probability corresponds a maximum amount of active code that can be maintained by selection, and the Turing machines that reach it are said to be at the error threshold. Simulations with our model show that the Turing machine population evolves toward the error threshold. Mathematical descriptions of the model point out that this behaviour is due more to the mutation-selection dynamics than to the intrinsic nature of the Turing machines. This indicates that the result is much more general than the model considered here and could also play a role in biological evolution. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  19. A Multi-objective model for selection of projects to finance new enterprise SMEs in Colombia

    Directory of Open Access Journals (Sweden)

    J.R. Coronado-Hernández

    2011-10-01

    Full Text Available Purpose: This paper presents a multi-objective programming model for the selection of projects to finance new-enterprise SMEs in Colombia with objectivity and transparency in every call. Approach: The model has four social objectives, subject to a budget constraint and to the requirements of each call. The resolution procedure for the model is based on principles of goal programming. Findings: Projects are selected according to their impact within the country. Research limitations: The selection of projects is restricted by the legal framework, the terms of reference and the budget of each call. Practical implications: The projects must be viable according to the characteristics of each call. Originality/value: The suggested model offers an alternative for entities that need to evaluate co-financing projects for the managerial development of SMEs with more objectivity and transparency in the assignment of resources.
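
    Since the paper's objectives and terms of reference are not reproduced here, the goal-programming idea can only be illustrated with a toy instance: binary project selection under a budget cap, minimizing the weighted shortfall from four hypothetical social-objective targets.

```python
# Toy weighted goal program in the spirit described above (the paper's actual
# objectives and constraints are not reproduced). Each project has a cost and
# scores on four social objectives; pick the subset that minimizes the
# weighted shortfall from the goal targets within budget.
from itertools import combinations

projects = [  # (cost, [score on each of 4 social objectives]) -- hypothetical
    (30, [5, 2, 0, 1]), (20, [1, 4, 3, 0]), (25, [0, 3, 4, 2]),
    (15, [2, 0, 1, 4]), (10, [1, 1, 2, 1]),
]
budget, goals, weights = 60, [6, 6, 6, 6], [1.0, 1.0, 1.5, 0.5]

best, best_dev = None, float("inf")
for r in range(len(projects) + 1):
    for subset in combinations(range(len(projects)), r):
        if sum(projects[i][0] for i in subset) > budget:
            continue
        totals = [sum(projects[i][1][j] for i in subset) for j in range(4)]
        # penalize only under-achievement of each goal (one-sided deviations)
        dev = sum(w * max(0, g - t) for w, g, t in zip(weights, goals, totals))
        if dev < best_dev:
            best, best_dev = subset, dev

print("selected projects:", best, " weighted shortfall:", best_dev)
```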

  20. A Scalable Approach for QoS-Based Web Service Selection

    DEFF Research Database (Denmark)

    Alrifai, Mohammad; Risse, Thomas; Dolog, Peter

    2009-01-01

    QoS-based service selection aims at finding the best component services that satisfy the end-to-end quality requirements. The problem can be modeled as a multi-dimension multi-choice 0-1 knapsack problem, which is known to be NP-hard. Recently published solutions propose using linear programming tec...
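
    A tiny illustration of the knapsack formulation (data hypothetical): one concrete service is chosen per abstract task so as to maximize summed utility under an end-to-end latency budget. Exhaustive search stands in for the scalable linear-programming solutions the abstract alludes to.

```python
# Tiny illustration of the multi-choice knapsack view of QoS-based selection:
# choose one candidate per abstract task to maximize summed utility subject
# to an end-to-end latency budget. Data are hypothetical.
from itertools import product

# candidates[task] = list of (utility, latency_ms) for each concrete service
candidates = [
    [(0.9, 120), (0.6, 40)],             # task 0
    [(0.8, 200), (0.7, 90), (0.4, 30)],  # task 1
    [(0.5, 60), (0.9, 150)],             # task 2
]
latency_budget = 300

best, best_u = None, -1.0
for choice in product(*candidates):
    if sum(c[1] for c in choice) <= latency_budget:
        u = sum(c[0] for c in choice)
        if u > best_u:
            best, best_u = choice, u

print("selected services:", best, " total utility:", round(best_u, 2))
```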

  1. STUDY CONCERNING THE ELABORATION OF CERTAIN ORIENTATION MODELS AND THE INITIAL SELECTION FOR SPEED SKATING

    Directory of Open Access Journals (Sweden)

    Vaida Marius

    2009-12-01

    Full Text Available In carrying out this study I started from the premise that elaborating orientation and initial-selection models for speed skating, and applying them, will yield the superior results required, given the current evolution of high-performance sport in general and of speed skating in particular. The aim of this study was to identify an orientation and complete initial-selection model based on the aptitudes favorable to speed skating. On the basis of the research carried out, orientation and initial-selection models were elaborated and their viability was tested experimentally; the study started from data on 120 subjects, with the complete experiment carried out on 32 subjects separated into two groups, one using the proposed model and the other formed from randomly selected subjects. These models can serve as common working instruments for both the orientation process and initial selection, can be integrated into everyday practical activity, and can easily be used both by the coaches in charge of athlete selection and by physical education teachers or schoolteachers in contact with children of an early age.

  2. Financial performance as a decision criterion of credit scoring models selection [doi: 10.21529/RECADM.2017004

    Directory of Open Access Journals (Sweden)

    Rodrigo Alves Silva

    2017-09-01

    Full Text Available This paper aims to show the importance of financial metrics in the selection of credit scoring models. Considering an automatic approval system, we carried out a performance analysis of financial metrics on the theoretical portfolios generated by seven credit scoring models based on the main statistical learning techniques. The models were estimated on the German Credit dataset and the results were analyzed based on four metrics: total accuracy, error cost, risk-adjusted return on capital and the Sharpe index. The results show that total accuracy, widely used as a criterion for selecting credit scoring models, is unable to select the most profitable model for the company, indicating the need to incorporate financial metrics into the credit scoring model selection process. Keywords: credit risk; model selection; statistical learning.
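
    The paper's central point is easy to demonstrate with made-up confusion matrices: once the asymmetric cost of bad loans is priced in, two scorecards with identical total accuracy can differ sharply in expected cost. Counts and costs below are hypothetical.

```python
# Made-up confusion matrices illustrating the abstract's central point:
# two scorecards with the same total accuracy can differ sharply once the
# asymmetric cost of bad loans is priced in. All numbers are hypothetical.
models = {
    # (true_neg, false_pos, false_neg, true_pos) on 1000 applicants,
    # where "positive" means predicted-good and granted credit
    "model A": (130, 70, 20, 780),
    "model B": (170, 30, 60, 740),
}
cost_fp = 5.0   # granting credit to a defaulter (loss of principal)
cost_fn = 1.0   # refusing a good customer (lost margin)

for name, (tn, fp, fn, tp) in models.items():
    acc = (tn + tp) / (tn + fp + fn + tp)
    cost = cost_fp * fp + cost_fn * fn
    print(f"{name}: accuracy={acc:.2f}  expected error cost={cost:.0f}")
```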

  3. Feature Selection, Flaring Size and Time-to-Flare Prediction Using Support Vector Regression, and Automated Prediction of Flaring Behavior Based on Spatio-Temporal Measures Using Hidden Markov Models

    Science.gov (United States)

    Al-Ghraibah, Amani

    Solar flares release stored magnetic energy in the form of radiation and can have significant detrimental effects on Earth, including damage to technological infrastructure. Recent work has considered methods to predict future flare activity on the basis of quantitative measures of the solar magnetic field. Accurate advance warning of solar flare occurrence is an area of increasing concern and much research is ongoing in this area. Our previous work [111] utilized standard pattern recognition and classification techniques to determine (classify) whether a region is expected to flare within a predictive time window, using a Relevance Vector Machine (RVM) classification method. We extracted 38 features describing the complexity of the photospheric magnetic field; the resulting classification metrics provide the baseline against which we compare our new work. We find a true positive rate (TPR) of 0.8, a true negative rate (TNR) of 0.7, and a true skill score (TSS) of 0.49. This dissertation addresses three basic topics. The first topic is an extension of our previous work [111], where we consider a feature selection method to determine an appropriate feature subset, with cross-validation classification based on a histogram analysis of selected features. Classification using the top five features resulting from this analysis yields better classification accuracies across a large unbalanced dataset. In particular, the feature subsets provide better discrimination of the many regions that flare, where we find a TPR of 0.85, a TNR of 0.65 (slightly lower than our previous work), and a TSS of 0.5, an improvement over our previous work. In the second topic, we study the prediction of solar flare size and time-to-flare using support vector regression (SVR). When we consider flaring regions only, we find an average error in estimating flare size of approximately half a GOES class. When we additionally consider non-flaring regions, we find an increased average
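
    The skill scores quoted follow directly from confusion-matrix rates, with TSS = TPR + TNR - 1; the counts below are chosen only to reproduce the reported rates (0.8 + 0.7 - 1 = 0.5, matching the quoted 0.49 up to rounding).

```python
# The skill scores quoted above follow from confusion-matrix rates:
# TSS = TPR + TNR - 1. The counts here are illustrative, chosen only to
# reproduce the reported TPR = 0.8 and TNR = 0.7.
def skill_scores(tp, fn, tn, fp):
    tpr = tp / (tp + fn)          # true positive rate (sensitivity)
    tnr = tn / (tn + fp)          # true negative rate (specificity)
    return tpr, tnr, tpr + tnr - 1.0

tpr, tnr, tss = skill_scores(tp=80, fn=20, tn=70, fp=30)
print(f"TPR={tpr:.2f}  TNR={tnr:.2f}  TSS={tss:.2f}")
```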

  4. A fuzzy logic based PROMETHEE method for material selection problems

    Directory of Open Access Journals (Sweden)

    Muhammet Gul

    2018-03-01

    Full Text Available Material selection is a complex problem in the design and development of products for diverse engineering applications. This paper presents a fuzzy PROMETHEE (Preference Ranking Organization Method for Enrichment Evaluation) method based on trapezoidal fuzzy interval numbers that can be applied to the selection of materials for an automotive instrument panel. It also makes a contribution to the literature on the application of fuzzy decision-making approaches to material selection problems. The method is illustrated, validated, and compared against three different fuzzy MCDM methods (fuzzy VIKOR, fuzzy TOPSIS, and fuzzy ELECTRE) in terms of its ranking performance. The relationships between the compared methods and the proposed scenarios for fuzzy PROMETHEE are evaluated via Spearman’s correlation coefficient. Styrene maleic anhydride and polypropylene are identified as suitable materials for the automotive instrument panel case. We propose a generic fuzzy MCDM methodology that can be practically implemented for material selection problems. The main advantages of the methodology are its consideration of the vagueness, uncertainty, and fuzziness of the decision-making environment.
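
    The outranking machinery underneath PROMETHEE can be sketched in its crisp (non-fuzzy) form; the paper's trapezoidal fuzzy extension and actual material data are not reproduced. Alternatives are compared pairwise per criterion with a linear preference function, and net outranking flows give the ranking.

```python
# Crisp PROMETHEE II sketch (the paper's trapezoidal fuzzy extension is not
# reproduced). Pairwise preference degrees are aggregated into net outranking
# flows and the alternatives are ranked. All data are hypothetical.
import numpy as np

X = np.array([[7.0, 3.0, 5.0],    # hypothetical material scores on
              [6.0, 6.0, 4.0],    # 3 benefit criteria
              [8.0, 4.0, 2.0]])
w = np.array([0.5, 0.3, 0.2])     # criterion weights, summing to 1
p = np.array([2.0, 2.0, 2.0])     # preference thresholds (linear function)

n = X.shape[0]
phi = np.zeros(n)
for a in range(n):
    for b in range(n):
        if a == b:
            continue
        d = X[a] - X[b]
        pref = np.clip(d / p, 0.0, 1.0)      # linear preference function
        pi_ab = float(w @ pref)              # aggregated preference of a over b
        phi[a] += pi_ab / (n - 1)            # leaving flow contribution
        phi[b] -= pi_ab / (n - 1)            # entering flow contribution

print("net flows:", np.round(phi, 3), " ranking:", np.argsort(-phi))
```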

  5. The Effects of Selection Strategies for Bivariate Loglinear Smoothing Models on NEAT Equating Functions

    Science.gov (United States)

    Moses, Tim; Holland, Paul W.

    2010-01-01

    In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…

  6. Computer-Based Modeling Environments

    Science.gov (United States)

    1989-01-01

    Fragments only: "An introduction to graph-based modeling systems", Working Paper 88-10-2 (1988); Rich, E. (1983), Artificial Intelligence, McGraw-Hill, New York; Hall, J., S. Lippman, and J. McCall, "Expected Utility Maximizing Job Search", Chapter 7 of Studies in the Economics of Search, North-Holland (1979). From the body text: "The same shape has been used for data sources and analytical models because, at ..." and "... theory, as knowledge representation in artificial intelligence, and as ..."

  7. Simultaneous Channel and Feature Selection of Fused EEG Features Based on Sparse Group Lasso

    Directory of Open Access Journals (Sweden)

    Jin-Jia Wang

    2015-01-01

    Full Text Available Feature extraction and classification of EEG signals are core parts of brain computer interfaces (BCIs). Due to the high dimension of the EEG feature vector, an effective feature selection algorithm has become an integral part of research studies. In this paper, we present a new method based on a wrapped Sparse Group Lasso for channel and feature selection of fused EEG signals. The high-dimensional fused features are first obtained; they include the power spectrum, time-domain statistics, AR model, and wavelet coefficient features extracted from the preprocessed EEG signals. The wrapped channel and feature selection method is then applied, which uses a logistic regression model with a Sparse Group Lasso penalty function. The model is fitted on the training data, and parameter estimation is obtained by modified blockwise coordinate descent and coordinate gradient descent methods. The best parameters and feature subset are selected using 10-fold cross-validation. Finally, the test data are classified using the trained model. Compared with existing channel and feature selection methods, results show that the proposed method is more suitable, more stable, and faster for high-dimensional feature fusion. It can simultaneously achieve channel and feature selection with a lower error rate. The test accuracy on data from BCI Competition IV reached 84.72%.
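
    The group-sparsity mechanism at the heart of such a method can be sketched with a proximal-gradient loop for logistic regression, in which each channel's coefficient block is soft-thresholded as a unit; the authors' blockwise coordinate descent, per-feature L1 term and cross-validation are not reproduced, and the data are synthetic.

```python
# Compact proximal-gradient sketch of group-sparse logistic regression (the
# authors use blockwise coordinate descent; only the group soft-thresholding
# idea is illustrated). Channels are modelled as coefficient groups.
import numpy as np

rng = np.random.default_rng(2)
n, groups, gsize = 300, 6, 4                  # 6 "channels" x 4 features each
X = rng.normal(size=(n, groups * gsize))
w_true = np.zeros(groups * gsize)
w_true[:gsize] = 1.5                          # only channel 0 is informative
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

lam, step = 0.1, 0.5 / n
w = np.zeros(groups * gsize)
for _ in range(500):
    grad = X.T @ (1 / (1 + np.exp(-X @ w)) - y)         # logistic gradient
    w -= step * grad
    for g in range(groups):                             # group soft-threshold
        block = w[g * gsize:(g + 1) * gsize]
        norm = np.linalg.norm(block)
        scale = max(0.0, 1 - step * lam * n / norm) if norm > 0 else 0.0
        w[g * gsize:(g + 1) * gsize] = scale * block

kept = [g for g in range(groups)
        if np.linalg.norm(w[g * gsize:(g + 1) * gsize]) > 1e-8]
print("channels kept:", kept)
```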

  8. Selection of Representative Models for Decision Analysis Under Uncertainty

    Science.gov (United States)

    Meira, Luis A. A.; Coelho, Guilherme P.; Santos, Antonio Alberto S.; Schiozer, Denis J.

    2016-03-01

    The decision-making process in oil fields includes a step of risk analysis associated with the uncertainties present in the variables of the problem. Such uncertainties lead to hundreds, even thousands, of possible scenarios that must be analyzed so an effective production strategy can be selected. Given this high number of scenarios, a technique to reduce this set to a smaller, feasible subset of representative scenarios is imperative. The selected scenarios must be representative of the original set and also free of optimistic and pessimistic bias. This paper proposes an assisted methodology to identify representative models in oil fields. To do so, a mathematical function was first developed to model the representativeness of a subset of models with respect to the full set that characterizes the problem. Then, an optimization tool was implemented to identify the representative models of any problem, considering not only the cross-plots of the main output variables, but also the risk curves and the probability distribution of the attribute levels of the problem. The proposed technique was applied to two benchmark cases and the results, evaluated by experts in the field, indicate that the obtained solutions are richer than those identified by previously adopted manual approaches. The program bytecode is available upon request.

  9. Local-Nearest-Neighbors-Based Feature Weighting for Gene Selection.

    Science.gov (United States)

    An, Shuai; Wang, Jun; Wei, Jinmao

    2017-06-07

    Selecting functional genes is essential for analyzing microarray data. Among the many available feature (gene) selection approaches, those based on the large margin nearest neighbor receive particular attention due to their low computational costs and high accuracies in analyzing high-dimensional data. Yet there still exist some problems that hamper the existing approaches in sifting real target genes, including the selection of erroneous nearest neighbors, high sensitivity to irrelevant genes, and inappropriate evaluation criteria. Previous pioneering works have partly addressed some of these problems, but none of them are capable of solving them simultaneously. In this paper, we propose a new local-nearest-neighbors-based feature weighting approach to alleviate the above problems. The proposed approach is based on the trick of locally minimizing the within-class distances and maximizing the between-class distances with the k nearest neighbors rule. We further define a feature weight vector, and construct it by minimizing a cost function with a regularization term. The proposed approach can be applied naturally to multi-class problems and does not require extra modification. Experimental results on the UCI and open microarray data sets validate the effectiveness and efficiency of the new approach.
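
    A minimal Relief-style sketch conveys the underlying trick (reward features that separate classes among nearest neighbors); the authors' weight vector, regularized cost function and k-nearest-neighbors refinements are not reproduced.

```python
# Minimal Relief-style sketch of the underlying trick: reward features that
# locally separate classes. The authors' regularized cost function and
# k-nearest-neighbors refinements are not reproduced here.
import numpy as np

def relief_weights(X, y, passes=100, rng=None):
    rng = rng or np.random.default_rng(0)
    n, d = X.shape
    w = np.zeros(d)
    for i in rng.integers(0, n, size=passes):
        same = (y == y[i])
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                                 # exclude the sample itself
        hit = np.argmin(np.where(same, dists, np.inf))    # nearest same-class
        miss = np.argmin(np.where(~same, dists, np.inf))  # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / passes

rng = np.random.default_rng(3)
X = rng.normal(size=(150, 5))
y = (X[:, 1] > 0).astype(int)          # only feature 1 carries the class
print("feature weights:", np.round(relief_weights(X, y, rng=rng), 2))
```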

  10. An approach for Web service selection based on confidence level of decision maker.

    Science.gov (United States)

    Khezrian, Mojtaba; Jahan, Ali; Kadir, Wan Mohd Nasir Wan; Ibrahim, Suhaimi

    2014-01-01

    Web services are today among the most widely used technologies for Service Oriented Architecture (SOA). Service selection is one of the most significant current discussions in SOA; it evaluates discovered services and chooses the best candidate from them. Although a majority of service selection techniques apply Quality of Service (QoS), the behaviour of QoS-based service selection leads to service selection problems in Multi-Criteria Decision Making (MCDM). In existing works, the confidence level of decision makers is neglected and their expertise in assessing Web services is not considered. In this paper, we employ the VIKOR (VIšekriterijumsko KOmpromisno Rangiranje) method, which is absent from the service selection literature but well known in other research. We propose a QoS-based approach that deals with service selection by applying VIKOR with improved features. This research determines the weights of criteria based on user preferences and accounts for the confidence level of decision makers. The proposed approach is illustrated by an example in order to demonstrate and validate the model. The results of this research may help service consumers make more efficient decisions when selecting the appropriate service.
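
    The core VIKOR computation is compact: a group utility S, an individual regret R, and a compromise index Q that trades the two off. The decision matrix, weights and criterion directions below are hypothetical, and the paper's confidence-level adjustment is not reproduced.

```python
# Compact sketch of the core VIKOR computation (group utility S, individual
# regret R, compromise index Q). The paper's confidence-level adjustments are
# not reproduced; the decision matrix and weights are hypothetical.
import numpy as np

X = np.array([[0.9, 200.0, 0.7],     # candidate services scored on
              [0.8, 120.0, 0.9],     # 3 QoS criteria
              [0.7,  80.0, 0.8]])
benefit = np.array([True, False, True])   # criterion 2 (response time) is a cost
w = np.array([0.4, 0.3, 0.3])
v = 0.5                                   # weight of the majority strategy

best = np.where(benefit, X.max(axis=0), X.min(axis=0))
worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
norm = (best - X) / (best - worst)        # normalized distance to the ideal

S = (w * norm).sum(axis=1)                # group utility
R = (w * norm).max(axis=1)                # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) \
    + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("Q:", np.round(Q, 3), " best service:", int(np.argmin(Q)))
```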

  11. Country Selection Model for Sustainable Construction Businesses Using Hybrid of Objective and Subjective Information

    Directory of Open Access Journals (Sweden)

    Kang-Wook Lee

    2017-05-01

    Full Text Available An important issue for international businesses and academia is selecting countries in which to expand in order to achieve entrepreneurial sustainability. This study develops a country selection model for sustainable construction businesses using both objective and subjective information. The objective information consists of 14 variables related to country risk and project performance in 32 countries over 25 years. The hybrid model applies subjective weighting from industry experts to the objective information using a fuzzy LinPreRa-based Analytic Hierarchy Process. The hybrid model yields a more accurate country selection than a purely objective information-based model in countries where companies have experience. Interestingly, in countries without such experience the hybrid model's predictions differ from purely subjective opinions, which implies that expert opinion is not always reliable. In addition, feedback from five experts at top international companies is used to validate the model’s completeness, effectiveness, generality, and applicability. The model is expected to aid decision makers in selecting better candidate countries that lead to sustainable business success.

  12. Management Model for Evaluation and Selection of Engineering Equipment Suppliers for Construction Projects in Iraq

    Directory of Open Access Journals (Sweden)

    Kadhim Raheem Erzaij

    2016-06-01

    Full Text Available Engineering equipment is an essential part of a construction project and is usually manufactured with long lead times, large costs and special engineering requirements. The construction manager wants that equipment to be delivered on the site need date, in the right quantity, at an appropriate cost and with the required quality, and this demands an efficient supplier who can satisfy these targets. Selection of an engineering equipment supplier is a crucial managerial process; it requires the evaluation of multiple suppliers according to multiple criteria. This process is usually performed manually and based on only limited evaluation criteria, so better alternatives may be neglected. A three-stage survey comprising a number of public and private companies in the Iraqi construction sector was employed to identify the main criteria and sub-criteria for supplier selection and their priorities. The main criteria identified were quality of product, commercial aspect, delivery, reputation and position, and system quality. An effective technique in multiple criteria decision making (MCDM), the analytic hierarchy process (AHP), has been used to obtain importance weights of the criteria based on expert judgment. Thereafter, a management software system for Evaluation and Selection of Engineering Equipment Suppliers (ESEES) has been developed based on the results obtained from AHP. The model was validated in a case study at the Municipality of Baghdad involving actual cases of selecting pump suppliers for infrastructure projects. According to the experts, this model can improve the current supplier selection process and help decision makers adopt better choices when selecting engineering equipment suppliers.

  13. A hybrid agent-based computational economics and optimization approach for supplier selection problem

    Directory of Open Access Journals (Sweden)

    Zahra Pourabdollahi

    2017-12-01

    Full Text Available Supplier evaluation and selection is among the most important logistics decisions and has been addressed extensively in supply chain management. The same logistics decision is also important in freight transportation, since it identifies trade relationships between business establishments and determines commodity flows between production and consumption points. The commodity flows are then used as input to freight transportation models to determine cargo movements and their characteristics, including mode choice and shipment size. Various approaches have been proposed to explore this latter problem in previous studies. Traditionally, potential suppliers are evaluated and selected using only price/cost as the influential criterion and state-of-practice methods. This paper introduces a hybrid agent-based computational economics and optimization approach for supplier selection. The proposed model combines an agent-based multi-criteria supplier evaluation approach with a multi-objective optimization model to capture both behavioral and economic aspects of the supplier selection process. The model uses a system of ordered response models to determine the importance weights of the different criteria in supplier evaluation from a buyer’s point of view. The estimated weights are then used to calculate a utility for each potential supplier in the market and rank them. The calculated utilities are then entered into a mathematical programming model in which the best suppliers are selected by maximizing the total accrued utility for all buyers and minimizing total shipping costs while balancing the capacity of potential suppliers to ensure market clearing. The proposed model was implemented within an operational agent-based supply chain and freight transportation framework for the Chicago Metropolitan Area.

  14. Shape Selection in Self-Assembled Chiral Membranes: New Mechanism Based on the Flexoelectric Effect

    Science.gov (United States)

    Lu, Zhao; Selinger, Robin; Selinger, Jonathan

    2006-03-01

    Many biological materials self-assemble into chiral microstructures such as cylindrical tubules and helical ribbons. A chiral elastic theory proposed by Selinger et al., based on the elastic properties and chirality of amphiphilic lipid molecules, has been successful in explaining the formation of tubules and helical ribbons. Recently, an experiment has shown that achiral lipid molecules can also form chiral microstructures, challenging the previous theory based on molecular chirality. Toward understanding this problem, we develop a new model for membrane shape selection based on the flexoelectric effect. We investigate this model through both analytical calculations and dissipative particle dynamics simulations of tethered membranes.

  15. Natural and sexual selection giveth and taketh away reproductive barriers: models of population divergence in guppies.

    Science.gov (United States)

    Labonne, Jacques; Hendry, Andrew P

    2010-07-01

    The standard predictions of ecological speciation might be nuanced by the interaction between natural and sexual selection. We investigated this hypothesis with an individual-based model tailored to the biology of guppies (Poecilia reticulata). We specifically modeled the situation where a high-predation population below a waterfall colonizes a low-predation population above a waterfall. Focusing on the evolution of male color, we confirm that divergent selection causes the appreciable evolution of male color within 20 generations. The rate and magnitude of this divergence were reduced when dispersal rates were high and when female choice did not differ between environments. Adaptive divergence was always coupled to the evolution of two reproductive barriers: viability selection against immigrants and hybrids. Different types of sexual selection, however, led to contrasting results for another potential reproductive barrier: mating success of immigrants. In some cases, the effects of natural and sexual selection offset each other, leading to no overall reproductive isolation despite strong adaptive divergence. Sexual selection acting through female choice can thus strongly modify the effects of divergent natural selection and thereby alter the standard predictions of ecological speciation. We also found that under no circumstances did divergent selection cause appreciable divergence in neutral genetic markers.

  16. A hidden Markov model for investigating recent positive selection through haplotype structure.

    Science.gov (United States)

    Chen, Hua; Hey, Jody; Slatkin, Montgomery

    2015-02-01

    Recent positive selection can increase the frequency of an advantageous mutant rapidly enough that a relatively long ancestral haplotype remains intact around it. We present a hidden Markov model (HMM) to identify such haplotype structures. With the HMM-identified haplotype structures, a population genetic model for the extent of ancestral haplotypes is then adopted to infer the selection intensity and the allele age. Simulations show that this method can detect selection under a wide range of conditions and has higher power than the existing frequency-spectrum-based method. In addition, it provides good estimates of the selection coefficients and allele ages under strong selection. The method analyzes large data sets in a reasonable amount of running time. This method is applied to HapMap III data for a genome scan and identifies a list of candidate regions putatively under recent positive selection. It is also applied to several genes known to be under recent positive selection, including the LCT, KITLG and TYRP1 genes in Northern Europeans, and OCA2 in East Asians, to estimate their allele ages and selection coefficients. Copyright © 2014 Elsevier Inc. All rights reserved.
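
    The HMM machinery itself is standard; a generic two-state Viterbi sketch in the spirit of the abstract labels each site as lying on the extended ancestral haplotype or on background sequence. The transition and emission probabilities are made up, not the paper's estimates.

```python
# Generic two-state Viterbi sketch: label each site as lying on the extended
# ancestral haplotype (state 0) or on background sequence (state 1). The
# transition/emission values are made up, not the paper's estimates.
import numpy as np

obs = np.array([1, 1, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0])  # 1 = matches core haplotype
log_trans = np.log([[0.95, 0.05],
                    [0.01, 0.99]])
log_emit = np.log([[0.1, 0.9],    # state 0 (ancestral): P(obs=0), P(obs=1)
                   [0.7, 0.3]])   # state 1 (background)
log_init = np.log([0.5, 0.5])

T, K = len(obs), 2
V = np.full((T, K), -np.inf)
back = np.zeros((T, K), dtype=int)
V[0] = log_init + log_emit[:, obs[0]]
for t in range(1, T):
    for k in range(K):
        scores = V[t - 1] + log_trans[:, k]
        back[t, k] = np.argmax(scores)
        V[t, k] = scores.max() + log_emit[k, obs[t]]

path = [int(np.argmax(V[-1]))]
for t in range(T - 1, 0, -1):          # backtrack the most likely state path
    path.append(int(back[t, path[-1]]))
print("state path:", path[::-1])       # the 0-run marks the ancestral segment
```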

  17. Influence of Selective Edge Removal and Refractory Period in a Self-Organized Critical Neuron Model

    International Nuclear Information System (INIS)

    Lin Min; Gang, Zhao; Chen Tianlun

    2009-01-01

    A simple model for a set of integrate-and-fire neurons based on a weighted network is introduced. Considering neurobiological phenomena in brain development and differences in synaptic strength, we construct weighted networks that develop through link addition followed by selective edge removal. The network exhibits small-world and scale-free properties with high network efficiency. The model displays avalanche activity with a power-law distribution. We investigate the effect of selective edge removal and of the neuron refractory period on the self-organized criticality of the system. (condensed matter: structural, mechanical, and thermal properties)

  18. Ranking and selection of commercial off-the-shelf using fuzzy distance based approach

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2015-06-01

    Full Text Available There is tremendous growth in the use of the component based software engineering (CBSE) approach for the development of software systems. The selection of the best-suited COTS components fulfilling the necessary requirements for software development has become a major challenge for software developers. The complexity of the optimal selection problem increases with an increase in the number of alternative potential COTS components and the corresponding selection criteria. In this research paper, the problem of ranking and selecting Database Management System (DBMS) components is modeled as a multi-criteria decision making problem. A ‘Fuzzy Distance Based Approach’ (FDBA) method is proposed for the optimal ranking and selection of DBMS COTS components of an e-payment system based on 14 selection criteria grouped under three major categories, i.e. ‘Vendor Capabilities’, ‘Business Issues’ and ‘Cost’. The results of this method are compared with those of the Analytical Hierarchy Process (AHP), a typical multi-criteria decision making approach. The proposed methodology is explained with an illustrative example.

  19. Frequency selective surfaces based high performance microstrip antenna

    CERN Document Server

    Narayan, Shiv; Jha, Rakesh Mohan

    2016-01-01

    This book focuses on performance enhancement of printed antennas using frequency selective surface (FSS) technology. The growing demand for stealth technology in strategic areas requires high-performance, low-RCS (radar cross section) antennas. Such requirements may be accomplished by incorporating FSS into the antenna structure, either in its ground plane or as the superstrate, due to the filter characteristics of FSS structures. In view of this, a novel approach based on FSS technology is presented in this book to enhance the performance of printed antennas, including out-of-band structural RCS reduction. In this endeavor, the EM design of microstrip patch antennas (MPA) loaded with (i) an FSS-based high impedance surface (HIS) ground plane and (ii) FSS-based superstrates is discussed in detail. The EM analysis of the proposed FSS-based antenna structures has been carried out using transmission line analogy, in combination with the reciprocity theorem. Further, various types of novel FSS structures are considered in desi...

  20. Distance based control system for machine vision-based selective spraying

    NARCIS (Netherlands)

    Steward, B.L.; Tian, L.F.; Tang, L.

    2002-01-01

    For effective operation of a selective sprayer with real-time local weed sensing, herbicides must be delivered accurately to weed targets in the field. With a machine vision-based selective spraying system, acquiring sequential images and switching nozzles on and off at the correct locations are...